addresses stream frame memory pooling issue where StreamFrame objects
weren't properly returned to sync.Pool during stream cancellation
see quic-go/quic-go#5327
* chore(deps): update go-libp2p to v0.44.0
- includes self-healing UPnP port mappings after router restarts
- update go-netroute to v0.3.0
- update quic-go to v0.55.0
- add changelog entry for UPnP fix
* docs: improve provide and UPnP clarity in changelog and docs
- add alert polling rationale to changelog
- add UPnP config note with default clarification
- clarify sweep timing and prefix length explanations
- add concrete examples for time offset and record holders
- improve workers stats formatting
- add See Also section to provide-stats.md
* docs: add RISC-V prebuilt binaries to changelog and README
- highlight linux-riscv64 availability with open hardware context
- update README with arm64 builds, remove 32-bit examples
* feat: provide stats
* added N/A
* format
* workers stats alignment
* ipfs provide stat --all --compact
* consolidating compact stat
* update column alignment
* flag combination errors
* command description
* change schedule AvgPrefixLen to float
* changelog
* alignments
* provide stat description draft
* rephrased provide-stats.md
* linking provide-stats.md from command description
* documentation test
* fix: refactor provide stat command type handling
- add extractSweepingProvider() helper to reduce nested type switching
- extract lowWorkerThreshold constant for worker availability check
- fix --lan error handling to work with buffered providers
* docs: add clarifying comments
* fix(commands): improve provide stat compact mode
- prevent panic when both columns are empty
- fix column alignment with UTF-8 characters
- only track col0MaxWidth for first column (as intended)
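The UTF-8 alignment fix above boils down to measuring column width in runes rather than bytes. A minimal sketch of that idea (not the actual Kubo code):
```go
package main

import (
	"fmt"
	"strings"
	"unicode/utf8"
)

// padTo pads s so it occupies at least width character cells, counting runes
// instead of bytes; len("Réseau") is 7 bytes but only 6 visible characters,
// so byte-based padding would shift every following column by one.
func padTo(s string, width int) string {
	pad := width - utf8.RuneCountInString(s)
	if pad < 0 {
		pad = 0
	}
	return s + strings.Repeat(" ", pad)
}

func main() {
	fmt.Println(padTo("Réseau", 10) + "| value")
	fmt.Println(padTo("Net", 10) + "| value")
}
```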
* test: add tests for ipfs provide stat command
- test basic functionality, flags, JSON output
- test legacy provider behavior
- test integration with content scheduling
- test disabled provider configurations
- add parseSweepStats helper with t.Helper()
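The parseSweepStats helper mentioned above follows the standard t.Helper() pattern; a self-contained sketch of that pattern (the parsing and output layout here are illustrative, not the real test code):
```go
package stats_test

import (
	"strings"
	"testing"
)

// parseSweepStats turns "Key: value" lines from 'ipfs provide stat' output
// into a map. t.Helper() makes failures point at the calling test line
// rather than at this helper.
func parseSweepStats(t *testing.T, out string) map[string]string {
	t.Helper()
	stats := make(map[string]string)
	for _, line := range strings.Split(out, "\n") {
		k, v, ok := strings.Cut(line, ":")
		if !ok {
			continue
		}
		stats[strings.TrimSpace(k)] = strings.TrimSpace(v)
	}
	if len(stats) == 0 {
		t.Fatalf("no stats parsed from output: %q", out)
	}
	return stats
}
```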
* docs: improve provide command help text
- update tagline to "Control and monitor content providing"
- simplify help descriptions
- make error messages more consistent
- update tests to match new error messages
* metrics rename
```
Next reprovide at:
Next prefix:
```
updated to:
```
Next region prefix:
Next region reprovide:
```
* docs: improve Provide system documentation clarity
Enhance documentation for the Provide system to better explain how provider
records work and the differences between sweep and legacy modes.
Changes to docs/config.md:
- Provide section: add clear explanation of provider records and their role
- Provide.DHT: add provider record lifecycle and two provider systems overview
- Provide.DHT.Interval: explain relationship to expiration, contrast sweep vs legacy behavior
- Provide.DHT.SweepEnabled: rewrite to explain legacy problem, sweep solution, and efficiency gains
- Monitoring section: prioritize command-line tools (ipfs provide stat) before Prometheus
Changes to core/commands/provide.go:
- ipfs provide stat help: add explanation of provider records, TTL expiration, and how sweep batching works
Changes to docs/changelogs/v0.39.md:
- Add context about why stats matter for monitoring provider health
- Emphasize real-time monitoring workflow with watch command
- Explain what users can observe (rates, queues, worker availability)
* depend on latest kad-dht master
* docs: nits
---------
Co-authored-by: Marcin Rataj <lidel@lidel.org>
* test: add migration tests for Windows and macOS
- add dedicated CI workflow for migration tests on Windows/macOS
- workflow triggers on migration-related file changes only
* build: remove redundant go version checks
- remove GO_MIN_VERSION and check_go_version scripts
- go.mod already enforces minimum version (go 1.25)
- fixes make build on Windows
* fix: windows migration panic by reading config into memory
fixes migration panic on Windows when upgrading from v0.37 to v0.38
by reading the entire config file into memory before performing atomic
operations. this avoids file locking issues on Windows where open files
cannot be renamed.
also fixes:
- TestRepoDir to set USERPROFILE on Windows (not just HOME)
- CLI migration tests to sanitize directory names (remove colons)
minimal fix that solves the "panic: error can't be dealt with
transactionally: Access is denied" error without adding unnecessary
platform-specific complexity.
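A rough sketch of the approach described above, with a hypothetical transform callback; the point is that the file is fully read and closed before the rename, which Windows would otherwise reject for an open file:
```go
package main

import (
	"os"
	"path/filepath"
)

// rewriteConfigAtomically reads the whole config into memory, writes the
// transformed bytes to a temp file, and renames it over the original.
// No handle on the original remains open when the rename happens.
func rewriteConfigAtomically(path string, transform func([]byte) ([]byte, error)) error {
	data, err := os.ReadFile(path) // fully read and closed here
	if err != nil {
		return err
	}
	out, err := transform(data)
	if err != nil {
		return err
	}
	tmp := filepath.Join(filepath.Dir(path), ".config.tmp")
	if err := os.WriteFile(tmp, out, 0600); err != nil {
		return err
	}
	return os.Rename(tmp, path) // atomic replace of the closed original
}

func main() {}
```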
* fix: set PATH for CLI migration tests in CI
the CLI tests need the built ipfs binary to be in PATH
* fix: use ipfs shutdown for graceful daemon termination in tests
replaces platform-specific signal handling with ipfs shutdown command
which works consistently across all platforms including Windows
* fix: isolate PATH modifications in parallel migration tests
tests running in parallel with t.Parallel() were interfering with each
other through global PATH modifications via os.Setenv(). this caused
tests to download real migration binaries instead of using mocks,
leading to Windows failures due to path separator issues in external tools.
now each test builds its own custom PATH and passes it explicitly to
commands, preventing interference between parallel tests.
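A sketch of the isolation technique, with a hypothetical helper name and mock-binary directory; the key is setting PATH on the command's own Env instead of mutating the process environment shared by parallel tests:
```go
package migrations_test

import (
	"os"
	"os/exec"
	"strings"
	"testing"
)

// runWithIsolatedPath runs ipfs with a per-test PATH that puts mockBinDir
// first, without touching the global environment via os.Setenv.
func runWithIsolatedPath(t *testing.T, mockBinDir string, args ...string) []byte {
	t.Helper()
	// Build an environment whose only PATH entry is the per-test one.
	env := []string{"PATH=" + mockBinDir + string(os.PathListSeparator) + os.Getenv("PATH")}
	for _, kv := range os.Environ() {
		if len(kv) >= 5 && strings.EqualFold(kv[:5], "PATH=") {
			continue
		}
		env = append(env, kv)
	}
	cmd := exec.Command("ipfs", args...)
	cmd.Env = env
	out, err := cmd.CombinedOutput()
	if err != nil {
		t.Fatalf("ipfs %v failed: %v\n%s", args, err, out)
	}
	return out
}
```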
* chore: improve error messages in WithBackup
* fix: Windows CI migration test failures
- add .exe extension to mock migration binaries on Windows
- handle repo lock file properly in mock migration binary
- ensure lock is created and removed to prevent conflicts
* refactor: align atomicfile error handling with fs-repo-migrations
- check close error in Abort() before attempting removal
- leave temp file on rename failure for debugging (like fs-repo-15-to-16)
- improves consistency with external migration implementations
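One possible reading of that ordering, as a simplified sketch (field names are illustrative, not the real atomicfile package):
```go
package main

import "os"

// atomicFile writes to a temp file and renames it into place on success.
type atomicFile struct {
	tmp    *os.File
	target string
}

// Abort discards the temp file; the close error is checked first so it
// isn't silently masked by the removal.
func (a *atomicFile) Abort() error {
	if err := a.tmp.Close(); err != nil {
		return err
	}
	return os.Remove(a.tmp.Name())
}

// Commit renames the temp file over the target. On rename failure the temp
// file is deliberately left behind for debugging, like fs-repo-15-to-16.
func (a *atomicFile) Commit() error {
	if err := a.tmp.Close(); err != nil {
		return err
	}
	return os.Rename(a.tmp.Name(), a.target)
}

func main() {}
```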
* fix: use req.Context in repo migrate to avoid double-lock
The repo migrate command was calling cctx.Context() which has a hidden
side effect: it lazily constructs the IPFS node by calling GetNode(),
which opens the repository and acquires repo.lock. When migrations then
tried to acquire the same lock, it failed with "lock is already held by us"
because go4.org/lock tracks locks per-process in a global map.
The fix uses req.Context instead, which is a plain context.Context with
no side effects. This provides what migrations need (cancellation handling)
without triggering node construction or repo opening.
Context types explained:
- req.Context: Standard Go context for request lifetime, cancellation,
and timeouts. No side effects.
- cctx.Context(): Kubo-specific method that lazily constructs the full
IPFS node (opens repo, acquires lock, initializes subsystems). Returns
the node's internal context.
Why req.Context is correct here:
- Migrations work on raw filesystem (only need ConfigRoot path)
- Command has SetDoesNotUseRepo(true) - doesn't need running node
- Migrations handle their own locking via lockfile.Lock()
- Need cancellation support but not node lifecycle
The bug only appeared with embedded migrations (v16+) because they run
in-process. External migrations (pre-v16) were separate processes, so
each had isolated state. Sequential migrations (forward then backward)
in the same process exposed this latent double-lock issue.
Also adds repo.lock acquisition to RunEmbeddedMigrations to prevent
concurrent migration access, and removes the now-unnecessary daemon
lock check from the migrate command handler.
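A minimal sketch contrasting the two contexts; the types and helper below are simplified stand-ins (the real ones live in go-ipfs-cmds and kubo/commands), kept only to show where the hidden side effect sits:
```go
package main

import "context"

// request mimics the go-ipfs-cmds request: a plain context carrying request
// lifetime and cancellation, nothing else.
type request struct {
	Context context.Context
}

type commandsContext struct{ ConfigRoot string }

// On the real cctx, Context() lazily constructs the IPFS node, opening the
// repo and acquiring repo.lock as a side effect; that lock then collided
// with the migration's own lockfile.Lock().
func (c *commandsContext) Context() context.Context { return context.Background() }

// Migrations only need the repo path plus a cancellable context (hypothetical stub).
func runMigration(ctx context.Context, configRoot string) error { return nil }

func migrateCommand(req *request, cctx *commandsContext) error {
	// Before: ctx := cctx.Context() // hidden GetNode() => repo.lock taken twice
	// After: use the plain request context, no side effects.
	return runMigration(req.Context, cctx.ConfigRoot)
}

func main() {
	_ = migrateCommand(&request{Context: context.Background()}, &commandsContext{})
}
```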
* fix: use req.Context for migrations and autoconf in daemon startup
daemon.go was incorrectly using cctx.Context() in two critical places:
1. Line 337: migrations call - cctx.Context() triggers GetNode() which
opens the repo and acquires repo.lock BEFORE migrations run, causing
"lock is already held by us" errors when migrations try to lock
2. Line 390: autoconf client.Start() - uses context for HTTP timeouts
and background updater lifecycle, doesn't need node construction
Both now use req.Context (plain Go context) which provides:
- request lifetime and cancellation
- no side effects (doesn't construct node or open repo)
- correct lifecycle for HTTP requests and background goroutines
(cherry picked from commit f4834e797d)
* feat: add docker stub for deprecated ipfs/go-ipfs name
implements the Docker part of #10941 by creating a stub image that redirects
users from ipfs/go-ipfs to ipfs/kubo
changes:
- add stub dockerfile and script in .github/legacy/
- modify docker-image.yml to push stub to ipfs/go-ipfs with same tags as ipfs/kubo
- remove ipfs/go-ipfs from get-docker-tags.sh to prevent docker-hub job from pushing to legacy name
- stub displays clear deprecation message directing users to ipfs/kubo:release
* docs: add v0.39 changelog highlight for go-ipfs deprecation
* fix: add MFS operation limit for --flush=false
adds a global counter that tracks consecutive MFS operations performed
with --flush=false and fails with clear error after limit is reached.
this prevents unbounded memory growth while avoiding the data corruption
risks of auto-flushing.
- adds Internal.MFSNoFlushLimit config
- operations fail with actionable error at limit
- counter resets on successful flush or any --flush=true operation
- operations with --flush=true reset and don't count
this commit removes automatic flush from https://github.com/ipfs/kubo/pull/10971
and instead errors to encourage users of --flush=false to develop a habit
of calling 'ipfs files flush' periodically.
boxo will no longer auto-flush (https://github.com/ipfs/boxo/pull/1041) to
avoid corruption issues, and kubo applies the limit to 'ipfs files' commands
instead.
closes #10842
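An illustrative sketch of the counter behaviour described above (the names and the default of 256 follow the changelog; this is not the actual Kubo code):
```go
package main

import (
	"fmt"
	"sync/atomic"
)

const defaultMFSNoFlushLimit = 256

var noFlushOps atomic.Int64

// recordMFSOperation is called for each 'ipfs files' write operation:
// --flush=true resets the counter and never counts, --flush=false counts
// until the limit, and limit 0 disables the check entirely.
func recordMFSOperation(flush bool, limit int64) error {
	if flush || limit == 0 {
		noFlushOps.Store(0)
		return nil
	}
	if n := noFlushOps.Add(1); n > limit {
		return fmt.Errorf("performed %d MFS operations with --flush=false; "+
			"run 'ipfs files flush' (or use --flush=true) before continuing", n-1)
	}
	return nil
}

func main() {
	for i := 0; i < defaultMFSNoFlushLimit+1; i++ {
		if err := recordMFSOperation(false, defaultMFSNoFlushLimit); err != nil {
			fmt.Println(err)
			return
		}
	}
}
```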
* test: add tests for MFSNoFlushLimit
tests verify the new Internal.MFSNoFlushLimit config option:
- default limit of 256 operations
- custom limit configuration
- counter reset on flush=true
- counter reset on explicit flush command
- limit=0 disables the feature
- multiple MFS command types count towards limit
* docs: explain why MFS operations fail instead of auto-flushing
addresses feedback from https://github.com/ipfs/kubo/pull/10985#pullrequestreview-3256250970
- clarify that automatic flushing at limit was considered but rejected
- explain the data corruption risks of auto-flushing
- guide users who want auto-flush to use --flush=true (default)
- document benefits of explicit failure for batch operations
(cherry picked from commit a688b7eeac)
* Filestore: provide Filestore nodes
When the strategy is set to "all" (the blockstore does all the providing when a
block is written), no providing happened for Filestore blocks that were not
written to the underlying blockstore (i.e. the DAG leaves, since they live
directly in the filesystem). This fixes that.
* docs: clarify filestore and urlstore fix in changelog
both filestore (local file references) and urlstore (HTTP/HTTPS URL
references) blocks are now properly provided shortly after initial add
(cherry picked from commit f63887ae96)
* docs: improve slow reprovide warning messages
simplify warning text and provide actionable solutions in order of preference
* feat(config): add validation for Provide.DHT settings
- validate interval doesn't exceed DHT record validity (48h)
- validate worker counts and other parameters are within valid ranges
- improve slow reprovide warning messages to reference config parameter
- add tests for all validation cases
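A sketch of the validation described above (field and constant names are illustrative; the real checks live in Kubo's config package):
```go
package main

import (
	"fmt"
	"time"
)

// dhtRecordValidity mirrors the ~48h expiration of DHT provider records.
const dhtRecordValidity = 48 * time.Hour

type provideDHT struct {
	Interval     time.Duration
	MaxWorkers   int
	BurstWorkers int
}

func (c provideDHT) validate() error {
	if c.Interval <= 0 {
		return fmt.Errorf("Provide.DHT.Interval must be positive")
	}
	if c.Interval > dhtRecordValidity {
		return fmt.Errorf("Provide.DHT.Interval (%s) exceeds DHT record validity (%s): records would expire before being reprovided",
			c.Interval, dhtRecordValidity)
	}
	if c.MaxWorkers < 0 || c.BurstWorkers < 0 {
		return fmt.Errorf("Provide.DHT worker counts must be non-negative")
	}
	return nil
}

func main() {
	fmt.Println(provideDHT{Interval: 72 * time.Hour, MaxWorkers: 16}.validate())
}
```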
* docs: add reprovide cycle visualization
shows traffic patterns of legacy vs sweep vs accelerated DHT
* fix(webui): show helpful errors for incompatible configurations
- show error when Gateway.NoFetch=true and WebUI is not available locally
- show error when Gateway.DeserializedResponses=false (incompatible)
- add tests for both error scenarios
* chore(webui): update to v4.9.0
https://github.com/ipfs/ipfs-webui/releases/tag/v4.9.0
* docs: add WebUI v4.9.0 update to v0.38 changelog
- highlight new diagnostics screen for troubleshooting
- include screenshots of key features in table format
- add local access URL for WebUI
- update TOC with new sections
* fix: prevent --flush=false in 'ipfs files rm' command
the 'ipfs files rm' command always flushes for safety to ensure
data integrity. this change adds an explicit error when users
try to pass --flush=false, improving ux and preventing confusion.
related to #10842
* fix: add MFS cache size limit to prevent unbounded growth
- add Internal.MFSAutoflushThreshold config (experimental)
- directories auto-flush when cache exceeds threshold with --flush=false
- prevents high memory usage issue from #10842
- default: 256 entries per directory (matching HAMT shard size)
- set to 0 to restore old behavior (risky, may cause errors)
Closes #10842
* fix: use CheckIfPinnedWithType for pin ls with names
updates to use CheckIfPinnedWithType method from https://github.com/ipfs/boxo/pull/1035,
enabling efficient pin name retrieval for 'ipfs pin ls <cid> --names'
- uses new CheckIfPinnedWithType from boxo for type-specific pin checks
- pin names are now returned when listing specific CIDs with --names flag
* test: add CLI tests for pin ls with names
tests cover:
- pin ls with specific CIDs returning names
- pin ls without CID listing all pins with names
- pin ls with --type and --names combinations
- JSON output with and without names
- pin update preserving names
- error cases (invalid CID, unpinned CID)
* docs: add pin name improvements to v0.38 changelog
covers fix for ipfs pin ls --names with specific CIDs
and RPC pin name leak fix
* fix(rpc): support pin names in Add()
passes the Name field from PinAddSettings to the API request
adds test to verify pin names work via RPC
* test: add coverage for pin names functionality
- test special characters, unicode, long names
- test concurrent operations
- test persistence across daemon restarts
- test garbage collection preservation
- fix indirect pin test logic
* chore: boxo@main with boxo#1039
* fix(pin): improve pin ls robustness and validation
- add nil check for n.Pinning with early fail-fast validation
- use pin.StringToMode() for consistent type validation
- add edge case tests for invalid types and unpinned CIDs
* refactor: consolidate Provider/Reprovider into unified Provide config
- merge Provider and Reprovider configs into single Provide section
- add fs-repo-17-to-18 migration for config consolidation
- improve migration ergonomics with common package utilities
- convert deprecated "flat" strategy to "all" during migration
- improve Provide docs
* docs: add total_provide_count metric guidance
- document how to monitor provide success rates via prometheus metrics
- add performance comparison section to changelog
- explain how to evaluate sweep vs legacy provider effectiveness
* fix: add OpenTelemetry meter provider for metrics
- set up meter provider with Prometheus exporter in daemon
- enables metrics from external libs like go-libp2p-kad-dht
- fixes missing total_provide_count_total when SweepEnabled=true
- update docs to reflect actual metric names
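The wiring looks roughly like the following (a sketch using the public OpenTelemetry and Prometheus Go APIs, not the exact Kubo code); once a global meter provider is registered, instruments created by libraries such as go-libp2p-kad-dht are exported alongside the existing Prometheus metrics:
```go
package main

import (
	"log"

	"github.com/prometheus/client_golang/prometheus"
	"go.opentelemetry.io/otel"
	otelprom "go.opentelemetry.io/otel/exporters/prometheus"
	sdkmetric "go.opentelemetry.io/otel/sdk/metric"
)

// setupMeterProvider bridges OTel metrics into the given Prometheus registry
// and installs the provider globally so library instruments pick it up.
func setupMeterProvider(reg prometheus.Registerer) error {
	exporter, err := otelprom.New(otelprom.WithRegisterer(reg))
	if err != nil {
		return err
	}
	otel.SetMeterProvider(sdkmetric.NewMeterProvider(sdkmetric.WithReader(exporter)))
	return nil
}

func main() {
	if err := setupMeterProvider(prometheus.DefaultRegisterer); err != nil {
		log.Fatal(err)
	}
}
```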
---------
Co-authored-by: gammazero <11790789+gammazero@users.noreply.github.com>
Co-authored-by: guillaumemichel <guillaume@michel.id>
Co-authored-by: Daniel Norman <1992255+2color@users.noreply.github.com>
Co-authored-by: Hector Sanjuan <code@hector.link>
* reprovide sweep draft
* update reprovider dep
* go mod tidy
* fix provider type
* change router type
* dual reprovider
* revert to provider.System
* back to start
* SweepingReprovider test
* fix nil pointer deref
* noop provider for nil dht
* disabled initial network estimation
* another iteration
* suppress missing self addrs err
* silence empty rt err on lan dht
* comments
* new attempt at integrating
* reverting changes in core/node/libp2p/routing.go
* removing SweepingProvider
* make reprovider optional
* add noop reprovider
* update KeyChanFunc type alias
* restore boxo KeyChanFunc
* fix missing KeyChanFunc
* test(sharness): PARALLEL=1 and timeout 30m
running sequentially to see where timeout occurs
* initialize MHStore
* revert workflow debug
* config
* config docs
* merged IpfsNode provider and reprovider
* move Provider interface from kad-dht to node
* moved Provider interface from kad-dht to kubo/core/node
* mod_tidy
* Add Clear to Provider interface
* use latest kad-dht commit
* make linter happy
* updated boxo provide interface
* boxo PR fix
* using latest kad-dht commit
* use latest boxo release
* fix fx
* fx cyclic deps
* fix merge issues
* extended tests
* don't provide LAN DHT
* docs
* restore dual dht provider
* don't start provider before it is online
* address linter
* dual/provider fix
* add delay in provider tests for dht bootstrap
* add OfflineDelay parameter to config
* remove increase number of workers in test
* improved keystore gc process
* fix: replace incorrect logger import in coreapi
replaced github.com/labstack/gommon/log with the standard
github.com/ipfs/go-log/v2 logger used throughout kubo.
removed unused labstack dependency from go.mod files.
* fix: remove duplicate WithDefault call in provider config
* fix: use correct option method for burst workers
* fix: improve error messages for experimental sweeping provider
updated error messages to clearly indicate when commands are unavailable
due to the experimental sweeping provider being enabled via Reprovider.Sweep.Enabled=true
* docs: remove obsolete KeyStoreGCInterval config
removed from config.md as option no longer exists (removed in b540fba1a)
updated keystore description to reflect gc happens at reprovide interval
* docs: add TODO placeholder changelog for experimental sweeping DHT provider
using v0.38-TODO.md name to avoid merge conflicts with master branch
and allow CI tests to run. will be renamed to v0.38.md once config
migration is added to the PR
* fix: provideKeysRec goroutine
* clear keystore on close
* fix: datastore prefix
* fix: improve error handling in provideKeysRec
- close errCh channel to distinguish between nil and pending errors
- check for pending errors when provided.New closes
- handle context cancellation during error send
- prevent race condition where errors could be silently lost
this ensures DAG walk errors are always propagated correctly
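A generic, self-contained sketch of that channel pattern (simplified; the real code walks the pinned DAGs and feeds provider keys):
```go
package main

import (
	"context"
	"fmt"
)

// produceKeys mimics the DAG walk: it streams keys and reports at most one
// error. Closing errCh lets the consumer distinguish "finished cleanly"
// from "error still pending" once the key channel closes.
func produceKeys(ctx context.Context, out chan<- string, errCh chan<- error) {
	defer close(errCh)
	defer close(out)
	for _, k := range []string{"a", "b", "c"} {
		select {
		case out <- k:
		case <-ctx.Done():
			errCh <- ctx.Err() // errCh is buffered, so this cannot block
			return
		}
	}
}

func main() {
	ctx := context.Background()
	keys := make(chan string)
	errCh := make(chan error, 1)
	go produceKeys(ctx, keys, errCh)
	for k := range keys {
		fmt.Println("key:", k)
	}
	if err := <-errCh; err != nil { // nil once closed with no pending error
		fmt.Println("walk failed:", err)
	}
}
```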
* address gammazero's review
* rename BurstProvider to LegacyProvider
* use latest provider/keystore
* boxo: make mfs StartProviding async
* bump boxo
* chore: update boxo to f2b4e12fb9a8ac138ccb82aae3b51ec51d9f631c
- updated boxo dependency to specified commit
- updated go.mod and go.sum files across all modules
* use latest kad-dht/boxo
* Buffered SweepingProvider wrapper
* use latest kad-dht commit
* allow no DHT router
* use latest kad-dht & boxo
---------
Co-authored-by: Marcin Rataj <lidel@lidel.org>
Co-authored-by: gammazero <11790789+gammazero@users.noreply.github.com>
* fix: enforce identity CID size limits
- validate --inline-limit against verifcid.MaxDigestSize
- add error when --hash=identity exceeds size limit
- add tests for identity CID overflow scenarios
- update help text to show maximum inline limit
This prevents creation of unbounded identity CIDs by enforcing
the 128-byte limit defined in https://github.com/ipfs/boxo/pull/1018
Fixes #6011
IPIP: https://github.com/ipfs/specs/pull/512
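A sketch of the checks described above (the 128-byte limit comes from boxo's verifcid package; the helper itself is illustrative, not the actual Kubo command code):
```go
package main

import "fmt"

// maxIdentityDigestSize reflects the 128-byte limit on identity "digests",
// which embed the data itself inside the CID.
const maxIdentityDigestSize = 128

func validateIdentityLimits(inlineLimit int, hashFun string, blockSize int) error {
	if inlineLimit > maxIdentityDigestSize {
		return fmt.Errorf("--inline-limit %d exceeds the maximum of %d bytes",
			inlineLimit, maxIdentityDigestSize)
	}
	if hashFun == "identity" && blockSize > maxIdentityDigestSize {
		return fmt.Errorf("--hash=identity cannot encode a %d-byte block; maximum is %d bytes",
			blockSize, maxIdentityDigestSize)
	}
	return nil
}

func main() {
	fmt.Println(validateIdentityLimits(64, "identity", 512))
}
```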
* Reprovider strategy: rename "flat" to "all".
Value "flat" now parses to "all". Behaviour from "all" removed.
Fixes #10864, which has a detailed explanation.
* core/node/provider.go: remove unused function mfsRootProvider
It was used in the "all" strategy.
* docs: improve reprovider.strategy=all changelog framing
- highlight memory efficiency improvements
- clarify this removes v0.28 workaround
- update config.md memory requirements
- fix announce-on profile typo
* feat: deprecate Reprovider.Strategy=flat
- add deprecation warning in daemon.go when flat strategy is detected
- document that flat is deprecated in ParseReproviderStrategy comment
- add explicit test case for flat -> all mapping
- flat continues to work but users are warned to migrate to all
---------
Co-authored-by: Marcin Rataj <lidel@lidel.org>
* Initial pass at Telemetry plugin
Currently, IP Shipyard, with the help of Probelab, monitors and extracts
Amino/IPFS public network metrics using DHT crawlers and bootstrappers
(via the peerlog plugin). For example, we log all peer IDs seen and their
AgentVersion/Addresses obtained from the `identify` protocol, which provides
insight into protocol usage, the total number of peers, etc.
We would like to obtain more insights from the network by collecting additional
information in the future, while also giving users more control over this
collection (i.e. the ability to opt out). The information collected does not
allow unique identification of anyone and is only used in aggregate.
This PR explores a way of moving in that direction:
* A new "telemetry" fx plugin is in charge of dealing with telemetry
* The fx plugin lets us hook in and make decisions / take actions during the setup phase:
* We can inspect whether we are using Private Networks before the libp2p.Host has been initialized.
* We can send telemetry after the libp2p Host is initialized.
* Everything is self-contained. Custom builds can remove the plugin altogether without needing to surgically edit the code.
As for behaviour:
* The user can opt in/out via an env var, a file in the repo path, or plugin configuration.
* Users on private networks or with custom bootstrappers are detected, shown a wall of text explaining why we need telemetry, and invited to opt in. If they give no input before a timeout, they are opted out. Their preference is stored.
* Users on standard settings are opted in by default. This is already the status quo in Kubo, except that today they don't get a chance to opt out.
The telemetry libp2p protocol is yet to be defined, but expect something similar to identify, with a protobuf being pushed to bootstrappers or to a specific telemetry node that we define. In the case of pnets, this will be done with a temporary peer.
* checkpoint
* telemetry plugin: second pass
* On first run it generates a UUID and shows a message to the user.
* UUID is persisted to "telemetry_uuid"
* Sends telemetry 1 minute after boot and every 24h
* LogEvent carries all the telemetry that is sent
* Opt-out possible via env-var or plugin configuration
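Putting those bullets together, a self-contained sketch of the lifecycle (the IPFS_TELEMETRY variable and the "telemetry_uuid" file name come from the surrounding commits; everything else, including the "off" value, is an illustrative assumption rather than the plugin's actual code):
```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"time"

	"github.com/google/uuid"
)

// loadOrCreateUUID persists the anonymous instance ID to "telemetry_uuid"
// in the repo directory.
func loadOrCreateUUID(repoPath string) (string, error) {
	p := filepath.Join(repoPath, "telemetry_uuid")
	if b, err := os.ReadFile(p); err == nil {
		return string(b), nil
	}
	id := uuid.NewString()
	return id, os.WriteFile(p, []byte(id), 0600)
}

// runTelemetry honours the env-var opt-out, then reports once shortly after
// boot and every 24h afterwards.
func runTelemetry(repoPath string, send func(id string)) {
	if os.Getenv("IPFS_TELEMETRY") == "off" { // opt-out value is assumed here
		return
	}
	id, err := loadOrCreateUUID(repoPath)
	if err != nil {
		return
	}
	go func() {
		time.Sleep(1 * time.Minute) // first report one minute after boot
		for {
			send(id)
			time.Sleep(24 * time.Hour)
		}
	}()
}

func main() {
	runTelemetry(os.TempDir(), func(id string) { fmt.Println("would send telemetry for", id) })
	time.Sleep(2 * time.Second) // keep the demo alive briefly
}
```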
* Telemetry: add changelog and environment variable documentation
* docs: improved daemon message
making it more obvious that nothing was sent yet
and that the user has 15m to opt out
plus some debug logs that confirm opt-out
* refactor: rename IPFS_TELEMETRY_MODE to IPFS_TELEMETRY
* fix: add User-Agent header to telemetry requests
---------
Co-authored-by: Andrew Gillis <11790789+gammazero@users.noreply.github.com>
Co-authored-by: Marcin Rataj <lidel@lidel.org>