Compare commits

...

47 Commits

Author SHA1 Message Date
Cassandra Heart
92c1f07562
update release notes 2026-02-09 00:10:20 -06:00
Cassandra Heart
12996487c3
v2.1.0.18 (#508)
* experiment: reject bad peer info messages

* v2.1.0.18 preview

* add tagged sync

* Add missing hypergraph changes

* small tweaks to sync

* allow local sync, use it for provers with workers

* missing file

* resolve build error

* resolve sync issue, remove raw sync

* resolve deletion promotion bug

* resolve sync abstraction leak from tree deletion changes

* rearrange prover sync

* remove pruning from sync

* restore removed sync flag

* fix: sync, event stream deadlock, heuristic scoring of better shards

* resolve hanging shutdown + pubsub proxy issue

* further bugfixes: sync (restore old leaf sync), pubsub shutdown, merge events

* fix: clean up rust ffi, background coverage events, and sync tweaks

* fix: linking issue for channel, connectivity test aggression, sync regression, join tests

* fix: disjoint sync, improper application of filter

* resolve sync/reel/validation deadlock

* adjust sync to handle no leaf edge cases, multi-path segment traversal

* use simpler sync

* faster, simpler sync with some debug extras

* migration to recalculate

* don't use batch

* square up the roots

* fix nil pointer

* fix: seniority calculation, sync race condition, migration

* make sync dumber

* fix: tree deletion issue

* fix: missing seniority merge request canonical serialization

* address issues from previous commit test

* stale workers should be cleared

* remove missing gap check

* rearrange collect, reduce sync logging noise

* fix: the disjoint leaf/branch sync case

* nuclear option on sync failures

* v2.1.0.18, finalized
2026-02-08 23:51:51 -06:00
Hamza Hamud
d2b0651e2d
feat: add comprehensive docker guide and add config generator via cli (#501)
* feat: add cli option

* docs: add comprehensive docker guide and update readme

* docs: refine docker guide config generation command

* feat: automate docker build and deploy workflows via Taskfile

* chore: consolidate docker files and refine documentation

- Move all Docker-related files to docker/ directory
- Consolidate DOCKER-README.md and DOCKER_GUIDE.md into docker/README.md
- Update docker/Taskfile.yaml with refined paths and new tasks
- Update root Taskfile.yaml to preserve only native build tasks
- Update docker-compose.yml to map config to root .config/
- Expand docker/README.md with comprehensive guides and troubleshooting

* chore: clean up taskfile

* fix: comments

* fix: remove additional comments

* feat: move taskfile to root

* fix: remove vdf commands

* fix: comments
2025-12-31 09:28:47 -06:00
Tyler Sturos
83624369bf
Merge pull request #500 from hhamud/fix/underscore
Fix/underscore
2025-12-19 10:59:01 -09:00
Hamza Hamud
d3778a7056 Merge branch 'develop' into fix/underscore 2025-12-19 19:04:50 +00:00
Cassandra Heart
7a4484b05b
v2.1.0.17 (#499)
* v2.1.0.17

* add release notes
2025-12-19 12:29:23 -06:00
Hamza Hamud
89116b4966 fix: mistake with underscore 2025-12-19 17:22:04 +00:00
Hamza Hamud
af5097c189
feat: convert emp install python script into bash emp (#496)
* feat: convert emp install python script into bash emp

* feat: add script
2025-12-19 08:07:47 -06:00
Hamza Hamud
a05d6b38b1
refactor: parallelize Dockerfile.source build (fixes #490) (#495) 2025-12-17 19:10:03 -06:00
Cassandra Heart
d641291822
move out full_ceremony.json for crates.io publish 2025-12-17 05:22:41 -06:00
Cassandra Heart
3f5a55dc1d
fix repo name 2025-12-17 04:54:42 -06:00
Cassandra Heart
2cd6f3d182
prepare bls48581 library for publishing (#494) 2025-12-17 04:52:18 -06:00
Black Swan
01b9b2c3b2
append quic-v1 to UDP multiaddr to make it valid (#486) 2025-12-17 02:47:43 -06:00
Cassandra Heart
ab99f105f7
v2.1.0.16 (#492) 2025-12-15 16:45:31 -06:00
Black Swan
e10950dfe4
force TCP for stream multiaddr (#487) 2025-12-15 16:19:46 -06:00
Black Swan
51eafd35d4
Post 2.1.0.15 optimizations (#491)
* implement tStringCast() for tests

* make grpc dependency direct

* import p2p module once

* fix p2p-ping after blackhole detection deprecation
2025-12-15 16:19:12 -06:00
Cassandra Heart
0425b38fa2
v2.1.0.15 (#485)
* v2.1.0.15

* add release notes
2025-12-09 21:55:18 -06:00
Cassandra Heart
8dc7e0d526
v2.1.0.14 (#484)
* v2.1.0.14

* release notes
2025-12-03 23:56:34 -06:00
Cassandra Heart
3f516b04fd
v2.1.0.13 (#483)
* v2.1.0.13

* add release notes
2025-11-29 19:59:26 -06:00
Cassandra Heart
7b923b91c4
v2.1.0.12 (#482) 2025-11-26 03:22:48 -06:00
Cassandra Heart
54584b0a63
merge conflict resolved 2025-11-21 04:45:41 -06:00
Cassandra Heart
aae0bcca59
add missing PatchNumber 2025-11-21 04:43:40 -06:00
Cassandra Heart
aac841e6e6
v2.1.0.11 (#477)
* v2.1.0.11

* v2.1.0.11, the later half
2025-11-21 04:41:02 -06:00
Black Swan
29a49fa282
Print patch number in node-info (#479)
* print patch number in node-info

* rename patchVersion to patchNumber for better differentiation between node version and patch number
2025-11-21 04:31:32 -06:00
Black Swan
26f8e6d51a
Post 2.1.0.11 fixes (#478) 2025-11-20 21:46:58 -06:00
Cassandra Heart
81f2767ab8
v2.1.0.11 (#476) 2025-11-19 17:13:34 -06:00
Cassandra Heart
21b735f841
fix: cutoff frames should use constants 2025-11-19 02:07:56 -06:00
Cassandra Heart
0a2e2fee03
v2.1.0.10 (#475) 2025-11-19 00:19:04 -06:00
Black Swan
2b33fa2a74
Post 2.1.0.9 fixes (#474)
* implement Close() in MockPubSub

* add TODO context to newPubSubProxyClient constructor

* import types/tries package only once
2025-11-19 00:02:18 -06:00
Cassandra Heart
215dd2ec99
v2.1.0.9 (#471)
* v2.1.0.9

* resolved: sync skipping, time reel disconnect for consensus nodes, proxy pubsub bugs, worker management bugs
2025-11-16 20:14:14 -06:00
Black Swan
3069420a76
-peer-info and several other minor enhancements (#470)
* align mockFrameProver with updated frameProver interface

* go mod tidy for types

* go mod tidy for node

* remove unnecessary nil check

* fix peer-info reachability printing format

* skip download of missing root folder go modules

* print peer node version for `-peer-info`

* go mod tidy for protobufs
2025-11-16 05:31:16 -06:00
Cassandra Heart
6f1cd95c69
add missing bls methods 2025-11-15 01:50:18 -06:00
Cassandra Heart
d871f2ea51
incl missed staging method call 2025-11-15 01:48:44 -06:00
Cassandra Heart
f62a98211c
v2.1.0.8 (#468) 2025-11-15 01:39:26 -06:00
Cassandra Heart
1ba9f52ad6
v2.1.0.7 (#466) 2025-11-13 23:38:04 -06:00
Cassandra Heart
f2fa7bf57f
v2.1.0.6 (#465) 2025-11-13 04:57:52 -06:00
Cassandra Heart
c797d482f9
v2.1.0.5 (#457)
* wip: conversion of hotstuff from flow into Q-oriented model

* bulk of tests

* remaining non-integration tests

* add integration test, adjust log interface, small tweaks

* further adjustments, restore full pacemaker shape

* add component lifecycle management+supervisor

* further refinements

* resolve timeout hanging

* mostly finalized state for consensus

* bulk of engine swap out

* lifecycle-ify most types

* wiring nearly complete, missing needed hooks for proposals

* plugged in, vetting message validation paths

* global consensus, plugged in and verified

* app shard now wired in too

* do not decode empty keys.yml (#456)

* remove obsolete engine.maxFrames config parameter (#454)

* default to Info log level unless debug is enabled (#453)

* respect config's  "logging" section params, remove obsolete single-file logging (#452)

* Trivial code cleanup aiming to reduce Go compiler warnings (#451)

* simplify range traversal

* simplify channel read for single select case

* delete rand.Seed() deprecated in Go 1.20 and no-op as of Go 1.24

* simplify range traversal

* simplify channel read for single select case

* remove redundant type from array

* simplify range traversal

* simplify channel read for single select case

* RC slate

* finalize 2.1.0.5

* Update comments in StrictMonotonicCounter

Fix comment formatting and clarify description.

---------

Co-authored-by: Black Swan <3999712+blacks1ne@users.noreply.github.com>
2025-11-11 05:00:17 -06:00
Cassandra Heart
4df761de20
qol: add generated files 2025-10-25 04:42:39 -05:00
Cassandra Heart
19ca2cc553
v2.1.0.4 (#450) 2025-10-25 02:55:12 -05:00
Cassandra Heart
0053dcb5e0
amend: fix-up for prover set 2025-10-24 00:28:47 -05:00
Cassandra Heart
eb0b54241d
v2.1.0.3 (#449) 2025-10-23 22:43:17 -05:00
Cassandra Heart
ab34157f03
fix: remove merge conflict markers? 2025-10-23 02:29:06 -05:00
Cassandra Heart
6d5dac23cf
amend: missing wire.go change 2025-10-23 02:13:12 -05:00
Cassandra Heart
85c7bd5307
amend: missing keys.go change 2025-10-23 01:33:30 -05:00
Cassandra Heart
53f7c2b5c9
v2.1.0.2 (#442)
* v2.1.0.2

* restore tweaks to simlibp2p

* fix: nil ref on size calc

* fix: panic should induce shutdown from event_distributor

* fix: friendlier initialization that requires less manual kickstarting for test/devnets

* fix: fewer available shards than provers should choose shard length

* fix: update stored worker registry, improve logging for debug mode

* fix: shut the fuck up, peer log

* qol: log value should be snake cased

* fix:non-archive snap sync issues

* fix: separate X448/Decaf448 signed keys, add onion key to registry

* fix: overflow arithmetic on frame number comparison

* fix: worker registration should be idempotent if inputs are same, otherwise permit updated records

* fix: remove global prover state from size calculation

* fix: divide by zero case

* fix: eager prover

* fix: broadcast listener default

* qol: diagnostic data for peer authenticator

* fix: master/worker connectivity issue in sparse networks

tight coupling of peer and workers can sometimes interfere if mesh is sparse, so give workers a pseudoidentity but publish messages with the proper peer key

* fix: reorder steps of join creation

* fix: join verify frame source + ensure domain is properly padded (unnecessary but good for consistency)

* fix: add delegate to protobuf <-> reified join conversion

* fix: preempt prover from planning with no workers

* fix: use the unallocated workers to generate a proof

* qol: underflow causes join fail in first ten frames on test/devnets

* qol: small logging tweaks for easier log correlation in debug mode

* qol: use fisher-yates shuffle to ensure prover allocations are evenly distributed when scores are equal

* qol: separate decisional logic on post-enrollment confirmation into consensus engine, proposer, and worker manager where relevant, refactor out scoring

* reuse shard descriptors for both join planning and confirm/reject decisions

* fix: add missing interface method and amend test blossomsub to use new peer id basis

* fix: only check allocations if they exist

* fix: pomw mint proof data needs to be hierarchically under global intrinsic domain

* staging temporary state under diagnostics

* fix: first phase of distributed lock refactoring

* fix: compute intrinsic locking

* fix: hypergraph intrinsic locking

* fix: token intrinsic locking

* fix: update execution engines to support new locking model

* fix: adjust tests with new execution shape

* fix: weave in lock/unlock semantics to liveness provider

* fix lock fallthrough, add missing allocation update

* qol: additional logging for diagnostics, also testnet/devnet handling for confirmations

* fix: establish grace period on halt scenario to permit recovery

* fix: support test/devnet defaults for coverage scenarios

* fix: nil ref on consensus halts for non-archive nodes

* fix: remove unnecessary prefix from prover ref

* add test coverage for fork choice behaviors and replay – once passing, blocker (2) is resolved

* fix: no fork replay on repeat for non-archive nodes, snap now behaves correctly

* rollup of pre-liveness check lock interactions

* ahead of tests, get the protobuf/metrics-related changes out so teams can prepare

* add test coverage for distributed lock behaviors – once passing, blocker (3) is resolved

* fix: blocker (3)

* Dev docs improvements (#445)

* Make install deps script more robust

* Improve testing instructions

* Worker node should stop upon OS SIGINT/SIGTERM signal (#447)

* move pebble close to Stop()

* move deferred Stop() to Start()

* add core id to worker stop log message

* create done os signal channel and stop worker upon message to it

---------

Co-authored-by: Cassandra Heart <7929478+CassOnMars@users.noreply.github.com>

---------

Co-authored-by: Daz <daz_the_corgi@proton.me>
Co-authored-by: Black Swan <3999712+blacks1ne@users.noreply.github.com>
2025-10-23 01:03:06 -05:00
Black Swan
73872da86c
enhance clarity behind worker count calculation (#444) 2025-10-06 21:19:17 -05:00
Black Swan
66b89e3f6e
ARCHITECTURE.md enhancements (#443)
* `main process` => `master process` terminology alignment

* update worker spawning details
2025-10-06 21:17:17 -05:00
692 changed files with 138671 additions and 24998 deletions


@@ -1,3 +1,22 @@
 # Use a custom docker image name
 # Default: quilibrium
 QUILIBRIUM_IMAGE_NAME=
+# Use a custom P2P port.
+# Default: 8336
+QUILIBRIUM_P2P_PORT=
+# Use a custom gRPC port.
+# Default: 8337
+QUILIBRIUM_GRPC_PORT=
+# Use a custom REST port.
+# Default: 8338
+QUILIBRIUM_REST_PORT=
+# The public DNS name or IP address for this Quilibrium node.
+NODE_PUBLIC_NAME=
+# Use a custom configuration directory.
+# Default: .config
+QUILIBRIUM_CONFIG_DIR=


@@ -4,7 +4,7 @@
 Quilibrium is a distributed protocol that leverages advanced cryptographic
 techniques including multi-party computation (MPC) for privacy-preserving
-compute. The system operates on a sharded network architecture where the main
+compute. The system operates on a sharded network architecture where the master
 process runs global consensus while data worker processes run app-level
 consensus for their assigned shards, each maintaining their own hypergraph
 state, storage, and networking stack.
@@ -25,7 +25,7 @@ state, storage, and networking stack.
 └───────────────────────────────┬─────────────────────────────────┘
 ┌───────────────────────────────┴─────────────────────────────────┐
-Main Node Process (Core 0)
+Master Process (Core 0)
 │ ┌──────────────┐ ┌─────────────┐ ┌─────────────────────┐ │
 │ │ Global │ │ Global │ │ P2P Network │ │
 │ │ Consensus │ │ Execution │ │ (BlossomSub) │ │
@@ -62,7 +62,7 @@ multi-process architecture where each process has its own complete stack.
 **Process Types:**
-##### Main Process (Core 0)
+##### Master Process (Core 0)
 Runs the global consensus and coordination:
 - **Global Consensus Engine**: System-wide consensus and coordination
 - **Global Execution Engine**: System-level operations
@@ -81,7 +81,7 @@ Each worker manages specific shards:
 - **Shard Hypergraph State**: Shard-specific state management
 **Application Layer** (`app/`):
-- `Node`: Main node implementation (runs in main process)
+- `Node`: Main node implementation (runs in master process)
 - `DataWorkerNode`: Worker node implementation (runs in worker processes)
 **Key Subsystems:**
@@ -90,7 +90,7 @@ Each worker manages specific shards:
 Two distinct consensus implementations:
-**Global Consensus** (`node/global/`) - Main Process:
+**Global Consensus** (`node/global/`) - Master Process:
 - System-wide coordination
 - Cross-shard transaction ordering
 - Global state transitions
@@ -113,7 +113,7 @@ Two distinct consensus implementations:
 Distribution across processes:
-**Global Execution Engine** - Main Process:
+**Global Execution Engine** - Master Process:
 - System-wide operations
 - Cross-shard coordination
 - Global state updates
@@ -125,14 +125,14 @@ Distribution across processes:
 ##### P2P Networking (`node/p2p/`)
-Each process (main and workers) has its own complete P2P stack:
+Each process (master and workers) has its own complete P2P stack:
 - **BlossomSub**: Custom pub/sub protocol
 - **Peer Management**: Discovery and connections
 - **Public Channels**: Point to point authenticated message routing
 - **Private Channels**: Onion-routing authenticated message channels
 - **DHT Integration**: Distributed peer discovery
-Main process handles global bitmask, workers handle shard-specific bitmasks.
+Master process handles global bitmask, workers handle shard-specific bitmasks.
 ##### Storage Layer (`node/store/`)
@@ -148,7 +148,7 @@ Storage is partitioned by process responsibility (global vs shard).
 ##### Hypergraph (`hypergraph/`)
 Distributed graph structure:
-- **Global Hypergraph**: Maintained by main process
+- **Global Hypergraph**: Maintained by master process
 - **Shard Hypergraphs**: Maintained by respective workers
 - **CRDT Semantics**: Conflict-free updates
 - **Components**:
@@ -293,7 +293,7 @@ Benefits:
 #### 2. Sharded State Management
 State is partitioned across processes:
-- Main process: Global state and coordination
+- Master process: Global state and coordination
 - Worker processes: Shard-specific state
 - CRDT hypergraph for convergence
@@ -355,7 +355,7 @@ Inter-process communication design:
 ### System Boundaries
 #### Process Boundaries
-- Main process: Global operations
+- Master process: Global operations
 - Worker processes: Shard operations
 - Clear IPC interfaces between processes
@@ -365,7 +365,7 @@ Inter-process communication design:
 - Well-defined interaction protocols
 #### State Boundaries
-- Global state in main process
+- Global state in master process
 - Shard state in worker processes
 - CRDT merge protocols for synchronization
@@ -439,24 +439,25 @@ Inter-process communication design:
 **Q: Where is the main entry point?**
 - Main entry: `node/main.go`
-- The `main()` function handles initialization, spawns worker processes, and
-  manages the lifecycle
+- The `main()` function handles initialization, and manages the lifecycle
 **Q: How does the multi-process architecture work?**
-- Process spawning: `node/main.go` (see `spawnDataWorkers` function)
-- Main process runs on core 0, spawns workers for cores 1+ using `exec.Command`
+- Process spawning: `node/worker/manager.go` (see `spawnDataWorkers` function)
+- Master process runs on core 0, creates and starts Global Consensus Engine with
+  embedded Worker Manager
+- Upon start, Worker Manager spawns workers for cores 1+ using `exec.Command`
 - Each worker receives `--core` parameter to identify its role
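The spawning described in this FAQ answer might look roughly as follows. This is a hedged sketch, not the project's `spawnDataWorkers`: the binary path and the `workerArgs` helper are illustrative, and only the `--core` flag and the core-0-is-master convention come from the document.

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"runtime"
)

// workerArgs builds the flag list for one data worker
// (hypothetical helper; only --core is taken from the docs).
func workerArgs(core int) []string {
	return []string{fmt.Sprintf("--core=%d", core)}
}

func main() {
	// Core 0 is the master process; spawn one worker per remaining core.
	for core := 1; core < runtime.NumCPU(); core++ {
		cmd := exec.Command("./node/node", workerArgs(core)...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Start(); err != nil {
			// The real restart logic in spawnDataWorkers would retry;
			// here we only report the failure.
			fmt.Fprintln(os.Stderr, "spawn failed:", err)
			continue
		}
		go cmd.Wait() // reap the child; restart-on-crash would hook in here
	}
}
```

The per-core `--core` flag is what lets each spawned copy of the binary know it should run as a data worker rather than as the master.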
 **Q: How do I run the node?**
 - Build: `./node/build.sh` (creates static binary)
-- Run: `./node/node` (starts main process which spawns workers)
+- Run: `./node/node` (starts master process which spawns workers)
 - Configuration: `config.yml` or environment variables
 ### Architecture & Design
 **Q: How do processes communicate (IPC)?**
 - Uses IPC over private pubsub topics for structured message passing
-- Main process acts as coordinator, workers connect via IPC
+- Master process acts as coordinator, workers connect via IPC
 **Q: Where is consensus implemented?**
 - App Consensus (workers): `node/consensus/app/app_consensus_engine.go`
@@ -479,7 +480,7 @@ Inter-process communication design:
 - Each process has independent P2P stack
 **Q: How does cross-shard communication work?**
-- Through main process coordination via P2P
+- Through master process coordination via P2P
 - Topic-based routing in BlossomSub
 - Shard-aware peer connections
 - Majority of cross-shard communication requires only proof data, keeping shards
@@ -543,7 +544,7 @@ Inter-process communication design:
 - Docker: Multiple Dockerfiles for different scenarios
 **Q: What happens when a worker crashes?**
-- Main process monitors worker processes
+- Master process monitors worker processes
 - Automatic restart logic in `spawnDataWorkers`
 - Process isolation prevents cascade failures
@@ -572,5 +573,5 @@ Inter-process communication design:
 1. BlossomSub receives message
 2. Topic routing to appropriate handler
 3. Shard-specific messages to workers
-4. Global messages to main process
+4. Global messages to master process
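The four-step flow above boils down to a routing decision on the message's topic. A minimal sketch, assuming a hypothetical topic naming scheme (the `shard/` prefix is invented for illustration; the real code routes on bitmasks, per the P2P section):

```go
package main

import (
	"fmt"
	"strings"
)

// route decides which process should handle a BlossomSub message,
// based on its topic (hypothetical topic scheme for illustration).
func route(topic string) string {
	if strings.HasPrefix(topic, "shard/") {
		return "worker" // shard-specific messages go to a data worker
	}
	return "master" // global messages go to the master process
}

func main() {
	fmt.Println(route("global/frames"))
	fmt.Println(route("shard/42/frames"))
}
```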


@@ -2,9 +2,7 @@
 ## Testing
-Testing the [`vdf`](./vdf) and [`node`](./node) packages requires linking the
-[native VDF](./crates/vdf). The `test.sh` scripts in the respective directories
-help with this.
+See [TESTING.md](./TESTING.md) for testing instructions.
 ## Pull Requests

Cargo.lock (generated, 237 changes)

@@ -235,6 +235,22 @@ version = "0.5.1"
 source = "registry+https://github.com/rust-lang/crates.io-index"
 checksum = "f59bbe95d4e52a6398ec21238d31577f2b28a9d86807f06ca59d191d8440d0bb"
+[[package]]
+name = "bitcoin-internals"
+version = "0.2.0"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "9425c3bf7089c983facbae04de54513cce73b41c7f9ff8c845b54e7bc64ebbfb"
+[[package]]
+name = "bitcoin_hashes"
+version = "0.13.0"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "1930a4dabfebb8d7d9992db18ebe3ae2876f0a305fab206fd168df931ede293b"
+dependencies = [
+"bitcoin-internals",
+"hex-conservative",
+]
 [[package]]
 name = "bitflags"
 version = "1.3.2"
@@ -307,7 +323,7 @@ checksum = "8d696c370c750c948ada61c69a0ee2cbbb9c50b1019ddb86d9317157a99c2cae"
 [[package]]
 name = "bls48581"
-version = "0.1.0"
+version = "2.1.0"
 dependencies = [
 "criterion 0.4.0",
 "hex 0.4.3",
@@ -447,6 +463,7 @@ dependencies = [
 "base64 0.22.1",
 "criterion 0.4.0",
 "ed448-goldilocks-plus 0.11.2",
+"ed448-rust",
 "hex 0.4.3",
 "hkdf",
 "hmac",
@@ -576,7 +593,7 @@ version = "4.5.4"
 source = "registry+https://github.com/rust-lang/crates.io-index"
 checksum = "528131438037fd55894f62d6e9f068b8f45ac57ffa77517819645d10aed04f64"
 dependencies = [
-"heck 0.5.0",
+"heck",
 "proc-macro2",
 "quote",
 "syn 2.0.100",
@@ -881,10 +898,59 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
 checksum = "9ed9a281f7bc9b7576e61468ba615a66a5c8cfdff42420a70aa82701a3b1e292"
 dependencies = [
 "block-buffer 0.10.4",
+"const-oid",
 "crypto-common",
 "subtle",
 ]
+[[package]]
+name = "dkls23"
+version = "0.1.1"
+dependencies = [
+"bitcoin_hashes",
+"elliptic-curve",
+"getrandom 0.2.15",
+"hex 0.4.3",
+"k256",
+"p256",
+"rand 0.8.5",
+"serde",
+"serde_bytes",
+"sha3 0.10.8",
+]
+[[package]]
+name = "dkls23_ffi"
+version = "0.1.0"
+dependencies = [
+"criterion 0.5.1",
+"dkls23",
+"hex 0.4.3",
+"k256",
+"p256",
+"rand 0.8.5",
+"serde",
+"serde_json",
+"sha2 0.10.8",
+"thiserror 1.0.63",
+"uniffi",
+]
+[[package]]
+name = "ecdsa"
+version = "0.16.9"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "ee27f32b5c5292967d2d4a9d7f1e0b0aed2c15daded5a60300e4abb9d8020bca"
+dependencies = [
+"der",
+"digest 0.10.7",
+"elliptic-curve",
+"rfc6979",
+"serdect 0.2.0",
+"signature",
+"spki",
+]
 [[package]]
 name = "ed448-bulletproofs"
 version = "1.0.0"
@@ -1111,9 +1177,9 @@ checksum = "d2fabcfbdc87f4758337ca535fb41a6d701b65693ce38287d856d1674551ec9b"
 [[package]]
 name = "goblin"
-version = "0.6.1"
+version = "0.8.2"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "0d6b4de4a8eb6c46a8c77e1d3be942cb9a8bf073c22374578e5ba4b08ed0ff68"
+checksum = "1b363a30c165f666402fe6a3024d3bec7ebc898f96a4a23bd1c99f8dbf3f4f47"
 dependencies = [
 "log",
 "plain",
@@ -1153,12 +1219,6 @@ version = "0.12.3"
 source = "registry+https://github.com/rust-lang/crates.io-index"
 checksum = "8a9ee70c43aaf417c914396645a0fa852624801b24ebb7ae78fe8272889ac888"
-[[package]]
-name = "heck"
-version = "0.4.1"
-source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "95505c38b4572b2d910cecb0281560f54b440a19336cbbcb27bf6ce6adc6f5a8"
 [[package]]
 name = "heck"
 version = "0.5.0"
@@ -1192,6 +1252,12 @@ version = "0.4.3"
 source = "registry+https://github.com/rust-lang/crates.io-index"
 checksum = "7f24254aa9a54b5c858eaee2f5bccdb46aaf0e486a595ed5fd8f86ba55232a70"
+[[package]]
+name = "hex-conservative"
+version = "0.1.2"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "212ab92002354b4819390025006c897e8140934349e8635c9b077f47b4dcbd20"
 [[package]]
 name = "hkdf"
 version = "0.12.4"
@@ -1279,6 +1345,21 @@ dependencies = [
 "wasm-bindgen",
 ]
+[[package]]
+name = "k256"
+version = "0.13.4"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "f6e3919bbaa2945715f0bb6d3934a173d1e9a59ac23767fbaaef277265a7411b"
+dependencies = [
+"cfg-if",
+"ecdsa",
+"elliptic-curve",
+"once_cell",
+"serdect 0.2.0",
+"sha2 0.10.8",
+"signature",
+]
 [[package]]
 name = "keccak"
 version = "0.1.5"
@@ -1457,12 +1538,6 @@ version = "1.21.3"
 source = "registry+https://github.com/rust-lang/crates.io-index"
 checksum = "42f5e15c9953c5e4ccceeb2e7382a716482c34515315f7b03532b8b4e8393d2d"
-[[package]]
-name = "oneshot-uniffi"
-version = "0.1.6"
-source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "6c548d5c78976f6955d72d0ced18c48ca07030f7a1d4024529fedd7c1c01b29c"
 [[package]]
 name = "oorandom"
 version = "11.1.3"
@@ -1487,6 +1562,19 @@ version = "6.6.1"
 source = "registry+https://github.com/rust-lang/crates.io-index"
 checksum = "e2355d85b9a3786f481747ced0e0ff2ba35213a1f9bd406ed906554d7af805a1"
+[[package]]
+name = "p256"
+version = "0.13.2"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "c9863ad85fa8f4460f9c48cb909d38a0d689dba1f6f6988a5e3e0d31071bcd4b"
+dependencies = [
+"ecdsa",
+"elliptic-curve",
+"primeorder",
+"serdect 0.2.0",
+"sha2 0.10.8",
+]
 [[package]]
 name = "paste"
 version = "1.0.15"
@@ -1570,6 +1658,16 @@ version = "0.2.17"
 source = "registry+https://github.com/rust-lang/crates.io-index"
 checksum = "5b40af805b3121feab8a3c29f04d8ad262fa8e0561883e7653e024ae4479e6de"
+[[package]]
+name = "primeorder"
+version = "0.13.6"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "353e1ca18966c16d9deb1c69278edbc5f194139612772bd9537af60ac231e1e6"
+dependencies = [
+"elliptic-curve",
+"serdect 0.2.0",
+]
 [[package]]
 name = "proc-macro2"
 version = "1.0.94"
@@ -1709,6 +1807,16 @@ version = "0.8.3"
 source = "registry+https://github.com/rust-lang/crates.io-index"
 checksum = "adad44e29e4c806119491a7f06f03de4d1af22c3a680dd47f1e6e179439d1f56"
+[[package]]
+name = "rfc6979"
+version = "0.4.0"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "f8dd2a808d456c4a54e300a23e9f5a67e122c3024119acbfd73e3bf664491cb2"
+dependencies = [
+"hmac",
+"subtle",
+]
 [[package]]
 name = "rpm"
 version = "0.1.0"
@@ -1766,18 +1874,18 @@ dependencies = [
 
 [[package]]
 name = "scroll"
-version = "0.11.0"
+version = "0.12.0"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "04c565b551bafbef4157586fa379538366e4385d42082f255bfd96e4fe8519da"
+checksum = "6ab8598aa408498679922eff7fa985c25d58a90771bd6be794434c5277eab1a6"
 dependencies = [
  "scroll_derive",
 ]
 
 [[package]]
 name = "scroll_derive"
-version = "0.11.1"
+version = "0.12.1"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "1db149f81d46d2deba7cd3c50772474707729550221e69588478ebf9ada425ae"
+checksum = "1783eabc414609e28a5ba76aee5ddd52199f7107a0b24c2e9746a1ecc34a683d"
 dependencies = [
  "proc-macro2",
  "quote",
@@ -1810,13 +1918,24 @@ dependencies = [
 
 [[package]]
 name = "serde"
-version = "1.0.219"
+version = "1.0.228"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "5f0e2c6ed6606019b4e29e69dbaba95b11854410e5347d525002456dbbb786b6"
+checksum = "9a8e94ea7f378bd32cbbd37198a4a91436180c5bb472411e48b5ec2e2124ae9e"
 dependencies = [
+ "serde_core",
  "serde_derive",
 ]
 
+[[package]]
+name = "serde_bytes"
+version = "0.11.19"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "a5d440709e79d88e51ac01c4b72fc6cb7314017bb7da9eeff678aa94c10e3ea8"
+dependencies = [
+ "serde",
+ "serde_core",
+]
+
 [[package]]
 name = "serde_cbor"
 version = "0.11.2"
@@ -1828,10 +1947,19 @@ dependencies = [
 ]
 
 [[package]]
-name = "serde_derive"
-version = "1.0.219"
+name = "serde_core"
+version = "1.0.228"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "5b0276cf7f2c73365f7157c8123c21cd9a50fbbd844757af28ca1f5925fc2a00"
+checksum = "41d385c7d4ca58e59fc732af25c3983b67ac852c1a25000afe1175de458b67ad"
+dependencies = [
+ "serde_derive",
+]
+
+[[package]]
+name = "serde_derive"
+version = "1.0.228"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "d540f220d3187173da220f885ab66608367b6574e925011a9353e4badda91d79"
 dependencies = [
  "proc-macro2",
  "quote",
@@ -1962,6 +2090,12 @@ version = "0.3.11"
 source = "registry+https://github.com/rust-lang/crates.io-index"
 checksum = "38b58827f4464d87d377d175e90bf58eb00fd8716ff0a62f80356b5e61555d0d"
+
+[[package]]
+name = "smawk"
+version = "0.3.2"
+source = "registry+https://github.com/rust-lang/crates.io-index"
+checksum = "b7c388c1b5e93756d0c740965c41e8822f866621d41acbdf6336a6a168f8840c"
 
 [[package]]
 name = "spki"
 version = "0.7.3"
@@ -2032,6 +2166,9 @@ name = "textwrap"
 version = "0.16.1"
 source = "registry+https://github.com/rust-lang/crates.io-index"
 checksum = "23d434d3f8967a09480fb04132ebe0a3e088c173e6d0ee7897abbdf4eab0f8b9"
+dependencies = [
+ "smawk",
+]
 
 [[package]]
 name = "thiserror"
@@ -2121,12 +2258,13 @@ checksum = "7dd6e30e90baa6f72411720665d41d89b9a3d039dc45b8faea1ddd07f617f6af"
 
 [[package]]
 name = "uniffi"
-version = "0.25.3"
+version = "0.28.3"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "21345172d31092fd48c47fd56c53d4ae9e41c4b1f559fb8c38c1ab1685fd919f"
+checksum = "4cb08c58c7ed7033150132febe696bef553f891b1ede57424b40d87a89e3c170"
 dependencies = [
  "anyhow",
  "camino",
+ "cargo_metadata",
  "clap 4.5.4",
  "uniffi_bindgen",
  "uniffi_build",
@@ -2136,33 +2274,32 @@ dependencies = [
 
 [[package]]
 name = "uniffi_bindgen"
-version = "0.25.3"
+version = "0.28.3"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "fd992f2929a053829d5875af1eff2ee3d7a7001cb3b9a46cc7895f2caede6940"
+checksum = "cade167af943e189a55020eda2c314681e223f1e42aca7c4e52614c2b627698f"
 dependencies = [
  "anyhow",
  "askama",
  "camino",
  "cargo_metadata",
+ "clap 4.5.4",
  "fs-err",
  "glob",
  "goblin",
- "heck 0.4.1",
+ "heck",
  "once_cell",
  "paste",
  "serde",
- "textwrap 0.16.1",
  "toml",
  "uniffi_meta",
- "uniffi_testing",
  "uniffi_udl",
 ]
 
 [[package]]
 name = "uniffi_build"
-version = "0.25.3"
+version = "0.28.3"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "001964dd3682d600084b3aaf75acf9c3426699bc27b65e96bb32d175a31c74e9"
+checksum = "4c7cf32576e08104b7dc2a6a5d815f37616e66c6866c2a639fe16e6d2286b75b"
 dependencies = [
  "anyhow",
  "camino",
@@ -2171,9 +2308,9 @@ dependencies = [
 
 [[package]]
 name = "uniffi_checksum_derive"
-version = "0.25.3"
+version = "0.28.3"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "55137c122f712d9330fd985d66fa61bdc381752e89c35708c13ce63049a3002c"
+checksum = "802d2051a700e3ec894c79f80d2705b69d85844dafbbe5d1a92776f8f48b563a"
 dependencies = [
  "quote",
  "syn 2.0.100",
@@ -2181,25 +2318,23 @@ dependencies = [
 
 [[package]]
 name = "uniffi_core"
-version = "0.25.3"
+version = "0.28.3"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "6121a127a3af1665cd90d12dd2b3683c2643c5103281d0fed5838324ca1fad5b"
+checksum = "bc7687007d2546c454d8ae609b105daceb88175477dac280707ad6d95bcd6f1f"
 dependencies = [
  "anyhow",
  "bytes",
- "camino",
  "log",
  "once_cell",
- "oneshot-uniffi",
  "paste",
  "static_assertions",
 ]
 
 [[package]]
 name = "uniffi_macros"
-version = "0.25.3"
+version = "0.28.3"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "11cf7a58f101fcedafa5b77ea037999b88748607f0ef3a33eaa0efc5392e92e4"
+checksum = "12c65a5b12ec544ef136693af8759fb9d11aefce740fb76916721e876639033b"
 dependencies = [
  "bincode",
  "camino",
@@ -2210,15 +2345,14 @@ dependencies = [
  "serde",
  "syn 2.0.100",
  "toml",
- "uniffi_build",
  "uniffi_meta",
 ]
 
 [[package]]
 name = "uniffi_meta"
-version = "0.25.3"
+version = "0.28.3"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "71dc8573a7b1ac4b71643d6da34888273ebfc03440c525121f1b3634ad3417a2"
+checksum = "4a74ed96c26882dac1ca9b93ca23c827e284bacbd7ec23c6f0b0372f747d59e4"
 dependencies = [
  "anyhow",
  "bytes",
@@ -2228,9 +2362,9 @@ dependencies = [
 
 [[package]]
 name = "uniffi_testing"
-version = "0.25.3"
+version = "0.28.3"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "118448debffcb676ddbe8c5305fb933ab7e0123753e659a71dc4a693f8d9f23c"
+checksum = "6a6f984f0781f892cc864a62c3a5c60361b1ccbd68e538e6c9fbced5d82268ac"
 dependencies = [
  "anyhow",
  "camino",
@@ -2241,11 +2375,12 @@ dependencies = [
 
 [[package]]
 name = "uniffi_udl"
-version = "0.25.3"
+version = "0.28.3"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "889edb7109c6078abe0e53e9b4070cf74a6b3468d141bdf5ef1bd4d1dc24a1c3"
+checksum = "037820a4cfc4422db1eaa82f291a3863c92c7d1789dc513489c36223f9b4cdfc"
 dependencies = [
  "anyhow",
+ "textwrap 0.16.1",
  "uniffi_meta",
  "uniffi_testing",
  "weedle2",
@@ -2408,9 +2543,9 @@ dependencies = [
 
 [[package]]
 name = "weedle2"
-version = "4.0.0"
+version = "5.0.0"
 source = "registry+https://github.com/rust-lang/crates.io-index"
-checksum = "2e79c5206e1f43a2306fd64bdb95025ee4228960f2e6c5a8b173f3caaf807741"
+checksum = "998d2c24ec099a87daf9467808859f9d82b61f1d9c9701251aea037f514eae0e"
 dependencies = [
  "nom",
 ]


@@ -25,7 +25,8 @@ members = [
  "crates/rpm",
  "crates/bulletproofs",
  "crates/verenc",
- "crates/ferret"
+ "crates/ferret",
+ "crates/dkls23_ffi"
 ]
 
 [profile.release]


@@ -1,53 +0,0 @@
# Quilibrium Docker Instructions
## Build
The only requirements are `git` (to check out the repository) and Docker (to build the image).
Go does not have to be installed; the image build uses a build stage that provides the
correct Go environment and compiles the node with a single command.
In the repository root folder, where the [Dockerfile.source](Dockerfile.source) file is, build the docker image:
```shell
docker build -f Dockerfile.source --build-arg GIT_COMMIT=$(git log -1 --format=%h) -t quilibrium -t quilibrium:1.4.16 .
```
Use the latest version instead of `1.4.16`.
The resulting image is small and safe. It is based on Alpine Linux and contains only the Quilibrium
node binary, with no source code and no Go toolchain. The image also includes the `grpcurl` tool,
which can be used to query the gRPC interface.
### Task
You can also use the [Task](https://taskfile.dev/) tool, a simple build tool that takes care of
extracting parameters and building the image. The tasks are all defined in [Taskfile.yaml](Taskfile.yaml).
You can optionally create an `.env` file in the same repository root folder to override specific
parameters. Right now only one optional env var is supported, `QUILIBRIUM_IMAGE_NAME`, which changes
the default image name from `quilibrium` to something else. If you push your images to GitHub, then
you have to follow the GitHub naming convention and use a name like `ghcr.io/mscurtescu/ceremonyclient`.
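For example, a GitHub-convention image name can be composed from an owner and repository name
(the owner and repo values below are illustrative placeholders):

```shell
# Hypothetical sketch: build a GHCR image name from placeholder values.
OWNER="mscurtescu"
REPO="ceremonyclient"
QUILIBRIUM_IMAGE_NAME="ghcr.io/${OWNER}/${REPO}"
echo "${QUILIBRIUM_IMAGE_NAME}"
```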
Below are example interactions with `Task`.
The node version is extracted from [node/main.go](node/main.go). This version string is used to tag
the image. The git repo, branch and commit are read through the `git` command and depend on the current
state of your working directory (which branch you are on and at which commit). These last three values
are used to label the image.
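As a rough sketch of this kind of extraction, a quoted version string can be pulled out of a Go
source file with standard tools. The sample file, constant name, and pattern below are assumptions
for illustration, not the exact logic Taskfile.yaml uses:

```shell
# Create a small sample resembling a Go file with a version constant
# (illustrative only; not the real node/main.go).
cat > /tmp/main_sample.go <<'EOF'
package main

const nodeVersion = "2.1.0"
EOF

# Grab the quoted dotted-number string and strip the quotes.
version=$(grep -o '"[0-9.]*"' /tmp/main_sample.go | tr -d '"')
echo "$version"   # prints: 2.1.0
```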
List tasks:
```shell
task -l
```
Show which parameters (image name, version, etc.) will be used:
```shell
task status
```
Build the image (aka run the `build` task):
```shell
task build
```
## Run
To run a Quilibrium node using the docker image, follow the instructions in the [docker](docker) subfolder.


@@ -1,217 +0,0 @@
# syntax=docker.io/docker/dockerfile:1.7-labs
FROM --platform=${TARGETPLATFORM} ubuntu:24.04 AS base
ENV PATH="${PATH}:/root/.cargo/bin/"
ARG TARGETOS
ARG TARGETARCH
# Install GMP 6.2 (6.3, which macOS uses, is only available in Debian unstable)
RUN apt-get update && apt-get install -y \
build-essential \
curl \
git \
cmake \
libgmp-dev \
libmpfr-dev \
libmpfr6 \
wget \
m4 \
pkg-config \
gcc \
g++ \
make \
autoconf \
automake \
libtool \
libssl-dev \
python3 \
python-is-python3 \
&& rm -rf /var/lib/apt/lists/*
ARG GO_VERSION=1.23.5
RUN apt update && apt install -y wget && \
ARCH=$(dpkg --print-architecture) && \
case ${ARCH} in \
amd64) GOARCH=amd64 ;; \
arm64) GOARCH=arm64 ;; \
*) echo "Unsupported architecture: ${ARCH}" && exit 1 ;; \
esac && \
wget https://go.dev/dl/go${GO_VERSION}.linux-${GOARCH}.tar.gz && \
rm -rf /usr/local/go && \
tar -C /usr/local -xzf go${GO_VERSION}.linux-${GOARCH}.tar.gz && \
rm go${GO_VERSION}.linux-${GOARCH}.tar.gz
ENV PATH=$PATH:/usr/local/go/bin
RUN git clone https://github.com/flintlib/flint.git && \
cd flint && \
git checkout flint-3.0 && \
./bootstrap.sh && \
./configure \
--prefix=/usr/local \
--with-gmp=/usr/local \
--with-mpfr=/usr/local \
--enable-static \
--disable-shared \
CFLAGS="-O3" && \
make && \
make install && \
cd .. && \
rm -rf flint
COPY docker/rustup-init.sh /opt/rustup-init.sh
RUN /opt/rustup-init.sh -y --profile minimal
# Install uniffi-bindgen-go
RUN cargo install uniffi-bindgen-go --git https://github.com/NordSecurity/uniffi-bindgen-go --tag v0.2.1+v0.25.0
FROM base AS build
ENV GOEXPERIMENT=arenas
ENV QUILIBRIUM_SIGNATURE_CHECK=false
# Install grpcurl before building the node and client
# as to avoid needing to redo it on rebuilds
RUN go install github.com/fullstorydev/grpcurl/cmd/grpcurl@latest
WORKDIR /opt/ceremonyclient
# Copy everything except node and client so as to avoid
# invalidating the cache at this point on client or node rebuilds
COPY --exclude=node \
--exclude=client \
--exclude=sidecar . .
RUN python emp-install.py --install --tool --ot
RUN cd emp-tool && sed -i 's/add_library(${NAME} SHARED ${sources})/add_library(${NAME} STATIC ${sources})/g' CMakeLists.txt && mkdir build && cd build && cmake .. -DCMAKE_INSTALL_PREFIX=/usr/local && cd .. && make && make install && cd ..
RUN cd emp-ot && mkdir build && cd build && cmake .. -DCMAKE_INSTALL_PREFIX=/usr/local && cd .. && make && make install && cd ..
RUN go mod download
## Generate Rust bindings for channel
WORKDIR /opt/ceremonyclient/channel
RUN go mod download
RUN ./generate.sh
## Generate Rust bindings for VDF
WORKDIR /opt/ceremonyclient/vdf
RUN go mod download
RUN ./generate.sh
## Generate Rust bindings for Ferret
WORKDIR /opt/ceremonyclient/ferret
RUN go mod download
RUN ./generate.sh
## Generate Rust bindings for BLS48581
WORKDIR /opt/ceremonyclient/bls48581
RUN go mod download
RUN ./generate.sh
## Generate Rust bindings for RPM
WORKDIR /opt/ceremonyclient/rpm
RUN go mod download
RUN ./generate.sh
## Generate Rust bindings for VerEnc
WORKDIR /opt/ceremonyclient/verenc
RUN go mod download
RUN ./generate.sh
## Generate Rust bindings for Bulletproofs
WORKDIR /opt/ceremonyclient/bulletproofs
RUN go mod download
RUN ./generate.sh
FROM build AS build-node
# Build and install the node
COPY ./node /opt/ceremonyclient/node
WORKDIR /opt/ceremonyclient/node
ENV GOPROXY=direct
RUN ./build.sh && cp node /usr/bin
FROM build AS build-qclient
ARG TARGETOS
ARG TARGETARCH
# Build and install qclient
COPY ./node /opt/ceremonyclient/node
WORKDIR /opt/ceremonyclient/node
RUN go mod download
COPY ./client /opt/ceremonyclient/client
WORKDIR /opt/ceremonyclient/client
RUN go mod download
ARG BINARIES_DIR=/opt/ceremonyclient/target/release
RUN GOOS=${TARGETOS} GOARCH=${TARGETARCH} ./build.sh -o qclient
RUN cp qclient /usr/bin
# Allows exporting single binary
FROM scratch AS node
COPY --from=build-node /usr/bin/node /node
ENTRYPOINT [ "/node" ]
# Allows exporting single binary
FROM scratch AS qclient-unix
COPY --from=build-qclient /usr/bin/qclient /qclient
ENTRYPOINT [ "/qclient" ]
FROM qclient-unix AS qclient-linux
FROM qclient-unix AS qclient-darwin
FROM qclient-${TARGETOS} AS qclient
FROM ubuntu:24.04
RUN apt-get update && apt-get install libflint-dev -y
ARG NODE_VERSION
ARG GIT_REPO
ARG GIT_BRANCH
ARG GIT_COMMIT
ENV GOEXPERIMENT=arenas
LABEL org.opencontainers.image.title="Quilibrium Network Node"
LABEL org.opencontainers.image.description="Quilibrium is a decentralized alternative to platform as a service providers."
LABEL org.opencontainers.image.version=$NODE_VERSION
LABEL org.opencontainers.image.vendor=Quilibrium
LABEL org.opencontainers.image.url=https://quilibrium.com/
LABEL org.opencontainers.image.documentation=https://quilibrium.com/docs
LABEL org.opencontainers.image.source=$GIT_REPO
LABEL org.opencontainers.image.ref.name=$GIT_BRANCH
LABEL org.opencontainers.image.revision=$GIT_COMMIT
RUN apt-get update && apt-get install -y ca-certificates
COPY --from=build-node /usr/bin/node /usr/local/bin
COPY --from=build-qclient /opt/ceremonyclient/client/qclient /usr/local/bin
WORKDIR /root
ENTRYPOINT ["node"]


@@ -1,141 +0,0 @@
# syntax=docker.io/docker/dockerfile:1.7-labs
FROM --platform=${TARGETPLATFORM} ubuntu:24.04 AS base
ENV PATH="${PATH}:/root/.cargo/bin/"
ARG TARGETOS
ARG TARGETARCH
# Install GMP 6.2 (6.3, which macOS uses, is only available in Debian unstable)
RUN --mount=type=cache,target=/usr/local/,id=usr-local-${TARGETOS}-${TARGETARCH} \
--mount=type=cache,target=/var/cache/apt,sharing=locked \
--mount=type=cache,target=/var/lib/apt,sharing=locked \
apt-get update && apt-get install -y \
build-essential \
curl \
git \
cmake \
libgmp-dev \
libmpfr-dev \
libmpfr6 \
wget \
m4 \
pkg-config \
gcc \
g++ \
make \
autoconf \
automake \
libtool \
libssl-dev \
python3 \
python-is-python3 \
&& rm -rf /var/lib/apt/lists/*
ARG GO_VERSION=1.23.5
RUN --mount=type=cache,target=/usr/local,id=usr-local-${TARGETOS}-${TARGETARCH} \
apt update && apt install -y wget && \
ARCH=$(dpkg --print-architecture) && \
case ${ARCH} in \
amd64) GOARCH=amd64 ;; \
arm64) GOARCH=arm64 ;; \
*) echo "Unsupported architecture: ${ARCH}" && exit 1 ;; \
esac && \
wget https://go.dev/dl/go${GO_VERSION}.linux-${GOARCH}.tar.gz && \
rm -rf /usr/local/go && \
tar -C /usr/local -xzf go${GO_VERSION}.linux-${GOARCH}.tar.gz && \
rm go${GO_VERSION}.linux-${GOARCH}.tar.gz
ENV PATH=$PATH:/usr/local/go/bin
RUN --mount=type=cache,target=/usr/local,id=usr-local-${TARGETOS}-${TARGETARCH} \
git clone https://github.com/flintlib/flint.git && \
cd flint && \
git checkout flint-3.0 && \
./bootstrap.sh && \
./configure \
--prefix=/usr/local \
--with-gmp=/usr/local \
--with-mpfr=/usr/local \
--enable-static \
--disable-shared \
CFLAGS="-O3" && \
make && \
make install && \
cd .. && \
rm -rf flint
COPY docker/rustup-init.sh /opt/rustup-init.sh
RUN --mount=type=cache,target=/root/.cargo,id=cargo-${TARGETOS}-${TARGETARCH} \
--mount=type=cache,target=/opt/ceremonyclient/target/,id=target-${TARGETOS}-${TARGETARCH} \
--mount=type=cache,target=/usr/local/,id=usr-local-${TARGETOS}-${TARGETARCH} \
--mount=type=cache,target=/go/pkg/mod,id=go-mod-${TARGETOS}-${TARGETARCH} \
/opt/rustup-init.sh -y --profile minimal
# Install uniffi-bindgen-go
RUN --mount=type=cache,target=/root/.cargo,id=cargo-${TARGETOS}-${TARGETARCH} \
--mount=type=cache,target=/opt/ceremonyclient/target/,id=target-${TARGETOS}-${TARGETARCH} \
--mount=type=cache,target=/usr/local/,id=usr-local-${TARGETOS}-${TARGETARCH} \
--mount=type=cache,target=/go/pkg/mod,id=go-mod-${TARGETOS}-${TARGETARCH} \
cargo install uniffi-bindgen-go --git https://github.com/NordSecurity/uniffi-bindgen-go --tag v0.2.1+v0.25.0
FROM base AS build
ENV QUILIBRIUM_SIGNATURE_CHECK=false
WORKDIR /opt/ceremonyclient
# Copy everything except node and client so as to avoid
# invalidating the cache at this point on client or node rebuilds
COPY --exclude=node \
--exclude=client \
--exclude=sidecar . .
## Generate Rust bindings for VDF
WORKDIR /opt/ceremonyclient/vdf
RUN --mount=type=cache,target=/usr/local/,id=usr-local-${TARGETOS}-${TARGETARCH} \
--mount=type=cache,target=/go/pkg/mod,id=go-mod-${TARGETOS}-${TARGETARCH} \
--mount=type=cache,target=/root/.cache/go-build,id=go-build-${TARGETOS}-${TARGETARCH} \
go mod download
RUN --mount=type=cache,target=/root/.cargo,id=cargo-${TARGETOS}-${TARGETARCH} \
--mount=type=cache,target=/opt/ceremonyclient/target/,id=target-${TARGETOS}-${TARGETARCH} \
--mount=type=cache,target=/usr/local/,id=usr-local-${TARGETOS}-${TARGETARCH} \
--mount=type=cache,target=/go/pkg/mod,id=go-mod-${TARGETOS}-${TARGETARCH} \
--mount=type=cache,target=/root/.cache/go-build,id=go-build-${TARGETOS}-${TARGETARCH} \
./generate.sh
FROM build AS build-node
WORKDIR /opt/ceremonyclient/vdf
RUN --mount=type=cache,target=/usr/local/,id=usr-local-${TARGETOS}-${TARGETARCH} \
--mount=type=cache,target=/opt/ceremonyclient/target/,id=target-${TARGETOS}-${TARGETARCH} \
--mount=type=cache,target=/go/pkg/mod,id=go-mod-${TARGETOS}-${TARGETARCH} \
--mount=type=cache,target=/root/.cache/go-build,id=go-build-${TARGETOS}-${TARGETARCH} \
./build-test.sh -o vdf-test perftest/main.go && cp vdf-test /usr/bin
# Allows exporting single binary
FROM scratch AS vdf
COPY --from=build-node /usr/bin/vdf-test /vdf-test
ENTRYPOINT [ "/vdf-test" ]
LABEL org.opencontainers.image.title="Quilibrium Network Node"
LABEL org.opencontainers.image.description="Quilibrium is a decentralized alternative to platform as a service providers."
LABEL org.opencontainers.image.version=$NODE_VERSION
LABEL org.opencontainers.image.vendor=Quilibrium
LABEL org.opencontainers.image.url=https://quilibrium.com/
LABEL org.opencontainers.image.documentation=https://quilibrium.com/docs
LABEL org.opencontainers.image.source=$GIT_REPO
LABEL org.opencontainers.image.ref.name=$GIT_BRANCH
LABEL org.opencontainers.image.revision=$GIT_COMMIT
COPY --from=build-node /usr/bin/vdf-test /usr/local/vdf-test
WORKDIR /root
ENTRYPOINT ["vdf-test"]


@@ -1,143 +0,0 @@
# syntax=docker.io/docker/dockerfile:1.7-labs
FROM --platform=${TARGETPLATFORM} ubuntu:24.04 AS base-avx512
ENV PATH="${PATH}:/root/.cargo/bin/"
ARG TARGETOS
ARG TARGETARCH
# Install GMP 6.2 (6.3, which macOS uses, is only available in Debian unstable)
RUN --mount=type=cache,target=/usr/local/,id=usr-local-${TARGETOS}-${TARGETARCH}-avx512 \
--mount=type=cache,target=/var/cache/apt,sharing=locked \
--mount=type=cache,target=/var/lib/apt,sharing=locked \
apt-get update && apt-get install -y \
build-essential \
curl \
git \
cmake \
libgmp-dev \
libmpfr-dev \
libmpfr6 \
wget \
m4 \
pkg-config \
gcc \
g++ \
make \
autoconf \
automake \
libtool \
libssl-dev \
python3 \
python-is-python3 \
&& rm -rf /var/lib/apt/lists/*
ARG GO_VERSION=1.23.5
RUN --mount=type=cache,target=/usr/local,id=usr-local-${TARGETOS}-${TARGETARCH}-avx512 \
apt update && apt install -y wget && \
ARCH=$(dpkg --print-architecture) && \
case ${ARCH} in \
amd64) GOARCH=amd64 ;; \
arm64) GOARCH=arm64 ;; \
*) echo "Unsupported architecture: ${ARCH}" && exit 1 ;; \
esac && \
wget https://go.dev/dl/go${GO_VERSION}.linux-${GOARCH}.tar.gz && \
rm -rf /usr/local/go && \
tar -C /usr/local -xzf go${GO_VERSION}.linux-${GOARCH}.tar.gz && \
rm go${GO_VERSION}.linux-${GOARCH}.tar.gz
ENV PATH=$PATH:/usr/local/go/bin
# Build FLINT from source with AVX-512
RUN --mount=type=cache,target=/usr/local,id=usr-local-${TARGETOS}-${TARGETARCH}-avx512 \
git clone https://github.com/flintlib/flint.git && \
cd flint && \
git checkout flint-3.0 && \
./bootstrap.sh && \
./configure \
--prefix=/usr/local \
--with-gmp=/usr/local \
--with-mpfr=/usr/local \
--enable-avx512 \
--enable-static \
--disable-shared \
CFLAGS="-march=skylake-avx512 -mtune=skylake-avx512 -O3" && \
make -j$(nproc) && \
make install && \
cd .. && \
rm -rf flint
COPY docker/rustup-init.sh /opt/rustup-init.sh
RUN --mount=type=cache,target=/root/.cargo,id=cargo-${TARGETOS}-${TARGETARCH}-avx512 \
--mount=type=cache,target=/opt/ceremonyclient/target/,id=target-${TARGETOS}-${TARGETARCH}-avx512 \
--mount=type=cache,target=/usr/local/,id=usr-local-${TARGETOS}-${TARGETARCH}-avx512 \
--mount=type=cache,target=/go/pkg/mod,id=go-mod-${TARGETOS}-${TARGETARCH}-avx512 \
/opt/rustup-init.sh -y --profile minimal
# Install uniffi-bindgen-go
RUN --mount=type=cache,target=/root/.cargo,id=cargo-${TARGETOS}-${TARGETARCH}-avx512 \
--mount=type=cache,target=/opt/ceremonyclient/target/,id=target-${TARGETOS}-${TARGETARCH}-avx512 \
--mount=type=cache,target=/usr/local/,id=usr-local-${TARGETOS}-${TARGETARCH}-avx512 \
--mount=type=cache,target=/go/pkg/mod,id=go-mod-${TARGETOS}-${TARGETARCH}-avx512 \
cargo install uniffi-bindgen-go --git https://github.com/NordSecurity/uniffi-bindgen-go --tag v0.2.1+v0.25.0
FROM base-avx512 AS build-avx512
ENV QUILIBRIUM_SIGNATURE_CHECK=false
WORKDIR /opt/ceremonyclient
# Copy everything except node and client so as to avoid
# invalidating the cache at this point on client or node rebuilds
COPY --exclude=node \
--exclude=client \
--exclude=sidecar . .
## Generate Rust bindings for VDF
WORKDIR /opt/ceremonyclient/vdf
RUN --mount=type=cache,target=/usr/local/,id=usr-local-${TARGETOS}-${TARGETARCH}-avx512 \
--mount=type=cache,target=/go/pkg/mod,id=go-mod-${TARGETOS}-${TARGETARCH}-avx512 \
--mount=type=cache,target=/root/.cache/go-build,id=go-build-${TARGETOS}-${TARGETARCH}-avx512 \
go mod download
RUN --mount=type=cache,target=/root/.cargo,id=cargo-${TARGETOS}-${TARGETARCH}-avx512 \
--mount=type=cache,target=/opt/ceremonyclient/target/,id=target-${TARGETOS}-${TARGETARCH}-avx512 \
--mount=type=cache,target=/usr/local/,id=usr-local-${TARGETOS}-${TARGETARCH}-avx512 \
--mount=type=cache,target=/go/pkg/mod,id=go-mod-${TARGETOS}-${TARGETARCH}-avx512 \
--mount=type=cache,target=/root/.cache/go-build,id=go-build-${TARGETOS}-${TARGETARCH}-avx512 \
./generate.sh
FROM build-avx512 AS build-node-avx512
WORKDIR /opt/ceremonyclient/vdf
RUN --mount=type=cache,target=/usr/local/,id=usr-local-${TARGETOS}-${TARGETARCH}-avx512 \
--mount=type=cache,target=/opt/ceremonyclient/target/,id=target-${TARGETOS}-${TARGETARCH}-avx512 \
--mount=type=cache,target=/go/pkg/mod,id=go-mod-${TARGETOS}-${TARGETARCH}-avx512 \
--mount=type=cache,target=/root/.cache/go-build,id=go-build-${TARGETOS}-${TARGETARCH}-avx512 \
./build-test.sh -o vdf-test perftest/main.go && cp vdf-test /usr/bin
# Allows exporting single binary
FROM scratch AS vdf-avx512
COPY --from=build-node-avx512 /usr/bin/vdf-test /vdf-test
ENTRYPOINT [ "/vdf-test" ]
LABEL org.opencontainers.image.title="Quilibrium Network Node"
LABEL org.opencontainers.image.description="Quilibrium is a decentralized alternative to platform as a service providers."
LABEL org.opencontainers.image.version=$NODE_VERSION
LABEL org.opencontainers.image.vendor=Quilibrium
LABEL org.opencontainers.image.url=https://quilibrium.com/
LABEL org.opencontainers.image.documentation=https://quilibrium.com/docs
LABEL org.opencontainers.image.source=$GIT_REPO
LABEL org.opencontainers.image.ref.name=$GIT_BRANCH
LABEL org.opencontainers.image.revision=$GIT_COMMIT
COPY --from=build-node-avx512 /usr/bin/vdf-test /usr/local/vdf-test
WORKDIR /root
ENTRYPOINT ["vdf-test"]


@@ -1,143 +0,0 @@
# syntax=docker.io/docker/dockerfile:1.7-labs
FROM --platform=${TARGETPLATFORM} ubuntu:24.04 AS base-zen3
ENV PATH="${PATH}:/root/.cargo/bin/"
ARG TARGETOS
ARG TARGETARCH
# Install GMP 6.2 (6.3, which macOS uses, is only available in Debian unstable)
RUN --mount=type=cache,target=/usr/local/,id=usr-local-${TARGETOS}-${TARGETARCH}-zen3 \
--mount=type=cache,target=/var/cache/apt,sharing=locked \
--mount=type=cache,target=/var/lib/apt,sharing=locked \
apt-get update && apt-get install -y \
build-essential \
curl \
git \
cmake \
libgmp-dev \
libmpfr-dev \
libmpfr6 \
wget \
m4 \
pkg-config \
gcc \
g++ \
make \
autoconf \
automake \
libtool \
libssl-dev \
python3 \
python-is-python3 \
&& rm -rf /var/lib/apt/lists/*
ARG GO_VERSION=1.23.5
RUN --mount=type=cache,target=/usr/local,id=usr-local-${TARGETOS}-${TARGETARCH}-zen3 \
apt update && apt install -y wget && \
ARCH=$(dpkg --print-architecture) && \
case ${ARCH} in \
amd64) GOARCH=amd64 ;; \
arm64) GOARCH=arm64 ;; \
*) echo "Unsupported architecture: ${ARCH}" && exit 1 ;; \
esac && \
wget https://go.dev/dl/go${GO_VERSION}.linux-${GOARCH}.tar.gz && \
rm -rf /usr/local/go && \
tar -C /usr/local -xzf go${GO_VERSION}.linux-${GOARCH}.tar.gz && \
rm go${GO_VERSION}.linux-${GOARCH}.tar.gz
ENV PATH=$PATH:/usr/local/go/bin
# Build FLINT from source with AVX2
RUN --mount=type=cache,target=/usr/local,id=usr-local-${TARGETOS}-${TARGETARCH}-zen3 \
git clone https://github.com/flintlib/flint.git && \
cd flint && \
git checkout flint-3.0 && \
./bootstrap.sh && \
./configure \
--prefix=/usr/local \
--with-gmp=/usr/local \
--with-mpfr=/usr/local \
--enable-avx2 \
--enable-static \
--disable-shared \
CFLAGS="-march=znver3 -mtune=znver3 -O3" && \
make -j$(nproc) && \
make install && \
cd .. && \
rm -rf flint
COPY docker/rustup-init.sh /opt/rustup-init.sh
RUN --mount=type=cache,target=/root/.cargo,id=cargo-${TARGETOS}-${TARGETARCH}-zen3 \
--mount=type=cache,target=/opt/ceremonyclient/target/,id=target-${TARGETOS}-${TARGETARCH}-zen3 \
--mount=type=cache,target=/usr/local/,id=usr-local-${TARGETOS}-${TARGETARCH}-zen3 \
--mount=type=cache,target=/go/pkg/mod,id=go-mod-${TARGETOS}-${TARGETARCH}-zen3 \
/opt/rustup-init.sh -y --profile minimal
# Install uniffi-bindgen-go
RUN --mount=type=cache,target=/root/.cargo,id=cargo-${TARGETOS}-${TARGETARCH}-zen3 \
--mount=type=cache,target=/opt/ceremonyclient/target/,id=target-${TARGETOS}-${TARGETARCH}-zen3 \
--mount=type=cache,target=/usr/local/,id=usr-local-${TARGETOS}-${TARGETARCH}-zen3 \
--mount=type=cache,target=/go/pkg/mod,id=go-mod-${TARGETOS}-${TARGETARCH}-zen3 \
cargo install uniffi-bindgen-go --git https://github.com/NordSecurity/uniffi-bindgen-go --tag v0.2.1+v0.25.0
FROM base-zen3 AS build-zen3
ENV QUILIBRIUM_SIGNATURE_CHECK=false
WORKDIR /opt/ceremonyclient
# Copy everything except node and client so as to avoid
# invalidating the cache at this point on client or node rebuilds
COPY --exclude=node \
--exclude=client \
--exclude=sidecar . .
## Generate Rust bindings for VDF
WORKDIR /opt/ceremonyclient/vdf
RUN --mount=type=cache,target=/usr/local/,id=usr-local-${TARGETOS}-${TARGETARCH}-zen3 \
--mount=type=cache,target=/go/pkg/mod,id=go-mod-${TARGETOS}-${TARGETARCH}-zen3 \
--mount=type=cache,target=/root/.cache/go-build,id=go-build-${TARGETOS}-${TARGETARCH}-zen3 \
go mod download
RUN --mount=type=cache,target=/root/.cargo,id=cargo-${TARGETOS}-${TARGETARCH}-zen3 \
--mount=type=cache,target=/opt/ceremonyclient/target/,id=target-${TARGETOS}-${TARGETARCH}-zen3 \
--mount=type=cache,target=/usr/local/,id=usr-local-${TARGETOS}-${TARGETARCH}-zen3 \
--mount=type=cache,target=/go/pkg/mod,id=go-mod-${TARGETOS}-${TARGETARCH}-zen3 \
--mount=type=cache,target=/root/.cache/go-build,id=go-build-${TARGETOS}-${TARGETARCH}-zen3 \
./generate.sh
FROM build-zen3 AS build-node-zen3
WORKDIR /opt/ceremonyclient/vdf
RUN --mount=type=cache,target=/usr/local/,id=usr-local-${TARGETOS}-${TARGETARCH}-zen3 \
--mount=type=cache,target=/opt/ceremonyclient/target/,id=target-${TARGETOS}-${TARGETARCH}-zen3 \
--mount=type=cache,target=/go/pkg/mod,id=go-mod-${TARGETOS}-${TARGETARCH}-zen3 \
--mount=type=cache,target=/root/.cache/go-build,id=go-build-${TARGETOS}-${TARGETARCH}-zen3 \
./build-test.sh -o vdf-test perftest/main.go && cp vdf-test /usr/bin
# Allows exporting single binary
FROM scratch AS vdf-zen3
COPY --from=build-node-zen3 /usr/bin/vdf-test /vdf-test
ENTRYPOINT [ "/vdf-test" ]
LABEL org.opencontainers.image.title="Quilibrium Network Node"
LABEL org.opencontainers.image.description="Quilibrium is a decentralized alternative to platform as a service providers."
LABEL org.opencontainers.image.version=$NODE_VERSION
LABEL org.opencontainers.image.vendor=Quilibrium
LABEL org.opencontainers.image.url=https://quilibrium.com/
LABEL org.opencontainers.image.documentation=https://quilibrium.com/docs
LABEL org.opencontainers.image.source=$GIT_REPO
LABEL org.opencontainers.image.ref.name=$GIT_BRANCH
LABEL org.opencontainers.image.revision=$GIT_COMMIT
COPY --from=build-node-zen3 /usr/bin/vdf-test /usr/local/vdf-test
WORKDIR /root
ENTRYPOINT ["vdf-test"]


@@ -1,143 +0,0 @@
# syntax=docker.io/docker/dockerfile:1.7-labs
FROM --platform=${TARGETPLATFORM} ubuntu:24.04 AS base-zen4
ENV PATH="${PATH}:/root/.cargo/bin/"
ARG TARGETOS
ARG TARGETARCH
# Install GMP 6.2 (6.3, which macOS uses, is only available in Debian unstable)
RUN --mount=type=cache,target=/usr/local/,id=usr-local-${TARGETOS}-${TARGETARCH}-zen4 \
--mount=type=cache,target=/var/cache/apt,sharing=locked \
--mount=type=cache,target=/var/lib/apt,sharing=locked \
apt-get update && apt-get install -y \
build-essential \
curl \
git \
cmake \
libgmp-dev \
libmpfr-dev \
libmpfr6 \
wget \
m4 \
pkg-config \
gcc \
g++ \
make \
autoconf \
automake \
libtool \
libssl-dev \
python3 \
python-is-python3 \
&& rm -rf /var/lib/apt/lists/*
ARG GO_VERSION=1.23.5
RUN --mount=type=cache,target=/usr/local,id=usr-local-${TARGETOS}-${TARGETARCH}-zen4 \
apt update && apt install -y wget && \
ARCH=$(dpkg --print-architecture) && \
case ${ARCH} in \
amd64) GOARCH=amd64 ;; \
arm64) GOARCH=arm64 ;; \
*) echo "Unsupported architecture: ${ARCH}" && exit 1 ;; \
esac && \
wget https://go.dev/dl/go${GO_VERSION}.linux-${GOARCH}.tar.gz && \
rm -rf /usr/local/go && \
tar -C /usr/local -xzf go${GO_VERSION}.linux-${GOARCH}.tar.gz && \
rm go${GO_VERSION}.linux-${GOARCH}.tar.gz
ENV PATH=$PATH:/usr/local/go/bin
# Build FLINT from source with AVX-512
RUN --mount=type=cache,target=/usr/local,id=usr-local-${TARGETOS}-${TARGETARCH}-zen4 \
git clone https://github.com/flintlib/flint.git && \
cd flint && \
git checkout flint-3.0 && \
./bootstrap.sh && \
./configure \
--prefix=/usr/local \
--with-gmp=/usr/local \
--with-mpfr=/usr/local \
--enable-avx2 \
--enable-static \
--disable-shared \
CFLAGS="-march=znver4 -mtune=znver4 -O3" && \
make -j$(nproc) && \
make install && \
cd .. && \
rm -rf flint
COPY docker/rustup-init.sh /opt/rustup-init.sh
RUN --mount=type=cache,target=/root/.cargo,id=cargo-${TARGETOS}-${TARGETARCH}-zen4 \
--mount=type=cache,target=/opt/ceremonyclient/target/,id=target-${TARGETOS}-${TARGETARCH}-zen4 \
--mount=type=cache,target=/usr/local/,id=usr-local-${TARGETOS}-${TARGETARCH}-zen4 \
--mount=type=cache,target=/go/pkg/mod,id=go-mod-${TARGETOS}-${TARGETARCH}-zen4 \
/opt/rustup-init.sh -y --profile minimal
# Install uniffi-bindgen-go
RUN --mount=type=cache,target=/root/.cargo,id=cargo-${TARGETOS}-${TARGETARCH}-zen4 \
--mount=type=cache,target=/opt/ceremonyclient/target/,id=target-${TARGETOS}-${TARGETARCH}-zen4 \
--mount=type=cache,target=/usr/local/,id=usr-local-${TARGETOS}-${TARGETARCH}-zen4 \
--mount=type=cache,target=/go/pkg/mod,id=go-mod-${TARGETOS}-${TARGETARCH}-zen4 \
cargo install uniffi-bindgen-go --git https://github.com/NordSecurity/uniffi-bindgen-go --tag v0.2.1+v0.25.0
FROM base-zen4 AS build-zen4
ENV QUILIBRIUM_SIGNATURE_CHECK=false
WORKDIR /opt/ceremonyclient
# Copy everything except node and client so as to avoid
# invalidating the cache at this point on client or node rebuilds
COPY --exclude=node \
--exclude=client \
--exclude=sidecar . .
## Generate Rust bindings for VDF
WORKDIR /opt/ceremonyclient/vdf
RUN --mount=type=cache,target=/usr/local/,id=usr-local-${TARGETOS}-${TARGETARCH}-zen4 \
--mount=type=cache,target=/go/pkg/mod,id=go-mod-${TARGETOS}-${TARGETARCH}-zen4 \
--mount=type=cache,target=/root/.cache/go-build,id=go-build-${TARGETOS}-${TARGETARCH}-zen4 \
go mod download
RUN --mount=type=cache,target=/root/.cargo,id=cargo-${TARGETOS}-${TARGETARCH}-zen4 \
--mount=type=cache,target=/opt/ceremonyclient/target/,id=target-${TARGETOS}-${TARGETARCH}-zen4 \
--mount=type=cache,target=/usr/local/,id=usr-local-${TARGETOS}-${TARGETARCH}-zen4 \
--mount=type=cache,target=/go/pkg/mod,id=go-mod-${TARGETOS}-${TARGETARCH}-zen4 \
--mount=type=cache,target=/root/.cache/go-build,id=go-build-${TARGETOS}-${TARGETARCH}-zen4 \
./generate.sh
FROM build-zen4 AS build-node-zen4
WORKDIR /opt/ceremonyclient/vdf
RUN --mount=type=cache,target=/usr/local/,id=usr-local-${TARGETOS}-${TARGETARCH}-zen4 \
--mount=type=cache,target=/opt/ceremonyclient/target/,id=target-${TARGETOS}-${TARGETARCH}-zen4 \
--mount=type=cache,target=/go/pkg/mod,id=go-mod-${TARGETOS}-${TARGETARCH}-zen4 \
--mount=type=cache,target=/root/.cache/go-build,id=go-build-${TARGETOS}-${TARGETARCH}-zen4 \
./build-test.sh -o vdf-test perftest/main.go && cp vdf-test /usr/bin
# Allows exporting single binary
FROM scratch AS vdf-zen4
COPY --from=build-node-zen4 /usr/bin/vdf-test /vdf-test
ENTRYPOINT [ "/vdf-test" ]
LABEL org.opencontainers.image.title="Quilibrium Network Node"
LABEL org.opencontainers.image.description="Quilibrium is a decentralized alternative to platform as a service providers."
LABEL org.opencontainers.image.version=$NODE_VERSION
LABEL org.opencontainers.image.vendor=Quilibrium
LABEL org.opencontainers.image.url=https://quilibrium.com/
LABEL org.opencontainers.image.documentation=https://quilibrium.com/docs
LABEL org.opencontainers.image.source=$GIT_REPO
LABEL org.opencontainers.image.ref.name=$GIT_BRANCH
LABEL org.opencontainers.image.revision=$GIT_COMMIT
COPY --from=build-node-zen4 /usr/bin/vdf-test /usr/local/vdf-test
WORKDIR /root
ENTRYPOINT ["vdf-test"]

RELEASE-NOTES Normal file

@ -0,0 +1,49 @@
# 2.1.0.18
- resolve transaction missing from certain tree methods
- resolve tree deletion corruption
- resolve seniority bug
- added DKLs23 fork
- fixed channel bug
- added raw bytestream to ferret
- added challenge derivation for ed448 in FROST
- fixed race condition in global intrinsic
- other smaller bug fixes
# 2.1.0.17
- resolve sync race condition with prover registry pruning
- update hypergraph to directly manage raw deletions
- migration to resolve records issue from above
- resolve early snapshot termination issue
- global halts now only halt processing of non-global ops
# 2.1.0.16
- build_utils static code analysis checker for underlying slice assignment
- hypergraph snapshot manager now uses in memory snapshot instead of pebble snapshot
- hypersync can delete orphaned entries
- signature aggregation wrapper for app shards no longer expects proposer to have a proof (the proof is already in the frame)
- hook events on sync for app shards
- app shards properly sync global prover info
- coverage streaks/halt events now trigger on app shards
- peer info and key registry handlers on app shard level
- updated to pebble v2
- pebble v2 upgrade handler
- archive mode memory bug fix
- subtle underlying slice mutation bug fix
# 2.1.0.15
- Adds direct db sync mode for hypersync
- Removes blackhole detection entirely
- Enforces reachability check with new approach
- Resolves start/stop issue
# 2.1.0.14
- Resolves race condition around QC processing
- Remove noisy sync logs
- Skip unnecessary prover check for global prover info
- Fix issue with 100+ rejections/confirmations
- Resolve sync panic
# 2.1.0.13
- Extends ProverConfirm and ProverReject to have multiple filters per message
- Adds snapshot integration to allow hypersync to occur concurrently with writes
- Resolved infinitesimal rings divide-by-zero error


@ -1,9 +1,9 @@
# https://taskfile.dev
-version: '3'
+version: "3"
dotenv:
-  - '.env'
+  - ".env"
env:
  DOCKER_BUILDKIT: '1'
@ -30,7 +30,7 @@ tasks:
    - echo -n "Commit :" && echo " {{.GIT_COMMIT}}"
    - echo -n "Max Key ID:" && echo " {{.MAX_KEY_ID}}"
  silent: true
build_node_arm64_macos:
  desc: Build the Quilibrium node binary for MacOS ARM. Assumes it's ran from the same platform. Outputs to node/build.
  cmds:
@ -55,62 +55,108 @@ tasks:
    - rpm/generate.sh
    - client/build.sh -o build/arm64_macos/qclient
backup:
desc: Create a backup file with the critical configuration files.
prompt: You will be prompted for root access. Make sure you verify the generated backup file. Continue?
preconditions:
- sh: 'test -d .config'
msg: '.config does not exists!'
- sh: 'test -f .config/config.yml'
msg: '.config/config.yml does not exists!'
- sh: 'test -f .config/keys.yml'
msg: '.config/keys.yml does not exists!'
- sh: '! test -f backup.tar.gz'
msg: 'A previous backup.tar.gz found in the current folder!'
sources:
- '.config/config.yml'
- '.config/keys.yml'
generates:
- 'backup.tar.gz'
cmds:
- |
export TMP_DIR=$(mktemp -d)
export TASK_DIR=$(pwd)
sudo cp .config/config.yml $TMP_DIR
sudo cp .config/keys.yml $TMP_DIR
sudo chown $(whoami):$(id -gn) $TMP_DIR/*
cd $TMP_DIR
tar -czf $TASK_DIR/backup.tar.gz *
cd $TASK_DIR
sudo rm -rf $TMP_DIR
echo "Backup saved to: backup.tar.gz"
echo "Do not assume you have a backup unless you verify it!!!"
silent: true
restore:
desc: Restores a backup file with the critical configuration files.
preconditions:
- sh: '! test -d .config'
msg: '.config already exists, restore cannot be performed safely!'
- sh: 'test -f backup.tar.gz'
msg: 'backup.tar.gz not found in the current folder!'
sources:
- 'backup.tar.gz'
generates:
- '.config/config.yml'
- '.config/keys.yml'
cmds:
- |
mkdir .config
tar -xzf backup.tar.gz -C .config
echo "Backup restored from: backup.tar.gz"
silent: true
test:port:
desc: Test if the P2P port is visible to the world.
preconditions:
- sh: 'test -x "$(command -v nc)"'
msg: 'nc is not installed, install with "sudo apt install netcat"'
- sh: 'test -n "$NODE_PUBLIC_NAME"'
msg: 'The public DNS name or IP address of the server must be set in NODE_PUBLIC_NAME.'
cmds:
- 'nc -vzu ${NODE_PUBLIC_NAME} ${QUILIBRIUM_P2P_PORT:=8336}'
build_node_arm64_linux:
  desc: Build the Quilibrium node binary for ARM64 Linux. Outputs to node/build.
  cmds:
-    - docker build --platform linux/arm64 -f Dockerfile.source --output node/build/arm64_linux --target=node .
+    - docker build --platform linux/arm64 -f docker/Dockerfile.source --output node/build/arm64_linux --target=node .
build_qclient_arm64_linux:
  desc: Build the QClient node binary for ARM64 Linux. Outputs to client/build.
  cmds:
-    - docker build --platform linux/arm64 -f Dockerfile.source --output client/build/arm64_linux --target=qclient .
+    - docker build --platform linux/arm64 -f docker/Dockerfile.source --output client/build/arm64_linux --target=qclient .
build_node_amd64_linux:
  desc: Build the Quilibrium node binary for AMD64 Linux. Outputs to node/build.
  cmds:
-    - docker build --platform linux/amd64 -f Dockerfile.source --output node/build/amd64_linux --target=node .
+    - docker build --platform linux/amd64 -f docker/Dockerfile.source --output node/build/amd64_linux --target=node .
build_conntest_amd64_linux:
  desc: Build the Quilibrium node connection test binary for AMD64 Linux. Outputs to conntest/build.
  cmds:
    - docker build --platform linux/amd64 -f docker/Dockerfile.conntest.source --output conntest/build/amd64_linux --target=conntest .
build_node_amd64_avx512_linux:
  desc: Build the Quilibrium node binary for AMD64 Linux with AVX-512 extensions. Outputs to node/build.
  cmds:
-    - docker build --platform linux/amd64 -f Dockerfile.sourceavx512 --output node/build/amd64_avx512_linux --target=node .
+    - docker build --platform linux/amd64 -f docker/Dockerfile.sourceavx512 --output node/build/amd64_avx512_linux --target=node .
build_qclient_amd64_linux:
  desc: Build the QClient node binary for AMD64 Linux. Outputs to client/build.
  cmds:
-    - docker build --platform linux/amd64 -f Dockerfile.source --output client/build/amd64_linux --target=qclient .
+    - docker build --platform linux/amd64 -f docker/Dockerfile.source --output client/build/amd64_linux --target=qclient .
build_qclient_amd64_avx512_linux:
  desc: Build the QClient node binary for AMD64 Linux with AVX-512 extensions. Outputs to client/build.
  cmds:
-    - docker build --platform linux/amd64 -f Dockerfile.sourceavx512 --output client/build/amd64_avx512_linux --target=qclient .
+    - docker build --platform linux/amd64 -f docker/Dockerfile.sourceavx512 --output client/build/amd64_avx512_linux --target=qclient .
build_vdf_perf_analysis_amd64_linux:
cmds:
- docker build --platform linux/amd64 -f Dockerfile.vdf.source --output vdf/build/amd64_linux --target=vdf --progress=plain --no-cache .
build_vdf_perf_analysis_amd64_avx512_linux:
cmds:
- docker build --platform linux/amd64 -f Dockerfile.vdf.sourceavx512 --output vdf/build/amd64_avx512_linux --target=vdf-avx512 .
build_vdf_perf_analysis_amd64_zen3_linux:
cmds:
- docker build --platform linux/amd64 -f Dockerfile.vdf.sourcezen3 --output vdf/build/amd64_zen3_linux --target=vdf-zen3 --progress=plain --no-cache .
build_vdf_perf_analysis_amd64_zen4_linux:
cmds:
- docker build --platform linux/amd64 -f Dockerfile.vdf.sourcezen4 --output vdf/build/amd64_zen4_linux --target=vdf-zen4 --progress=plain --no-cache .
build_vdf_perf_analysis_arm64_linux:
cmds:
- docker build --platform linux/arm64 -f Dockerfile.vdf.source --output vdf/build/arm64_linux --target=vdf --progress=plain --no-cache .
build:source:
  desc: Build the Quilibrium docker image from source.
  cmds:
    - |
      docker build \
-        -f Dockerfile.source \
+        -f docker/Dockerfile.source \
        --build-arg NODE_VERSION={{.VERSION}} \
        --build-arg GIT_REPO={{.GIT_REPO}} \
        --build-arg GIT_BRANCH={{.GIT_BRANCH}} \
@ -131,7 +177,7 @@ tasks:
  cmds:
    - |
      docker build \
-        -f Dockerfile.release \
+        -f docker/Dockerfile.release \
        --build-arg NODE_VERSION={{.VERSION}} \
        --build-arg GIT_REPO={{.GIT_REPO}} \
        --build-arg GIT_BRANCH={{.GIT_BRANCH}} \
@ -163,3 +209,34 @@ tasks:
  desc: Test the Quilibrium docker image.
  cmds:
    - client/test/run_tests.sh -d 'ubuntu' -v '24.04'
config:gen:
desc: Generate configuration and keys using Go.
cmds:
- go run utils/config-gen --config {{.CONFIG_DIR | default ".config"}}
build:node:source:
desc: Build the optimized Quilibrium node-only docker image.
cmds:
- |
docker build \
--target node-only \
-f docker/Dockerfile.source \
--build-arg NODE_VERSION={{.VERSION}} \
--build-arg GIT_REPO={{.GIT_REPO}} \
--build-arg GIT_BRANCH={{.GIT_BRANCH}} \
--build-arg GIT_COMMIT={{.GIT_COMMIT}} \
-t ${QUILIBRIUM_IMAGE_NAME:-quilibrium}:{{.VERSION}}-node-only \
-t ${QUILIBRIUM_IMAGE_NAME:-quilibrium}:node-only \
.
deploy:node:
desc: Run the Quilibrium node using host networking and external config.
cmds:
- |
docker run -d --name q-node \
--network host \
--restart unless-stopped \
-v {{.CONFIG_DIR | default "$(pwd)/.config"}}:/root/.config \
${QUILIBRIUM_IMAGE_NAME:-quilibrium}:node-only \
-signature-check=false


@ -1,3 +1,8 @@
module source.quilibrium.com/quilibrium/monorepo/alias
go 1.23.2
require (
github.com/pkg/errors v0.9.1
gopkg.in/yaml.v3 v3.0.1
)

alias/go.sum Normal file

@ -0,0 +1,6 @@
github.com/pkg/errors v0.9.1 h1:FEBLx1zS214owpjy7qsBeixbURkuhQAwrK5UwLGTwt4=
github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405 h1:yhCVgyC4o1eVCa2tZl7eS0r+SDo693bJlVdllGtEeKM=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=
gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=


@ -19,7 +19,7 @@ case "$os_type" in
# Check if the architecture is ARM # Check if the architecture is ARM
if [[ "$(uname -m)" == "arm64" ]]; then if [[ "$(uname -m)" == "arm64" ]]; then
# MacOS ld doesn't support -Bstatic and -Bdynamic, so it's important that there is only a static version of the library # MacOS ld doesn't support -Bstatic and -Bdynamic, so it's important that there is only a static version of the library
go build -ldflags "-linkmode 'external' -extldflags '-L$BINARIES_DIR -L/usr/local/lib/ -L/opt/homebrew/Cellar/openssl@3/3.4.1/lib -lstdc++ -lferret -ldl -lm -lcrypto -lssl'" "$@" go build -ldflags "-linkmode 'external' -extldflags '-L$BINARIES_DIR -L/usr/local/lib/ -L/opt/homebrew/Cellar/openssl@3/3.6.1/lib -lbls48581 -lferret -lbulletproofs -ldl -lm -lflint -lgmp -lmpfr -lstdc++ -lcrypto -lssl'" "$@"
else else
echo "Unsupported platform" echo "Unsupported platform"
exit 1 exit 1


@ -1,13 +1,52 @@
module source.quilibrium.com/quilibrium/monorepo/bedlam
-go 1.23.2
+go 1.24.0
replace source.quilibrium.com/quilibrium/monorepo/ferret => ../ferret
replace source.quilibrium.com/quilibrium/monorepo/protobufs => ../protobufs
replace source.quilibrium.com/quilibrium/monorepo/consensus => ../consensus
replace github.com/libp2p/go-libp2p => ../go-libp2p
replace github.com/multiformats/go-multiaddr => ../go-multiaddr
require (
github.com/markkurossi/tabulate v0.0.0-20230223130100-d4965869b123
github.com/pkg/errors v0.9.1
source.quilibrium.com/quilibrium/monorepo/ferret v0.0.0-00010101000000-000000000000
)
-require golang.org/x/text v0.23.0 // indirect
+require (
github.com/cloudflare/circl v1.6.1 // indirect
github.com/decred/dcrd/dcrec/secp256k1/v4 v4.4.0 // indirect
github.com/grpc-ecosystem/grpc-gateway/v2 v2.26.3 // indirect
github.com/iden3/go-iden3-crypto v0.0.17 // indirect
github.com/ipfs/go-cid v0.5.0 // indirect
github.com/klauspost/cpuid/v2 v2.2.10 // indirect
github.com/libp2p/go-buffer-pool v0.1.0 // indirect
github.com/libp2p/go-libp2p v0.0.0-00010101000000-000000000000 // indirect
github.com/minio/sha256-simd v1.0.1 // indirect
github.com/mr-tron/base58 v1.2.0 // indirect
github.com/multiformats/go-base32 v0.1.0 // indirect
github.com/multiformats/go-base36 v0.2.0 // indirect
github.com/multiformats/go-multiaddr v0.16.1 // indirect
github.com/multiformats/go-multibase v0.2.0 // indirect
github.com/multiformats/go-multicodec v0.9.1 // indirect
github.com/multiformats/go-multihash v0.2.3 // indirect
github.com/multiformats/go-varint v0.0.7 // indirect
github.com/spaolacci/murmur3 v1.1.0 // indirect
golang.org/x/crypto v0.39.0 // indirect
golang.org/x/exp v0.0.0-20250606033433-dcc06ee1d476 // indirect
golang.org/x/net v0.41.0 // indirect
golang.org/x/sys v0.33.0 // indirect
golang.org/x/text v0.26.0 // indirect
google.golang.org/genproto/googleapis/api v0.0.0-20250303144028-a0af3efb3deb // indirect
google.golang.org/genproto/googleapis/rpc v0.0.0-20250303144028-a0af3efb3deb // indirect
google.golang.org/grpc v1.72.0 // indirect
google.golang.org/protobuf v1.36.6 // indirect
lukechampine.com/blake3 v1.4.1 // indirect
source.quilibrium.com/quilibrium/monorepo/consensus v0.0.0-00010101000000-000000000000 // indirect
source.quilibrium.com/quilibrium/monorepo/protobufs v0.0.0-00010101000000-000000000000 // indirect
)


@ -1,6 +1,90 @@
github.com/cloudflare/circl v1.6.1 h1:zqIqSPIndyBh1bjLVVDHMPpVKqp8Su/V+6MeDzzQBQ0=
github.com/cloudflare/circl v1.6.1/go.mod h1:uddAzsPgqdMAYatqJ0lsjX1oECcQLIlRpzZh3pJrofs=
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/decred/dcrd/crypto/blake256 v1.1.0 h1:zPMNGQCm0g4QTY27fOCorQW7EryeQ/U0x++OzVrdms8=
github.com/decred/dcrd/crypto/blake256 v1.1.0/go.mod h1:2OfgNZ5wDpcsFmHmCK5gZTPcCXqlm2ArzUIkw9czNJo=
github.com/decred/dcrd/dcrec/secp256k1/v4 v4.4.0 h1:NMZiJj8QnKe1LgsbDayM4UoHwbvwDRwnI3hwNaAHRnc=
github.com/decred/dcrd/dcrec/secp256k1/v4 v4.4.0/go.mod h1:ZXNYxsqcloTdSy/rNShjYzMhyjf0LaoftYK0p+A3h40=
github.com/go-logr/logr v1.4.2 h1:6pFjapn8bFcIbiKo3XT4j/BhANplGihG6tvd+8rYgrY=
github.com/go-logr/logr v1.4.2/go.mod h1:9T104GzyrTigFIr8wt5mBrctHMim0Nb2HLGrmQ40KvY=
github.com/go-logr/stdr v1.2.2 h1:hSWxHoqTgW2S2qGc0LTAI563KZ5YKYRhT3MFKZMbjag=
github.com/go-logr/stdr v1.2.2/go.mod h1:mMo/vtBO5dYbehREoey6XUKy/eSumjCCveDpRre4VKE=
github.com/golang/protobuf v1.5.4 h1:i7eJL8qZTpSEXOPTxNKhASYpMn+8e5Q6AdndVa1dWek=
github.com/golang/protobuf v1.5.4/go.mod h1:lnTiLA8Wa4RWRcIUkrtSVa5nRhsEGBg48fD6rSs7xps=
github.com/google/go-cmp v0.7.0 h1:wk8382ETsv4JYUZwIsn6YpYiWiBsYLSJiTsyBybVuN8=
github.com/google/go-cmp v0.7.0/go.mod h1:pXiqmnSA92OHEEa9HXL2W4E7lf9JzCmGVUdgjX3N/iU=
github.com/google/uuid v1.6.0 h1:NIvaJDMOsjHA8n1jAhLSgzrAzy1Hgr+hNrb57e+94F0=
github.com/google/uuid v1.6.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/grpc-ecosystem/grpc-gateway/v2 v2.26.3 h1:5ZPtiqj0JL5oKWmcsq4VMaAW5ukBEgSGXEN89zeH1Jo=
github.com/grpc-ecosystem/grpc-gateway/v2 v2.26.3/go.mod h1:ndYquD05frm2vACXE1nsccT4oJzjhw2arTS2cpUD1PI=
github.com/iden3/go-iden3-crypto v0.0.17 h1:NdkceRLJo/pI4UpcjVah4lN/a3yzxRUGXqxbWcYh9mY=
github.com/iden3/go-iden3-crypto v0.0.17/go.mod h1:dLpM4vEPJ3nDHzhWFXDjzkn1qHoBeOT/3UEhXsEsP3E=
github.com/ipfs/go-cid v0.5.0 h1:goEKKhaGm0ul11IHA7I6p1GmKz8kEYniqFopaB5Otwg=
github.com/ipfs/go-cid v0.5.0/go.mod h1:0L7vmeNXpQpUS9vt+yEARkJ8rOg43DF3iPgn4GIN0mk=
github.com/klauspost/cpuid/v2 v2.2.10 h1:tBs3QSyvjDyFTq3uoc/9xFpCuOsJQFNPiAhYdw2skhE=
github.com/klauspost/cpuid/v2 v2.2.10/go.mod h1:hqwkgyIinND0mEev00jJYCxPNVRVXFQeu1XKlok6oO0=
github.com/leanovate/gopter v0.2.9 h1:fQjYxZaynp97ozCzfOyOuAGOU4aU/z37zf/tOujFk7c=
github.com/leanovate/gopter v0.2.9/go.mod h1:U2L/78B+KVFIx2VmW6onHJQzXtFb+p5y3y2Sh+Jxxv8=
github.com/libp2p/go-buffer-pool v0.1.0 h1:oK4mSFcQz7cTQIfqbe4MIj9gLW+mnanjyFtc6cdF0Y8=
github.com/libp2p/go-buffer-pool v0.1.0/go.mod h1:N+vh8gMqimBzdKkSMVuydVDq+UV5QTWy5HSiZacSbPg=
github.com/markkurossi/tabulate v0.0.0-20230223130100-d4965869b123 h1:aGg9ACNKrIa6lZ18dNT9ZsFcXga3obyOAl5Tiyx2txE=
github.com/markkurossi/tabulate v0.0.0-20230223130100-d4965869b123/go.mod h1:qPNWLW3h4173ZWYHjOgJ1wbvNyLuE1fboZilv97Aq7k=
github.com/minio/sha256-simd v1.0.1 h1:6kaan5IFmwTNynnKKpDHe6FWHohJOHhCPchzK49dzMM=
github.com/minio/sha256-simd v1.0.1/go.mod h1:Pz6AKMiUdngCLpeTL/RJY1M9rUuPMYujV5xJjtbRSN8=
github.com/mr-tron/base58 v1.2.0 h1:T/HDJBh4ZCPbU39/+c3rRvE0uKBQlU27+QI8LJ4t64o=
github.com/mr-tron/base58 v1.2.0/go.mod h1:BinMc/sQntlIE1frQmRFPUoPA1Zkr8VRgBdjWI2mNwc=
github.com/multiformats/go-base32 v0.1.0 h1:pVx9xoSPqEIQG8o+UbAe7DNi51oej1NtK+aGkbLYxPE=
github.com/multiformats/go-base32 v0.1.0/go.mod h1:Kj3tFY6zNr+ABYMqeUNeGvkIC/UYgtWibDcT0rExnbI=
github.com/multiformats/go-base36 v0.2.0 h1:lFsAbNOGeKtuKozrtBsAkSVhv1p9D0/qedU9rQyccr0=
github.com/multiformats/go-base36 v0.2.0/go.mod h1:qvnKE++v+2MWCfePClUEjE78Z7P2a1UV0xHgWc0hkp4=
github.com/multiformats/go-multibase v0.2.0 h1:isdYCVLvksgWlMW9OZRYJEa9pZETFivncJHmHnnd87g=
github.com/multiformats/go-multibase v0.2.0/go.mod h1:bFBZX4lKCA/2lyOFSAoKH5SS6oPyjtnzK/XTFDPkNuk=
github.com/multiformats/go-multicodec v0.9.1 h1:x/Fuxr7ZuR4jJV4Os5g444F7xC4XmyUaT/FWtE+9Zjo=
github.com/multiformats/go-multicodec v0.9.1/go.mod h1:LLWNMtyV5ithSBUo3vFIMaeDy+h3EbkMTek1m+Fybbo=
github.com/multiformats/go-multihash v0.2.3 h1:7Lyc8XfX/IY2jWb/gI7JP+o7JEq9hOa7BFvVU9RSh+U=
github.com/multiformats/go-multihash v0.2.3/go.mod h1:dXgKXCXjBzdscBLk9JkjINiEsCKRVch90MdaGiKsvSM=
github.com/multiformats/go-varint v0.0.7 h1:sWSGR+f/eu5ABZA2ZpYKBILXTTs9JWpdEM/nEGOHFS8=
github.com/multiformats/go-varint v0.0.7/go.mod h1:r8PUYw/fD/SjBCiKOoDlGF6QawOELpZAu9eioSos/OU=
github.com/pkg/errors v0.9.1 h1:FEBLx1zS214owpjy7qsBeixbURkuhQAwrK5UwLGTwt4=
github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
-golang.org/x/text v0.23.0 h1:D71I7dUrlY+VX0gQShAThNGHFxZ13dGLBHQLVl1mJlY=
-golang.org/x/text v0.23.0/go.mod h1:/BLNzu4aZCJ1+kcD0DNRotWKage4q2rGVAg4o22unh4=
+github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
+github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/spaolacci/murmur3 v1.1.0 h1:7c1g84S4BPRrfL5Xrdp6fOJ206sU9y293DDHaoy0bLI=
github.com/spaolacci/murmur3 v1.1.0/go.mod h1:JwIasOWyU6f++ZhiEuf87xNszmSA2myDM2Kzu9HwQUA=
github.com/stretchr/testify v1.11.1 h1:7s2iGBzp5EwR7/aIZr8ao5+dra3wiQyKjjFuvgVKu7U=
github.com/stretchr/testify v1.11.1/go.mod h1:wZwfW3scLgRK+23gO65QZefKpKQRnfz6sD981Nm4B6U=
go.opentelemetry.io/auto/sdk v1.1.0 h1:cH53jehLUN6UFLY71z+NDOiNJqDdPRaXzTel0sJySYA=
go.opentelemetry.io/auto/sdk v1.1.0/go.mod h1:3wSPjt5PWp2RhlCcmmOial7AvC4DQqZb7a7wCow3W8A=
go.opentelemetry.io/otel v1.34.0 h1:zRLXxLCgL1WyKsPVrgbSdMN4c0FMkDAskSTQP+0hdUY=
go.opentelemetry.io/otel v1.34.0/go.mod h1:OWFPOQ+h4G8xpyjgqo4SxJYdDQ/qmRH+wivy7zzx9oI=
go.opentelemetry.io/otel/metric v1.34.0 h1:+eTR3U0MyfWjRDhmFMxe2SsW64QrZ84AOhvqS7Y+PoQ=
go.opentelemetry.io/otel/metric v1.34.0/go.mod h1:CEDrp0fy2D0MvkXE+dPV7cMi8tWZwX3dmaIhwPOaqHE=
go.opentelemetry.io/otel/sdk v1.34.0 h1:95zS4k/2GOy069d321O8jWgYsW3MzVV+KuSPKp7Wr1A=
go.opentelemetry.io/otel/sdk v1.34.0/go.mod h1:0e/pNiaMAqaykJGKbi+tSjWfNNHMTxoC9qANsCzbyxU=
go.opentelemetry.io/otel/sdk/metric v1.34.0 h1:5CeK9ujjbFVL5c1PhLuStg1wxA7vQv7ce1EK0Gyvahk=
go.opentelemetry.io/otel/sdk/metric v1.34.0/go.mod h1:jQ/r8Ze28zRKoNRdkjCZxfs6YvBTG1+YIqyFVFYec5w=
go.opentelemetry.io/otel/trace v1.34.0 h1:+ouXS2V8Rd4hp4580a8q23bg0azF2nI8cqLYnC8mh/k=
go.opentelemetry.io/otel/trace v1.34.0/go.mod h1:Svm7lSjQD7kG7KJ/MUHPVXSDGz2OX4h0M2jHBhmSfRE=
golang.org/x/crypto v0.39.0 h1:SHs+kF4LP+f+p14esP5jAoDpHU8Gu/v9lFRK6IT5imM=
golang.org/x/crypto v0.39.0/go.mod h1:L+Xg3Wf6HoL4Bn4238Z6ft6KfEpN0tJGo53AAPC632U=
golang.org/x/exp v0.0.0-20250606033433-dcc06ee1d476 h1:bsqhLWFR6G6xiQcb+JoGqdKdRU6WzPWmK8E0jxTjzo4=
golang.org/x/exp v0.0.0-20250606033433-dcc06ee1d476/go.mod h1:3//PLf8L/X+8b4vuAfHzxeRUl04Adcb341+IGKfnqS8=
golang.org/x/net v0.41.0 h1:vBTly1HeNPEn3wtREYfy4GZ/NECgw2Cnl+nK6Nz3uvw=
golang.org/x/net v0.41.0/go.mod h1:B/K4NNqkfmg07DQYrbwvSluqCJOOXwUjeb/5lOisjbA=
golang.org/x/sys v0.33.0 h1:q3i8TbbEz+JRD9ywIRlyRAQbM0qF7hu24q3teo2hbuw=
golang.org/x/sys v0.33.0/go.mod h1:BJP2sWEmIv4KK5OTEluFJCKSidICx8ciO85XgH3Ak8k=
golang.org/x/text v0.26.0 h1:P42AVeLghgTYr4+xUnTRKDMqpar+PtX7KWuNQL21L8M=
golang.org/x/text v0.26.0/go.mod h1:QK15LZJUUQVJxhz7wXgxSy/CJaTFjd0G+YLonydOVQA=
google.golang.org/genproto/googleapis/api v0.0.0-20250303144028-a0af3efb3deb h1:p31xT4yrYrSM/G4Sn2+TNUkVhFCbG9y8itM2S6Th950=
google.golang.org/genproto/googleapis/api v0.0.0-20250303144028-a0af3efb3deb/go.mod h1:jbe3Bkdp+Dh2IrslsFCklNhweNTBgSYanP1UXhJDhKg=
google.golang.org/genproto/googleapis/rpc v0.0.0-20250303144028-a0af3efb3deb h1:TLPQVbx1GJ8VKZxz52VAxl1EBgKXXbTiU9Fc5fZeLn4=
google.golang.org/genproto/googleapis/rpc v0.0.0-20250303144028-a0af3efb3deb/go.mod h1:LuRYeWDFV6WOn90g357N17oMCaxpgCnbi/44qJvDn2I=
google.golang.org/grpc v1.72.0 h1:S7UkcVa60b5AAQTaO6ZKamFp1zMZSU0fGDK2WZLbBnM=
google.golang.org/grpc v1.72.0/go.mod h1:wH5Aktxcg25y1I3w7H69nHfXdOG3UiadoBtjh3izSDM=
google.golang.org/protobuf v1.36.6 h1:z1NpPI8ku2WgiWnf+t9wTPsn6eP1L7ksHUlkfLvd9xY=
google.golang.org/protobuf v1.36.6/go.mod h1:jduwjTPXsFjZGTmRluh+L6NjiWu7pchiJ2/5YcXBHnY=
gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=
gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
lukechampine.com/blake3 v1.4.1 h1:I3Smz7gso8w4/TunLKec6K2fn+kyKtDxr/xcQEN84Wg=
lukechampine.com/blake3 v1.4.1/go.mod h1:QFosUxmjB8mnrWFSNwKmvxHpfY72bmD2tQ0kBMM3kwo=


@ -372,6 +372,30 @@ func (b *Bls48581KeyConstructor) VerifySignatureRaw(
return generated.BlsVerify(publicKeyG2, signatureG1, message, context)
}
func (b *Bls48581KeyConstructor) VerifyMultiMessageSignatureRaw(
publicKeysG2 [][]byte,
signatureG1 []byte,
messages [][]byte,
context []byte,
) bool {
if len(publicKeysG2) != len(messages) || len(publicKeysG2) == 0 {
return false
}
for _, pk := range publicKeysG2 {
if len(pk) != 585 {
return false
}
}
return generated.BlsVerifyMsigMmsg(
publicKeysG2,
signatureG1,
messages,
context,
)
}
type Bls48581Key struct {
privateKey []byte
publicKey []byte


@ -1,8 +0,0 @@
#include <bls48581.h>
// This file exists because of
// https://github.com/golang/go/issues/11263
void cgo_rust_task_callback_bridge_bls48581(RustTaskCallback cb, const void * taskData, int8_t status) {
cb(taskData, status);
}


@ -12,60 +12,67 @@ import (
"unsafe"
)
-type RustBuffer = C.RustBuffer
+// This is needed, because as of go 1.24
+// type RustBuffer C.RustBuffer cannot have methods,
+// RustBuffer is treated as non-local type
+type GoRustBuffer struct {
+inner C.RustBuffer
+}
type RustBufferI interface {
AsReader() *bytes.Reader
Free()
ToGoBytes() []byte
Data() unsafe.Pointer
-Len() int
+Len() uint64
-Capacity() int
+Capacity() uint64
}
-func RustBufferFromExternal(b RustBufferI) RustBuffer {
+func RustBufferFromExternal(b RustBufferI) GoRustBuffer {
-return RustBuffer{
+return GoRustBuffer{
-capacity: C.int(b.Capacity()),
-len: C.int(b.Len()),
-data: (*C.uchar)(b.Data()),
+inner: C.RustBuffer{
+capacity: C.uint64_t(b.Capacity()),
+len: C.uint64_t(b.Len()),
+data: (*C.uchar)(b.Data()),
+},
}
}
-func (cb RustBuffer) Capacity() int {
+func (cb GoRustBuffer) Capacity() uint64 {
-return int(cb.capacity)
+return uint64(cb.inner.capacity)
}
-func (cb RustBuffer) Len() int {
+func (cb GoRustBuffer) Len() uint64 {
-return int(cb.len)
+return uint64(cb.inner.len)
}
-func (cb RustBuffer) Data() unsafe.Pointer {
+func (cb GoRustBuffer) Data() unsafe.Pointer {
-return unsafe.Pointer(cb.data)
+return unsafe.Pointer(cb.inner.data)
}
-func (cb RustBuffer) AsReader() *bytes.Reader {
+func (cb GoRustBuffer) AsReader() *bytes.Reader {
-b := unsafe.Slice((*byte)(cb.data), C.int(cb.len))
+b := unsafe.Slice((*byte)(cb.inner.data), C.uint64_t(cb.inner.len))
return bytes.NewReader(b)
}
-func (cb RustBuffer) Free() {
+func (cb GoRustBuffer) Free() {
rustCall(func(status *C.RustCallStatus) bool {
-C.ffi_bls48581_rustbuffer_free(cb, status)
+C.ffi_bls48581_rustbuffer_free(cb.inner, status)
return false
})
}
-func (cb RustBuffer) ToGoBytes() []byte {
+func (cb GoRustBuffer) ToGoBytes() []byte {
-return C.GoBytes(unsafe.Pointer(cb.data), C.int(cb.len))
+return C.GoBytes(unsafe.Pointer(cb.inner.data), C.int(cb.inner.len))
}
-func stringToRustBuffer(str string) RustBuffer {
+func stringToRustBuffer(str string) C.RustBuffer {
return bytesToRustBuffer([]byte(str))
}
-func bytesToRustBuffer(b []byte) RustBuffer {
+func bytesToRustBuffer(b []byte) C.RustBuffer {
if len(b) == 0 {
-return RustBuffer{}
+return C.RustBuffer{}
}
// We can pass the pointer along here, as it is pinned
// for the duration of this call
@ -74,7 +81,7 @@ func bytesToRustBuffer(b []byte) RustBuffer {
data: (*C.uchar)(unsafe.Pointer(&b[0])), data: (*C.uchar)(unsafe.Pointer(&b[0])),
} }
return rustCall(func(status *C.RustCallStatus) RustBuffer { return rustCall(func(status *C.RustCallStatus) C.RustBuffer {
return C.ffi_bls48581_rustbuffer_from_bytes(foreign, status) return C.ffi_bls48581_rustbuffer_from_bytes(foreign, status)
}) })
} }
@@ -84,12 +91,7 @@ type BufLifter[GoType any] interface {
 }
 
 type BufLowerer[GoType any] interface {
-	Lower(value GoType) RustBuffer
-}
-
-type FfiConverter[GoType any, FfiType any] interface {
-	Lift(value FfiType) GoType
-	Lower(value GoType) FfiType
+	Lower(value GoType) C.RustBuffer
 }
 
 type BufReader[GoType any] interface {

@@ -100,12 +102,7 @@ type BufWriter[GoType any] interface {
 	Write(writer io.Writer, value GoType)
 }
 
-type FfiRustBufConverter[GoType any, FfiType any] interface {
-	FfiConverter[GoType, FfiType]
-	BufReader[GoType]
-}
-
-func LowerIntoRustBuffer[GoType any](bufWriter BufWriter[GoType], value GoType) RustBuffer {
+func LowerIntoRustBuffer[GoType any](bufWriter BufWriter[GoType], value GoType) C.RustBuffer {
 	// This might be not the most efficient way but it does not require knowing allocation size
 	// beforehand
 	var buffer bytes.Buffer
@@ -130,31 +127,30 @@ func LiftFromRustBuffer[GoType any](bufReader BufReader[GoType], rbuf RustBuffer
 	return item
 }
 
-func rustCallWithError[U any](converter BufLifter[error], callback func(*C.RustCallStatus) U) (U, error) {
+func rustCallWithError[E any, U any](converter BufReader[*E], callback func(*C.RustCallStatus) U) (U, *E) {
 	var status C.RustCallStatus
 	returnValue := callback(&status)
 	err := checkCallStatus(converter, status)
 	return returnValue, err
 }
 
-func checkCallStatus(converter BufLifter[error], status C.RustCallStatus) error {
+func checkCallStatus[E any](converter BufReader[*E], status C.RustCallStatus) *E {
 	switch status.code {
 	case 0:
 		return nil
 	case 1:
-		return converter.Lift(status.errorBuf)
+		return LiftFromRustBuffer(converter, GoRustBuffer{inner: status.errorBuf})
 	case 2:
-		// when the rust code sees a panic, it tries to construct a rustbuffer
+		// when the rust code sees a panic, it tries to construct a rustBuffer
 		// with the message. but if that code panics, then it just sends back
 		// an empty buffer.
 		if status.errorBuf.len > 0 {
-			panic(fmt.Errorf("%s", FfiConverterStringINSTANCE.Lift(status.errorBuf)))
+			panic(fmt.Errorf("%s", FfiConverterStringINSTANCE.Lift(GoRustBuffer{inner: status.errorBuf})))
 		} else {
 			panic(fmt.Errorf("Rust panicked while handling Rust panic"))
 		}
 	default:
-		return fmt.Errorf("unknown status code: %d", status.code)
+		panic(fmt.Errorf("unknown status code: %d", status.code))
 	}
 }
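The hunk above replaces the `BufLifter[error]`-based error handling with generics over a concrete error type `*E`: status 0 is success, status 1 lifts a typed error from the error buffer, and an unknown status now panics instead of returning a generic error. A minimal pure-Go sketch of that control flow (no cgo; `callStatus` and `sigError` are illustrative stand-ins for `C.RustCallStatus` and a generated error type, and the panic-propagation case 2 is omitted for brevity):

```go
package main

import "fmt"

// callStatus is a hypothetical stand-in for C.RustCallStatus.
type callStatus struct {
	code     int
	errorMsg string // stands in for the serialized errorBuf
}

// sigError is a sample typed error, playing the role of the *E that the
// new generic checkCallStatus lifts instead of a plain error value.
type sigError struct{ msg string }

func (e *sigError) Error() string { return e.msg }

// checkCallStatus mirrors the reworked logic in the diff: code 0 is
// success, code 1 lifts a typed error from the error buffer, and any
// other code is a contract violation that panics.
func checkCallStatus(status callStatus) *sigError {
	switch status.code {
	case 0:
		return nil
	case 1:
		return &sigError{msg: status.errorMsg}
	default:
		panic(fmt.Errorf("unknown status code: %d", status.code))
	}
}

func main() {
	if err := checkCallStatus(callStatus{code: 0}); err == nil {
		fmt.Println("ok")
	}
	if err := checkCallStatus(callStatus{code: 1, errorMsg: "bad signature"}); err != nil {
		fmt.Println("lifted:", err.Error())
	}
}
```

Returning a concrete `*E` lets callers match on the generated error type directly rather than type-asserting a plain `error`.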
@@ -165,11 +161,13 @@ func checkCallStatusUnknown(status C.RustCallStatus) error {
 	case 1:
 		panic(fmt.Errorf("function not returning an error returned an error"))
 	case 2:
-		// when the rust code sees a panic, it tries to construct a rustbuffer
+		// when the rust code sees a panic, it tries to construct a C.RustBuffer
 		// with the message. but if that code panics, then it just sends back
 		// an empty buffer.
 		if status.errorBuf.len > 0 {
-			panic(fmt.Errorf("%s", FfiConverterStringINSTANCE.Lift(status.errorBuf)))
+			panic(fmt.Errorf("%s", FfiConverterStringINSTANCE.Lift(GoRustBuffer{
+				inner: status.errorBuf,
+			})))
 		} else {
 			panic(fmt.Errorf("Rust panicked while handling Rust panic"))
 		}

@@ -179,13 +177,17 @@ func checkCallStatusUnknown(status C.RustCallStatus) error {
 }
 
 func rustCall[U any](callback func(*C.RustCallStatus) U) U {
-	returnValue, err := rustCallWithError(nil, callback)
+	returnValue, err := rustCallWithError[error](nil, callback)
 	if err != nil {
 		panic(err)
 	}
 	return returnValue
 }
 
+type NativeError interface {
+	AsError() error
+}
+
 func writeInt8(writer io.Writer, value int8) {
 	if err := binary.Write(writer, binary.BigEndian, value); err != nil {
 		panic(err)
@@ -333,63 +335,72 @@ func init() {
 func uniffiCheckChecksums() {
 	// Get the bindings contract version from our ComponentInterface
-	bindingsContractVersion := 24
+	bindingsContractVersion := 26
 	// Get the scaffolding contract version by calling the into the dylib
-	scaffoldingContractVersion := rustCall(func(uniffiStatus *C.RustCallStatus) C.uint32_t {
-		return C.ffi_bls48581_uniffi_contract_version(uniffiStatus)
+	scaffoldingContractVersion := rustCall(func(_uniffiStatus *C.RustCallStatus) C.uint32_t {
+		return C.ffi_bls48581_uniffi_contract_version()
 	})
 	if bindingsContractVersion != int(scaffoldingContractVersion) {
 		// If this happens try cleaning and rebuilding your project
 		panic("bls48581: UniFFI contract version mismatch")
 	}
 	{
-		checksum := rustCall(func(uniffiStatus *C.RustCallStatus) C.uint16_t {
-			return C.uniffi_bls48581_checksum_func_bls_aggregate(uniffiStatus)
+		checksum := rustCall(func(_uniffiStatus *C.RustCallStatus) C.uint16_t {
+			return C.uniffi_bls48581_checksum_func_bls_aggregate()
 		})
-		if checksum != 25405 {
+		if checksum != 54030 {
 			// If this happens try cleaning and rebuilding your project
 			panic("bls48581: uniffi_bls48581_checksum_func_bls_aggregate: UniFFI API checksum mismatch")
 		}
 	}
 	{
-		checksum := rustCall(func(uniffiStatus *C.RustCallStatus) C.uint16_t {
-			return C.uniffi_bls48581_checksum_func_bls_keygen(uniffiStatus)
+		checksum := rustCall(func(_uniffiStatus *C.RustCallStatus) C.uint16_t {
+			return C.uniffi_bls48581_checksum_func_bls_keygen()
 		})
-		if checksum != 58096 {
+		if checksum != 55807 {
 			// If this happens try cleaning and rebuilding your project
 			panic("bls48581: uniffi_bls48581_checksum_func_bls_keygen: UniFFI API checksum mismatch")
 		}
 	}
 	{
-		checksum := rustCall(func(uniffiStatus *C.RustCallStatus) C.uint16_t {
-			return C.uniffi_bls48581_checksum_func_bls_sign(uniffiStatus)
+		checksum := rustCall(func(_uniffiStatus *C.RustCallStatus) C.uint16_t {
+			return C.uniffi_bls48581_checksum_func_bls_sign()
 		})
-		if checksum != 44903 {
+		if checksum != 27146 {
 			// If this happens try cleaning and rebuilding your project
 			panic("bls48581: uniffi_bls48581_checksum_func_bls_sign: UniFFI API checksum mismatch")
 		}
 	}
 	{
-		checksum := rustCall(func(uniffiStatus *C.RustCallStatus) C.uint16_t {
-			return C.uniffi_bls48581_checksum_func_bls_verify(uniffiStatus)
+		checksum := rustCall(func(_uniffiStatus *C.RustCallStatus) C.uint16_t {
+			return C.uniffi_bls48581_checksum_func_bls_verify()
 		})
-		if checksum != 59437 {
+		if checksum != 23721 {
 			// If this happens try cleaning and rebuilding your project
 			panic("bls48581: uniffi_bls48581_checksum_func_bls_verify: UniFFI API checksum mismatch")
 		}
 	}
 	{
-		checksum := rustCall(func(uniffiStatus *C.RustCallStatus) C.uint16_t {
-			return C.uniffi_bls48581_checksum_func_commit_raw(uniffiStatus)
+		checksum := rustCall(func(_uniffiStatus *C.RustCallStatus) C.uint16_t {
+			return C.uniffi_bls48581_checksum_func_bls_verify_msig_mmsg()
 		})
-		if checksum != 20099 {
+		if checksum != 55801 {
+			// If this happens try cleaning and rebuilding your project
+			panic("bls48581: uniffi_bls48581_checksum_func_bls_verify_msig_mmsg: UniFFI API checksum mismatch")
+		}
+	}
+	{
+		checksum := rustCall(func(_uniffiStatus *C.RustCallStatus) C.uint16_t {
+			return C.uniffi_bls48581_checksum_func_commit_raw()
+		})
+		if checksum != 14479 {
 			// If this happens try cleaning and rebuilding your project
 			panic("bls48581: uniffi_bls48581_checksum_func_commit_raw: UniFFI API checksum mismatch")
 		}
 	}
 	{
-		checksum := rustCall(func(uniffiStatus *C.RustCallStatus) C.uint16_t {
-			return C.uniffi_bls48581_checksum_func_init(uniffiStatus)
+		checksum := rustCall(func(_uniffiStatus *C.RustCallStatus) C.uint16_t {
+			return C.uniffi_bls48581_checksum_func_init()
 		})
 		if checksum != 11227 {
 			// If this happens try cleaning and rebuilding your project

@@ -397,37 +408,37 @@ func uniffiCheckChecksums() {
 		}
 	}
 	{
-		checksum := rustCall(func(uniffiStatus *C.RustCallStatus) C.uint16_t {
-			return C.uniffi_bls48581_checksum_func_prove_multiple(uniffiStatus)
+		checksum := rustCall(func(_uniffiStatus *C.RustCallStatus) C.uint16_t {
+			return C.uniffi_bls48581_checksum_func_prove_multiple()
 		})
-		if checksum != 15323 {
+		if checksum != 38907 {
 			// If this happens try cleaning and rebuilding your project
 			panic("bls48581: uniffi_bls48581_checksum_func_prove_multiple: UniFFI API checksum mismatch")
 		}
 	}
 	{
-		checksum := rustCall(func(uniffiStatus *C.RustCallStatus) C.uint16_t {
-			return C.uniffi_bls48581_checksum_func_prove_raw(uniffiStatus)
+		checksum := rustCall(func(_uniffiStatus *C.RustCallStatus) C.uint16_t {
+			return C.uniffi_bls48581_checksum_func_prove_raw()
 		})
-		if checksum != 64858 {
+		if checksum != 54704 {
 			// If this happens try cleaning and rebuilding your project
 			panic("bls48581: uniffi_bls48581_checksum_func_prove_raw: UniFFI API checksum mismatch")
 		}
 	}
 	{
-		checksum := rustCall(func(uniffiStatus *C.RustCallStatus) C.uint16_t {
-			return C.uniffi_bls48581_checksum_func_verify_multiple(uniffiStatus)
+		checksum := rustCall(func(_uniffiStatus *C.RustCallStatus) C.uint16_t {
+			return C.uniffi_bls48581_checksum_func_verify_multiple()
 		})
-		if checksum != 33757 {
+		if checksum != 8610 {
 			// If this happens try cleaning and rebuilding your project
 			panic("bls48581: uniffi_bls48581_checksum_func_verify_multiple: UniFFI API checksum mismatch")
 		}
 	}
 	{
-		checksum := rustCall(func(uniffiStatus *C.RustCallStatus) C.uint16_t {
-			return C.uniffi_bls48581_checksum_func_verify_raw(uniffiStatus)
+		checksum := rustCall(func(_uniffiStatus *C.RustCallStatus) C.uint16_t {
+			return C.uniffi_bls48581_checksum_func_verify_raw()
 		})
-		if checksum != 52165 {
+		if checksum != 15303 {
 			// If this happens try cleaning and rebuilding your project
 			panic("bls48581: uniffi_bls48581_checksum_func_verify_raw: UniFFI API checksum mismatch")
 		}
 	}
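The checksum blocks above guard against version skew: each generated Go binding bakes in an expected per-function checksum and compares it against what the loaded Rust scaffolding reports, panicking on mismatch. A compact pure-Go sketch of that guard pattern (no cgo; the map, function names, and lookup are illustrative, not the real bls48581 constants or FFI calls):

```go
package main

import "fmt"

// expected maps function names to the checksums baked into the bindings
// at generation time (values here are illustrative stand-ins).
var expected = map[string]uint16{
	"bls_aggregate": 54030,
	"bls_keygen":    55807,
}

// scaffoldingChecksum stands in for the C.uniffi_bls48581_checksum_*
// calls that ask the loaded dylib what it was generated with.
func scaffoldingChecksum(name string) uint16 { return expected[name] }

// checkChecksums panics on the first mismatch, like the generated
// uniffiCheckChecksums: a mismatch means the Go bindings and the Rust
// scaffolding were generated from different interface definitions.
func checkChecksums() {
	for name, want := range expected {
		if got := scaffoldingChecksum(name); got != want {
			panic(fmt.Sprintf("%s: UniFFI API checksum mismatch", name))
		}
	}
}

func main() {
	checkChecksums()
	fmt.Println("checksums ok")
}
```

Every checksum in the diff changed alongside the contract-version bump from 24 to 26, which is why the bindings and the Rust library must be regenerated together.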
@@ -531,7 +542,7 @@ func (FfiConverterString) Read(reader io.Reader) string {
 	length := readInt32(reader)
 	buffer := make([]byte, length)
 	read_length, err := reader.Read(buffer)
-	if err != nil {
+	if err != nil && err != io.EOF {
 		panic(err)
 	}
 	if read_length != int(length) {

@@ -540,7 +551,7 @@ func (FfiConverterString) Read(reader io.Reader) string {
 	return string(buffer)
 }
 
-func (FfiConverterString) Lower(value string) RustBuffer {
+func (FfiConverterString) Lower(value string) C.RustBuffer {
 	return stringToRustBuffer(value)
 }
@@ -573,33 +584,33 @@ func (r *BlsAggregateOutput) Destroy() {
 	FfiDestroyerSequenceUint8{}.Destroy(r.AggregateSignature)
 }
 
-type FfiConverterTypeBlsAggregateOutput struct{}
+type FfiConverterBlsAggregateOutput struct{}
 
-var FfiConverterTypeBlsAggregateOutputINSTANCE = FfiConverterTypeBlsAggregateOutput{}
+var FfiConverterBlsAggregateOutputINSTANCE = FfiConverterBlsAggregateOutput{}
 
-func (c FfiConverterTypeBlsAggregateOutput) Lift(rb RustBufferI) BlsAggregateOutput {
+func (c FfiConverterBlsAggregateOutput) Lift(rb RustBufferI) BlsAggregateOutput {
 	return LiftFromRustBuffer[BlsAggregateOutput](c, rb)
 }
 
-func (c FfiConverterTypeBlsAggregateOutput) Read(reader io.Reader) BlsAggregateOutput {
+func (c FfiConverterBlsAggregateOutput) Read(reader io.Reader) BlsAggregateOutput {
 	return BlsAggregateOutput{
 		FfiConverterSequenceUint8INSTANCE.Read(reader),
 		FfiConverterSequenceUint8INSTANCE.Read(reader),
 	}
 }
 
-func (c FfiConverterTypeBlsAggregateOutput) Lower(value BlsAggregateOutput) RustBuffer {
+func (c FfiConverterBlsAggregateOutput) Lower(value BlsAggregateOutput) C.RustBuffer {
 	return LowerIntoRustBuffer[BlsAggregateOutput](c, value)
 }
 
-func (c FfiConverterTypeBlsAggregateOutput) Write(writer io.Writer, value BlsAggregateOutput) {
+func (c FfiConverterBlsAggregateOutput) Write(writer io.Writer, value BlsAggregateOutput) {
 	FfiConverterSequenceUint8INSTANCE.Write(writer, value.AggregatePublicKey)
 	FfiConverterSequenceUint8INSTANCE.Write(writer, value.AggregateSignature)
 }
 
-type FfiDestroyerTypeBlsAggregateOutput struct{}
+type FfiDestroyerBlsAggregateOutput struct{}
 
-func (_ FfiDestroyerTypeBlsAggregateOutput) Destroy(value BlsAggregateOutput) {
+func (_ FfiDestroyerBlsAggregateOutput) Destroy(value BlsAggregateOutput) {
 	value.Destroy()
 }

@@ -615,15 +626,15 @@ func (r *BlsKeygenOutput) Destroy() {
 	FfiDestroyerSequenceUint8{}.Destroy(r.ProofOfPossessionSig)
 }
 
-type FfiConverterTypeBlsKeygenOutput struct{}
+type FfiConverterBlsKeygenOutput struct{}
 
-var FfiConverterTypeBlsKeygenOutputINSTANCE = FfiConverterTypeBlsKeygenOutput{}
+var FfiConverterBlsKeygenOutputINSTANCE = FfiConverterBlsKeygenOutput{}
 
-func (c FfiConverterTypeBlsKeygenOutput) Lift(rb RustBufferI) BlsKeygenOutput {
+func (c FfiConverterBlsKeygenOutput) Lift(rb RustBufferI) BlsKeygenOutput {
 	return LiftFromRustBuffer[BlsKeygenOutput](c, rb)
 }
 
-func (c FfiConverterTypeBlsKeygenOutput) Read(reader io.Reader) BlsKeygenOutput {
+func (c FfiConverterBlsKeygenOutput) Read(reader io.Reader) BlsKeygenOutput {
 	return BlsKeygenOutput{
 		FfiConverterSequenceUint8INSTANCE.Read(reader),
 		FfiConverterSequenceUint8INSTANCE.Read(reader),

@@ -631,19 +642,19 @@ func (c FfiConverterTypeBlsKeygenOutput) Read(reader io.Reader) BlsKeygenOutput
 	}
 }
 
-func (c FfiConverterTypeBlsKeygenOutput) Lower(value BlsKeygenOutput) RustBuffer {
+func (c FfiConverterBlsKeygenOutput) Lower(value BlsKeygenOutput) C.RustBuffer {
 	return LowerIntoRustBuffer[BlsKeygenOutput](c, value)
 }
 
-func (c FfiConverterTypeBlsKeygenOutput) Write(writer io.Writer, value BlsKeygenOutput) {
+func (c FfiConverterBlsKeygenOutput) Write(writer io.Writer, value BlsKeygenOutput) {
 	FfiConverterSequenceUint8INSTANCE.Write(writer, value.SecretKey)
 	FfiConverterSequenceUint8INSTANCE.Write(writer, value.PublicKey)
 	FfiConverterSequenceUint8INSTANCE.Write(writer, value.ProofOfPossessionSig)
 }
 
-type FfiDestroyerTypeBlsKeygenOutput struct{}
+type FfiDestroyerBlsKeygenOutput struct{}
 
-func (_ FfiDestroyerTypeBlsKeygenOutput) Destroy(value BlsKeygenOutput) {
+func (_ FfiDestroyerBlsKeygenOutput) Destroy(value BlsKeygenOutput) {
 	value.Destroy()
 }

@@ -657,33 +668,33 @@ func (r *Multiproof) Destroy() {
 	FfiDestroyerSequenceUint8{}.Destroy(r.Proof)
 }
 
-type FfiConverterTypeMultiproof struct{}
+type FfiConverterMultiproof struct{}
 
-var FfiConverterTypeMultiproofINSTANCE = FfiConverterTypeMultiproof{}
+var FfiConverterMultiproofINSTANCE = FfiConverterMultiproof{}
 
-func (c FfiConverterTypeMultiproof) Lift(rb RustBufferI) Multiproof {
+func (c FfiConverterMultiproof) Lift(rb RustBufferI) Multiproof {
 	return LiftFromRustBuffer[Multiproof](c, rb)
 }
 
-func (c FfiConverterTypeMultiproof) Read(reader io.Reader) Multiproof {
+func (c FfiConverterMultiproof) Read(reader io.Reader) Multiproof {
 	return Multiproof{
 		FfiConverterSequenceUint8INSTANCE.Read(reader),
 		FfiConverterSequenceUint8INSTANCE.Read(reader),
 	}
 }
 
-func (c FfiConverterTypeMultiproof) Lower(value Multiproof) RustBuffer {
+func (c FfiConverterMultiproof) Lower(value Multiproof) C.RustBuffer {
 	return LowerIntoRustBuffer[Multiproof](c, value)
 }
 
-func (c FfiConverterTypeMultiproof) Write(writer io.Writer, value Multiproof) {
+func (c FfiConverterMultiproof) Write(writer io.Writer, value Multiproof) {
 	FfiConverterSequenceUint8INSTANCE.Write(writer, value.D)
 	FfiConverterSequenceUint8INSTANCE.Write(writer, value.Proof)
 }
 
-type FfiDestroyerTypeMultiproof struct{}
+type FfiDestroyerMultiproof struct{}
 
-func (_ FfiDestroyerTypeMultiproof) Destroy(value Multiproof) {
+func (_ FfiDestroyerMultiproof) Destroy(value Multiproof) {
 	value.Destroy()
 }
@@ -707,7 +718,7 @@ func (c FfiConverterSequenceUint8) Read(reader io.Reader) []uint8 {
 	return result
 }
 
-func (c FfiConverterSequenceUint8) Lower(value []uint8) RustBuffer {
+func (c FfiConverterSequenceUint8) Lower(value []uint8) C.RustBuffer {
 	return LowerIntoRustBuffer[[]uint8](c, value)
 }

@@ -750,7 +761,7 @@ func (c FfiConverterSequenceUint64) Read(reader io.Reader) []uint64 {
 	return result
 }
 
-func (c FfiConverterSequenceUint64) Lower(value []uint64) RustBuffer {
+func (c FfiConverterSequenceUint64) Lower(value []uint64) C.RustBuffer {
 	return LowerIntoRustBuffer[[]uint64](c, value)
 }

@@ -793,7 +804,7 @@ func (c FfiConverterSequenceSequenceUint8) Read(reader io.Reader) [][]uint8 {
 	return result
 }
 
-func (c FfiConverterSequenceSequenceUint8) Lower(value [][]uint8) RustBuffer {
+func (c FfiConverterSequenceSequenceUint8) Lower(value [][]uint8) C.RustBuffer {
 	return LowerIntoRustBuffer[[][]uint8](c, value)
 }
@@ -817,20 +828,26 @@ func (FfiDestroyerSequenceSequenceUint8) Destroy(sequence [][]uint8) {
 }
 
 func BlsAggregate(pks [][]uint8, sigs [][]uint8) BlsAggregateOutput {
-	return FfiConverterTypeBlsAggregateOutputINSTANCE.Lift(rustCall(func(_uniffiStatus *C.RustCallStatus) RustBufferI {
-		return C.uniffi_bls48581_fn_func_bls_aggregate(FfiConverterSequenceSequenceUint8INSTANCE.Lower(pks), FfiConverterSequenceSequenceUint8INSTANCE.Lower(sigs), _uniffiStatus)
+	return FfiConverterBlsAggregateOutputINSTANCE.Lift(rustCall(func(_uniffiStatus *C.RustCallStatus) RustBufferI {
+		return GoRustBuffer{
+			inner: C.uniffi_bls48581_fn_func_bls_aggregate(FfiConverterSequenceSequenceUint8INSTANCE.Lower(pks), FfiConverterSequenceSequenceUint8INSTANCE.Lower(sigs), _uniffiStatus),
+		}
 	}))
 }
 
 func BlsKeygen() BlsKeygenOutput {
-	return FfiConverterTypeBlsKeygenOutputINSTANCE.Lift(rustCall(func(_uniffiStatus *C.RustCallStatus) RustBufferI {
-		return C.uniffi_bls48581_fn_func_bls_keygen(_uniffiStatus)
+	return FfiConverterBlsKeygenOutputINSTANCE.Lift(rustCall(func(_uniffiStatus *C.RustCallStatus) RustBufferI {
+		return GoRustBuffer{
+			inner: C.uniffi_bls48581_fn_func_bls_keygen(_uniffiStatus),
+		}
 	}))
 }
 
 func BlsSign(sk []uint8, msg []uint8, domain []uint8) []uint8 {
 	return FfiConverterSequenceUint8INSTANCE.Lift(rustCall(func(_uniffiStatus *C.RustCallStatus) RustBufferI {
-		return C.uniffi_bls48581_fn_func_bls_sign(FfiConverterSequenceUint8INSTANCE.Lower(sk), FfiConverterSequenceUint8INSTANCE.Lower(msg), FfiConverterSequenceUint8INSTANCE.Lower(domain), _uniffiStatus)
+		return GoRustBuffer{
+			inner: C.uniffi_bls48581_fn_func_bls_sign(FfiConverterSequenceUint8INSTANCE.Lower(sk), FfiConverterSequenceUint8INSTANCE.Lower(msg), FfiConverterSequenceUint8INSTANCE.Lower(domain), _uniffiStatus),
+		}
 	}))
 }

@@ -840,9 +857,17 @@ func BlsVerify(pk []uint8, sig []uint8, msg []uint8, domain []uint8) bool {
 	}))
 }
 
+func BlsVerifyMsigMmsg(pks [][]uint8, sig []uint8, msgs [][]uint8, domain []uint8) bool {
+	return FfiConverterBoolINSTANCE.Lift(rustCall(func(_uniffiStatus *C.RustCallStatus) C.int8_t {
+		return C.uniffi_bls48581_fn_func_bls_verify_msig_mmsg(FfiConverterSequenceSequenceUint8INSTANCE.Lower(pks), FfiConverterSequenceUint8INSTANCE.Lower(sig), FfiConverterSequenceSequenceUint8INSTANCE.Lower(msgs), FfiConverterSequenceUint8INSTANCE.Lower(domain), _uniffiStatus)
+	}))
+}
+
 func CommitRaw(data []uint8, polySize uint64) []uint8 {
 	return FfiConverterSequenceUint8INSTANCE.Lift(rustCall(func(_uniffiStatus *C.RustCallStatus) RustBufferI {
-		return C.uniffi_bls48581_fn_func_commit_raw(FfiConverterSequenceUint8INSTANCE.Lower(data), FfiConverterUint64INSTANCE.Lower(polySize), _uniffiStatus)
+		return GoRustBuffer{
+			inner: C.uniffi_bls48581_fn_func_commit_raw(FfiConverterSequenceUint8INSTANCE.Lower(data), FfiConverterUint64INSTANCE.Lower(polySize), _uniffiStatus),
+		}
 	}))
 }

@@ -854,14 +879,18 @@ func Init() {
 }
 
 func ProveMultiple(commitments [][]uint8, polys [][]uint8, indices []uint64, polySize uint64) Multiproof {
-	return FfiConverterTypeMultiproofINSTANCE.Lift(rustCall(func(_uniffiStatus *C.RustCallStatus) RustBufferI {
-		return C.uniffi_bls48581_fn_func_prove_multiple(FfiConverterSequenceSequenceUint8INSTANCE.Lower(commitments), FfiConverterSequenceSequenceUint8INSTANCE.Lower(polys), FfiConverterSequenceUint64INSTANCE.Lower(indices), FfiConverterUint64INSTANCE.Lower(polySize), _uniffiStatus)
+	return FfiConverterMultiproofINSTANCE.Lift(rustCall(func(_uniffiStatus *C.RustCallStatus) RustBufferI {
+		return GoRustBuffer{
+			inner: C.uniffi_bls48581_fn_func_prove_multiple(FfiConverterSequenceSequenceUint8INSTANCE.Lower(commitments), FfiConverterSequenceSequenceUint8INSTANCE.Lower(polys), FfiConverterSequenceUint64INSTANCE.Lower(indices), FfiConverterUint64INSTANCE.Lower(polySize), _uniffiStatus),
+		}
 	}))
 }
 
 func ProveRaw(data []uint8, index uint64, polySize uint64) []uint8 {
 	return FfiConverterSequenceUint8INSTANCE.Lift(rustCall(func(_uniffiStatus *C.RustCallStatus) RustBufferI {
-		return C.uniffi_bls48581_fn_func_prove_raw(FfiConverterSequenceUint8INSTANCE.Lower(data), FfiConverterUint64INSTANCE.Lower(index), FfiConverterUint64INSTANCE.Lower(polySize), _uniffiStatus)
+		return GoRustBuffer{
+			inner: C.uniffi_bls48581_fn_func_prove_raw(FfiConverterSequenceUint8INSTANCE.Lower(data), FfiConverterUint64INSTANCE.Lower(index), FfiConverterUint64INSTANCE.Lower(polySize), _uniffiStatus),
+		}
 	}))
 }

File diff suppressed because it is too large


@@ -1,8 +1,6 @@
 module source.quilibrium.com/quilibrium/monorepo/bls48581
 
-go 1.23.2
-
-toolchain go1.23.4
+go 1.24.0
 
 replace source.quilibrium.com/quilibrium/monorepo/types => ../types

@@ -18,21 +16,25 @@ require source.quilibrium.com/quilibrium/monorepo/protobufs v0.0.0-0001010100000
 require (
 	github.com/pkg/errors v0.9.1
-	github.com/stretchr/testify v1.10.0
+	github.com/stretchr/testify v1.11.1
 )
 
 require (
 	github.com/cloudflare/circl v1.6.1 // indirect
+	github.com/decred/dcrd/dcrec/secp256k1/v4 v4.4.0 // indirect
 	github.com/grpc-ecosystem/grpc-gateway/v2 v2.26.3 // indirect
 	github.com/iden3/go-iden3-crypto v0.0.17 // indirect
 	github.com/ipfs/go-cid v0.5.0 // indirect
 	github.com/klauspost/cpuid/v2 v2.2.10 // indirect
+	github.com/libp2p/go-buffer-pool v0.1.0 // indirect
+	github.com/libp2p/go-libp2p v0.41.1 // indirect
 	github.com/minio/sha256-simd v1.0.1 // indirect
 	github.com/mr-tron/base58 v1.2.0 // indirect
 	github.com/multiformats/go-base32 v0.1.0 // indirect
 	github.com/multiformats/go-base36 v0.2.0 // indirect
 	github.com/multiformats/go-multiaddr v0.16.1 // indirect
 	github.com/multiformats/go-multibase v0.2.0 // indirect
+	github.com/multiformats/go-multicodec v0.9.1 // indirect
 	github.com/multiformats/go-multihash v0.2.3 // indirect
 	github.com/multiformats/go-varint v0.0.7 // indirect
 	github.com/spaolacci/murmur3 v1.1.0 // indirect


@ -2,6 +2,10 @@ github.com/cloudflare/circl v1.6.1 h1:zqIqSPIndyBh1bjLVVDHMPpVKqp8Su/V+6MeDzzQBQ
github.com/cloudflare/circl v1.6.1/go.mod h1:uddAzsPgqdMAYatqJ0lsjX1oECcQLIlRpzZh3pJrofs= github.com/cloudflare/circl v1.6.1/go.mod h1:uddAzsPgqdMAYatqJ0lsjX1oECcQLIlRpzZh3pJrofs=
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c= github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/decred/dcrd/crypto/blake256 v1.1.0 h1:zPMNGQCm0g4QTY27fOCorQW7EryeQ/U0x++OzVrdms8=
github.com/decred/dcrd/crypto/blake256 v1.1.0/go.mod h1:2OfgNZ5wDpcsFmHmCK5gZTPcCXqlm2ArzUIkw9czNJo=
github.com/decred/dcrd/dcrec/secp256k1/v4 v4.4.0 h1:NMZiJj8QnKe1LgsbDayM4UoHwbvwDRwnI3hwNaAHRnc=
github.com/decred/dcrd/dcrec/secp256k1/v4 v4.4.0/go.mod h1:ZXNYxsqcloTdSy/rNShjYzMhyjf0LaoftYK0p+A3h40=
github.com/go-logr/logr v1.4.2 h1:6pFjapn8bFcIbiKo3XT4j/BhANplGihG6tvd+8rYgrY= github.com/go-logr/logr v1.4.2 h1:6pFjapn8bFcIbiKo3XT4j/BhANplGihG6tvd+8rYgrY=
github.com/go-logr/logr v1.4.2/go.mod h1:9T104GzyrTigFIr8wt5mBrctHMim0Nb2HLGrmQ40KvY= github.com/go-logr/logr v1.4.2/go.mod h1:9T104GzyrTigFIr8wt5mBrctHMim0Nb2HLGrmQ40KvY=
github.com/go-logr/stdr v1.2.2 h1:hSWxHoqTgW2S2qGc0LTAI563KZ5YKYRhT3MFKZMbjag= github.com/go-logr/stdr v1.2.2 h1:hSWxHoqTgW2S2qGc0LTAI563KZ5YKYRhT3MFKZMbjag=
@ -26,6 +30,10 @@ github.com/kr/text v0.2.0 h1:5Nx0Ya0ZqY2ygV366QzturHI13Jq95ApcVaJBhpS+AY=
github.com/kr/text v0.2.0/go.mod h1:eLer722TekiGuMkidMxC/pM04lWEeraHUUmBw8l2grE= github.com/kr/text v0.2.0/go.mod h1:eLer722TekiGuMkidMxC/pM04lWEeraHUUmBw8l2grE=
github.com/leanovate/gopter v0.2.9 h1:fQjYxZaynp97ozCzfOyOuAGOU4aU/z37zf/tOujFk7c= github.com/leanovate/gopter v0.2.9 h1:fQjYxZaynp97ozCzfOyOuAGOU4aU/z37zf/tOujFk7c=
github.com/leanovate/gopter v0.2.9/go.mod h1:U2L/78B+KVFIx2VmW6onHJQzXtFb+p5y3y2Sh+Jxxv8= github.com/leanovate/gopter v0.2.9/go.mod h1:U2L/78B+KVFIx2VmW6onHJQzXtFb+p5y3y2Sh+Jxxv8=
github.com/libp2p/go-buffer-pool v0.1.0 h1:oK4mSFcQz7cTQIfqbe4MIj9gLW+mnanjyFtc6cdF0Y8=
github.com/libp2p/go-buffer-pool v0.1.0/go.mod h1:N+vh8gMqimBzdKkSMVuydVDq+UV5QTWy5HSiZacSbPg=
github.com/libp2p/go-libp2p v0.41.1 h1:8ecNQVT5ev/jqALTvisSJeVNvXYJyK4NhQx1nNRXQZE=
github.com/libp2p/go-libp2p v0.41.1/go.mod h1:DcGTovJzQl/I7HMrby5ZRjeD0kQkGiy+9w6aEkSZpRI=
github.com/minio/sha256-simd v1.0.1 h1:6kaan5IFmwTNynnKKpDHe6FWHohJOHhCPchzK49dzMM=
github.com/minio/sha256-simd v1.0.1/go.mod h1:Pz6AKMiUdngCLpeTL/RJY1M9rUuPMYujV5xJjtbRSN8=
github.com/mr-tron/base58 v1.2.0 h1:T/HDJBh4ZCPbU39/+c3rRvE0uKBQlU27+QI8LJ4t64o=
@@ -34,10 +42,10 @@ github.com/multiformats/go-base32 v0.1.0 h1:pVx9xoSPqEIQG8o+UbAe7DNi51oej1NtK+aG
github.com/multiformats/go-base32 v0.1.0/go.mod h1:Kj3tFY6zNr+ABYMqeUNeGvkIC/UYgtWibDcT0rExnbI=
github.com/multiformats/go-base36 v0.2.0 h1:lFsAbNOGeKtuKozrtBsAkSVhv1p9D0/qedU9rQyccr0=
github.com/multiformats/go-base36 v0.2.0/go.mod h1:qvnKE++v+2MWCfePClUEjE78Z7P2a1UV0xHgWc0hkp4=
github.com/multiformats/go-multiaddr v0.16.1 h1:fgJ0Pitow+wWXzN9do+1b8Pyjmo8m5WhGfzpL82MpCw=
github.com/multiformats/go-multiaddr v0.16.1/go.mod h1:JSVUmXDjsVFiW7RjIFMP7+Ev+h1DTbiJgVeTV/tcmP0=
github.com/multiformats/go-multibase v0.2.0 h1:isdYCVLvksgWlMW9OZRYJEa9pZETFivncJHmHnnd87g=
github.com/multiformats/go-multibase v0.2.0/go.mod h1:bFBZX4lKCA/2lyOFSAoKH5SS6oPyjtnzK/XTFDPkNuk=
github.com/multiformats/go-multicodec v0.9.1 h1:x/Fuxr7ZuR4jJV4Os5g444F7xC4XmyUaT/FWtE+9Zjo=
github.com/multiformats/go-multicodec v0.9.1/go.mod h1:LLWNMtyV5ithSBUo3vFIMaeDy+h3EbkMTek1m+Fybbo=
github.com/multiformats/go-multihash v0.2.3 h1:7Lyc8XfX/IY2jWb/gI7JP+o7JEq9hOa7BFvVU9RSh+U=
github.com/multiformats/go-multihash v0.2.3/go.mod h1:dXgKXCXjBzdscBLk9JkjINiEsCKRVch90MdaGiKsvSM=
github.com/multiformats/go-varint v0.0.7 h1:sWSGR+f/eu5ABZA2ZpYKBILXTTs9JWpdEM/nEGOHFS8=
@@ -50,8 +58,8 @@ github.com/rogpeppe/go-internal v1.14.1 h1:UQB4HGPB6osV0SQTLymcB4TgvyWu6ZyliaW0t
github.com/rogpeppe/go-internal v1.14.1/go.mod h1:MaRKkUm5W0goXpeCfT7UZI6fk/L7L7so1lCWt35ZSgc=
github.com/spaolacci/murmur3 v1.1.0 h1:7c1g84S4BPRrfL5Xrdp6fOJ206sU9y293DDHaoy0bLI=
github.com/spaolacci/murmur3 v1.1.0/go.mod h1:JwIasOWyU6f++ZhiEuf87xNszmSA2myDM2Kzu9HwQUA=
-github.com/stretchr/testify v1.10.0 h1:Xv5erBjTwe/5IxqUQTdXv5kgmIvbHo3QQyRwhJsOfJA=
-github.com/stretchr/testify v1.10.0/go.mod h1:r2ic/lqez/lEtzL7wO/rwa5dbSLXVDPFyf8C91i36aY=
+github.com/stretchr/testify v1.11.1 h1:7s2iGBzp5EwR7/aIZr8ao5+dra3wiQyKjjFuvgVKu7U=
+github.com/stretchr/testify v1.11.1/go.mod h1:wZwfW3scLgRK+23gO65QZefKpKQRnfz6sD981Nm4B6U=
go.opentelemetry.io/auto/sdk v1.1.0 h1:cH53jehLUN6UFLY71z+NDOiNJqDdPRaXzTel0sJySYA=
go.opentelemetry.io/auto/sdk v1.1.0/go.mod h1:3wSPjt5PWp2RhlCcmmOial7AvC4DQqZb7a7wCow3W8A=
go.opentelemetry.io/otel v1.34.0 h1:zRLXxLCgL1WyKsPVrgbSdMN4c0FMkDAskSTQP+0hdUY=

build_utils/go.mod Normal file

@@ -0,0 +1,3 @@
module source.quilibrium.com/quilibrium/monorepo/build_utils

go 1.23.2

build_utils/main.go Normal file

@@ -0,0 +1,301 @@
package main

import (
	"bytes"
	"flag"
	"fmt"
	"go/ast"
	"go/format"
	"go/parser"
	"go/token"
	"os"
	"path/filepath"
	"strings"
)

type finding struct {
	file   string
	pos    token.Position
	fn     string
	kind   string
	detail string
}

const allowDirective = "buildutils:allow-slice-alias"

func main() {
	flag.Usage = func() {
		fmt.Fprintf(flag.CommandLine.Output(),
			"Usage: %s <file-or-directory> [...]\n"+
				"Scans Go files for functions that accept slice parameters\n"+
				"and either return them directly or store them in struct fields.\n",
			os.Args[0])
		flag.PrintDefaults()
	}
	flag.Parse()
	if flag.NArg() == 0 {
		flag.Usage()
		os.Exit(1)
	}

	var files []string
	for _, path := range flag.Args() {
		expanded, err := expandPath(path)
		if err != nil {
			fmt.Fprintf(os.Stderr, "error enumerating %s: %v\n", path, err)
			os.Exit(1)
		}
		files = append(files, expanded...)
	}

	var allFindings []finding
	for _, file := range files {
		fs := token.NewFileSet()
		f, err := parser.ParseFile(fs, file, nil, parser.ParseComments)
		if err != nil {
			fmt.Fprintf(os.Stderr, "failed to parse %s: %v\n", file, err)
			continue
		}
		allFindings = append(allFindings, analyzeFile(fs, file, f)...)
	}

	if len(allFindings) == 0 {
		fmt.Println("No slice-to-struct assignments detected.")
		return
	}
	for _, finding := range allFindings {
		fmt.Printf("%s:%d:%d: [%s] %s in %s\n",
			finding.pos.Filename,
			finding.pos.Line,
			finding.pos.Column,
			finding.kind,
			finding.detail,
			finding.fn,
		)
	}
}

func expandPath(path string) ([]string, error) {
	info, err := os.Stat(path)
	if err != nil {
		return nil, err
	}
	if !info.IsDir() {
		if shouldIncludeFile(path) {
			return []string{path}, nil
		}
		return nil, nil
	}
	var files []string
	err = filepath.WalkDir(path, func(p string, d os.DirEntry, err error) error {
		if err != nil {
			return err
		}
		if d.IsDir() {
			if d.Name() == "vendor" || d.Name() == ".git" {
				return filepath.SkipDir
			}
			return nil
		}
		if shouldIncludeFile(p) {
			files = append(files, p)
		}
		return nil
	})
	return files, err
}

func analyzeFile(fs *token.FileSet, filename string, file *ast.File) []finding {
	var findings []finding
	commentMap := ast.NewCommentMap(fs, file, file.Comments)
	commentGroups := file.Comments
	for _, decl := range file.Decls {
		fn, ok := decl.(*ast.FuncDecl)
		if !ok || fn.Body == nil || fn.Type == nil || fn.Type.Params == nil {
			continue
		}
		if hasDirective(fs, commentMap, commentGroups, fn) {
			continue
		}
		paramObjs := map[*ast.Object]string{}
		for _, field := range fn.Type.Params.List {
			if hasDirective(fs, commentMap, commentGroups, field) {
				continue
			}
			if isSliceType(field.Type) {
				for _, name := range field.Names {
					if name != nil && name.Obj != nil {
						paramObjs[name.Obj] = name.Name
					}
				}
			}
		}
		if len(paramObjs) == 0 {
			continue
		}
		ast.Inspect(fn.Body, func(n ast.Node) bool {
			switch node := n.(type) {
			case *ast.ReturnStmt:
				if hasDirective(fs, commentMap, commentGroups, node) {
					return true
				}
				for _, result := range node.Results {
					if ident, ok := result.(*ast.Ident); ok {
						if pname, ok := paramObjs[ident.Obj]; ok {
							pos := fs.Position(ident.Pos())
							findings = append(findings, finding{
								file:   filename,
								pos:    pos,
								fn:     fn.Name.Name,
								kind:   "return",
								detail: fmt.Sprintf("returns slice parameter %q", pname),
							})
						}
					}
				}
			case *ast.AssignStmt:
				if hasDirective(fs, commentMap, commentGroups, node) {
					return true
				}
				for i, rhsExpr := range node.Rhs {
					if ident, ok := rhsExpr.(*ast.Ident); ok {
						if pname, ok := paramObjs[ident.Obj]; ok && i < len(node.Lhs) {
							pos := fs.Position(rhsExpr.Pos())
							lhsStr := exprString(node.Lhs[i])
							findings = append(findings, finding{
								file: filename,
								pos:  pos,
								fn:   fn.Name.Name,
								kind: "assignment",
								detail: fmt.Sprintf(
									"assigns slice parameter %q to %s",
									pname,
									lhsStr,
								),
							})
						}
					}
				}
			case *ast.CompositeLit:
				if hasDirective(fs, commentMap, commentGroups, node) {
					return true
				}
				for _, elt := range node.Elts {
					kv, ok := elt.(*ast.KeyValueExpr)
					if !ok {
						continue
					}
					if hasDirective(fs, commentMap, commentGroups, kv) {
						continue
					}
					if ident, ok := kv.Value.(*ast.Ident); ok {
						if pname, ok := paramObjs[ident.Obj]; ok {
							pos := fs.Position(kv.Value.Pos())
							field := exprString(kv.Key)
							findings = append(findings, finding{
								file: filename,
								pos:  pos,
								fn:   fn.Name.Name,
								kind: "struct literal",
								detail: fmt.Sprintf(
									"sets field %s to slice parameter %q",
									field,
									pname,
								),
							})
						}
					}
				}
			}
			return true
		})
	}
	return findings
}

func isSliceType(expr ast.Expr) bool {
	switch t := expr.(type) {
	case *ast.ArrayType:
		return t.Len == nil
	case *ast.Ellipsis:
		return true
	}
	return false
}

func hasDirective(
	fs *token.FileSet,
	cm ast.CommentMap,
	groups []*ast.CommentGroup,
	node ast.Node,
) bool {
	if node == nil {
		return false
	}
	if cm != nil {
		if mapped, ok := cm[node]; ok {
			if commentGroupHasDirective(mapped) {
				return true
			}
		}
	}
	nodePos := fs.Position(node.Pos())
	for _, group := range groups {
		for _, c := range group.List {
			if !bytes.Contains([]byte(c.Text), []byte(allowDirective)) {
				continue
			}
			commentPos := fs.Position(c.Slash)
			if commentPos.Filename != nodePos.Filename {
				continue
			}
			if commentPos.Line == nodePos.Line {
				return true
			}
			if commentPos.Line+1 == nodePos.Line && commentPos.Column == 1 {
				return true
			}
		}
	}
	return false
}

func commentGroupHasDirective(groups []*ast.CommentGroup) bool {
	for _, group := range groups {
		for _, c := range group.List {
			if bytes.Contains([]byte(c.Text), []byte(allowDirective)) {
				return true
			}
		}
	}
	return false
}

func exprString(expr ast.Expr) string {
	if expr == nil {
		return ""
	}
	var buf bytes.Buffer
	if err := format.Node(&buf, token.NewFileSet(), expr); err != nil {
		return ""
	}
	return buf.String()
}

func shouldIncludeFile(path string) bool {
	if filepath.Ext(path) != ".go" {
		return false
	}
	name := filepath.Base(path)
	if strings.HasSuffix(name, "_test.go") {
		return false
	}
	return true
}


@@ -1,8 +0,0 @@
#include <bulletproofs.h>
// This file exists because of
// https://github.com/golang/go/issues/11263
void cgo_rust_task_callback_bridge_bulletproofs(RustTaskCallback cb, const void * taskData, int8_t status) {
cb(taskData, status);
}


@@ -12,60 +12,67 @@ import (
	"unsafe"
)

-type RustBuffer = C.RustBuffer
+// This is needed, because as of go 1.24
+// type RustBuffer C.RustBuffer cannot have methods,
+// RustBuffer is treated as non-local type
+type GoRustBuffer struct {
+	inner C.RustBuffer
+}

type RustBufferI interface {
	AsReader() *bytes.Reader
	Free()
	ToGoBytes() []byte
	Data() unsafe.Pointer
-	Len() int
-	Capacity() int
+	Len() uint64
+	Capacity() uint64
}

-func RustBufferFromExternal(b RustBufferI) RustBuffer {
-	return RustBuffer{
-		capacity: C.int(b.Capacity()),
-		len:      C.int(b.Len()),
-		data:     (*C.uchar)(b.Data()),
+func RustBufferFromExternal(b RustBufferI) GoRustBuffer {
+	return GoRustBuffer{
+		inner: C.RustBuffer{
+			capacity: C.uint64_t(b.Capacity()),
+			len:      C.uint64_t(b.Len()),
+			data:     (*C.uchar)(b.Data()),
+		},
	}
}

-func (cb RustBuffer) Capacity() int {
-	return int(cb.capacity)
+func (cb GoRustBuffer) Capacity() uint64 {
+	return uint64(cb.inner.capacity)
}

-func (cb RustBuffer) Len() int {
-	return int(cb.len)
+func (cb GoRustBuffer) Len() uint64 {
+	return uint64(cb.inner.len)
}

-func (cb RustBuffer) Data() unsafe.Pointer {
-	return unsafe.Pointer(cb.data)
+func (cb GoRustBuffer) Data() unsafe.Pointer {
+	return unsafe.Pointer(cb.inner.data)
}

-func (cb RustBuffer) AsReader() *bytes.Reader {
-	b := unsafe.Slice((*byte)(cb.data), C.int(cb.len))
+func (cb GoRustBuffer) AsReader() *bytes.Reader {
+	b := unsafe.Slice((*byte)(cb.inner.data), C.uint64_t(cb.inner.len))
	return bytes.NewReader(b)
}

-func (cb RustBuffer) Free() {
+func (cb GoRustBuffer) Free() {
	rustCall(func(status *C.RustCallStatus) bool {
-		C.ffi_bulletproofs_rustbuffer_free(cb, status)
+		C.ffi_bulletproofs_rustbuffer_free(cb.inner, status)
		return false
	})
}

-func (cb RustBuffer) ToGoBytes() []byte {
-	return C.GoBytes(unsafe.Pointer(cb.data), C.int(cb.len))
+func (cb GoRustBuffer) ToGoBytes() []byte {
+	return C.GoBytes(unsafe.Pointer(cb.inner.data), C.int(cb.inner.len))
}

-func stringToRustBuffer(str string) RustBuffer {
+func stringToRustBuffer(str string) C.RustBuffer {
	return bytesToRustBuffer([]byte(str))
}

-func bytesToRustBuffer(b []byte) RustBuffer {
+func bytesToRustBuffer(b []byte) C.RustBuffer {
	if len(b) == 0 {
-		return RustBuffer{}
+		return C.RustBuffer{}
	}
	// We can pass the pointer along here, as it is pinned
	// for the duration of this call
@@ -74,7 +81,7 @@ func bytesToRustBuffer(b []byte) RustBuffer {
		data: (*C.uchar)(unsafe.Pointer(&b[0])),
	}
-	return rustCall(func(status *C.RustCallStatus) RustBuffer {
+	return rustCall(func(status *C.RustCallStatus) C.RustBuffer {
		return C.ffi_bulletproofs_rustbuffer_from_bytes(foreign, status)
	})
}
@@ -84,12 +91,7 @@ type BufLifter[GoType any] interface {
}

type BufLowerer[GoType any] interface {
-	Lower(value GoType) RustBuffer
-}
-
-type FfiConverter[GoType any, FfiType any] interface {
-	Lift(value FfiType) GoType
-	Lower(value GoType) FfiType
+	Lower(value GoType) C.RustBuffer
}

type BufReader[GoType any] interface {
@@ -100,12 +102,7 @@ type BufWriter[GoType any] interface {
	Write(writer io.Writer, value GoType)
}

-type FfiRustBufConverter[GoType any, FfiType any] interface {
-	FfiConverter[GoType, FfiType]
-	BufReader[GoType]
-}
-
-func LowerIntoRustBuffer[GoType any](bufWriter BufWriter[GoType], value GoType) RustBuffer {
+func LowerIntoRustBuffer[GoType any](bufWriter BufWriter[GoType], value GoType) C.RustBuffer {
	// This might be not the most efficient way but it does not require knowing allocation size
	// beforehand
	var buffer bytes.Buffer
@@ -130,31 +127,30 @@ func LiftFromRustBuffer[GoType any](bufReader BufReader[GoType], rbuf RustBuffer
	return item
}

-func rustCallWithError[U any](converter BufLifter[error], callback func(*C.RustCallStatus) U) (U, error) {
+func rustCallWithError[E any, U any](converter BufReader[*E], callback func(*C.RustCallStatus) U) (U, *E) {
	var status C.RustCallStatus
	returnValue := callback(&status)
	err := checkCallStatus(converter, status)
	return returnValue, err
}

-func checkCallStatus(converter BufLifter[error], status C.RustCallStatus) error {
+func checkCallStatus[E any](converter BufReader[*E], status C.RustCallStatus) *E {
	switch status.code {
	case 0:
		return nil
	case 1:
-		return converter.Lift(status.errorBuf)
+		return LiftFromRustBuffer(converter, GoRustBuffer{inner: status.errorBuf})
	case 2:
-		// when the rust code sees a panic, it tries to construct a rustbuffer
+		// when the rust code sees a panic, it tries to construct a rustBuffer
		// with the message. but if that code panics, then it just sends back
		// an empty buffer.
		if status.errorBuf.len > 0 {
-			panic(fmt.Errorf("%s", FfiConverterStringINSTANCE.Lift(status.errorBuf)))
+			panic(fmt.Errorf("%s", FfiConverterStringINSTANCE.Lift(GoRustBuffer{inner: status.errorBuf})))
		} else {
			panic(fmt.Errorf("Rust panicked while handling Rust panic"))
		}
	default:
-		return fmt.Errorf("unknown status code: %d", status.code)
+		panic(fmt.Errorf("unknown status code: %d", status.code))
	}
}
@@ -165,11 +161,13 @@ func checkCallStatusUnknown(status C.RustCallStatus) error {
	case 1:
		panic(fmt.Errorf("function not returning an error returned an error"))
	case 2:
-		// when the rust code sees a panic, it tries to construct a rustbuffer
+		// when the rust code sees a panic, it tries to construct a C.RustBuffer
		// with the message. but if that code panics, then it just sends back
		// an empty buffer.
		if status.errorBuf.len > 0 {
-			panic(fmt.Errorf("%s", FfiConverterStringINSTANCE.Lift(status.errorBuf)))
+			panic(fmt.Errorf("%s", FfiConverterStringINSTANCE.Lift(GoRustBuffer{
+				inner: status.errorBuf,
+			})))
		} else {
			panic(fmt.Errorf("Rust panicked while handling Rust panic"))
		}
@@ -179,13 +177,17 @@ func checkCallStatusUnknown(status C.RustCallStatus) error {
}

func rustCall[U any](callback func(*C.RustCallStatus) U) U {
-	returnValue, err := rustCallWithError(nil, callback)
+	returnValue, err := rustCallWithError[error](nil, callback)
	if err != nil {
		panic(err)
	}
	return returnValue
}

+type NativeError interface {
+	AsError() error
+}
+
func writeInt8(writer io.Writer, value int8) {
	if err := binary.Write(writer, binary.BigEndian, value); err != nil {
		panic(err)
@@ -333,191 +335,191 @@ func init() {

func uniffiCheckChecksums() {
	// Get the bindings contract version from our ComponentInterface
-	bindingsContractVersion := 24
+	bindingsContractVersion := 26
	// Get the scaffolding contract version by calling the into the dylib
-	scaffoldingContractVersion := rustCall(func(uniffiStatus *C.RustCallStatus) C.uint32_t {
-		return C.ffi_bulletproofs_uniffi_contract_version(uniffiStatus)
+	scaffoldingContractVersion := rustCall(func(_uniffiStatus *C.RustCallStatus) C.uint32_t {
+		return C.ffi_bulletproofs_uniffi_contract_version()
	})
	if bindingsContractVersion != int(scaffoldingContractVersion) {
		// If this happens try cleaning and rebuilding your project
		panic("bulletproofs: UniFFI contract version mismatch")
	}
	{
-		checksum := rustCall(func(uniffiStatus *C.RustCallStatus) C.uint16_t {
-			return C.uniffi_bulletproofs_checksum_func_alt_generator(uniffiStatus)
+		checksum := rustCall(func(_uniffiStatus *C.RustCallStatus) C.uint16_t {
+			return C.uniffi_bulletproofs_checksum_func_alt_generator()
		})
-		if checksum != 26422 {
+		if checksum != 26339 {
			// If this happens try cleaning and rebuilding your project
			panic("bulletproofs: uniffi_bulletproofs_checksum_func_alt_generator: UniFFI API checksum mismatch")
		}
	}
	{
-		checksum := rustCall(func(uniffiStatus *C.RustCallStatus) C.uint16_t {
-			return C.uniffi_bulletproofs_checksum_func_generate_input_commitments(uniffiStatus)
+		checksum := rustCall(func(_uniffiStatus *C.RustCallStatus) C.uint16_t {
+			return C.uniffi_bulletproofs_checksum_func_generate_input_commitments()
		})
-		if checksum != 65001 {
+		if checksum != 19822 {
			// If this happens try cleaning and rebuilding your project
			panic("bulletproofs: uniffi_bulletproofs_checksum_func_generate_input_commitments: UniFFI API checksum mismatch")
		}
	}
	{
-		checksum := rustCall(func(uniffiStatus *C.RustCallStatus) C.uint16_t {
-			return C.uniffi_bulletproofs_checksum_func_generate_range_proof(uniffiStatus)
+		checksum := rustCall(func(_uniffiStatus *C.RustCallStatus) C.uint16_t {
+			return C.uniffi_bulletproofs_checksum_func_generate_range_proof()
		})
-		if checksum != 40322 {
+		if checksum != 985 {
			// If this happens try cleaning and rebuilding your project
			panic("bulletproofs: uniffi_bulletproofs_checksum_func_generate_range_proof: UniFFI API checksum mismatch")
		}
	}
	{
-		checksum := rustCall(func(uniffiStatus *C.RustCallStatus) C.uint16_t {
-			return C.uniffi_bulletproofs_checksum_func_hash_to_scalar(uniffiStatus)
+		checksum := rustCall(func(_uniffiStatus *C.RustCallStatus) C.uint16_t {
+			return C.uniffi_bulletproofs_checksum_func_hash_to_scalar()
		})
-		if checksum != 19176 {
+		if checksum != 13632 {
			// If this happens try cleaning and rebuilding your project
			panic("bulletproofs: uniffi_bulletproofs_checksum_func_hash_to_scalar: UniFFI API checksum mismatch")
		}
	}
	{
-		checksum := rustCall(func(uniffiStatus *C.RustCallStatus) C.uint16_t {
-			return C.uniffi_bulletproofs_checksum_func_keygen(uniffiStatus)
+		checksum := rustCall(func(_uniffiStatus *C.RustCallStatus) C.uint16_t {
+			return C.uniffi_bulletproofs_checksum_func_keygen()
		})
-		if checksum != 46171 {
+		if checksum != 9609 {
			// If this happens try cleaning and rebuilding your project
			panic("bulletproofs: uniffi_bulletproofs_checksum_func_keygen: UniFFI API checksum mismatch")
		}
	}
	{
-		checksum := rustCall(func(uniffiStatus *C.RustCallStatus) C.uint16_t {
-			return C.uniffi_bulletproofs_checksum_func_point_addition(uniffiStatus)
+		checksum := rustCall(func(_uniffiStatus *C.RustCallStatus) C.uint16_t {
+			return C.uniffi_bulletproofs_checksum_func_point_addition()
		})
-		if checksum != 6828 {
+		if checksum != 32221 {
			// If this happens try cleaning and rebuilding your project
			panic("bulletproofs: uniffi_bulletproofs_checksum_func_point_addition: UniFFI API checksum mismatch")
		}
	}
	{
-		checksum := rustCall(func(uniffiStatus *C.RustCallStatus) C.uint16_t {
-			return C.uniffi_bulletproofs_checksum_func_point_subtraction(uniffiStatus)
+		checksum := rustCall(func(_uniffiStatus *C.RustCallStatus) C.uint16_t {
+			return C.uniffi_bulletproofs_checksum_func_point_subtraction()
		})
-		if checksum != 48479 {
+		if checksum != 38806 {
			// If this happens try cleaning and rebuilding your project
			panic("bulletproofs: uniffi_bulletproofs_checksum_func_point_subtraction: UniFFI API checksum mismatch")
		}
	}
	{
-		checksum := rustCall(func(uniffiStatus *C.RustCallStatus) C.uint16_t {
-			return C.uniffi_bulletproofs_checksum_func_scalar_addition(uniffiStatus)
+		checksum := rustCall(func(_uniffiStatus *C.RustCallStatus) C.uint16_t {
+			return C.uniffi_bulletproofs_checksum_func_scalar_addition()
		})
-		if checksum != 29576 {
+		if checksum != 60180 {
			// If this happens try cleaning and rebuilding your project
			panic("bulletproofs: uniffi_bulletproofs_checksum_func_scalar_addition: UniFFI API checksum mismatch")
		}
	}
	{
-		checksum := rustCall(func(uniffiStatus *C.RustCallStatus) C.uint16_t {
-			return C.uniffi_bulletproofs_checksum_func_scalar_inverse(uniffiStatus)
+		checksum := rustCall(func(_uniffiStatus *C.RustCallStatus) C.uint16_t {
+			return C.uniffi_bulletproofs_checksum_func_scalar_inverse()
		})
-		if checksum != 11499 {
+		if checksum != 37774 {
			// If this happens try cleaning and rebuilding your project
			panic("bulletproofs: uniffi_bulletproofs_checksum_func_scalar_inverse: UniFFI API checksum mismatch")
		}
	}
	{
-		checksum := rustCall(func(uniffiStatus *C.RustCallStatus) C.uint16_t {
-			return C.uniffi_bulletproofs_checksum_func_scalar_mult(uniffiStatus)
+		checksum := rustCall(func(_uniffiStatus *C.RustCallStatus) C.uint16_t {
+			return C.uniffi_bulletproofs_checksum_func_scalar_mult()
		})
-		if checksum != 6075 {
+		if checksum != 45102 {
			// If this happens try cleaning and rebuilding your project
			panic("bulletproofs: uniffi_bulletproofs_checksum_func_scalar_mult: UniFFI API checksum mismatch")
		}
	}
	{
-		checksum := rustCall(func(uniffiStatus *C.RustCallStatus) C.uint16_t {
-			return C.uniffi_bulletproofs_checksum_func_scalar_mult_hash_to_scalar(uniffiStatus)
+		checksum := rustCall(func(_uniffiStatus *C.RustCallStatus) C.uint16_t {
+			return C.uniffi_bulletproofs_checksum_func_scalar_mult_hash_to_scalar()
		})
-		if checksum != 53652 {
+		if checksum != 53592 {
			// If this happens try cleaning and rebuilding your project
			panic("bulletproofs: uniffi_bulletproofs_checksum_func_scalar_mult_hash_to_scalar: UniFFI API checksum mismatch")
		}
	}
	{
-		checksum := rustCall(func(uniffiStatus *C.RustCallStatus) C.uint16_t {
-			return C.uniffi_bulletproofs_checksum_func_scalar_mult_point(uniffiStatus)
+		checksum := rustCall(func(_uniffiStatus *C.RustCallStatus) C.uint16_t {
+			return C.uniffi_bulletproofs_checksum_func_scalar_mult_point()
		})
-		if checksum != 46237 {
+		if checksum != 61743 {
			// If this happens try cleaning and rebuilding your project
			panic("bulletproofs: uniffi_bulletproofs_checksum_func_scalar_mult_point: UniFFI API checksum mismatch")
		}
	}
	{
-		checksum := rustCall(func(uniffiStatus *C.RustCallStatus) C.uint16_t {
-			return C.uniffi_bulletproofs_checksum_func_scalar_subtraction(uniffiStatus)
+		checksum := rustCall(func(_uniffiStatus *C.RustCallStatus) C.uint16_t {
+			return C.uniffi_bulletproofs_checksum_func_scalar_subtraction()
		})
-		if checksum != 13728 {
+		if checksum != 7250 {
			// If this happens try cleaning and rebuilding your project
			panic("bulletproofs: uniffi_bulletproofs_checksum_func_scalar_subtraction: UniFFI API checksum mismatch")
		}
	}
	{
-		checksum := rustCall(func(uniffiStatus *C.RustCallStatus) C.uint16_t {
-			return C.uniffi_bulletproofs_checksum_func_scalar_to_point(uniffiStatus)
+		checksum := rustCall(func(_uniffiStatus *C.RustCallStatus) C.uint16_t {
+			return C.uniffi_bulletproofs_checksum_func_scalar_to_point()
		})
-		if checksum != 61077 {
+		if checksum != 51818 {
			// If this happens try cleaning and rebuilding your project
			panic("bulletproofs: uniffi_bulletproofs_checksum_func_scalar_to_point: UniFFI API checksum mismatch")
		}
	}
	{
-		checksum := rustCall(func(uniffiStatus *C.RustCallStatus) C.uint16_t {
-			return C.uniffi_bulletproofs_checksum_func_sign_hidden(uniffiStatus)
+		checksum := rustCall(func(_uniffiStatus *C.RustCallStatus) C.uint16_t {
+			return C.uniffi_bulletproofs_checksum_func_sign_hidden()
		})
-		if checksum != 57560 {
+		if checksum != 32104 {
			// If this happens try cleaning and rebuilding your project
			panic("bulletproofs: uniffi_bulletproofs_checksum_func_sign_hidden: UniFFI API checksum mismatch")
		}
	}
	{
-		checksum := rustCall(func(uniffiStatus *C.RustCallStatus) C.uint16_t {
-			return C.uniffi_bulletproofs_checksum_func_sign_simple(uniffiStatus)
+		checksum := rustCall(func(_uniffiStatus *C.RustCallStatus) C.uint16_t {
+			return C.uniffi_bulletproofs_checksum_func_sign_simple()
		})
-		if checksum != 5535 {
+		if checksum != 35259 {
			// If this happens try cleaning and rebuilding your project
			panic("bulletproofs: uniffi_bulletproofs_checksum_func_sign_simple: UniFFI API checksum mismatch")
		}
	}
	{
-		checksum := rustCall(func(uniffiStatus *C.RustCallStatus) C.uint16_t {
-			return C.uniffi_bulletproofs_checksum_func_sum_check(uniffiStatus)
+		checksum := rustCall(func(_uniffiStatus *C.RustCallStatus) C.uint16_t {
+			return C.uniffi_bulletproofs_checksum_func_sum_check()
		})
-		if checksum != 18164 {
+		if checksum != 47141 {
			// If this happens try cleaning and rebuilding your project
			panic("bulletproofs: uniffi_bulletproofs_checksum_func_sum_check: UniFFI API checksum mismatch")
		}
	}
	{
-		checksum := rustCall(func(uniffiStatus *C.RustCallStatus) C.uint16_t {
-			return C.uniffi_bulletproofs_checksum_func_verify_hidden(uniffiStatus)
+		checksum := rustCall(func(_uniffiStatus *C.RustCallStatus) C.uint16_t {
+			return C.uniffi_bulletproofs_checksum_func_verify_hidden()
		})
-		if checksum != 55266 {
+		if checksum != 64726 {
			// If this happens try cleaning and rebuilding your project
			panic("bulletproofs: uniffi_bulletproofs_checksum_func_verify_hidden: UniFFI API checksum mismatch")
		}
	}
	{
-		checksum := rustCall(func(uniffiStatus *C.RustCallStatus) C.uint16_t {
-			return C.uniffi_bulletproofs_checksum_func_verify_range_proof(uniffiStatus)
+		checksum := rustCall(func(_uniffiStatus *C.RustCallStatus) C.uint16_t {
+			return C.uniffi_bulletproofs_checksum_func_verify_range_proof()
		})
-		if checksum != 37611 {
+		if checksum != 62924 {
			// If this happens try cleaning and rebuilding your project
			panic("bulletproofs: uniffi_bulletproofs_checksum_func_verify_range_proof: UniFFI API checksum mismatch")
		}
	}
	{
-		checksum := rustCall(func(uniffiStatus *C.RustCallStatus) C.uint16_t {
-			return C.uniffi_bulletproofs_checksum_func_verify_simple(uniffiStatus)
+		checksum := rustCall(func(_uniffiStatus *C.RustCallStatus) C.uint16_t {
+			return C.uniffi_bulletproofs_checksum_func_verify_simple()
		})
-		if checksum != 32821 {
+		if checksum != 27860 {
			// If this happens try cleaning and rebuilding your project
			panic("bulletproofs: uniffi_bulletproofs_checksum_func_verify_simple: UniFFI API checksum mismatch")
		}
@@ -621,7 +623,7 @@ func (FfiConverterString) Read(reader io.Reader) string {
	length := readInt32(reader)
	buffer := make([]byte, length)
	read_length, err := reader.Read(buffer)
-	if err != nil {
+	if err != nil && err != io.EOF {
		panic(err)
	}
	if read_length != int(length) {
@@ -630,7 +632,7 @@ func (FfiConverterString) Read(reader io.Reader) string {
	return string(buffer)
}

-func (FfiConverterString) Lower(value string) RustBuffer {
+func (FfiConverterString) Lower(value string) C.RustBuffer {
	return stringToRustBuffer(value)
}
@@ -665,15 +667,15 @@ func (r *RangeProofResult) Destroy() {
	FfiDestroyerSequenceUint8{}.Destroy(r.Blinding)
}

-type FfiConverterTypeRangeProofResult struct{}
+type FfiConverterRangeProofResult struct{}

-var FfiConverterTypeRangeProofResultINSTANCE = FfiConverterTypeRangeProofResult{}
+var FfiConverterRangeProofResultINSTANCE = FfiConverterRangeProofResult{}

-func (c FfiConverterTypeRangeProofResult) Lift(rb RustBufferI) RangeProofResult {
+func (c FfiConverterRangeProofResult) Lift(rb RustBufferI) RangeProofResult {
	return LiftFromRustBuffer[RangeProofResult](c, rb)
}

-func (c FfiConverterTypeRangeProofResult) Read(reader io.Reader) RangeProofResult {
+func (c FfiConverterRangeProofResult) Read(reader io.Reader) RangeProofResult {
	return RangeProofResult{
		FfiConverterSequenceUint8INSTANCE.Read(reader),
		FfiConverterSequenceUint8INSTANCE.Read(reader),
@@ -681,19 +683,19 @@ func (c FfiConverterRangeProofResult) Read(reader io.Reader) RangeProofResult {
     }
 }
-func (c FfiConverterTypeRangeProofResult) Lower(value RangeProofResult) RustBuffer {
+func (c FfiConverterRangeProofResult) Lower(value RangeProofResult) C.RustBuffer {
     return LowerIntoRustBuffer[RangeProofResult](c, value)
 }
-func (c FfiConverterTypeRangeProofResult) Write(writer io.Writer, value RangeProofResult) {
+func (c FfiConverterRangeProofResult) Write(writer io.Writer, value RangeProofResult) {
     FfiConverterSequenceUint8INSTANCE.Write(writer, value.Proof)
     FfiConverterSequenceUint8INSTANCE.Write(writer, value.Commitment)
     FfiConverterSequenceUint8INSTANCE.Write(writer, value.Blinding)
 }
-type FfiDestroyerTypeRangeProofResult struct{}
+type FfiDestroyerRangeProofResult struct{}
-func (_ FfiDestroyerTypeRangeProofResult) Destroy(value RangeProofResult) {
+func (_ FfiDestroyerRangeProofResult) Destroy(value RangeProofResult) {
     value.Destroy()
 }
@@ -717,7 +719,7 @@ func (c FfiConverterSequenceUint8) Read(reader io.Reader) []uint8 {
     return result
 }
-func (c FfiConverterSequenceUint8) Lower(value []uint8) RustBuffer {
+func (c FfiConverterSequenceUint8) Lower(value []uint8) C.RustBuffer {
     return LowerIntoRustBuffer[[]uint8](c, value)
 }
@@ -760,7 +762,7 @@ func (c FfiConverterSequenceSequenceUint8) Read(reader io.Reader) [][]uint8 {
     return result
 }
-func (c FfiConverterSequenceSequenceUint8) Lower(value [][]uint8) RustBuffer {
+func (c FfiConverterSequenceSequenceUint8) Lower(value [][]uint8) C.RustBuffer {
     return LowerIntoRustBuffer[[][]uint8](c, value)
 }
@@ -785,97 +787,129 @@ func (FfiDestroyerSequenceSequenceUint8) Destroy(sequence [][]uint8) {
 func AltGenerator() []uint8 {
     return FfiConverterSequenceUint8INSTANCE.Lift(rustCall(func(_uniffiStatus *C.RustCallStatus) RustBufferI {
-        return C.uniffi_bulletproofs_fn_func_alt_generator(_uniffiStatus)
+        return GoRustBuffer{
+            inner: C.uniffi_bulletproofs_fn_func_alt_generator(_uniffiStatus),
+        }
     }))
 }
 func GenerateInputCommitments(values [][]uint8, blinding []uint8) []uint8 {
     return FfiConverterSequenceUint8INSTANCE.Lift(rustCall(func(_uniffiStatus *C.RustCallStatus) RustBufferI {
-        return C.uniffi_bulletproofs_fn_func_generate_input_commitments(FfiConverterSequenceSequenceUint8INSTANCE.Lower(values), FfiConverterSequenceUint8INSTANCE.Lower(blinding), _uniffiStatus)
+        return GoRustBuffer{
+            inner: C.uniffi_bulletproofs_fn_func_generate_input_commitments(FfiConverterSequenceSequenceUint8INSTANCE.Lower(values), FfiConverterSequenceUint8INSTANCE.Lower(blinding), _uniffiStatus),
+        }
     }))
 }
 func GenerateRangeProof(values [][]uint8, blinding []uint8, bitSize uint64) RangeProofResult {
-    return FfiConverterTypeRangeProofResultINSTANCE.Lift(rustCall(func(_uniffiStatus *C.RustCallStatus) RustBufferI {
-        return C.uniffi_bulletproofs_fn_func_generate_range_proof(FfiConverterSequenceSequenceUint8INSTANCE.Lower(values), FfiConverterSequenceUint8INSTANCE.Lower(blinding), FfiConverterUint64INSTANCE.Lower(bitSize), _uniffiStatus)
+    return FfiConverterRangeProofResultINSTANCE.Lift(rustCall(func(_uniffiStatus *C.RustCallStatus) RustBufferI {
+        return GoRustBuffer{
+            inner: C.uniffi_bulletproofs_fn_func_generate_range_proof(FfiConverterSequenceSequenceUint8INSTANCE.Lower(values), FfiConverterSequenceUint8INSTANCE.Lower(blinding), FfiConverterUint64INSTANCE.Lower(bitSize), _uniffiStatus),
+        }
     }))
 }
 func HashToScalar(input []uint8) []uint8 {
     return FfiConverterSequenceUint8INSTANCE.Lift(rustCall(func(_uniffiStatus *C.RustCallStatus) RustBufferI {
-        return C.uniffi_bulletproofs_fn_func_hash_to_scalar(FfiConverterSequenceUint8INSTANCE.Lower(input), _uniffiStatus)
+        return GoRustBuffer{
+            inner: C.uniffi_bulletproofs_fn_func_hash_to_scalar(FfiConverterSequenceUint8INSTANCE.Lower(input), _uniffiStatus),
+        }
     }))
 }
 func Keygen() []uint8 {
     return FfiConverterSequenceUint8INSTANCE.Lift(rustCall(func(_uniffiStatus *C.RustCallStatus) RustBufferI {
-        return C.uniffi_bulletproofs_fn_func_keygen(_uniffiStatus)
+        return GoRustBuffer{
+            inner: C.uniffi_bulletproofs_fn_func_keygen(_uniffiStatus),
+        }
     }))
 }
 func PointAddition(inputPoint []uint8, publicPoint []uint8) []uint8 {
     return FfiConverterSequenceUint8INSTANCE.Lift(rustCall(func(_uniffiStatus *C.RustCallStatus) RustBufferI {
-        return C.uniffi_bulletproofs_fn_func_point_addition(FfiConverterSequenceUint8INSTANCE.Lower(inputPoint), FfiConverterSequenceUint8INSTANCE.Lower(publicPoint), _uniffiStatus)
+        return GoRustBuffer{
+            inner: C.uniffi_bulletproofs_fn_func_point_addition(FfiConverterSequenceUint8INSTANCE.Lower(inputPoint), FfiConverterSequenceUint8INSTANCE.Lower(publicPoint), _uniffiStatus),
+        }
     }))
 }
 func PointSubtraction(inputPoint []uint8, publicPoint []uint8) []uint8 {
     return FfiConverterSequenceUint8INSTANCE.Lift(rustCall(func(_uniffiStatus *C.RustCallStatus) RustBufferI {
-        return C.uniffi_bulletproofs_fn_func_point_subtraction(FfiConverterSequenceUint8INSTANCE.Lower(inputPoint), FfiConverterSequenceUint8INSTANCE.Lower(publicPoint), _uniffiStatus)
+        return GoRustBuffer{
+            inner: C.uniffi_bulletproofs_fn_func_point_subtraction(FfiConverterSequenceUint8INSTANCE.Lower(inputPoint), FfiConverterSequenceUint8INSTANCE.Lower(publicPoint), _uniffiStatus),
+        }
     }))
 }
 func ScalarAddition(lhs []uint8, rhs []uint8) []uint8 {
     return FfiConverterSequenceUint8INSTANCE.Lift(rustCall(func(_uniffiStatus *C.RustCallStatus) RustBufferI {
-        return C.uniffi_bulletproofs_fn_func_scalar_addition(FfiConverterSequenceUint8INSTANCE.Lower(lhs), FfiConverterSequenceUint8INSTANCE.Lower(rhs), _uniffiStatus)
+        return GoRustBuffer{
+            inner: C.uniffi_bulletproofs_fn_func_scalar_addition(FfiConverterSequenceUint8INSTANCE.Lower(lhs), FfiConverterSequenceUint8INSTANCE.Lower(rhs), _uniffiStatus),
+        }
     }))
 }
 func ScalarInverse(inputScalar []uint8) []uint8 {
     return FfiConverterSequenceUint8INSTANCE.Lift(rustCall(func(_uniffiStatus *C.RustCallStatus) RustBufferI {
-        return C.uniffi_bulletproofs_fn_func_scalar_inverse(FfiConverterSequenceUint8INSTANCE.Lower(inputScalar), _uniffiStatus)
+        return GoRustBuffer{
+            inner: C.uniffi_bulletproofs_fn_func_scalar_inverse(FfiConverterSequenceUint8INSTANCE.Lower(inputScalar), _uniffiStatus),
+        }
     }))
 }
 func ScalarMult(lhs []uint8, rhs []uint8) []uint8 {
     return FfiConverterSequenceUint8INSTANCE.Lift(rustCall(func(_uniffiStatus *C.RustCallStatus) RustBufferI {
-        return C.uniffi_bulletproofs_fn_func_scalar_mult(FfiConverterSequenceUint8INSTANCE.Lower(lhs), FfiConverterSequenceUint8INSTANCE.Lower(rhs), _uniffiStatus)
+        return GoRustBuffer{
+            inner: C.uniffi_bulletproofs_fn_func_scalar_mult(FfiConverterSequenceUint8INSTANCE.Lower(lhs), FfiConverterSequenceUint8INSTANCE.Lower(rhs), _uniffiStatus),
+        }
     }))
 }
 func ScalarMultHashToScalar(inputScalar []uint8, publicPoint []uint8) []uint8 {
     return FfiConverterSequenceUint8INSTANCE.Lift(rustCall(func(_uniffiStatus *C.RustCallStatus) RustBufferI {
-        return C.uniffi_bulletproofs_fn_func_scalar_mult_hash_to_scalar(FfiConverterSequenceUint8INSTANCE.Lower(inputScalar), FfiConverterSequenceUint8INSTANCE.Lower(publicPoint), _uniffiStatus)
+        return GoRustBuffer{
+            inner: C.uniffi_bulletproofs_fn_func_scalar_mult_hash_to_scalar(FfiConverterSequenceUint8INSTANCE.Lower(inputScalar), FfiConverterSequenceUint8INSTANCE.Lower(publicPoint), _uniffiStatus),
+        }
     }))
 }
 func ScalarMultPoint(inputScalar []uint8, publicPoint []uint8) []uint8 {
     return FfiConverterSequenceUint8INSTANCE.Lift(rustCall(func(_uniffiStatus *C.RustCallStatus) RustBufferI {
-        return C.uniffi_bulletproofs_fn_func_scalar_mult_point(FfiConverterSequenceUint8INSTANCE.Lower(inputScalar), FfiConverterSequenceUint8INSTANCE.Lower(publicPoint), _uniffiStatus)
+        return GoRustBuffer{
+            inner: C.uniffi_bulletproofs_fn_func_scalar_mult_point(FfiConverterSequenceUint8INSTANCE.Lower(inputScalar), FfiConverterSequenceUint8INSTANCE.Lower(publicPoint), _uniffiStatus),
+        }
     }))
 }
 func ScalarSubtraction(lhs []uint8, rhs []uint8) []uint8 {
     return FfiConverterSequenceUint8INSTANCE.Lift(rustCall(func(_uniffiStatus *C.RustCallStatus) RustBufferI {
-        return C.uniffi_bulletproofs_fn_func_scalar_subtraction(FfiConverterSequenceUint8INSTANCE.Lower(lhs), FfiConverterSequenceUint8INSTANCE.Lower(rhs), _uniffiStatus)
+        return GoRustBuffer{
+            inner: C.uniffi_bulletproofs_fn_func_scalar_subtraction(FfiConverterSequenceUint8INSTANCE.Lower(lhs), FfiConverterSequenceUint8INSTANCE.Lower(rhs), _uniffiStatus),
+        }
     }))
 }
 func ScalarToPoint(input []uint8) []uint8 {
     return FfiConverterSequenceUint8INSTANCE.Lift(rustCall(func(_uniffiStatus *C.RustCallStatus) RustBufferI {
-        return C.uniffi_bulletproofs_fn_func_scalar_to_point(FfiConverterSequenceUint8INSTANCE.Lower(input), _uniffiStatus)
+        return GoRustBuffer{
+            inner: C.uniffi_bulletproofs_fn_func_scalar_to_point(FfiConverterSequenceUint8INSTANCE.Lower(input), _uniffiStatus),
+        }
     }))
 }
 func SignHidden(x []uint8, t []uint8, a []uint8, r []uint8) []uint8 {
     return FfiConverterSequenceUint8INSTANCE.Lift(rustCall(func(_uniffiStatus *C.RustCallStatus) RustBufferI {
-        return C.uniffi_bulletproofs_fn_func_sign_hidden(FfiConverterSequenceUint8INSTANCE.Lower(x), FfiConverterSequenceUint8INSTANCE.Lower(t), FfiConverterSequenceUint8INSTANCE.Lower(a), FfiConverterSequenceUint8INSTANCE.Lower(r), _uniffiStatus)
+        return GoRustBuffer{
+            inner: C.uniffi_bulletproofs_fn_func_sign_hidden(FfiConverterSequenceUint8INSTANCE.Lower(x), FfiConverterSequenceUint8INSTANCE.Lower(t), FfiConverterSequenceUint8INSTANCE.Lower(a), FfiConverterSequenceUint8INSTANCE.Lower(r), _uniffiStatus),
+        }
     }))
 }
 func SignSimple(secret []uint8, message []uint8) []uint8 {
     return FfiConverterSequenceUint8INSTANCE.Lift(rustCall(func(_uniffiStatus *C.RustCallStatus) RustBufferI {
-        return C.uniffi_bulletproofs_fn_func_sign_simple(FfiConverterSequenceUint8INSTANCE.Lower(secret), FfiConverterSequenceUint8INSTANCE.Lower(message), _uniffiStatus)
+        return GoRustBuffer{
+            inner: C.uniffi_bulletproofs_fn_func_sign_simple(FfiConverterSequenceUint8INSTANCE.Lower(secret), FfiConverterSequenceUint8INSTANCE.Lower(message), _uniffiStatus),
+        }
     }))
 }

File diff suppressed because it is too large


@@ -1,6 +1,6 @@
 module source.quilibrium.com/quilibrium/monorepo/bulletproofs

-go 1.23.2
+go 1.24.0

 replace github.com/libp2p/go-libp2p => ../go-libp2p
@@ -19,16 +19,20 @@ require (
 require (
     github.com/cloudflare/circl v1.6.1 // indirect
+    github.com/decred/dcrd/dcrec/secp256k1/v4 v4.4.0 // indirect
     github.com/grpc-ecosystem/grpc-gateway/v2 v2.26.3 // indirect
     github.com/iden3/go-iden3-crypto v0.0.17 // indirect
     github.com/ipfs/go-cid v0.5.0 // indirect
     github.com/klauspost/cpuid/v2 v2.2.10 // indirect
+    github.com/libp2p/go-buffer-pool v0.1.0 // indirect
+    github.com/libp2p/go-libp2p v0.41.1 // indirect
     github.com/minio/sha256-simd v1.0.1 // indirect
     github.com/mr-tron/base58 v1.2.0 // indirect
     github.com/multiformats/go-base32 v0.1.0 // indirect
     github.com/multiformats/go-base36 v0.2.0 // indirect
     github.com/multiformats/go-multiaddr v0.16.1 // indirect
     github.com/multiformats/go-multibase v0.2.0 // indirect
+    github.com/multiformats/go-multicodec v0.9.1 // indirect
     github.com/multiformats/go-multihash v0.2.3 // indirect
     github.com/multiformats/go-varint v0.0.7 // indirect
     github.com/spaolacci/murmur3 v1.1.0 // indirect


@@ -2,6 +2,10 @@ github.com/cloudflare/circl v1.6.1 h1:zqIqSPIndyBh1bjLVVDHMPpVKqp8Su/V+6MeDzzQBQ
 github.com/cloudflare/circl v1.6.1/go.mod h1:uddAzsPgqdMAYatqJ0lsjX1oECcQLIlRpzZh3pJrofs=
 github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
 github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
+github.com/decred/dcrd/crypto/blake256 v1.1.0 h1:zPMNGQCm0g4QTY27fOCorQW7EryeQ/U0x++OzVrdms8=
+github.com/decred/dcrd/crypto/blake256 v1.1.0/go.mod h1:2OfgNZ5wDpcsFmHmCK5gZTPcCXqlm2ArzUIkw9czNJo=
+github.com/decred/dcrd/dcrec/secp256k1/v4 v4.4.0 h1:NMZiJj8QnKe1LgsbDayM4UoHwbvwDRwnI3hwNaAHRnc=
+github.com/decred/dcrd/dcrec/secp256k1/v4 v4.4.0/go.mod h1:ZXNYxsqcloTdSy/rNShjYzMhyjf0LaoftYK0p+A3h40=
 github.com/go-logr/logr v1.4.2 h1:6pFjapn8bFcIbiKo3XT4j/BhANplGihG6tvd+8rYgrY=
 github.com/go-logr/logr v1.4.2/go.mod h1:9T104GzyrTigFIr8wt5mBrctHMim0Nb2HLGrmQ40KvY=
 github.com/go-logr/stdr v1.2.2 h1:hSWxHoqTgW2S2qGc0LTAI563KZ5YKYRhT3MFKZMbjag=
@@ -22,6 +26,8 @@ github.com/klauspost/cpuid/v2 v2.2.10 h1:tBs3QSyvjDyFTq3uoc/9xFpCuOsJQFNPiAhYdw2
 github.com/klauspost/cpuid/v2 v2.2.10/go.mod h1:hqwkgyIinND0mEev00jJYCxPNVRVXFQeu1XKlok6oO0=
 github.com/leanovate/gopter v0.2.9 h1:fQjYxZaynp97ozCzfOyOuAGOU4aU/z37zf/tOujFk7c=
 github.com/leanovate/gopter v0.2.9/go.mod h1:U2L/78B+KVFIx2VmW6onHJQzXtFb+p5y3y2Sh+Jxxv8=
+github.com/libp2p/go-buffer-pool v0.1.0 h1:oK4mSFcQz7cTQIfqbe4MIj9gLW+mnanjyFtc6cdF0Y8=
+github.com/libp2p/go-buffer-pool v0.1.0/go.mod h1:N+vh8gMqimBzdKkSMVuydVDq+UV5QTWy5HSiZacSbPg=
 github.com/minio/sha256-simd v1.0.1 h1:6kaan5IFmwTNynnKKpDHe6FWHohJOHhCPchzK49dzMM=
 github.com/minio/sha256-simd v1.0.1/go.mod h1:Pz6AKMiUdngCLpeTL/RJY1M9rUuPMYujV5xJjtbRSN8=
 github.com/mr-tron/base58 v1.2.0 h1:T/HDJBh4ZCPbU39/+c3rRvE0uKBQlU27+QI8LJ4t64o=
@@ -32,6 +38,8 @@ github.com/multiformats/go-base36 v0.2.0 h1:lFsAbNOGeKtuKozrtBsAkSVhv1p9D0/qedU9
 github.com/multiformats/go-base36 v0.2.0/go.mod h1:qvnKE++v+2MWCfePClUEjE78Z7P2a1UV0xHgWc0hkp4=
 github.com/multiformats/go-multibase v0.2.0 h1:isdYCVLvksgWlMW9OZRYJEa9pZETFivncJHmHnnd87g=
 github.com/multiformats/go-multibase v0.2.0/go.mod h1:bFBZX4lKCA/2lyOFSAoKH5SS6oPyjtnzK/XTFDPkNuk=
+github.com/multiformats/go-multicodec v0.9.1 h1:x/Fuxr7ZuR4jJV4Os5g444F7xC4XmyUaT/FWtE+9Zjo=
+github.com/multiformats/go-multicodec v0.9.1/go.mod h1:LLWNMtyV5ithSBUo3vFIMaeDy+h3EbkMTek1m+Fybbo=
 github.com/multiformats/go-multihash v0.2.3 h1:7Lyc8XfX/IY2jWb/gI7JP+o7JEq9hOa7BFvVU9RSh+U=
 github.com/multiformats/go-multihash v0.2.3/go.mod h1:dXgKXCXjBzdscBLk9JkjINiEsCKRVch90MdaGiKsvSM=
 github.com/multiformats/go-varint v0.0.7 h1:sWSGR+f/eu5ABZA2ZpYKBILXTTs9JWpdEM/nEGOHFS8=
@@ -42,8 +50,8 @@ github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZb
 github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
 github.com/spaolacci/murmur3 v1.1.0 h1:7c1g84S4BPRrfL5Xrdp6fOJ206sU9y293DDHaoy0bLI=
 github.com/spaolacci/murmur3 v1.1.0/go.mod h1:JwIasOWyU6f++ZhiEuf87xNszmSA2myDM2Kzu9HwQUA=
-github.com/stretchr/testify v1.10.0 h1:Xv5erBjTwe/5IxqUQTdXv5kgmIvbHo3QQyRwhJsOfJA=
-github.com/stretchr/testify v1.10.0/go.mod h1:r2ic/lqez/lEtzL7wO/rwa5dbSLXVDPFyf8C91i36aY=
+github.com/stretchr/testify v1.11.1 h1:7s2iGBzp5EwR7/aIZr8ao5+dra3wiQyKjjFuvgVKu7U=
+github.com/stretchr/testify v1.11.1/go.mod h1:wZwfW3scLgRK+23gO65QZefKpKQRnfz6sD981Nm4B6U=
 go.opentelemetry.io/auto/sdk v1.1.0 h1:cH53jehLUN6UFLY71z+NDOiNJqDdPRaXzTel0sJySYA=
 go.opentelemetry.io/auto/sdk v1.1.0/go.mod h1:3wSPjt5PWp2RhlCcmmOial7AvC4DQqZb7a7wCow3W8A=
 go.opentelemetry.io/otel v1.34.0 h1:zRLXxLCgL1WyKsPVrgbSdMN4c0FMkDAskSTQP+0hdUY=


@@ -75,8 +75,8 @@ func (d *DoubleRatchetEncryptedChannel) EstablishTwoPartyChannel(
     }
     state := NewDoubleRatchet(
-        sessionKey[:36],
-        sessionKey[36:64],
+        sessionKey[:32],
+        sessionKey[32:64],
         sessionKey[64:],
         isSender,
         sendingSignedPrePrivateKey,
@@ -92,10 +92,13 @@ func (d *DoubleRatchetEncryptedChannel) EncryptTwoPartyMessage(
 ) (newRatchetState string, envelope *channel.P2PChannelEnvelope, err error) {
     stateAndMessage := generated.DoubleRatchetStateAndMessage{
         RatchetState: ratchetState,
-        Message:      message,
+        Message:      message, // buildutils:allow-slice-alias this assignment is ephemeral
     }
-    result := DoubleRatchetEncrypt(stateAndMessage)
+    result, err := DoubleRatchetEncrypt(stateAndMessage)
+    if err != nil {
+        return "", nil, errors.Wrap(err, "encrypt two party message")
+    }
     envelope = &channel.P2PChannelEnvelope{}
     err = json.Unmarshal([]byte(result.Envelope), envelope)
     if err != nil {
@@ -120,7 +123,10 @@ func (d *DoubleRatchetEncryptedChannel) DecryptTwoPartyMessage(
         Envelope:     string(envelopeJson),
     }
-    result := DoubleRatchetDecrypt(stateAndEnvelope)
+    result, err := DoubleRatchetDecrypt(stateAndEnvelope)
+    if err != nil {
+        return "", nil, errors.Wrap(err, "decrypt two party message")
+    }
     return result.RatchetState, result.Message, nil
 }
@@ -162,45 +168,88 @@ func NewTripleRatchet(
 func DoubleRatchetEncrypt(
     ratchetStateAndMessage generated.DoubleRatchetStateAndMessage,
-) generated.DoubleRatchetStateAndEnvelope {
-    return generated.DoubleRatchetEncrypt(ratchetStateAndMessage)
+) (generated.DoubleRatchetStateAndEnvelope, error) {
+    result, err := generated.DoubleRatchetEncrypt(ratchetStateAndMessage)
+    if err != nil {
+        return generated.DoubleRatchetStateAndEnvelope{}, err
+    }
+    return result, nil
 }
 func DoubleRatchetDecrypt(
     ratchetStateAndEnvelope generated.DoubleRatchetStateAndEnvelope,
-) generated.DoubleRatchetStateAndMessage {
-    return generated.DoubleRatchetDecrypt(ratchetStateAndEnvelope)
+) (generated.DoubleRatchetStateAndMessage, error) {
+    result, err := generated.DoubleRatchetDecrypt(ratchetStateAndEnvelope)
+    if err != nil {
+        return generated.DoubleRatchetStateAndMessage{}, err
+    }
+    return result, nil
 }
 func TripleRatchetInitRound1(
     ratchetStateAndMetadata generated.TripleRatchetStateAndMetadata,
 ) generated.TripleRatchetStateAndMetadata {
-    return generated.TripleRatchetInitRound1(ratchetStateAndMetadata)
+    result, err := generated.TripleRatchetInitRound1(ratchetStateAndMetadata)
+    if err != nil {
+        return generated.TripleRatchetStateAndMetadata{
+            Metadata: map[string]string{"error": err.Error()},
+        }
+    }
+    return result
 }
 func TripleRatchetInitRound2(
     ratchetStateAndMetadata generated.TripleRatchetStateAndMetadata,
 ) generated.TripleRatchetStateAndMetadata {
-    return generated.TripleRatchetInitRound2(ratchetStateAndMetadata)
+    result, err := generated.TripleRatchetInitRound2(ratchetStateAndMetadata)
+    if err != nil {
+        return generated.TripleRatchetStateAndMetadata{
+            Metadata: map[string]string{"error": err.Error()},
+        }
+    }
+    return result
 }
 func TripleRatchetInitRound3(
     ratchetStateAndMetadata generated.TripleRatchetStateAndMetadata,
 ) generated.TripleRatchetStateAndMetadata {
-    return generated.TripleRatchetInitRound3(ratchetStateAndMetadata)
+    result, err := generated.TripleRatchetInitRound3(ratchetStateAndMetadata)
+    if err != nil {
+        return generated.TripleRatchetStateAndMetadata{
+            Metadata: map[string]string{"error": err.Error()},
+        }
+    }
+    return result
 }
 func TripleRatchetInitRound4(
     ratchetStateAndMetadata generated.TripleRatchetStateAndMetadata,
 ) generated.TripleRatchetStateAndMetadata {
-    return generated.TripleRatchetInitRound4(ratchetStateAndMetadata)
+    result, err := generated.TripleRatchetInitRound4(ratchetStateAndMetadata)
+    if err != nil {
+        return generated.TripleRatchetStateAndMetadata{
+            Metadata: map[string]string{"error": err.Error()},
+        }
+    }
+    return result
 }
 func TripleRatchetEncrypt(
     ratchetStateAndMessage generated.TripleRatchetStateAndMessage,
 ) generated.TripleRatchetStateAndEnvelope {
-    return generated.TripleRatchetEncrypt(ratchetStateAndMessage)
+    result, err := generated.TripleRatchetEncrypt(ratchetStateAndMessage)
+    if err != nil {
+        return generated.TripleRatchetStateAndEnvelope{}
+    }
+    return result
 }
 func TripleRatchetDecrypt(
     ratchetStateAndEnvelope generated.TripleRatchetStateAndEnvelope,
 ) generated.TripleRatchetStateAndMessage {
-    return generated.TripleRatchetDecrypt(ratchetStateAndEnvelope)
+    result, err := generated.TripleRatchetDecrypt(ratchetStateAndEnvelope)
+    if err != nil {
+        return generated.TripleRatchetStateAndMessage{}
+    }
+    return result
 }


@@ -4,6 +4,7 @@ import (
     "bytes"
     "crypto/rand"
     "encoding/base64"
+    "encoding/json"
     "fmt"
     "sort"
     "testing"
@@ -60,6 +61,320 @@ func remapOutputs(maps map[string]map[string]string) map[string]map[string]string {
     return out
 }
// TestX3DHAndDoubleRatchet tests X3DH key agreement and double ratchet session
// establishment between two parties.
func TestX3DHAndDoubleRatchet(t *testing.T) {
// Generate two peers with their identity and pre-keys
// Using ScalarEd448 which produces 56-byte private keys (Scalars)
// and 57-byte public keys (Edwards compressed)
alice := generatePeer()
bob := generatePeer()
// Log key sizes for debugging
t.Logf("Alice identity private key size: %d bytes", len(alice.identityKey.Bytes()))
t.Logf("Alice identity public key size: %d bytes", len(alice.identityPubKey.ToAffineCompressed()))
t.Logf("Alice signed pre-key private size: %d bytes", len(alice.signedPreKey.Bytes()))
t.Logf("Alice signed pre-key public size: %d bytes", len(alice.signedPrePubKey.ToAffineCompressed()))
// Test X3DH key agreement
// Alice is sender, Bob is receiver
// Sender needs: own identity private, own ephemeral private, peer identity public, peer signed pre public
// Receiver needs: own identity private, own signed pre private, peer identity public, peer ephemeral public
// For X3DH, Alice uses her signedPreKey as the ephemeral key
aliceSessionKeyJson := generated.SenderX3dh(
alice.identityKey.Bytes(), // sending identity private key (56 bytes)
alice.signedPreKey.Bytes(), // sending ephemeral private key (56 bytes)
bob.identityPubKey.ToAffineCompressed(), // receiving identity public key (57 bytes)
bob.signedPrePubKey.ToAffineCompressed(), // receiving signed pre-key public (57 bytes)
96, // session key length
)
t.Logf("Alice X3DH result: %s", aliceSessionKeyJson)
// Check if Alice got an error
if len(aliceSessionKeyJson) == 0 || aliceSessionKeyJson[0] != '"' {
t.Fatalf("Alice X3DH failed: %s", aliceSessionKeyJson)
}
// Bob performs receiver side X3DH
bobSessionKeyJson := generated.ReceiverX3dh(
bob.identityKey.Bytes(), // sending identity private key (56 bytes)
bob.signedPreKey.Bytes(), // sending signed pre private key (56 bytes)
alice.identityPubKey.ToAffineCompressed(), // receiving identity public key (57 bytes)
alice.signedPrePubKey.ToAffineCompressed(), // receiving ephemeral public key (57 bytes)
96, // session key length
)
t.Logf("Bob X3DH result: %s", bobSessionKeyJson)
// Check if Bob got an error
if len(bobSessionKeyJson) == 0 || bobSessionKeyJson[0] != '"' {
t.Fatalf("Bob X3DH failed: %s", bobSessionKeyJson)
}
// Decode session keys and verify they match
var aliceSessionKeyB64, bobSessionKeyB64 string
if err := json.Unmarshal([]byte(aliceSessionKeyJson), &aliceSessionKeyB64); err != nil {
t.Fatalf("Failed to parse Alice session key: %v", err)
}
if err := json.Unmarshal([]byte(bobSessionKeyJson), &bobSessionKeyB64); err != nil {
t.Fatalf("Failed to parse Bob session key: %v", err)
}
aliceSessionKey, err := base64.StdEncoding.DecodeString(aliceSessionKeyB64)
if err != nil {
t.Fatalf("Failed to decode Alice session key: %v", err)
}
bobSessionKey, err := base64.StdEncoding.DecodeString(bobSessionKeyB64)
if err != nil {
t.Fatalf("Failed to decode Bob session key: %v", err)
}
assert.Equal(t, 96, len(aliceSessionKey), "Alice session key should be 96 bytes")
assert.Equal(t, 96, len(bobSessionKey), "Bob session key should be 96 bytes")
assert.Equal(t, aliceSessionKey, bobSessionKey, "Session keys should match")
t.Logf("X3DH session key established successfully (%d bytes)", len(aliceSessionKey))
// Now test double ratchet session establishment
// Use the DoubleRatchetEncryptedChannel interface
ch := channel.NewDoubleRatchetEncryptedChannel()
// Alice establishes session as sender
aliceState, err := ch.EstablishTwoPartyChannel(
true, // isSender
alice.identityKey.Bytes(),
alice.signedPreKey.Bytes(),
bob.identityPubKey.ToAffineCompressed(),
bob.signedPrePubKey.ToAffineCompressed(),
)
if err != nil {
t.Fatalf("Alice failed to establish channel: %v", err)
}
t.Logf("Alice established double ratchet session")
// Bob establishes session as receiver
bobState, err := ch.EstablishTwoPartyChannel(
false, // isSender (receiver)
bob.identityKey.Bytes(),
bob.signedPreKey.Bytes(),
alice.identityPubKey.ToAffineCompressed(),
alice.signedPrePubKey.ToAffineCompressed(),
)
if err != nil {
t.Fatalf("Bob failed to establish channel: %v", err)
}
t.Logf("Bob established double ratchet session")
// Debug: log the ratchet states
t.Logf("Alice initial state length: %d", len(aliceState))
t.Logf("Bob initial state length: %d", len(bobState))
// Test message encryption/decryption
testMessage := []byte("Hello, Bob! This is a secret message from Alice.")
// Alice encrypts
newAliceState, envelope, err := ch.EncryptTwoPartyMessage(aliceState, testMessage)
if err != nil {
t.Fatalf("Alice failed to encrypt: %v", err)
}
t.Logf("Alice encrypted message")
t.Logf("Alice state after encrypt length: %d", len(newAliceState))
t.Logf("Envelope: %+v", envelope)
aliceState = newAliceState
// Bob decrypts
newBobState, decrypted, err := ch.DecryptTwoPartyMessage(bobState, envelope)
if err != nil {
t.Fatalf("Bob failed to decrypt: %v", err)
}
t.Logf("Bob state after decrypt length: %d", len(newBobState))
t.Logf("Decrypted message length: %d", len(decrypted))
// Check if decryption actually worked
if len(newBobState) == 0 {
t.Logf("WARNING: Bob's new ratchet state is empty - decryption likely failed silently")
}
assert.Equal(t, testMessage, decrypted, "Decrypted message should match original")
t.Logf("Bob decrypted message successfully: %s", string(decrypted))
bobState = newBobState
// Test reverse direction: Bob sends to Alice
replyMessage := []byte("Hi Alice! Got your message.")
bobState, envelope2, err := ch.EncryptTwoPartyMessage(bobState, replyMessage)
if err != nil {
t.Fatalf("Bob failed to encrypt reply: %v", err)
}
aliceState, decrypted2, err := ch.DecryptTwoPartyMessage(aliceState, envelope2)
if err != nil {
t.Fatalf("Alice failed to decrypt reply: %v", err)
}
assert.Equal(t, replyMessage, decrypted2, "Decrypted reply should match original")
t.Logf("Alice decrypted reply successfully: %s", string(decrypted2))
// Suppress unused variable warnings
_ = aliceState
_ = bobState
}
// TestReceiverSendsFirst shows that the X3DH "receiver" CANNOT send first:
// the Signal protocol requires the sender to send the first message.
// The test is skipped because it is expected to fail; it documents the limitation.
func TestReceiverSendsFirst(t *testing.T) {
t.Skip("Expected to fail - Signal protocol requires sender to send first")
alice := generatePeer()
bob := generatePeer()
ch := channel.NewDoubleRatchetEncryptedChannel()
// Alice establishes as sender
aliceState, err := ch.EstablishTwoPartyChannel(
true,
alice.identityKey.Bytes(),
alice.signedPreKey.Bytes(),
bob.identityPubKey.ToAffineCompressed(),
bob.signedPrePubKey.ToAffineCompressed(),
)
if err != nil {
t.Fatalf("Alice failed to establish: %v", err)
}
// Bob establishes as receiver
bobState, err := ch.EstablishTwoPartyChannel(
false,
bob.identityKey.Bytes(),
bob.signedPreKey.Bytes(),
alice.identityPubKey.ToAffineCompressed(),
alice.signedPrePubKey.ToAffineCompressed(),
)
if err != nil {
t.Fatalf("Bob failed to establish: %v", err)
}
// BOB SENDS FIRST (he's the X3DH receiver but sends first) - THIS WILL FAIL
bobMessage := []byte("Hello Alice! I'm the receiver but I'm sending first.")
bobState, envelope, err := ch.EncryptTwoPartyMessage(bobState, bobMessage)
if err != nil {
t.Fatalf("Bob (receiver) failed to encrypt first message: %v", err)
}
t.Logf("Bob (X3DH receiver) encrypted first message successfully")
// Alice decrypts - THIS FAILS because receiver can't send first
aliceState, decrypted, err := ch.DecryptTwoPartyMessage(aliceState, envelope)
if err != nil {
t.Fatalf("Alice failed to decrypt Bob's first message: %v", err)
}
assert.Equal(t, bobMessage, decrypted)
t.Logf("Alice decrypted Bob's first message: %s", string(decrypted))
_ = aliceState
_ = bobState
}
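The skipped test above documents a real Signal-family constraint: until the X3DH receiver has processed the sender's first message (whose header carries the sender's fresh ratchet key), it has no sending chain. A toy sketch of that asymmetry, with sha256 standing in for the real KDF and all names illustrative rather than taken from the channel package:

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// toyParty models only the part of a double-ratchet state that
// matters here: whether a sending chain exists yet.
type toyParty struct {
	root      [32]byte
	sendChain []byte // nil until a DH ratchet step initializes it
}

func canSend(p toyParty) bool { return p.sendChain != nil }

func main() {
	shared := sha256.Sum256([]byte("x3dh-output"))

	// The sender knows the receiver's signed prekey up front, so it
	// can run its first DH ratchet step immediately.
	k := sha256.Sum256(append(shared[:], "sender-ratchet"...))
	alice := toyParty{root: shared, sendChain: k[:]}

	// The receiver must wait for the sender's first message before it
	// can perform the matching ratchet step.
	bob := toyParty{root: shared}

	fmt.Println("sender can send first:", canSend(alice))
	fmt.Println("receiver can send first:", canSend(bob))
}
```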
// TestHandshakePattern tests the correct handshake pattern:
// Sender (Alice) sends hello first, then receiver (Bob) can send.
func TestHandshakePattern(t *testing.T) {
alice := generatePeer()
bob := generatePeer()
ch := channel.NewDoubleRatchetEncryptedChannel()
// Alice establishes as sender
aliceState, err := ch.EstablishTwoPartyChannel(
true,
alice.identityKey.Bytes(),
alice.signedPreKey.Bytes(),
bob.identityPubKey.ToAffineCompressed(),
bob.signedPrePubKey.ToAffineCompressed(),
)
if err != nil {
t.Fatalf("Alice failed to establish: %v", err)
}
// Bob establishes as receiver
bobState, err := ch.EstablishTwoPartyChannel(
false,
bob.identityKey.Bytes(),
bob.signedPreKey.Bytes(),
alice.identityPubKey.ToAffineCompressed(),
alice.signedPrePubKey.ToAffineCompressed(),
)
if err != nil {
t.Fatalf("Bob failed to establish: %v", err)
}
// Step 1: Alice (sender) sends hello first
helloMsg := []byte("hello")
aliceState, helloEnvelope, err := ch.EncryptTwoPartyMessage(aliceState, helloMsg)
if err != nil {
t.Fatalf("Alice failed to encrypt hello: %v", err)
}
t.Logf("Alice sent hello")
// Step 2: Bob receives hello
bobState, decryptedHello, err := ch.DecryptTwoPartyMessage(bobState, helloEnvelope)
if err != nil {
t.Fatalf("Bob failed to decrypt hello: %v", err)
}
assert.Equal(t, helloMsg, decryptedHello)
t.Logf("Bob received hello: %s", string(decryptedHello))
// Step 3: Bob sends ack (now Bob can send after receiving)
ackMsg := []byte("ack")
bobState, ackEnvelope, err := ch.EncryptTwoPartyMessage(bobState, ackMsg)
if err != nil {
t.Fatalf("Bob failed to encrypt ack: %v", err)
}
t.Logf("Bob sent ack")
// Step 4: Alice receives ack
aliceState, decryptedAck, err := ch.DecryptTwoPartyMessage(aliceState, ackEnvelope)
if err != nil {
t.Fatalf("Alice failed to decrypt ack: %v", err)
}
assert.Equal(t, ackMsg, decryptedAck)
t.Logf("Alice received ack: %s", string(decryptedAck))
// Now both parties can send freely
// Bob sends a real message
bobMessage := []byte("Now I can send real messages!")
bobState, bobEnvelope, err := ch.EncryptTwoPartyMessage(bobState, bobMessage)
if err != nil {
t.Fatalf("Bob failed to encrypt message: %v", err)
}
aliceState, decryptedBob, err := ch.DecryptTwoPartyMessage(aliceState, bobEnvelope)
if err != nil {
t.Fatalf("Alice failed to decrypt Bob's message: %v", err)
}
assert.Equal(t, bobMessage, decryptedBob)
t.Logf("Alice received Bob's message: %s", string(decryptedBob))
// Alice sends a real message
aliceMessage := []byte("And I can keep sending too!")
aliceState, aliceEnvelope, err := ch.EncryptTwoPartyMessage(aliceState, aliceMessage)
if err != nil {
t.Fatalf("Alice failed to encrypt message: %v", err)
}
bobState, decryptedAlice, err := ch.DecryptTwoPartyMessage(bobState, aliceEnvelope)
if err != nil {
t.Fatalf("Bob failed to decrypt Alice's message: %v", err)
}
assert.Equal(t, aliceMessage, decryptedAlice)
t.Logf("Bob received Alice's message: %s", string(decryptedAlice))
_ = aliceState
_ = bobState
}
func TestChannel(t *testing.T) {
peers := []*peer{}
for i := 0; i < 4; i++ {


@@ -1,8 +0,0 @@
#include <channel.h>
// This file exists because of
// https://github.com/golang/go/issues/11263
void cgo_rust_task_callback_bridge_channel(RustTaskCallback cb, const void * taskData, int8_t status) {
cb(taskData, status);
}

File diff suppressed because it is too large

File diff suppressed because it is too large


@@ -1,6 +1,6 @@
module source.quilibrium.com/quilibrium/monorepo/channel
-go 1.20
+go 1.24.0
// A necessary hack until source.quilibrium.com is open to all
replace source.quilibrium.com/quilibrium/monorepo/nekryptology => ../nekryptology
@@ -13,13 +13,15 @@ replace github.com/multiformats/go-multiaddr-dns => ../go-multiaddr-dns
replace github.com/libp2p/go-libp2p => ../go-libp2p
+replace source.quilibrium.com/quilibrium/monorepo/consensus => ../consensus
replace source.quilibrium.com/quilibrium/monorepo/types => ../types
replace source.quilibrium.com/quilibrium/monorepo/utils => ../utils
require (
	github.com/pkg/errors v0.9.1
-	github.com/stretchr/testify v1.10.0
+	github.com/stretchr/testify v1.11.1
	source.quilibrium.com/quilibrium/monorepo/types v0.0.0-00010101000000-000000000000
)
@@ -28,15 +30,40 @@ require (
	github.com/btcsuite/btcd v0.21.0-beta.0.20201114000516-e9c7a5ac6401 // indirect
	github.com/bwesterb/go-ristretto v1.2.3 // indirect
	github.com/consensys/gnark-crypto v0.5.3 // indirect
-	github.com/kr/pretty v0.2.1 // indirect
+	github.com/decred/dcrd/dcrec/secp256k1/v4 v4.4.0 // indirect
+	github.com/grpc-ecosystem/grpc-gateway/v2 v2.26.3 // indirect
+	github.com/iden3/go-iden3-crypto v0.0.17 // indirect
+	github.com/ipfs/go-cid v0.5.0 // indirect
+	github.com/klauspost/cpuid/v2 v2.2.10 // indirect
+	github.com/libp2p/go-buffer-pool v0.1.0 // indirect
+	github.com/libp2p/go-libp2p v0.41.1 // indirect
+	github.com/minio/sha256-simd v1.0.1 // indirect
+	github.com/mr-tron/base58 v1.2.0 // indirect
+	github.com/multiformats/go-base32 v0.1.0 // indirect
+	github.com/multiformats/go-base36 v0.2.0 // indirect
+	github.com/multiformats/go-multiaddr v0.16.1 // indirect
+	github.com/multiformats/go-multibase v0.2.0 // indirect
+	github.com/multiformats/go-multicodec v0.9.1 // indirect
+	github.com/multiformats/go-multihash v0.2.3 // indirect
+	github.com/multiformats/go-varint v0.0.7 // indirect
+	github.com/spaolacci/murmur3 v1.1.0 // indirect
+	golang.org/x/exp v0.0.0-20250606033433-dcc06ee1d476 // indirect
+	golang.org/x/net v0.41.0 // indirect
+	golang.org/x/text v0.26.0 // indirect
+	google.golang.org/genproto/googleapis/api v0.0.0-20250303144028-a0af3efb3deb // indirect
+	google.golang.org/genproto/googleapis/rpc v0.0.0-20250303144028-a0af3efb3deb // indirect
+	google.golang.org/grpc v1.72.0 // indirect
+	google.golang.org/protobuf v1.36.6 // indirect
+	lukechampine.com/blake3 v1.4.1 // indirect
+	source.quilibrium.com/quilibrium/monorepo/consensus v0.0.0-00010101000000-000000000000 // indirect
+	source.quilibrium.com/quilibrium/monorepo/protobufs v0.0.0-00010101000000-000000000000 // indirect
)
require (
	github.com/cloudflare/circl v1.6.1 // indirect
	github.com/davecgh/go-spew v1.1.1 // indirect
	github.com/pmezard/go-difflib v1.0.0 // indirect
-	golang.org/x/crypto v0.38.0 // indirect
+	golang.org/x/crypto v0.39.0 // indirect
	golang.org/x/sys v0.33.0 // indirect
	gopkg.in/yaml.v3 v3.0.1 // indirect
	source.quilibrium.com/quilibrium/monorepo/nekryptology v0.0.0-00010101000000-000000000000


@@ -24,22 +24,60 @@ github.com/consensys/gnark-crypto v0.5.3/go.mod h1:hOdPlWQV1gDLp7faZVeg8Y0iEPFaO
github.com/davecgh/go-spew v0.0.0-20171005155431-ecdeabc65495/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
+github.com/decred/dcrd/crypto/blake256 v1.1.0 h1:zPMNGQCm0g4QTY27fOCorQW7EryeQ/U0x++OzVrdms8=
+github.com/decred/dcrd/crypto/blake256 v1.1.0/go.mod h1:2OfgNZ5wDpcsFmHmCK5gZTPcCXqlm2ArzUIkw9czNJo=
+github.com/decred/dcrd/dcrec/secp256k1/v4 v4.4.0 h1:NMZiJj8QnKe1LgsbDayM4UoHwbvwDRwnI3hwNaAHRnc=
+github.com/decred/dcrd/dcrec/secp256k1/v4 v4.4.0/go.mod h1:ZXNYxsqcloTdSy/rNShjYzMhyjf0LaoftYK0p+A3h40=
github.com/decred/dcrd/lru v1.0.0/go.mod h1:mxKOwFd7lFjN2GZYsiz/ecgqR6kkYAl+0pz0tEMk218=
github.com/fsnotify/fsnotify v1.4.7/go.mod h1:jwhsz4b93w/PPRr/qN1Yymfu8t87LnFCMoQvtojpjFo=
+github.com/go-logr/logr v1.4.2 h1:6pFjapn8bFcIbiKo3XT4j/BhANplGihG6tvd+8rYgrY=
+github.com/go-logr/logr v1.4.2/go.mod h1:9T104GzyrTigFIr8wt5mBrctHMim0Nb2HLGrmQ40KvY=
+github.com/go-logr/stdr v1.2.2 h1:hSWxHoqTgW2S2qGc0LTAI563KZ5YKYRhT3MFKZMbjag=
+github.com/go-logr/stdr v1.2.2/go.mod h1:mMo/vtBO5dYbehREoey6XUKy/eSumjCCveDpRre4VKE=
github.com/golang/protobuf v1.2.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
+github.com/golang/protobuf v1.5.4 h1:i7eJL8qZTpSEXOPTxNKhASYpMn+8e5Q6AdndVa1dWek=
+github.com/golang/protobuf v1.5.4/go.mod h1:lnTiLA8Wa4RWRcIUkrtSVa5nRhsEGBg48fD6rSs7xps=
+github.com/google/go-cmp v0.7.0 h1:wk8382ETsv4JYUZwIsn6YpYiWiBsYLSJiTsyBybVuN8=
+github.com/google/go-cmp v0.7.0/go.mod h1:pXiqmnSA92OHEEa9HXL2W4E7lf9JzCmGVUdgjX3N/iU=
+github.com/google/uuid v1.6.0 h1:NIvaJDMOsjHA8n1jAhLSgzrAzy1Hgr+hNrb57e+94F0=
+github.com/google/uuid v1.6.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
+github.com/grpc-ecosystem/grpc-gateway/v2 v2.26.3 h1:5ZPtiqj0JL5oKWmcsq4VMaAW5ukBEgSGXEN89zeH1Jo=
+github.com/grpc-ecosystem/grpc-gateway/v2 v2.26.3/go.mod h1:ndYquD05frm2vACXE1nsccT4oJzjhw2arTS2cpUD1PI=
github.com/hpcloud/tail v1.0.0/go.mod h1:ab1qPbhIpdTxEkNHXyeSf5vhxWSCs/tWer42PpOxQnU=
+github.com/iden3/go-iden3-crypto v0.0.17 h1:NdkceRLJo/pI4UpcjVah4lN/a3yzxRUGXqxbWcYh9mY=
+github.com/iden3/go-iden3-crypto v0.0.17/go.mod h1:dLpM4vEPJ3nDHzhWFXDjzkn1qHoBeOT/3UEhXsEsP3E=
+github.com/ipfs/go-cid v0.5.0 h1:goEKKhaGm0ul11IHA7I6p1GmKz8kEYniqFopaB5Otwg=
+github.com/ipfs/go-cid v0.5.0/go.mod h1:0L7vmeNXpQpUS9vt+yEARkJ8rOg43DF3iPgn4GIN0mk=
github.com/jessevdk/go-flags v0.0.0-20141203071132-1679536dcc89/go.mod h1:4FA24M0QyGHXBuZZK/XkWh8h0e1EYbRYJSGM75WSRxI=
github.com/jessevdk/go-flags v1.4.0/go.mod h1:4FA24M0QyGHXBuZZK/XkWh8h0e1EYbRYJSGM75WSRxI=
github.com/jrick/logrotate v1.0.0/go.mod h1:LNinyqDIJnpAur+b8yyulnQw/wDuN1+BYKlTRt3OuAQ=
github.com/kkdai/bstream v0.0.0-20161212061736-f391b8402d23/go.mod h1:J+Gs4SYgM6CZQHDETBtE9HaSEkGmuNXF86RwHhHUvq4=
-github.com/kr/pretty v0.2.1 h1:Fmg33tUaq4/8ym9TJN1x7sLJnHVwhP33CNkpYV/7rwI=
-github.com/kr/pretty v0.2.1/go.mod h1:ipq/a2n7PKx3OHsz4KJII5eveXtPO4qwEXGdVfWzfnI=
-github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ=
-github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI=
+github.com/klauspost/cpuid/v2 v2.2.10 h1:tBs3QSyvjDyFTq3uoc/9xFpCuOsJQFNPiAhYdw2skhE=
+github.com/klauspost/cpuid/v2 v2.2.10/go.mod h1:hqwkgyIinND0mEev00jJYCxPNVRVXFQeu1XKlok6oO0=
+github.com/kr/pretty v0.3.1 h1:flRD4NNwYAUpkphVc1HcthR4KEIFJ65n8Mw5qdRn3LE=
+github.com/kr/pretty v0.3.1/go.mod h1:hoEshYVHaxMs3cyo3Yncou5ZscifuDolrwPKZanG3xk=
github.com/kr/text v0.2.0 h1:5Nx0Ya0ZqY2ygV366QzturHI13Jq95ApcVaJBhpS+AY=
github.com/kr/text v0.2.0/go.mod h1:eLer722TekiGuMkidMxC/pM04lWEeraHUUmBw8l2grE=
github.com/leanovate/gopter v0.2.9 h1:fQjYxZaynp97ozCzfOyOuAGOU4aU/z37zf/tOujFk7c=
github.com/leanovate/gopter v0.2.9/go.mod h1:U2L/78B+KVFIx2VmW6onHJQzXtFb+p5y3y2Sh+Jxxv8=
+github.com/libp2p/go-buffer-pool v0.1.0 h1:oK4mSFcQz7cTQIfqbe4MIj9gLW+mnanjyFtc6cdF0Y8=
+github.com/libp2p/go-buffer-pool v0.1.0/go.mod h1:N+vh8gMqimBzdKkSMVuydVDq+UV5QTWy5HSiZacSbPg=
+github.com/minio/sha256-simd v1.0.1 h1:6kaan5IFmwTNynnKKpDHe6FWHohJOHhCPchzK49dzMM=
+github.com/minio/sha256-simd v1.0.1/go.mod h1:Pz6AKMiUdngCLpeTL/RJY1M9rUuPMYujV5xJjtbRSN8=
+github.com/mr-tron/base58 v1.2.0 h1:T/HDJBh4ZCPbU39/+c3rRvE0uKBQlU27+QI8LJ4t64o=
+github.com/mr-tron/base58 v1.2.0/go.mod h1:BinMc/sQntlIE1frQmRFPUoPA1Zkr8VRgBdjWI2mNwc=
+github.com/multiformats/go-base32 v0.1.0 h1:pVx9xoSPqEIQG8o+UbAe7DNi51oej1NtK+aGkbLYxPE=
+github.com/multiformats/go-base32 v0.1.0/go.mod h1:Kj3tFY6zNr+ABYMqeUNeGvkIC/UYgtWibDcT0rExnbI=
+github.com/multiformats/go-base36 v0.2.0 h1:lFsAbNOGeKtuKozrtBsAkSVhv1p9D0/qedU9rQyccr0=
+github.com/multiformats/go-base36 v0.2.0/go.mod h1:qvnKE++v+2MWCfePClUEjE78Z7P2a1UV0xHgWc0hkp4=
+github.com/multiformats/go-multibase v0.2.0 h1:isdYCVLvksgWlMW9OZRYJEa9pZETFivncJHmHnnd87g=
+github.com/multiformats/go-multibase v0.2.0/go.mod h1:bFBZX4lKCA/2lyOFSAoKH5SS6oPyjtnzK/XTFDPkNuk=
+github.com/multiformats/go-multicodec v0.9.1 h1:x/Fuxr7ZuR4jJV4Os5g444F7xC4XmyUaT/FWtE+9Zjo=
+github.com/multiformats/go-multicodec v0.9.1/go.mod h1:LLWNMtyV5ithSBUo3vFIMaeDy+h3EbkMTek1m+Fybbo=
+github.com/multiformats/go-multihash v0.2.3 h1:7Lyc8XfX/IY2jWb/gI7JP+o7JEq9hOa7BFvVU9RSh+U=
+github.com/multiformats/go-multihash v0.2.3/go.mod h1:dXgKXCXjBzdscBLk9JkjINiEsCKRVch90MdaGiKsvSM=
+github.com/multiformats/go-varint v0.0.7 h1:sWSGR+f/eu5ABZA2ZpYKBILXTTs9JWpdEM/nEGOHFS8=
+github.com/multiformats/go-varint v0.0.7/go.mod h1:r8PUYw/fD/SjBCiKOoDlGF6QawOELpZAu9eioSos/OU=
github.com/onsi/ginkgo v1.6.0/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
github.com/onsi/ginkgo v1.7.0/go.mod h1:lLunBs/Ym6LB5Z9jYTR76FiuTmxDTDusOGeTQH+WWjE=
github.com/onsi/gomega v1.4.1/go.mod h1:C1qb7wdrVGGVU+Z6iS04AVkA3Q65CEZX59MT0QO5uiA=
@@ -48,19 +86,39 @@ github.com/pkg/errors v0.9.1 h1:FEBLx1zS214owpjy7qsBeixbURkuhQAwrK5UwLGTwt4=
github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
-github.com/stretchr/testify v1.10.0 h1:Xv5erBjTwe/5IxqUQTdXv5kgmIvbHo3QQyRwhJsOfJA=
-github.com/stretchr/testify v1.10.0/go.mod h1:r2ic/lqez/lEtzL7wO/rwa5dbSLXVDPFyf8C91i36aY=
+github.com/rogpeppe/go-internal v1.14.1 h1:UQB4HGPB6osV0SQTLymcB4TgvyWu6ZyliaW0tI/otEQ=
+github.com/rogpeppe/go-internal v1.14.1/go.mod h1:MaRKkUm5W0goXpeCfT7UZI6fk/L7L7so1lCWt35ZSgc=
+github.com/spaolacci/murmur3 v1.1.0 h1:7c1g84S4BPRrfL5Xrdp6fOJ206sU9y293DDHaoy0bLI=
+github.com/spaolacci/murmur3 v1.1.0/go.mod h1:JwIasOWyU6f++ZhiEuf87xNszmSA2myDM2Kzu9HwQUA=
+github.com/stretchr/testify v1.11.1 h1:7s2iGBzp5EwR7/aIZr8ao5+dra3wiQyKjjFuvgVKu7U=
+github.com/stretchr/testify v1.11.1/go.mod h1:wZwfW3scLgRK+23gO65QZefKpKQRnfz6sD981Nm4B6U=
+go.opentelemetry.io/auto/sdk v1.1.0 h1:cH53jehLUN6UFLY71z+NDOiNJqDdPRaXzTel0sJySYA=
+go.opentelemetry.io/auto/sdk v1.1.0/go.mod h1:3wSPjt5PWp2RhlCcmmOial7AvC4DQqZb7a7wCow3W8A=
+go.opentelemetry.io/otel v1.34.0 h1:zRLXxLCgL1WyKsPVrgbSdMN4c0FMkDAskSTQP+0hdUY=
+go.opentelemetry.io/otel v1.34.0/go.mod h1:OWFPOQ+h4G8xpyjgqo4SxJYdDQ/qmRH+wivy7zzx9oI=
+go.opentelemetry.io/otel/metric v1.34.0 h1:+eTR3U0MyfWjRDhmFMxe2SsW64QrZ84AOhvqS7Y+PoQ=
+go.opentelemetry.io/otel/metric v1.34.0/go.mod h1:CEDrp0fy2D0MvkXE+dPV7cMi8tWZwX3dmaIhwPOaqHE=
+go.opentelemetry.io/otel/sdk v1.34.0 h1:95zS4k/2GOy069d321O8jWgYsW3MzVV+KuSPKp7Wr1A=
+go.opentelemetry.io/otel/sdk v1.34.0/go.mod h1:0e/pNiaMAqaykJGKbi+tSjWfNNHMTxoC9qANsCzbyxU=
+go.opentelemetry.io/otel/sdk/metric v1.34.0 h1:5CeK9ujjbFVL5c1PhLuStg1wxA7vQv7ce1EK0Gyvahk=
+go.opentelemetry.io/otel/sdk/metric v1.34.0/go.mod h1:jQ/r8Ze28zRKoNRdkjCZxfs6YvBTG1+YIqyFVFYec5w=
+go.opentelemetry.io/otel/trace v1.34.0 h1:+ouXS2V8Rd4hp4580a8q23bg0azF2nI8cqLYnC8mh/k=
+go.opentelemetry.io/otel/trace v1.34.0/go.mod h1:Svm7lSjQD7kG7KJ/MUHPVXSDGz2OX4h0M2jHBhmSfRE=
golang.org/x/crypto v0.0.0-20170930174604-9419663f5a44/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
golang.org/x/crypto v0.0.0-20200115085410-6d4e4cb37c7d/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/crypto v0.0.0-20200510223506-06a226fb4e37/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/crypto v0.0.0-20210322153248-0c34fe9e7dc2/go.mod h1:T9bdIzuCu7OtxOm1hfPfRQxPLYneinmdGuTeoZ9dtd4=
-golang.org/x/crypto v0.38.0 h1:jt+WWG8IZlBnVbomuhg2Mdq0+BBQaHbtqHEFEigjUV8=
-golang.org/x/crypto v0.38.0/go.mod h1:MvrbAqul58NNYPKnOra203SB9vpuZW0e+RRZV+Ggqjw=
+golang.org/x/crypto v0.39.0 h1:SHs+kF4LP+f+p14esP5jAoDpHU8Gu/v9lFRK6IT5imM=
+golang.org/x/crypto v0.39.0/go.mod h1:L+Xg3Wf6HoL4Bn4238Z6ft6KfEpN0tJGo53AAPC632U=
+golang.org/x/exp v0.0.0-20250606033433-dcc06ee1d476 h1:bsqhLWFR6G6xiQcb+JoGqdKdRU6WzPWmK8E0jxTjzo4=
+golang.org/x/exp v0.0.0-20250606033433-dcc06ee1d476/go.mod h1:3//PLf8L/X+8b4vuAfHzxeRUl04Adcb341+IGKfnqS8=
golang.org/x/net v0.0.0-20180719180050-a680a1efc54d/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20180906233101-161cd47e91fd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20210226172049-e18ecbb05110/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
+golang.org/x/net v0.41.0 h1:vBTly1HeNPEn3wtREYfy4GZ/NECgw2Cnl+nK6Nz3uvw=
+golang.org/x/net v0.41.0/go.mod h1:B/K4NNqkfmg07DQYrbwvSluqCJOOXwUjeb/5lOisjbA=
golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sys v0.0.0-20180909124046-d0be0721c37e/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
@@ -72,7 +130,17 @@ golang.org/x/sys v0.33.0/go.mod h1:BJP2sWEmIv4KK5OTEluFJCKSidICx8ciO85XgH3Ak8k=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
+golang.org/x/text v0.26.0 h1:P42AVeLghgTYr4+xUnTRKDMqpar+PtX7KWuNQL21L8M=
+golang.org/x/text v0.26.0/go.mod h1:QK15LZJUUQVJxhz7wXgxSy/CJaTFjd0G+YLonydOVQA=
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
+google.golang.org/genproto/googleapis/api v0.0.0-20250303144028-a0af3efb3deb h1:p31xT4yrYrSM/G4Sn2+TNUkVhFCbG9y8itM2S6Th950=
+google.golang.org/genproto/googleapis/api v0.0.0-20250303144028-a0af3efb3deb/go.mod h1:jbe3Bkdp+Dh2IrslsFCklNhweNTBgSYanP1UXhJDhKg=
+google.golang.org/genproto/googleapis/rpc v0.0.0-20250303144028-a0af3efb3deb h1:TLPQVbx1GJ8VKZxz52VAxl1EBgKXXbTiU9Fc5fZeLn4=
+google.golang.org/genproto/googleapis/rpc v0.0.0-20250303144028-a0af3efb3deb/go.mod h1:LuRYeWDFV6WOn90g357N17oMCaxpgCnbi/44qJvDn2I=
+google.golang.org/grpc v1.72.0 h1:S7UkcVa60b5AAQTaO6ZKamFp1zMZSU0fGDK2WZLbBnM=
+google.golang.org/grpc v1.72.0/go.mod h1:wH5Aktxcg25y1I3w7H69nHfXdOG3UiadoBtjh3izSDM=
+google.golang.org/protobuf v1.36.6 h1:z1NpPI8ku2WgiWnf+t9wTPsn6eP1L7ksHUlkfLvd9xY=
+google.golang.org/protobuf v1.36.6/go.mod h1:jduwjTPXsFjZGTmRluh+L6NjiWu7pchiJ2/5YcXBHnY=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c h1:Hei/4ADfdWqJk1ZMxUNpqntNwaWcugrBjAiHlqqRiVk=
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c/go.mod h1:JHkPIbrfpd72SG/EVd6muEfDQjcINNoR0C8j2r3qZ4Q=
@@ -81,4 +149,6 @@ gopkg.in/tomb.v1 v1.0.0-20141024135613-dd632973f1e7/go.mod h1:dt/ZhP58zS4L8KSrWD
gopkg.in/yaml.v2 v2.2.1/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=
gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
+lukechampine.com/blake3 v1.4.1 h1:I3Smz7gso8w4/TunLKec6K2fn+kyKtDxr/xcQEN84Wg=
+lukechampine.com/blake3 v1.4.1/go.mod h1:QFosUxmjB8mnrWFSNwKmvxHpfY72bmD2tQ0kBMM3kwo=
rsc.io/tmplfunc v0.0.3/go.mod h1:AG3sTPzElb1Io3Yg4voV9AGZJuleGAwaVRxL9M49PhA=


@ -44,9 +44,9 @@ type Config struct {
P2P *P2PConfig `yaml:"p2p"` P2P *P2PConfig `yaml:"p2p"`
Engine *EngineConfig `yaml:"engine"` Engine *EngineConfig `yaml:"engine"`
DB *DBConfig `yaml:"db"` DB *DBConfig `yaml:"db"`
Logger *LogConfig `yaml:"logger"`
ListenGRPCMultiaddr string `yaml:"listenGrpcMultiaddr"` ListenGRPCMultiaddr string `yaml:"listenGrpcMultiaddr"`
ListenRestMultiaddr string `yaml:"listenRESTMultiaddr"` ListenRestMultiaddr string `yaml:"listenRESTMultiaddr"`
LogFile string `yaml:"logFile"`
} }
// WithDefaults returns a copy of the config with default values filled in. // WithDefaults returns a copy of the config with default values filled in.
@ -85,32 +85,31 @@ func NewConfig(configPath string) (*Config, error) {
} }
var BootstrapPeers = []string{ var BootstrapPeers = []string{
"/dns/bootstrap.quilibrium.com/udp/8336/quic-v1/p2p/Qme3g6rJWuz8HVXxpDb7aV2hiFq8bZJNqxMmwzmASzfq1M", "/dnsaddr/quinoa.quilibrium.com/udp/8339/p2p/QmP9NNzAzRjCL8gdQBkKHwyBCWJGVb3jPrQzTveYdU24kH",
"/dns/quecifer.quilibrium.com/udp/8336/quic-v1/p2p/QmdWF9bGTH5mwJXkxrG859HA5r34MxXtMSTuEikSMDSESv", "/dnsaddr/qualia.quilibrium.com/udp/8339/p2p/QmRP1UPiDg1enHgN6wEL1Y4uUh1XKg7V3QExdBKV9BUUQf",
"/dns/quagmire.quilibrium.com/udp/8336/quic-v1/p2p/QmaQ9KAaKtqXhYSQ5ARQNnn8B8474cWGvvD6PgJ4gAtMrx", "/dnsaddr/quetzalcoatl.quilibrium.com/udp/8339/p2p/QmNq4xSqrxTKKtK7J6UFEa4unjsoULP2G4qWwwH5EKmoJj",
"/ip4/204.186.74.46/udp/8316/quic-v1/p2p/QmeqBjm3iX7sdTieyto1gys5ruQrQNPKfaTGcVQQWJPYDV", // "/ip4/204.186.74.46/udp/8316/quic-v1/p2p/QmeqBjm3iX7sdTieyto1gys5ruQrQNPKfaTGcVQQWJPYDV",
"/ip4/65.109.17.13/udp/8336/quic-v1/p2p/Qmc35n99eojSvW3PkbfBczJoSX92WmnnKh3Fg114ok3oo4", "/ip4/65.109.17.13/udp/8336/quic-v1/p2p/Qmc35n99eojSvW3PkbfBczJoSX92WmnnKh3Fg114ok3oo4",
"/ip4/65.108.194.84/udp/8336/quic-v1/p2p/QmP8C7g9ZRiWzhqN2AgFu5onS6HwHzR6Vv1TCHxAhnCSnq", "/ip4/65.108.194.84/udp/8336/quic-v1/p2p/QmP8C7g9ZRiWzhqN2AgFu5onS6HwHzR6Vv1TCHxAhnCSnq",
"/ip4/15.204.100.222/udp/8336/quic-v1/p2p/Qmef3Z3RvGg49ZpDPcf2shWtJNgPJNpXrowjUcfz23YQ3V", "/ip4/15.204.100.222/udp/8336/quic-v1/p2p/Qmef3Z3RvGg49ZpDPcf2shWtJNgPJNpXrowjUcfz23YQ3V",
"/ip4/69.197.174.35/udp/8336/quic-v1/p2p/QmeprCaZKiymofPJgnp2ANR3F4pRus9PHHaxnJDh1Jwr1p", // "/ip4/69.197.174.35/udp/8336/quic-v1/p2p/QmeprCaZKiymofPJgnp2ANR3F4pRus9PHHaxnJDh1Jwr1p",
"/ip4/70.36.102.32/udp/8336/quic-v1/p2p/QmYriGRXCUiwFodqSoS4GgEcD7UVyxXPeCgQKmYne3iLSF", // "/ip4/70.36.102.32/udp/8336/quic-v1/p2p/QmYriGRXCUiwFodqSoS4GgEcD7UVyxXPeCgQKmYne3iLSF",
"/ip4/204.12.220.2/udp/8336/quic-v1/p2p/QmRw5Tw4p5v2vLPvVSAkQEiRPQGnWk9HM4xiSvgxF82CCw", // "/ip4/204.12.220.2/udp/8336/quic-v1/p2p/QmRw5Tw4p5v2vLPvVSAkQEiRPQGnWk9HM4xiSvgxF82CCw",
"/ip4/209.159.149.14/udp/8336/quic-v1/p2p/Qmcq4Lmw45tbodvdRWZ8iGgy3rUcR3dikHTj1fBXP8VJqv", // "/ip4/209.159.149.14/udp/8336/quic-v1/p2p/Qmcq4Lmw45tbodvdRWZ8iGgy3rUcR3dikHTj1fBXP8VJqv",
"/ip4/148.251.9.90/udp/8336/quic-v1/p2p/QmRpKmQ1W83s6moBFpG6D6nrttkqdQSbdCJpvfxDVGcs38",
"/ip4/35.232.113.144/udp/8336/quic-v1/p2p/QmWxkBc7a17ZsLHhszLyTvKsoHMKvKae2XwfQXymiU66md",
"/ip4/34.87.85.78/udp/8336/quic-v1/p2p/QmTGguT5XhtvZZwTLnNQTN8Bg9eUm1THWEneXXHGhMDPrz",
"/ip4/34.81.199.27/udp/8336/quic-v1/p2p/QmTMMKpzCKJCwrnUzNu6tNj4P1nL7hVqz251245wsVpGNg",
"/ip4/34.143.255.235/udp/8336/quic-v1/p2p/QmeifsP6Kvq8A3yabQs6CBg7prSpDSqdee8P2BDQm9EpP8",
"/ip4/34.34.125.238/udp/8336/quic-v1/p2p/QmZdSyBJLm9UiDaPZ4XDkgRGXUwPcHJCmKoH6fS9Qjyko4",
"/ip4/34.80.245.52/udp/8336/quic-v1/p2p/QmNmbqobt82Vre5JxUGVNGEWn2HsztQQ1xfeg6mx7X5u3f",
"/dns/bravo-1.qcommander.sh/udp/8336/quic-v1/p2p/QmURj4qEB9vNdCCKzSMq4ESEgz13nJrqazgMdGi2DBSeeC",
"/ip4/109.199.100.108/udp/8336/quic-v1/p2p/Qma9fgugQc17MDu4YRSvnhfhVre6AYZ3nZdW8dSUYbsWvm",
"/ip4/47.251.49.193/udp/8336/quic-v1/p2p/QmP6ADPmMCsB8y82oFbrKTrwYWXt1CTMJ3jGNDXRHyYJgR",
"/ip4/138.201.203.208/udp/8336/quic-v1/p2p/QmbNhSTd4Y64ZCbV2gAXYR4ZFDdfRBMfrgWsNg99JHxsJo",
"/ip4/148.251.9.90/udp/8336/quic-v1/p2p/QmRpKmQ1W83s6moBFpG6D6nrttkqdQSbdCJpvfxDVGcs38",
"/ip4/15.235.211.121/udp/8336/quic-v1/p2p/QmZHNLUSAFCkTwHiEE3vWay3wsus5fWYsNLFTFU6tPCmNR",
"/ip4/63.141.228.58/udp/8336/quic-v1/p2p/QmezARggdWKa1sw3LqE3LfZwVvtuCpXpK8WVo8EEdfakJV",
"/ip4/192.69.222.130/udp/8336/quic-v1/p2p/QmcKQjpQmLpbDsiif2MuakhHFyxWvqYauPsJDaXnLav7PJ",
// purged peers (keep your node online to return to this list)
// "/ip4/204.186.74.47/udp/8317/quic-v1/p2p/Qmd233pLUDvcDW3ama27usfbG1HxKNh1V9dmWVW1SXp1pd",
// "/ip4/186.233.184.181/udp/8336/quic-v1/p2p/QmW6QDvKuYqJYYMP5tMZSp12X3nexywK28tZNgqtqNpEDL",
@@ -293,7 +292,6 @@ func LoadConfig(configPath string, proverKey string, skipGenesisCheck bool) (
 ProvingKeyId: "default-proving-key",
 Filter: "ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff",
 GenesisSeed: genesisSeed,
-MaxFrames: -1,
 PendingCommitWorkers: 4,
 },
 }
@@ -446,7 +444,7 @@ type setter struct {
 ver string
 }
-func (s setter) String() string { return *s.dst }
+func (s setter) String() string { return "" }
 func (s setter) Set(_ string) error {
 *s.dst = s.value
 *s.dstver = s.ver


@@ -90,7 +90,6 @@ type EngineConfig struct {
 ProvingKeyId string `yaml:"provingKeyId"`
 Filter string `yaml:"filter"`
 GenesisSeed string `yaml:"genesisSeed"`
-MaxFrames int64 `yaml:"maxFrames"`
 PendingCommitWorkers int64 `yaml:"pendingCommitWorkers"`
 MinimumPeersRequired int `yaml:"minimumPeersRequired"`
 StatsMultiaddr string `yaml:"statsMultiaddr"`
@@ -108,6 +107,11 @@ type EngineConfig struct {
 DataWorkerP2PMultiaddrs []string `yaml:"dataWorkerP2PMultiaddrs"`
 // Configuration to specify data worker stream multiaddrs
 DataWorkerStreamMultiaddrs []string `yaml:"dataWorkerStreamMultiaddrs"`
+// Configuration to manually override data worker p2p multiaddrs in peer info
+DataWorkerAnnounceP2PMultiaddrs []string `yaml:"dataWorkerAnnounceP2PMultiaddrs"`
+// Configuration to manually override data worker stream multiaddrs in peer
+// info
+DataWorkerAnnounceStreamMultiaddrs []string `yaml:"dataWorkerAnnounceStreamMultiaddrs"`
 // Number of data worker processes to spawn.
 DataWorkerCount int `yaml:"dataWorkerCount"`
 // Specific shard filters for the data workers.
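The two new announce lists override what is advertised in peer info, while the existing lists still control what the workers bind to. A sketch of the corresponding YAML (the enclosing `engine` key is assumed from `EngineConfig`; addresses and ports are placeholders):

```yaml
engine:
  dataWorkerP2PMultiaddrs:
    - /ip4/0.0.0.0/udp/40001/quic-v1
  dataWorkerStreamMultiaddrs:
    - /ip4/0.0.0.0/tcp/50001
  # advertised to peers in place of the bind addresses above
  dataWorkerAnnounceP2PMultiaddrs:
    - /ip4/203.0.113.10/udp/40001/quic-v1
  dataWorkerAnnounceStreamMultiaddrs:
    - /ip4/203.0.113.10/tcp/50001
```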


@@ -17,7 +17,12 @@ replace source.quilibrium.com/quilibrium/monorepo/go-libp2p-blossomsub => ../go-
 replace source.quilibrium.com/quilibrium/monorepo/utils => ../utils

 require (
+go.uber.org/zap v1.27.0
 gopkg.in/yaml.v2 v2.4.0
+github.com/libp2p/go-libp2p v0.41.1
+github.com/pkg/errors v0.9.1
+github.com/cloudflare/circl v1.6.1
+github.com/stretchr/testify v1.10.0
 source.quilibrium.com/quilibrium/monorepo/go-libp2p-blossomsub v0.0.0-00010101000000-000000000000
 source.quilibrium.com/quilibrium/monorepo/utils v0.0.0-00010101000000-000000000000
 )
@@ -35,18 +40,16 @@ require (
 github.com/pmezard/go-difflib v1.0.0 // indirect
 github.com/rogpeppe/go-internal v1.14.1 // indirect
 go.uber.org/multierr v1.11.0 // indirect
-go.uber.org/zap v1.27.0 // indirect
 gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c // indirect
+gopkg.in/natefinch/lumberjack.v2 v2.2.1 // indirect
 gopkg.in/yaml.v3 v3.0.1 // indirect
 )

 require (
-github.com/cloudflare/circl v1.6.1
 github.com/decred/dcrd/dcrec/secp256k1/v4 v4.4.0 // indirect
 github.com/ipfs/go-cid v0.5.0 // indirect
 github.com/klauspost/cpuid/v2 v2.2.10 // indirect
 github.com/libp2p/go-buffer-pool v0.1.0 // indirect
-github.com/libp2p/go-libp2p v0.41.1
 github.com/minio/sha256-simd v1.0.1 // indirect
 github.com/mr-tron/base58 v1.2.0 // indirect
 github.com/multiformats/go-base32 v0.1.0 // indirect
@@ -56,9 +59,7 @@ require (
 github.com/multiformats/go-multicodec v0.9.1 // indirect
 github.com/multiformats/go-multihash v0.2.3 // indirect
 github.com/multiformats/go-varint v0.0.7 // indirect
-github.com/pkg/errors v0.9.1
 github.com/spaolacci/murmur3 v1.1.0 // indirect
-github.com/stretchr/testify v1.10.0
 golang.org/x/crypto v0.39.0 // indirect
 golang.org/x/exp v0.0.0-20250606033433-dcc06ee1d476 // indirect
 golang.org/x/sys v0.33.0 // indirect


@@ -329,6 +329,8 @@ gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8
 gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
 gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c h1:Hei/4ADfdWqJk1ZMxUNpqntNwaWcugrBjAiHlqqRiVk=
 gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c/go.mod h1:JHkPIbrfpd72SG/EVd6muEfDQjcINNoR0C8j2r3qZ4Q=
+gopkg.in/natefinch/lumberjack.v2 v2.2.1 h1:bBRl1b0OH9s/DuPhuXpNl+VtCaJXFZ5/uEFST95x9zc=
+gopkg.in/natefinch/lumberjack.v2 v2.2.1/go.mod h1:YD8tP3GAjkrDg1eZH7EGmyESg/lsYskCTPBJVb9jqSc=
 gopkg.in/yaml.v2 v2.2.8/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
 gopkg.in/yaml.v2 v2.4.0 h1:D8xgwECY7CYvx+Y2n4sBz93Jn9JRvxdiyyo8CTfuKaY=
 gopkg.in/yaml.v2 v2.4.0/go.mod h1:RDklbk79AGWmwhnvt/jBztapEOGDOx6ZbXqjP6csGnQ=

config/logger.go Normal file

@@ -0,0 +1,46 @@
package config

import (
	"io"

	"github.com/pkg/errors"
	"go.uber.org/zap"
	"source.quilibrium.com/quilibrium/monorepo/utils/logging"
)

type LogConfig struct {
	Path       string `yaml:"path"`
	MaxSize    int    `yaml:"maxSize"`
	MaxBackups int    `yaml:"maxBackups"`
	MaxAge     int    `yaml:"maxAge"`
	Compress   bool   `yaml:"compress"`
}

func (c *Config) CreateLogger(coreId uint, debug bool) (
	*zap.Logger,
	io.Closer,
	error,
) {
	if c.Logger != nil {
		logger, closer, err := logging.NewRotatingFileLogger(
			debug,
			coreId,
			c.Logger.Path,
			c.Logger.MaxSize,
			c.Logger.MaxBackups,
			c.Logger.MaxAge,
			c.Logger.Compress,
		)
		return logger, closer, errors.Wrap(err, "create logger")
	}

	var logger *zap.Logger
	var err error
	if debug {
		logger, err = zap.NewDevelopment()
	} else {
		logger, err = zap.NewProduction()
	}

	return logger, io.NopCloser(nil), errors.Wrap(err, "create logger")
}


@@ -22,58 +22,60 @@ const (
 )

 type P2PConfig struct {
 D int `yaml:"d"`
 DLo int `yaml:"dLo"`
 DHi int `yaml:"dHi"`
 DScore int `yaml:"dScore"`
 DOut int `yaml:"dOut"`
 HistoryLength int `yaml:"historyLength"`
 HistoryGossip int `yaml:"historyGossip"`
 DLazy int `yaml:"dLazy"`
 GossipFactor float64 `yaml:"gossipFactor"`
 GossipRetransmission int `yaml:"gossipRetransmission"`
 HeartbeatInitialDelay time.Duration `yaml:"heartbeatInitialDelay"`
 HeartbeatInterval time.Duration `yaml:"heartbeatInterval"`
 FanoutTTL time.Duration `yaml:"fanoutTTL"`
 PrunePeers int `yaml:"prunePeers"`
 PruneBackoff time.Duration `yaml:"pruneBackoff"`
 UnsubscribeBackoff time.Duration `yaml:"unsubscribeBackoff"`
 Connectors int `yaml:"connectors"`
 MaxPendingConnections int `yaml:"maxPendingConnections"`
 ConnectionTimeout time.Duration `yaml:"connectionTimeout"`
 DirectConnectTicks uint64 `yaml:"directConnectTicks"`
 DirectConnectInitialDelay time.Duration `yaml:"directConnectInitialDelay"`
 OpportunisticGraftTicks uint64 `yaml:"opportunisticGraftTicks"`
 OpportunisticGraftPeers int `yaml:"opportunisticGraftPeers"`
 GraftFloodThreshold time.Duration `yaml:"graftFloodThreshold"`
 MaxIHaveLength int `yaml:"maxIHaveLength"`
 MaxIHaveMessages int `yaml:"maxIHaveMessages"`
 MaxIDontWantMessages int `yaml:"maxIDontWantMessages"`
 IWantFollowupTime time.Duration `yaml:"iWantFollowupTime"`
 IDontWantMessageThreshold int `yaml:"iDontWantMessageThreshold"`
 IDontWantMessageTTL int `yaml:"iDontWantMessageTTL"`
 BootstrapPeers []string `yaml:"bootstrapPeers"`
 ListenMultiaddr string `yaml:"listenMultiaddr"`
 StreamListenMultiaddr string `yaml:"streamListenMultiaddr"`
+AnnounceListenMultiaddr string `yaml:"announceListenMultiaddr"`
+AnnounceStreamListenMultiaddr string `yaml:"announceStreamListenMultiaddr"`
 PeerPrivKey string `yaml:"peerPrivKey"`
 TraceLogFile string `yaml:"traceLogFile"`
 TraceLogStdout bool `yaml:"traceLogStdout"`
 Network uint8 `yaml:"network"`
 LowWatermarkConnections int `yaml:"lowWatermarkConnections"`
 HighWatermarkConnections int `yaml:"highWatermarkConnections"`
 DirectPeers []string `yaml:"directPeers"`
 GRPCServerRateLimit int `yaml:"grpcServerRateLimit"`
 MinBootstrapPeers int `yaml:"minBootstrapPeers"`
 BootstrapParallelism int `yaml:"bootstrapParallelism"`
 DiscoveryParallelism int `yaml:"discoveryParallelism"`
 DiscoveryPeerLookupLimit int `yaml:"discoveryPeerLookupLimit"`
 PingTimeout time.Duration `yaml:"pingTimeout"`
 PingPeriod time.Duration `yaml:"pingPeriod"`
 PingAttempts int `yaml:"pingAttempts"`
 ValidateQueueSize int `yaml:"validateQueueSize"`
 ValidateWorkers int `yaml:"validateWorkers"`
 SubscriptionQueueSize int `yaml:"subscriptionQueueSize"`
 PeerOutboundQueueSize int `yaml:"peerOutboundQueueSize"`
 }

 // WithDefaults returns a copy of the P2PConfig with any missing fields set to

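The two new announce fields let a node behind NAT or a reverse proxy advertise a public address distinct from the address it binds to. A sketch of the YAML (the enclosing `p2p` key follows this config's layout; addresses and the stream port are placeholders):

```yaml
p2p:
  listenMultiaddr: /ip4/0.0.0.0/udp/8336/quic-v1
  announceListenMultiaddr: /ip4/203.0.113.10/udp/8336/quic-v1
  streamListenMultiaddr: /ip4/0.0.0.0/tcp/8340
  announceStreamListenMultiaddr: /ip4/203.0.113.10/tcp/8340
```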

@@ -12,8 +12,8 @@ func GetMinimumVersionCutoff() time.Time {
 // Gets the minimum patch version This should only be set in a release series
 // if there is something in the patch update that is needed to cut off
 // unupgraded peers. Be sure to update this to 0x00 for any new minor release.
-func GetMinimumPatchVersion() byte {
-	return 0x00
+func GetMinimumPatchNumber() byte {
+	return 0x04
 }

 func GetMinimumVersion() []byte {
@@ -43,9 +43,9 @@ func FormatVersion(version []byte) string {
 }

 func GetPatchNumber() byte {
-	return 0x01
+	return 0x12
 }

 func GetRCNumber() byte {
-	return 0x06
+	return 0x45
 }

conntest/build.sh Executable file

@@ -0,0 +1,36 @@
#!/bin/bash
set -euxo pipefail
# This script builds the conntest binary for the current platform and statically links it with VDF static lib.
# Assumes that the VDF library has been built by running the generate.sh script in the `../vdf` directory.
ROOT_DIR="${ROOT_DIR:-$( cd "$(dirname "$(realpath "$( dirname "${BASH_SOURCE[0]}" )")")" >/dev/null 2>&1 && pwd )}"
CONNTEST_DIR="$ROOT_DIR/conntest"
BINARIES_DIR="$ROOT_DIR/target/release"
pushd "$CONNTEST_DIR" > /dev/null
export CGO_ENABLED=1
os_type="$(uname)"
case "$os_type" in
"Darwin")
# Check if the architecture is ARM
if [[ "$(uname -m)" == "arm64" ]]; then
# MacOS ld doesn't support -Bstatic and -Bdynamic, so it's important that there is only a static version of the library
go build -ldflags "-linkmode 'external' -extldflags '-L$BINARIES_DIR -L/usr/local/lib/ -L/opt/homebrew/Cellar/openssl@3/3.5.0/lib -lbls48581 -lvdf -lchannel -lferret -lverenc -lbulletproofs -lrpm -ldl -lm -lflint -lgmp -lmpfr -lstdc++ -lcrypto -lssl'" "$@"
else
echo "Unsupported platform"
exit 1
fi
;;
"Linux")
export CGO_LDFLAGS="-L/usr/local/lib -lflint -lgmp -lmpfr -ldl -lm -L$BINARIES_DIR -lstdc++ -lvdf -lchannel -lferret -lverenc -lbulletproofs -lbls48581 -lrpm -lcrypto -lssl -static"
go build -ldflags "-linkmode 'external'" "$@"
;;
*)
echo "Unsupported platform"
exit 1
;;
esac

conntest/go.mod Normal file

@@ -0,0 +1,193 @@
module source.quilibrium.com/quilibrium/monorepo/conntest
go 1.24.0
replace source.quilibrium.com/quilibrium/monorepo/nekryptology => ../nekryptology
replace source.quilibrium.com/quilibrium/monorepo/bls48581 => ../bls48581
replace source.quilibrium.com/quilibrium/monorepo/bulletproofs => ../bulletproofs
replace source.quilibrium.com/quilibrium/monorepo/ferret => ../ferret
replace source.quilibrium.com/quilibrium/monorepo/channel => ../channel
replace source.quilibrium.com/quilibrium/monorepo/bedlam => ../bedlam
replace source.quilibrium.com/quilibrium/monorepo/types => ../types
replace source.quilibrium.com/quilibrium/monorepo/vdf => ../vdf
replace source.quilibrium.com/quilibrium/monorepo/verenc => ../verenc
replace source.quilibrium.com/quilibrium/monorepo/protobufs => ../protobufs
replace source.quilibrium.com/quilibrium/monorepo/utils => ../utils
replace source.quilibrium.com/quilibrium/monorepo/config => ../config
replace source.quilibrium.com/quilibrium/monorepo/node => ../node
replace source.quilibrium.com/quilibrium/monorepo/hypergraph => ../hypergraph
replace source.quilibrium.com/quilibrium/monorepo/consensus => ../consensus
replace source.quilibrium.com/quilibrium/monorepo/rpm => ../rpm
replace github.com/multiformats/go-multiaddr => ../go-multiaddr
replace github.com/multiformats/go-multiaddr-dns => ../go-multiaddr-dns
replace github.com/libp2p/go-libp2p => ../go-libp2p
replace github.com/libp2p/go-libp2p-kad-dht => ../go-libp2p-kad-dht
replace source.quilibrium.com/quilibrium/monorepo/go-libp2p-blossomsub => ../go-libp2p-blossomsub
replace source.quilibrium.com/quilibrium/monorepo/lifecycle => ../lifecycle
require (
github.com/libp2p/go-libp2p v0.41.1
source.quilibrium.com/quilibrium/monorepo/config v0.0.0-00010101000000-000000000000
source.quilibrium.com/quilibrium/monorepo/go-libp2p-blossomsub v0.0.0-00010101000000-000000000000
source.quilibrium.com/quilibrium/monorepo/node v0.0.0-00010101000000-000000000000
)
require (
github.com/grpc-ecosystem/go-grpc-middleware/providers/prometheus v1.0.1 // indirect
github.com/grpc-ecosystem/go-grpc-middleware/v2 v2.1.0 // indirect
github.com/libp2p/go-libp2p-kad-dht v0.23.0 // indirect
github.com/libp2p/go-libp2p-routing-helpers v0.7.2 // indirect
github.com/libp2p/go-yamux/v5 v5.0.1 // indirect
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 // indirect
github.com/pion/datachannel v1.5.10 // indirect
github.com/pion/dtls/v2 v2.2.12 // indirect
github.com/pion/dtls/v3 v3.0.6 // indirect
github.com/pion/ice/v4 v4.0.10 // indirect
github.com/pion/interceptor v0.1.40 // indirect
github.com/pion/logging v0.2.3 // indirect
github.com/pion/mdns/v2 v2.0.7 // indirect
github.com/pion/randutil v0.1.0 // indirect
github.com/pion/rtcp v1.2.15 // indirect
github.com/pion/rtp v1.8.19 // indirect
github.com/pion/sctp v1.8.39 // indirect
github.com/pion/sdp/v3 v3.0.13 // indirect
github.com/pion/srtp/v3 v3.0.6 // indirect
github.com/pion/stun v0.6.1 // indirect
github.com/pion/stun/v3 v3.0.0 // indirect
github.com/pion/transport/v2 v2.2.10 // indirect
github.com/pion/transport/v3 v3.0.7 // indirect
github.com/pion/turn/v4 v4.0.2 // indirect
github.com/pion/webrtc/v4 v4.1.2 // indirect
github.com/wlynxg/anet v0.0.5 // indirect
go.opentelemetry.io/auto/sdk v1.1.0 // indirect
go.opentelemetry.io/otel v1.34.0 // indirect
go.opentelemetry.io/otel/metric v1.34.0 // indirect
go.opentelemetry.io/otel/trace v1.34.0 // indirect
go.uber.org/mock v0.5.2 // indirect
golang.org/x/time v0.12.0 // indirect
google.golang.org/protobuf v1.36.6 // indirect
gopkg.in/yaml.v2 v2.4.0 // indirect
source.quilibrium.com/quilibrium/monorepo/consensus v0.0.0-00010101000000-000000000000 // indirect
source.quilibrium.com/quilibrium/monorepo/protobufs v0.0.0-00010101000000-000000000000 // indirect
source.quilibrium.com/quilibrium/monorepo/types v0.0.0-00010101000000-000000000000 // indirect
source.quilibrium.com/quilibrium/monorepo/utils v0.0.0-00010101000000-000000000000 // indirect
)
require (
github.com/gorilla/websocket v1.5.3 // indirect
github.com/grpc-ecosystem/grpc-gateway/v2 v2.26.3 // indirect
github.com/multiformats/go-multiaddr v0.16.1 // indirect
go.uber.org/atomic v1.11.0 // indirect
google.golang.org/genproto/googleapis/api v0.0.0-20250303144028-a0af3efb3deb // indirect
google.golang.org/genproto/googleapis/rpc v0.0.0-20250303144028-a0af3efb3deb // indirect
gopkg.in/natefinch/lumberjack.v2 v2.2.1 // indirect
source.quilibrium.com/quilibrium/monorepo/lifecycle v0.0.0-00010101000000-000000000000 // indirect
)
require (
github.com/benbjohnson/clock v1.3.5 // indirect
github.com/beorn7/perks v1.0.1 // indirect
github.com/cespare/xxhash/v2 v2.3.0 // indirect
github.com/cloudflare/circl v1.6.1 // indirect
github.com/davidlazar/go-crypto v0.0.0-20200604182044-b73af7476f6c // indirect
github.com/decred/dcrd/dcrec/secp256k1/v4 v4.4.0 // indirect
github.com/flynn/noise v1.1.0 // indirect
github.com/francoispqt/gojay v1.2.13 // indirect
github.com/go-logr/logr v1.4.2 // indirect
github.com/go-logr/stdr v1.2.2 // indirect
github.com/gogo/protobuf v1.3.2 // indirect
github.com/google/gopacket v1.1.19 // indirect
github.com/google/uuid v1.6.0 // indirect
github.com/hashicorp/errwrap v1.1.0 // indirect
github.com/hashicorp/go-multierror v1.1.1 // indirect
github.com/hashicorp/golang-lru v0.5.4 // indirect
github.com/hashicorp/golang-lru/v2 v2.0.7 // indirect
github.com/huin/goupnp v1.3.0 // indirect
github.com/iden3/go-iden3-crypto v0.0.17 // indirect
github.com/ipfs/boxo v0.10.0 // indirect
github.com/ipfs/go-cid v0.5.0 // indirect
github.com/ipfs/go-datastore v0.8.2 // indirect
github.com/ipfs/go-log v1.0.5 // indirect
github.com/ipfs/go-log/v2 v2.5.1 // indirect
github.com/ipld/go-ipld-prime v0.20.0 // indirect
github.com/jackpal/go-nat-pmp v1.0.2 // indirect
github.com/jbenet/go-temp-err-catcher v0.1.0 // indirect
github.com/klauspost/compress v1.18.0 // indirect
github.com/klauspost/cpuid/v2 v2.2.10 // indirect
github.com/koron/go-ssdp v0.0.6 // indirect
github.com/libp2p/go-buffer-pool v0.1.0 // indirect
github.com/libp2p/go-cidranger v1.1.0 // indirect
github.com/libp2p/go-flow-metrics v0.2.0 // indirect
github.com/libp2p/go-libp2p-asn-util v0.4.1 // indirect
github.com/libp2p/go-libp2p-kbucket v0.6.3 // indirect
github.com/libp2p/go-libp2p-record v0.2.0 // indirect
github.com/libp2p/go-msgio v0.3.0 // indirect
github.com/libp2p/go-netroute v0.2.2 // indirect
github.com/libp2p/go-reuseport v0.4.0 // indirect
github.com/marten-seemann/tcp v0.0.0-20210406111302-dfbc87cc63fd // indirect
github.com/mattn/go-isatty v0.0.20 // indirect
github.com/miekg/dns v1.1.66 // indirect
github.com/mikioh/tcpinfo v0.0.0-20190314235526-30a79bb1804b // indirect
github.com/mikioh/tcpopt v0.0.0-20190314235656-172688c1accc // indirect
github.com/minio/sha256-simd v1.0.1 // indirect
github.com/mr-tron/base58 v1.2.0 // indirect
github.com/multiformats/go-base32 v0.1.0 // indirect
github.com/multiformats/go-base36 v0.2.0 // indirect
github.com/multiformats/go-multiaddr-dns v0.4.1 // indirect
github.com/multiformats/go-multiaddr-fmt v0.1.0 // indirect
github.com/multiformats/go-multibase v0.2.0 // indirect
github.com/multiformats/go-multicodec v0.9.1 // indirect
github.com/multiformats/go-multihash v0.2.3 // indirect
github.com/multiformats/go-multistream v0.6.1 // indirect
github.com/multiformats/go-varint v0.0.7 // indirect
github.com/opentracing/opentracing-go v1.2.0 // indirect
github.com/pbnjay/memory v0.0.0-20210728143218-7b4eea64cf58 // indirect
github.com/pkg/errors v0.9.1 // indirect
github.com/polydawn/refmt v0.89.0 // indirect
github.com/prometheus/client_golang v1.22.0 // indirect
github.com/prometheus/client_model v0.6.2 // indirect
github.com/prometheus/common v0.64.0 // indirect
github.com/prometheus/procfs v0.16.1 // indirect
github.com/quic-go/qpack v0.5.1 // indirect
github.com/quic-go/quic-go v0.54.0 // indirect
github.com/quic-go/webtransport-go v0.9.0 // indirect
github.com/spaolacci/murmur3 v1.1.0 // indirect
github.com/whyrusleeping/go-keyspace v0.0.0-20160322163242-5b898ac5add1 // indirect
go.opencensus.io v0.24.0 // indirect
go.uber.org/dig v1.19.0 // indirect
go.uber.org/fx v1.24.0 // indirect
go.uber.org/multierr v1.11.0 // indirect
go.uber.org/zap v1.27.0
golang.org/x/crypto v0.39.0 // indirect
golang.org/x/exp v0.0.0-20250606033433-dcc06ee1d476 // indirect
golang.org/x/mod v0.25.0 // indirect
golang.org/x/net v0.41.0 // indirect
golang.org/x/sync v0.17.0 // indirect
golang.org/x/sys v0.36.0 // indirect
golang.org/x/text v0.26.0 // indirect
golang.org/x/tools v0.34.0 // indirect
gonum.org/v1/gonum v0.13.0 // indirect
google.golang.org/grpc v1.72.0 // indirect
lukechampine.com/blake3 v1.4.1 // indirect
)

conntest/go.sum Normal file

@@ -0,0 +1,700 @@
cloud.google.com/go v0.26.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw=
cloud.google.com/go v0.31.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw=
cloud.google.com/go v0.34.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw=
cloud.google.com/go v0.37.0/go.mod h1:TS1dMSSfndXH133OKGwekG838Om/cQT0BUHV3HcBgoo=
dmitri.shuralyov.com/app/changes v0.0.0-20180602232624-0a106ad413e3/go.mod h1:Yl+fi1br7+Rr3LqpNJf1/uxUdtRUV+Tnj0o93V2B9MU=
dmitri.shuralyov.com/html/belt v0.0.0-20180602232347-f7d459c86be0/go.mod h1:JLBrvjyP0v+ecvNYvCpyZgu5/xkfAUhi6wJj28eUfSU=
dmitri.shuralyov.com/service/change v0.0.0-20181023043359-a85b471d5412/go.mod h1:a1inKt/atXimZ4Mv927x+r7UpyzRUf4emIoiiSC2TN4=
dmitri.shuralyov.com/state v0.0.0-20180228185332-28bcc343414c/go.mod h1:0PRwlb0D6DFvNNtx+9ybjezNCa8XF0xaYcETyp6rHWU=
git.apache.org/thrift.git v0.0.0-20180902110319-2566ecd5d999/go.mod h1:fPE2ZNJGynbRyZ4dJvy6G277gSllfV2HJqblrnkyeyg=
github.com/BurntSushi/toml v0.3.1/go.mod h1:xHWCNGjB5oqiDr8zfno3MHue2Ht5sIBksp03qcyfWMU=
github.com/anmitsu/go-shlex v0.0.0-20161002113705-648efa622239/go.mod h1:2FmKhYUyUczH0OGQWaF5ceTx0UBShxjsH6f8oGKYe2c=
github.com/benbjohnson/clock v1.1.0/go.mod h1:J11/hYXuz8f4ySSvYwY0FKfm+ezbsZBKZxNJlLklBHA=
github.com/benbjohnson/clock v1.3.5 h1:VvXlSJBzZpA/zum6Sj74hxwYI2DIxRWuNIoXAzHZz5o=
github.com/benbjohnson/clock v1.3.5/go.mod h1:J11/hYXuz8f4ySSvYwY0FKfm+ezbsZBKZxNJlLklBHA=
github.com/beorn7/perks v0.0.0-20180321164747-3a771d992973/go.mod h1:Dwedo/Wpr24TaqPxmxbtue+5NUziq4I4S80YR8gNf3Q=
github.com/beorn7/perks v1.0.1 h1:VlbKKnNfV8bJzeqoa4cOKqO6bYr3WgKZxO8Z16+hsOM=
github.com/beorn7/perks v1.0.1/go.mod h1:G2ZrVWU2WbWT9wwq4/hrbKbnv/1ERSJQ0ibhJ6rlkpw=
github.com/bradfitz/go-smtpd v0.0.0-20170404230938-deb6d6237625/go.mod h1:HYsPBTaaSFSlLx/70C2HPIMNZpVV8+vt/A+FMnYP11g=
github.com/buger/jsonparser v0.0.0-20181115193947-bf1c66bbce23/go.mod h1:bbYlZJ7hK1yFx9hf58LP0zeX7UjIGs20ufpu3evjr+s=
github.com/census-instrumentation/opencensus-proto v0.2.1/go.mod h1:f6KPmirojxKA12rnyqOA5BBL4O983OfeGPqjHWSTneU=
github.com/cespare/xxhash/v2 v2.3.0 h1:UL815xU9SqsFlibzuggzjXhog7bL6oX9BbNZnL2UFvs=
github.com/cespare/xxhash/v2 v2.3.0/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs=
github.com/client9/misspell v0.3.4/go.mod h1:qj6jICC3Q7zFZvVWo7KLAzC3yx5G7kyvSDkc90ppPyw=
github.com/cloudflare/circl v1.6.1 h1:zqIqSPIndyBh1bjLVVDHMPpVKqp8Su/V+6MeDzzQBQ0=
github.com/cloudflare/circl v1.6.1/go.mod h1:uddAzsPgqdMAYatqJ0lsjX1oECcQLIlRpzZh3pJrofs=
github.com/cncf/udpa/go v0.0.0-20191209042840-269d4d468f6f/go.mod h1:M8M6+tZqaGXZJjfX53e64911xZQV5JYwmTeXPW+k8Sc=
github.com/coreos/go-systemd v0.0.0-20181012123002-c6f51f82210d/go.mod h1:F5haX7vjVVG0kc13fIWeqUViNPyEJxv/OmvnBo0Yme4=
github.com/cpuguy83/go-md2man/v2 v2.0.0-20190314233015-f79a8a8ca69d/go.mod h1:maD7wRr/U5Z6m/iR4s+kqSMx2CaBsrgA7czyZG/E6dU=
github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/davidlazar/go-crypto v0.0.0-20200604182044-b73af7476f6c h1:pFUpOrbxDR6AkioZ1ySsx5yxlDQZ8stG2b88gTPxgJU=
github.com/davidlazar/go-crypto v0.0.0-20200604182044-b73af7476f6c/go.mod h1:6UhI8N9EjYm1c2odKpFpAYeR8dsBeM7PtzQhRgxRr9U=
github.com/decred/dcrd/crypto/blake256 v1.1.0 h1:zPMNGQCm0g4QTY27fOCorQW7EryeQ/U0x++OzVrdms8=
github.com/decred/dcrd/crypto/blake256 v1.1.0/go.mod h1:2OfgNZ5wDpcsFmHmCK5gZTPcCXqlm2ArzUIkw9czNJo=
github.com/decred/dcrd/dcrec/secp256k1/v4 v4.4.0 h1:NMZiJj8QnKe1LgsbDayM4UoHwbvwDRwnI3hwNaAHRnc=
github.com/decred/dcrd/dcrec/secp256k1/v4 v4.4.0/go.mod h1:ZXNYxsqcloTdSy/rNShjYzMhyjf0LaoftYK0p+A3h40=
github.com/dustin/go-humanize v1.0.0/go.mod h1:HtrtbFcZ19U5GC7JDqmcUSB87Iq5E25KnS6fMYU6eOk=
github.com/envoyproxy/go-control-plane v0.9.0/go.mod h1:YTl/9mNaCwkRvm6d1a2C3ymFceY/DCBVvsKhRF0iEA4=
github.com/envoyproxy/go-control-plane v0.9.1-0.20191026205805-5f8ba28d4473/go.mod h1:YTl/9mNaCwkRvm6d1a2C3ymFceY/DCBVvsKhRF0iEA4=
github.com/envoyproxy/go-control-plane v0.9.4/go.mod h1:6rpuAdCZL397s3pYoYcLgu1mIlRU8Am5FuJP05cCM98=
github.com/envoyproxy/protoc-gen-validate v0.1.0/go.mod h1:iSmxcyjqTsJpI2R4NaDN7+kN2VEUnK/pcBlmesArF7c=
github.com/flynn/go-shlex v0.0.0-20150515145356-3f9db97f8568/go.mod h1:xEzjJPgXI435gkrCt3MPfRiAkVrwSbHsst4LCFVfpJc=
github.com/flynn/noise v1.1.0 h1:KjPQoQCEFdZDiP03phOvGi11+SVVhBG2wOWAorLsstg=
github.com/flynn/noise v1.1.0/go.mod h1:xbMo+0i6+IGbYdJhF31t2eR1BIU0CYc12+BNAKwUTag=
github.com/francoispqt/gojay v1.2.13 h1:d2m3sFjloqoIUQU3TsHBgj6qg/BVGlTBeHDUmyJnXKk=
github.com/francoispqt/gojay v1.2.13/go.mod h1:ehT5mTG4ua4581f1++1WLG0vPdaA9HaiDsoyrBGkyDY=
github.com/frankban/quicktest v1.14.4 h1:g2rn0vABPOOXmZUj+vbmUp0lPoXEMuhTpIluN0XL9UY=
github.com/frankban/quicktest v1.14.4/go.mod h1:4ptaffx2x8+WTWXmUCuVU6aPUX1/Mz7zb5vbUoiM6w0=
github.com/fsnotify/fsnotify v1.4.7/go.mod h1:jwhsz4b93w/PPRr/qN1Yymfu8t87LnFCMoQvtojpjFo=
github.com/ghodss/yaml v1.0.0/go.mod h1:4dBDuWmgqj2HViK6kFavaiC9ZROes6MMH2rRYeMEF04=
github.com/gliderlabs/ssh v0.1.1/go.mod h1:U7qILu1NlMHj9FlMhZLlkCdDnU1DBEAqr0aevW3Awn0=
github.com/go-errors/errors v1.0.1/go.mod h1:f4zRHt4oKfwPJE5k8C9vpYG+aDHdBFUsgrm6/TyX73Q=
github.com/go-logr/logr v1.2.2/go.mod h1:jdQByPbusPIv2/zmleS9BjJVeZ6kBagPoEUsqbVz/1A=
github.com/go-logr/logr v1.4.2 h1:6pFjapn8bFcIbiKo3XT4j/BhANplGihG6tvd+8rYgrY=
github.com/go-logr/logr v1.4.2/go.mod h1:9T104GzyrTigFIr8wt5mBrctHMim0Nb2HLGrmQ40KvY=
github.com/go-logr/stdr v1.2.2 h1:hSWxHoqTgW2S2qGc0LTAI563KZ5YKYRhT3MFKZMbjag=
github.com/go-logr/stdr v1.2.2/go.mod h1:mMo/vtBO5dYbehREoey6XUKy/eSumjCCveDpRre4VKE=
github.com/go-yaml/yaml v2.1.0+incompatible/go.mod h1:w2MrLa16VYP0jy6N7M5kHaCkaLENm+P+Tv+MfurjSw0=
github.com/gogo/protobuf v1.1.1/go.mod h1:r8qH/GZQm5c6nD/R0oafs1akxWv10x8SbQlK7atdtwQ=
github.com/gogo/protobuf v1.3.2 h1:Ov1cvc58UF3b5XjBnZv7+opcTcQFZebYjWzi34vdm4Q=
github.com/gogo/protobuf v1.3.2/go.mod h1:P1XiOD3dCwIKUDQYPy72D8LYyHL2YPYrpS2s69NZV8Q=
github.com/golang/glog v0.0.0-20160126235308-23def4e6c14b/go.mod h1:SBH7ygxi8pfUlaOkMMuAQtPIUF8ecWP5IEl/CR7VP2Q=
github.com/golang/groupcache v0.0.0-20200121045136-8c9f03a8e57e/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da h1:oI5xCqsCo564l8iNU+DwB5epxmsaqB+rhGL0m5jtYqE=
github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da/go.mod h1:cIg4eruTrX1D+g88fzRXU5OdNfaM+9IcxsU14FzY7Hc=
github.com/golang/lint v0.0.0-20180702182130-06c8688daad7/go.mod h1:tluoj9z5200jBnyusfRPU2LqT6J+DAorxEvtC7LHB+E=
github.com/golang/mock v1.1.1/go.mod h1:oTYuIxOrZwtPieC+H1uAHpcLFnEyAGVDL/k47Jfbm0A=
github.com/golang/mock v1.2.0/go.mod h1:oTYuIxOrZwtPieC+H1uAHpcLFnEyAGVDL/k47Jfbm0A=
github.com/golang/protobuf v1.2.0/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.3.1/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.3.2/go.mod h1:6lQm79b+lXiMfvg/cZm0SGofjICqVBUtrP5yJMmIC1U=
github.com/golang/protobuf v1.4.0-rc.1/go.mod h1:ceaxUfeHdC40wWswd/P6IGgMaK3YpKi5j83Wpe3EHw8=
github.com/golang/protobuf v1.4.0-rc.1.0.20200221234624-67d41d38c208/go.mod h1:xKAWHe0F5eneWXFV3EuXVDTCmh+JuBKY0li0aMyXATA=
github.com/golang/protobuf v1.4.0-rc.2/go.mod h1:LlEzMj4AhA7rCAGe4KMBDvJI+AwstrUpVNzEA03Pprs=
github.com/golang/protobuf v1.4.0-rc.4.0.20200313231945-b860323f09d0/go.mod h1:WU3c8KckQ9AFe+yFwt9sWVRKCVIyN9cPHBJSNnbL67w=
github.com/golang/protobuf v1.4.0/go.mod h1:jodUvKwWbYaEsadDk5Fwe5c77LiNKVO9IDvqG2KuDX0=
github.com/golang/protobuf v1.4.1/go.mod h1:U8fpvMrcmy5pZrNK1lt4xCsGvpyWQ/VVv6QDs8UjoX8=
github.com/golang/protobuf v1.4.3/go.mod h1:oDoupMAO8OvCJWAcko0GGGIgR6R6ocIYbsSw735rRwI=
github.com/golang/protobuf v1.5.4 h1:i7eJL8qZTpSEXOPTxNKhASYpMn+8e5Q6AdndVa1dWek=
github.com/golang/protobuf v1.5.4/go.mod h1:lnTiLA8Wa4RWRcIUkrtSVa5nRhsEGBg48fD6rSs7xps=
github.com/google/btree v0.0.0-20180813153112-4030bb1f1f0c/go.mod h1:lNA+9X1NB3Zf8V7Ke586lFgjr2dZNuvo3lPJSGZ5JPQ=
github.com/google/go-cmp v0.2.0/go.mod h1:oXzfMopK8JAjlY9xF4vHSVASa0yLyX7SntLO5aqRK0M=
github.com/google/go-cmp v0.3.0/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU=
github.com/google/go-cmp v0.3.1/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU=
github.com/google/go-cmp v0.4.0/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.0/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.3/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE=
github.com/google/go-cmp v0.5.8/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY=
github.com/google/go-cmp v0.6.0/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY=
github.com/google/go-cmp v0.7.0 h1:wk8382ETsv4JYUZwIsn6YpYiWiBsYLSJiTsyBybVuN8=
github.com/google/go-cmp v0.7.0/go.mod h1:pXiqmnSA92OHEEa9HXL2W4E7lf9JzCmGVUdgjX3N/iU=
github.com/google/go-github v17.0.0+incompatible/go.mod h1:zLgOLi98H3fifZn+44m+umXrS52loVEgC2AApnigrVQ=
github.com/google/go-querystring v1.0.0/go.mod h1:odCYkC5MyYFN7vkCjXpyrEuKhc/BUO6wN/zVPAxq5ck=
github.com/google/gopacket v1.1.19 h1:ves8RnFZPGiFnTS0uPQStjwru6uO6h+nlr9j6fL7kF8=
github.com/google/gopacket v1.1.19/go.mod h1:iJ8V8n6KS+z2U1A8pUwu8bW5SyEMkXJB8Yo/Vo+TKTo=
github.com/google/martian v2.1.0+incompatible/go.mod h1:9I4somxYTbIHy5NJKHRl3wXiIaQGbYVAs8BPL6v8lEs=
github.com/google/pprof v0.0.0-20181206194817-3ea8567a2e57/go.mod h1:zfwlbNMJ+OItoe0UupaVj+oy1omPYYDuagoSzA8v9mc=
github.com/google/renameio v0.1.0/go.mod h1:KWCgfxg9yswjAJkECMjeO8J8rahYeXnNhOm40UhjYkI=
github.com/google/uuid v1.1.2/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/google/uuid v1.6.0 h1:NIvaJDMOsjHA8n1jAhLSgzrAzy1Hgr+hNrb57e+94F0=
github.com/google/uuid v1.6.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo=
github.com/googleapis/gax-go v2.0.0+incompatible/go.mod h1:SFVmujtThgffbyetf+mdk2eWhX2bMyUtNHzFKcPA9HY=
github.com/googleapis/gax-go/v2 v2.0.3/go.mod h1:LLvjysVCY1JZeum8Z6l8qUty8fiNwE08qbEPm1M08qg=
github.com/gopherjs/gopherjs v0.0.0-20181017120253-0766667cb4d1/go.mod h1:wJfORRmW1u3UXTncJ5qlYoELFm8eSnnEO6hX4iZ3EWY=
github.com/gopherjs/gopherjs v0.0.0-20190430165422-3e4dfb77656c h1:7lF+Vz0LqiRidnzC1Oq86fpX1q/iEv2KJdrCtttYjT4=
github.com/gopherjs/gopherjs v0.0.0-20190430165422-3e4dfb77656c/go.mod h1:wJfORRmW1u3UXTncJ5qlYoELFm8eSnnEO6hX4iZ3EWY=
github.com/gorilla/websocket v1.5.3 h1:saDtZ6Pbx/0u+bgYQ3q96pZgCzfhKXGPqt7kZ72aNNg=
github.com/gorilla/websocket v1.5.3/go.mod h1:YR8l580nyteQvAITg2hZ9XVh4b55+EU/adAjf1fMHhE=
github.com/gregjones/httpcache v0.0.0-20180305231024-9cad4c3443a7/go.mod h1:FecbI9+v66THATjSRHfNgh1IVFe/9kFxbXtjV0ctIMA=
github.com/grpc-ecosystem/go-grpc-middleware/providers/prometheus v1.0.1 h1:qnpSQwGEnkcRpTqNOIR6bJbR0gAorgP9CSALpRcKoAA=
github.com/grpc-ecosystem/go-grpc-middleware/providers/prometheus v1.0.1/go.mod h1:lXGCsh6c22WGtjr+qGHj1otzZpV/1kwTMAqkwZsnWRU=
github.com/grpc-ecosystem/go-grpc-middleware/v2 v2.1.0 h1:pRhl55Yx1eC7BZ1N+BBWwnKaMyD8uC+34TLdndZMAKk=
github.com/grpc-ecosystem/go-grpc-middleware/v2 v2.1.0/go.mod h1:XKMd7iuf/RGPSMJ/U4HP0zS2Z9Fh8Ps9a+6X26m/tmI=
github.com/grpc-ecosystem/grpc-gateway v1.5.0/go.mod h1:RSKVYQBd5MCa4OVpNdGskqpgL2+G+NZTnrVHpWWfpdw=
github.com/grpc-ecosystem/grpc-gateway/v2 v2.26.3 h1:5ZPtiqj0JL5oKWmcsq4VMaAW5ukBEgSGXEN89zeH1Jo=
github.com/grpc-ecosystem/grpc-gateway/v2 v2.26.3/go.mod h1:ndYquD05frm2vACXE1nsccT4oJzjhw2arTS2cpUD1PI=
github.com/hashicorp/errwrap v1.0.0/go.mod h1:YH+1FKiLXxHSkmPseP+kNlulaMuP3n2brvKWEqk/Jc4=
github.com/hashicorp/errwrap v1.1.0 h1:OxrOeh75EUXMY8TBjag2fzXGZ40LB6IKw45YeGUDY2I=
github.com/hashicorp/errwrap v1.1.0/go.mod h1:YH+1FKiLXxHSkmPseP+kNlulaMuP3n2brvKWEqk/Jc4=
github.com/hashicorp/go-multierror v1.1.1 h1:H5DkEtf6CXdFp0N0Em5UCwQpXMWke8IA0+lD48awMYo=
github.com/hashicorp/go-multierror v1.1.1/go.mod h1:iw975J/qwKPdAO1clOe2L8331t/9/fmwbPZ6JB6eMoM=
github.com/hashicorp/golang-lru v0.5.4 h1:YDjusn29QI/Das2iO9M0BHnIbxPeyuCHsjMW+lJfyTc=
github.com/hashicorp/golang-lru v0.5.4/go.mod h1:iADmTwqILo4mZ8BN3D2Q6+9jd8WM5uGBxy+E8yxSoD4=
github.com/hashicorp/golang-lru/v2 v2.0.7 h1:a+bsQ5rvGLjzHuww6tVxozPZFVghXaHOwFs4luLUK2k=
github.com/hashicorp/golang-lru/v2 v2.0.7/go.mod h1:QeFd9opnmA6QUJc5vARoKUSoFhyfM2/ZepoAG6RGpeM=
github.com/huin/goupnp v1.3.0 h1:UvLUlWDNpoUdYzb2TCn+MuTWtcjXKSza2n6CBdQ0xXc=
github.com/huin/goupnp v1.3.0/go.mod h1:gnGPsThkYa7bFi/KWmEysQRf48l2dvR5bxr2OFckNX8=
github.com/iden3/go-iden3-crypto v0.0.17 h1:NdkceRLJo/pI4UpcjVah4lN/a3yzxRUGXqxbWcYh9mY=
github.com/iden3/go-iden3-crypto v0.0.17/go.mod h1:dLpM4vEPJ3nDHzhWFXDjzkn1qHoBeOT/3UEhXsEsP3E=
github.com/ipfs/boxo v0.10.0 h1:tdDAxq8jrsbRkYoF+5Rcqyeb91hgWe2hp7iLu7ORZLY=
github.com/ipfs/boxo v0.10.0/go.mod h1:Fg+BnfxZ0RPzR0nOodzdIq3A7KgoWAOWsEIImrIQdBM=
github.com/ipfs/go-cid v0.0.7/go.mod h1:6Ux9z5e+HpkQdckYoX1PG/6xqKspzlEIR5SDmgqgC/I=
github.com/ipfs/go-cid v0.5.0 h1:goEKKhaGm0ul11IHA7I6p1GmKz8kEYniqFopaB5Otwg=
github.com/ipfs/go-cid v0.5.0/go.mod h1:0L7vmeNXpQpUS9vt+yEARkJ8rOg43DF3iPgn4GIN0mk=
github.com/ipfs/go-datastore v0.8.2 h1:Jy3wjqQR6sg/LhyY0NIePZC3Vux19nLtg7dx0TVqr6U=
github.com/ipfs/go-datastore v0.8.2/go.mod h1:W+pI1NsUsz3tcsAACMtfC+IZdnQTnC/7VfPoJBQuts0=
github.com/ipfs/go-detect-race v0.0.1 h1:qX/xay2W3E4Q1U7d9lNs1sU9nvguX0a7319XbyQ6cOk=
github.com/ipfs/go-detect-race v0.0.1/go.mod h1:8BNT7shDZPo99Q74BpGMK+4D8Mn4j46UU0LZ723meps=
github.com/ipfs/go-ipfs-util v0.0.2 h1:59Sswnk1MFaiq+VcaknX7aYEyGyGDAA73ilhEK2POp8=
github.com/ipfs/go-ipfs-util v0.0.2/go.mod h1:CbPtkWJzjLdEcezDns2XYaehFVNXG9zrdrtMecczcsQ=
github.com/ipfs/go-log v1.0.5 h1:2dOuUCB1Z7uoczMWgAyDck5JLb72zHzrMnGnCNNbvY8=
github.com/ipfs/go-log v1.0.5/go.mod h1:j0b8ZoR+7+R99LD9jZ6+AJsrzkPbSXbZfGakb5JPtIo=
github.com/ipfs/go-log/v2 v2.1.3/go.mod h1:/8d0SH3Su5Ooc31QlL1WysJhvyOTDCjcCZ9Axpmri6g=
github.com/ipfs/go-log/v2 v2.5.1 h1:1XdUzF7048prq4aBjDQQ4SL5RxftpRGdXhNRwKSAlcY=
github.com/ipfs/go-log/v2 v2.5.1/go.mod h1:prSpmC1Gpllc9UYWxDiZDreBYw7zp4Iqp1kOLU9U5UI=
github.com/ipld/go-ipld-prime v0.20.0 h1:Ud3VwE9ClxpO2LkCYP7vWPc0Fo+dYdYzgxUJZ3uRG4g=
github.com/ipld/go-ipld-prime v0.20.0/go.mod h1:PzqZ/ZR981eKbgdr3y2DJYeD/8bgMawdGVlJDE8kK+M=
github.com/jackpal/go-nat-pmp v1.0.2 h1:KzKSgb7qkJvOUTqYl9/Hg/me3pWgBmERKrTGD7BdWus=
github.com/jackpal/go-nat-pmp v1.0.2/go.mod h1:QPH045xvCAeXUZOxsnwmrtiCoxIr9eob+4orBN1SBKc=
github.com/jbenet/go-temp-err-catcher v0.1.0 h1:zpb3ZH6wIE8Shj2sKS+khgRvf7T7RABoLk/+KKHggpk=
github.com/jbenet/go-temp-err-catcher v0.1.0/go.mod h1:0kJRvmDZXNMIiJirNPEYfhpPwbGVtZVWC34vc5WLsDk=
github.com/jellevandenhooff/dkim v0.0.0-20150330215556-f50fe3d243e1/go.mod h1:E0B/fFc00Y+Rasa88328GlI/XbtyysCtTHZS8h7IrBU=
github.com/json-iterator/go v1.1.6/go.mod h1:+SdeFBvtyEkXs7REEP0seUULqWtbJapLOCVDaaPEHmU=
github.com/jstemmer/go-junit-report v0.0.0-20190106144839-af01ea7f8024/go.mod h1:6v2b51hI/fHJwM22ozAgKL4VKDeJcHhJFhtBdhmNjmU=
github.com/jtolds/gls v4.20.0+incompatible h1:xdiiI2gbIgH/gLH7ADydsJ1uDOEzR8yvV7C0MuV77Wo=
github.com/jtolds/gls v4.20.0+incompatible/go.mod h1:QJZ7F/aHp+rZTRtaJ1ow/lLfFfVYBRgL+9YlvaHOwJU=
github.com/kisielk/errcheck v1.5.0/go.mod h1:pFxgyoBC7bSaBwPgfKdkLd5X25qrDl4LWUI2bnpBCr8=
github.com/kisielk/gotool v1.0.0/go.mod h1:XhKaO+MFFWcvkIS/tQcRk01m1F5IRFswLeQ+oQHNcck=
github.com/klauspost/compress v1.18.0 h1:c/Cqfb0r+Yi+JtIEq73FWXVkRonBlf0CRNYc8Zttxdo=
github.com/klauspost/compress v1.18.0/go.mod h1:2Pp+KzxcywXVXMr50+X0Q/Lsb43OQHYWRCY2AiWywWQ=
github.com/klauspost/cpuid/v2 v2.0.4/go.mod h1:FInQzS24/EEf25PyTYn52gqo7WaD8xa0213Md/qVLRg=
github.com/klauspost/cpuid/v2 v2.0.9/go.mod h1:FInQzS24/EEf25PyTYn52gqo7WaD8xa0213Md/qVLRg=
github.com/klauspost/cpuid/v2 v2.2.3/go.mod h1:RVVoqg1df56z8g3pUjL/3lE5UfnlrJX8tyFgg4nqhuY=
github.com/klauspost/cpuid/v2 v2.2.6/go.mod h1:Lcz8mBdAVJIBVzewtcLocK12l3Y+JytZYpaMropDUws=
github.com/klauspost/cpuid/v2 v2.2.10 h1:tBs3QSyvjDyFTq3uoc/9xFpCuOsJQFNPiAhYdw2skhE=
github.com/klauspost/cpuid/v2 v2.2.10/go.mod h1:hqwkgyIinND0mEev00jJYCxPNVRVXFQeu1XKlok6oO0=
github.com/koron/go-ssdp v0.0.6 h1:Jb0h04599eq/CY7rB5YEqPS83HmRfHP2azkxMN2rFtU=
github.com/koron/go-ssdp v0.0.6/go.mod h1:0R9LfRJGek1zWTjN3JUNlm5INCDYGpRDfAptnct63fI=
github.com/kr/pretty v0.1.0/go.mod h1:dAy3ld7l9f0ibDNOQOHHMYYIIbhfbHSm3C4ZsoJORNo=
github.com/kr/pretty v0.2.1/go.mod h1:ipq/a2n7PKx3OHsz4KJII5eveXtPO4qwEXGdVfWzfnI=
github.com/kr/pretty v0.3.1 h1:flRD4NNwYAUpkphVc1HcthR4KEIFJ65n8Mw5qdRn3LE=
github.com/kr/pretty v0.3.1/go.mod h1:hoEshYVHaxMs3cyo3Yncou5ZscifuDolrwPKZanG3xk=
github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ=
github.com/kr/pty v1.1.3/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ=
github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI=
github.com/kr/text v0.2.0 h1:5Nx0Ya0ZqY2ygV366QzturHI13Jq95ApcVaJBhpS+AY=
github.com/kr/text v0.2.0/go.mod h1:eLer722TekiGuMkidMxC/pM04lWEeraHUUmBw8l2grE=
github.com/kylelemons/godebug v1.1.0 h1:RPNrshWIDI6G2gRW9EHilWtl7Z6Sb1BR0xunSBf0SNc=
github.com/kylelemons/godebug v1.1.0/go.mod h1:9/0rRGxNHcop5bhtWyNeEfOS8JIWk580+fNqagV/RAw=
github.com/leanovate/gopter v0.2.9 h1:fQjYxZaynp97ozCzfOyOuAGOU4aU/z37zf/tOujFk7c=
github.com/leanovate/gopter v0.2.9/go.mod h1:U2L/78B+KVFIx2VmW6onHJQzXtFb+p5y3y2Sh+Jxxv8=
github.com/libp2p/go-buffer-pool v0.1.0 h1:oK4mSFcQz7cTQIfqbe4MIj9gLW+mnanjyFtc6cdF0Y8=
github.com/libp2p/go-buffer-pool v0.1.0/go.mod h1:N+vh8gMqimBzdKkSMVuydVDq+UV5QTWy5HSiZacSbPg=
github.com/libp2p/go-cidranger v1.1.0 h1:ewPN8EZ0dd1LSnrtuwd4709PXVcITVeuwbag38yPW7c=
github.com/libp2p/go-cidranger v1.1.0/go.mod h1:KWZTfSr+r9qEo9OkI9/SIEeAtw+NNoU0dXIXt15Okic=
github.com/libp2p/go-flow-metrics v0.2.0 h1:EIZzjmeOE6c8Dav0sNv35vhZxATIXWZg6j/C08XmmDw=
github.com/libp2p/go-flow-metrics v0.2.0/go.mod h1:st3qqfu8+pMfh+9Mzqb2GTiwrAGjIPszEjZmtksN8Jc=
github.com/libp2p/go-libp2p-asn-util v0.4.1 h1:xqL7++IKD9TBFMgnLPZR6/6iYhawHKHl950SO9L6n94=
github.com/libp2p/go-libp2p-asn-util v0.4.1/go.mod h1:d/NI6XZ9qxw67b4e+NgpQexCIiFYJjErASrYW4PFDN8=
github.com/libp2p/go-libp2p-kbucket v0.6.3 h1:p507271wWzpy2f1XxPzCQG9NiN6R6lHL9GiSErbQQo0=
github.com/libp2p/go-libp2p-kbucket v0.6.3/go.mod h1:RCseT7AH6eJWxxk2ol03xtP9pEHetYSPXOaJnOiD8i0=
github.com/libp2p/go-libp2p-record v0.2.0 h1:oiNUOCWno2BFuxt3my4i1frNrt7PerzB3queqa1NkQ0=
github.com/libp2p/go-libp2p-record v0.2.0/go.mod h1:I+3zMkvvg5m2OcSdoL0KPljyJyvNDFGKX7QdlpYUcwk=
github.com/libp2p/go-libp2p-routing-helpers v0.7.2 h1:xJMFyhQ3Iuqnk9Q2dYE1eUTzsah7NLw3Qs2zjUV78T0=
github.com/libp2p/go-libp2p-routing-helpers v0.7.2/go.mod h1:cN4mJAD/7zfPKXBcs9ze31JGYAZgzdABEm+q/hkswb8=
github.com/libp2p/go-libp2p-testing v0.12.0 h1:EPvBb4kKMWO29qP4mZGyhVzUyR25dvfUIK5WDu6iPUA=
github.com/libp2p/go-libp2p-testing v0.12.0/go.mod h1:KcGDRXyN7sQCllucn1cOOS+Dmm7ujhfEyXQL5lvkcPg=
github.com/libp2p/go-msgio v0.3.0 h1:mf3Z8B1xcFN314sWX+2vOTShIE0Mmn2TXn3YCUQGNj0=
github.com/libp2p/go-msgio v0.3.0/go.mod h1:nyRM819GmVaF9LX3l03RMh10QdOroF++NBbxAb0mmDM=
github.com/libp2p/go-netroute v0.2.2 h1:Dejd8cQ47Qx2kRABg6lPwknU7+nBnFRpko45/fFPuZ8=
github.com/libp2p/go-netroute v0.2.2/go.mod h1:Rntq6jUAH0l9Gg17w5bFGhcC9a+vk4KNXs6s7IljKYE=
github.com/libp2p/go-reuseport v0.4.0 h1:nR5KU7hD0WxXCJbmw7r2rhRYruNRl2koHw8fQscQm2s=
github.com/libp2p/go-reuseport v0.4.0/go.mod h1:ZtI03j/wO5hZVDFo2jKywN6bYKWLOy8Se6DrI2E1cLU=
github.com/libp2p/go-yamux/v5 v5.0.1 h1:f0WoX/bEF2E8SbE4c/k1Mo+/9z0O4oC/hWEA+nfYRSg=
github.com/libp2p/go-yamux/v5 v5.0.1/go.mod h1:en+3cdX51U0ZslwRdRLrvQsdayFt3TSUKvBGErzpWbU=
github.com/lunixbochs/vtclean v1.0.0/go.mod h1:pHhQNgMf3btfWnGBVipUOjRYhoOsdGqdm/+2c2E2WMI=
github.com/mailru/easyjson v0.0.0-20190312143242-1de009706dbe/go.mod h1:C1wdFJiN94OJF2b5HbByQZoLdCWB1Yqtg26g4irojpc=
github.com/marcopolo/simnet v0.0.1 h1:rSMslhPz6q9IvJeFWDoMGxMIrlsbXau3NkuIXHGJxfg=
github.com/marcopolo/simnet v0.0.1/go.mod h1:WDaQkgLAjqDUEBAOXz22+1j6wXKfGlC5sD5XWt3ddOs=
github.com/marten-seemann/tcp v0.0.0-20210406111302-dfbc87cc63fd h1:br0buuQ854V8u83wA0rVZ8ttrq5CpaPZdvrK0LP2lOk=
github.com/marten-seemann/tcp v0.0.0-20210406111302-dfbc87cc63fd/go.mod h1:QuCEs1Nt24+FYQEqAAncTDPJIuGs+LxK1MCiFL25pMU=
github.com/mattn/go-isatty v0.0.14/go.mod h1:7GGIvUiUoEMVVmxf/4nioHXj79iQHKdU27kJ6hsGG94=
github.com/mattn/go-isatty v0.0.20 h1:xfD0iDuEKnDkl03q4limB+vH+GxLEtL/jb4xVJSWWEY=
github.com/mattn/go-isatty v0.0.20/go.mod h1:W+V8PltTTMOvKvAeJH7IuucS94S2C6jfK/D7dTCTo3Y=
github.com/matttproud/golang_protobuf_extensions v1.0.1/go.mod h1:D8He9yQNgCq6Z5Ld7szi9bcBfOoFv/3dc6xSMkL2PC0=
github.com/microcosm-cc/bluemonday v1.0.1/go.mod h1:hsXNsILzKxV+sX77C5b8FSuKF00vh2OMYv+xgHpAMF4=
github.com/miekg/dns v1.1.66 h1:FeZXOS3VCVsKnEAd+wBkjMC3D2K+ww66Cq3VnCINuJE=
github.com/miekg/dns v1.1.66/go.mod h1:jGFzBsSNbJw6z1HYut1RKBKHA9PBdxeHrZG8J+gC2WE=
github.com/mikioh/tcp v0.0.0-20190314235350-803a9b46060c h1:bzE/A84HN25pxAuk9Eej1Kz9OUelF97nAc82bDquQI8=
github.com/mikioh/tcp v0.0.0-20190314235350-803a9b46060c/go.mod h1:0SQS9kMwD2VsyFEB++InYyBJroV/FRmBgcydeSUcJms=
github.com/mikioh/tcpinfo v0.0.0-20190314235526-30a79bb1804b h1:z78hV3sbSMAUoyUMM0I83AUIT6Hu17AWfgjzIbtrYFc=
github.com/mikioh/tcpinfo v0.0.0-20190314235526-30a79bb1804b/go.mod h1:lxPUiZwKoFL8DUUmalo2yJJUCxbPKtm8OKfqr2/FTNU=
github.com/mikioh/tcpopt v0.0.0-20190314235656-172688c1accc h1:PTfri+PuQmWDqERdnNMiD9ZejrlswWrCpBEZgWOiTrc=
github.com/mikioh/tcpopt v0.0.0-20190314235656-172688c1accc/go.mod h1:cGKTAVKx4SxOuR/czcZ/E2RSJ3sfHs8FpHhQ5CWMf9s=
github.com/minio/blake2b-simd v0.0.0-20160723061019-3f5f724cb5b1/go.mod h1:pD8RvIylQ358TN4wwqatJ8rNavkEINozVn9DtGI3dfQ=
github.com/minio/sha256-simd v0.1.1-0.20190913151208-6de447530771/go.mod h1:B5e1o+1/KgNmWrSQK08Y6Z1Vb5pwIktudl0J58iy0KM=
github.com/minio/sha256-simd v1.0.0/go.mod h1:OuYzVNI5vcoYIAmbIvHPl3N3jUzVedXbKy5RFepssQM=
github.com/minio/sha256-simd v1.0.1 h1:6kaan5IFmwTNynnKKpDHe6FWHohJOHhCPchzK49dzMM=
github.com/minio/sha256-simd v1.0.1/go.mod h1:Pz6AKMiUdngCLpeTL/RJY1M9rUuPMYujV5xJjtbRSN8=
github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q=
github.com/modern-go/reflect2 v1.0.1/go.mod h1:bx2lNnkwVCuqBIxFjflWJWanXIb3RllmbCylyMrvgv0=
github.com/mr-tron/base58 v1.1.0/go.mod h1:xcD2VGqlgYjBdcBLw+TuYLr8afG+Hj8g2eTVqeSzSU8=
github.com/mr-tron/base58 v1.1.3/go.mod h1:BinMc/sQntlIE1frQmRFPUoPA1Zkr8VRgBdjWI2mNwc=
github.com/mr-tron/base58 v1.2.0 h1:T/HDJBh4ZCPbU39/+c3rRvE0uKBQlU27+QI8LJ4t64o=
github.com/mr-tron/base58 v1.2.0/go.mod h1:BinMc/sQntlIE1frQmRFPUoPA1Zkr8VRgBdjWI2mNwc=
github.com/multiformats/go-base32 v0.0.3/go.mod h1:pLiuGC8y0QR3Ue4Zug5UzK9LjgbkL8NSQj0zQ5Nz/AA=
github.com/multiformats/go-base32 v0.1.0 h1:pVx9xoSPqEIQG8o+UbAe7DNi51oej1NtK+aGkbLYxPE=
github.com/multiformats/go-base32 v0.1.0/go.mod h1:Kj3tFY6zNr+ABYMqeUNeGvkIC/UYgtWibDcT0rExnbI=
github.com/multiformats/go-base36 v0.1.0/go.mod h1:kFGE83c6s80PklsHO9sRn2NCoffoRdUUOENyW/Vv6sM=
github.com/multiformats/go-base36 v0.2.0 h1:lFsAbNOGeKtuKozrtBsAkSVhv1p9D0/qedU9rQyccr0=
github.com/multiformats/go-base36 v0.2.0/go.mod h1:qvnKE++v+2MWCfePClUEjE78Z7P2a1UV0xHgWc0hkp4=
github.com/multiformats/go-multiaddr-fmt v0.1.0 h1:WLEFClPycPkp4fnIzoFoV9FVd49/eQsuaL3/CWe167E=
github.com/multiformats/go-multiaddr-fmt v0.1.0/go.mod h1:hGtDIW4PU4BqJ50gW2quDuPVjyWNZxToGUh/HwTZYJo=
github.com/multiformats/go-multibase v0.0.3/go.mod h1:5+1R4eQrT3PkYZ24C3W2Ue2tPwIdYQD509ZjSb5y9Oc=
github.com/multiformats/go-multibase v0.2.0 h1:isdYCVLvksgWlMW9OZRYJEa9pZETFivncJHmHnnd87g=
github.com/multiformats/go-multibase v0.2.0/go.mod h1:bFBZX4lKCA/2lyOFSAoKH5SS6oPyjtnzK/XTFDPkNuk=
github.com/multiformats/go-multicodec v0.9.1 h1:x/Fuxr7ZuR4jJV4Os5g444F7xC4XmyUaT/FWtE+9Zjo=
github.com/multiformats/go-multicodec v0.9.1/go.mod h1:LLWNMtyV5ithSBUo3vFIMaeDy+h3EbkMTek1m+Fybbo=
github.com/multiformats/go-multihash v0.0.13/go.mod h1:VdAWLKTwram9oKAatUcLxBNUjdtcVwxObEQBtRfuyjc=
github.com/multiformats/go-multihash v0.2.3 h1:7Lyc8XfX/IY2jWb/gI7JP+o7JEq9hOa7BFvVU9RSh+U=
github.com/multiformats/go-multihash v0.2.3/go.mod h1:dXgKXCXjBzdscBLk9JkjINiEsCKRVch90MdaGiKsvSM=
github.com/multiformats/go-multistream v0.6.1 h1:4aoX5v6T+yWmc2raBHsTvzmFhOI8WVOer28DeBBEYdQ=
github.com/multiformats/go-multistream v0.6.1/go.mod h1:ksQf6kqHAb6zIsyw7Zm+gAuVo57Qbq84E27YlYqavqw=
github.com/multiformats/go-varint v0.0.5/go.mod h1:3Ls8CIEsrijN6+B7PbrXRPxHRPuXSrVKRY101jdMZYE=
github.com/multiformats/go-varint v0.0.6/go.mod h1:3Ls8CIEsrijN6+B7PbrXRPxHRPuXSrVKRY101jdMZYE=
github.com/multiformats/go-varint v0.0.7 h1:sWSGR+f/eu5ABZA2ZpYKBILXTTs9JWpdEM/nEGOHFS8=
github.com/multiformats/go-varint v0.0.7/go.mod h1:r8PUYw/fD/SjBCiKOoDlGF6QawOELpZAu9eioSos/OU=
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 h1:C3w9PqII01/Oq1c1nUAm88MOHcQC9l5mIlSMApZMrHA=
github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822/go.mod h1:+n7T8mK8HuQTcFwEeznm/DIxMOiR9yIdICNftLE1DvQ=
github.com/neelance/astrewrite v0.0.0-20160511093645-99348263ae86/go.mod h1:kHJEU3ofeGjhHklVoIGuVj85JJwZ6kWPaJwCIxgnFmo=
github.com/neelance/sourcemap v0.0.0-20151028013722-8c68805598ab/go.mod h1:Qr6/a/Q4r9LP1IltGz7tA7iOK1WonHEYhu1HRBA7ZiM=
github.com/opentracing/opentracing-go v1.2.0 h1:uEJPy/1a5RIPAJ0Ov+OIO8OxWu77jEv+1B0VhjKrZUs=
github.com/opentracing/opentracing-go v1.2.0/go.mod h1:GxEUsuufX4nBwe+T+Wl9TAgYrxe9dPLANfrWvHYVTgc=
github.com/openzipkin/zipkin-go v0.1.1/go.mod h1:NtoC/o8u3JlF1lSlyPNswIbeQH9bJTmOf0Erfk+hxe8=
github.com/pbnjay/memory v0.0.0-20210728143218-7b4eea64cf58 h1:onHthvaw9LFnH4t2DcNVpwGmV9E1BkGknEliJkfwQj0=
github.com/pbnjay/memory v0.0.0-20210728143218-7b4eea64cf58/go.mod h1:DXv8WO4yhMYhSNPKjeNKa5WY9YCIEBRbNzFFPJbWO6Y=
github.com/pion/datachannel v1.5.10 h1:ly0Q26K1i6ZkGf42W7D4hQYR90pZwzFOjTq5AuCKk4o=
github.com/pion/datachannel v1.5.10/go.mod h1:p/jJfC9arb29W7WrxyKbepTU20CFgyx5oLo8Rs4Py/M=
github.com/pion/dtls/v2 v2.2.7/go.mod h1:8WiMkebSHFD0T+dIU+UeBaoV7kDhOW5oDCzZ7WZ/F9s=
github.com/pion/dtls/v2 v2.2.12 h1:KP7H5/c1EiVAAKUmXyCzPiQe5+bCJrpOeKg/L05dunk=
github.com/pion/dtls/v2 v2.2.12/go.mod h1:d9SYc9fch0CqK90mRk1dC7AkzzpwJj6u2GU3u+9pqFE=
github.com/pion/dtls/v3 v3.0.6 h1:7Hkd8WhAJNbRgq9RgdNh1aaWlZlGpYTzdqjy9x9sK2E=
github.com/pion/dtls/v3 v3.0.6/go.mod h1:iJxNQ3Uhn1NZWOMWlLxEEHAN5yX7GyPvvKw04v9bzYU=
github.com/pion/ice/v4 v4.0.10 h1:P59w1iauC/wPk9PdY8Vjl4fOFL5B+USq1+xbDcN6gT4=
github.com/pion/ice/v4 v4.0.10/go.mod h1:y3M18aPhIxLlcO/4dn9X8LzLLSma84cx6emMSu14FGw=
github.com/pion/interceptor v0.1.40 h1:e0BjnPcGpr2CFQgKhrQisBU7V3GXK6wrfYrGYaU6Jq4=
github.com/pion/interceptor v0.1.40/go.mod h1:Z6kqH7M/FYirg3frjGJ21VLSRJGBXB/KqaTIrdqnOic=
github.com/pion/logging v0.2.2/go.mod h1:k0/tDVsRCX2Mb2ZEmTqNa7CWsQPc+YYCB7Q+5pahoms=
github.com/pion/logging v0.2.3 h1:gHuf0zpoh1GW67Nr6Gj4cv5Z9ZscU7g/EaoC/Ke/igI=
github.com/pion/logging v0.2.3/go.mod h1:z8YfknkquMe1csOrxK5kc+5/ZPAzMxbKLX5aXpbpC90=
github.com/pion/mdns/v2 v2.0.7 h1:c9kM8ewCgjslaAmicYMFQIde2H9/lrZpjBkN8VwoVtM=
github.com/pion/mdns/v2 v2.0.7/go.mod h1:vAdSYNAT0Jy3Ru0zl2YiW3Rm/fJCwIeM0nToenfOJKA=
github.com/pion/randutil v0.1.0 h1:CFG1UdESneORglEsnimhUjf33Rwjubwj6xfiOXBa3mA=
github.com/pion/randutil v0.1.0/go.mod h1:XcJrSMMbbMRhASFVOlj/5hQial/Y8oH/HVo7TBZq+j8=
github.com/pion/rtcp v1.2.15 h1:LZQi2JbdipLOj4eBjK4wlVoQWfrZbh3Q6eHtWtJBZBo=
github.com/pion/rtcp v1.2.15/go.mod h1:jlGuAjHMEXwMUHK78RgX0UmEJFV4zUKOFHR7OP+D3D0=
github.com/pion/rtp v1.8.19 h1:jhdO/3XhL/aKm/wARFVmvTfq0lC/CvN1xwYKmduly3c=
github.com/pion/rtp v1.8.19/go.mod h1:bAu2UFKScgzyFqvUKmbvzSdPr+NGbZtv6UB2hesqXBk=
github.com/pion/sctp v1.8.39 h1:PJma40vRHa3UTO3C4MyeJDQ+KIobVYRZQZ0Nt7SjQnE=
github.com/pion/sctp v1.8.39/go.mod h1:cNiLdchXra8fHQwmIoqw0MbLLMs+f7uQ+dGMG2gWebE=
github.com/pion/sdp/v3 v3.0.13 h1:uN3SS2b+QDZnWXgdr69SM8KB4EbcnPnPf2Laxhty/l4=
github.com/pion/sdp/v3 v3.0.13/go.mod h1:88GMahN5xnScv1hIMTqLdu/cOcUkj6a9ytbncwMCq2E=
github.com/pion/srtp/v3 v3.0.6 h1:E2gyj1f5X10sB/qILUGIkL4C2CqK269Xq167PbGCc/4=
github.com/pion/srtp/v3 v3.0.6/go.mod h1:BxvziG3v/armJHAaJ87euvkhHqWe9I7iiOy50K2QkhY=
github.com/pion/stun v0.6.1 h1:8lp6YejULeHBF8NmV8e2787BogQhduZugh5PdhDyyN4=
github.com/pion/stun v0.6.1/go.mod h1:/hO7APkX4hZKu/D0f2lHzNyvdkTGtIy3NDmLR7kSz/8=
github.com/pion/stun/v3 v3.0.0 h1:4h1gwhWLWuZWOJIJR9s2ferRO+W3zA/b6ijOI6mKzUw=
github.com/pion/stun/v3 v3.0.0/go.mod h1:HvCN8txt8mwi4FBvS3EmDghW6aQJ24T+y+1TKjB5jyU=
github.com/pion/transport/v2 v2.2.1/go.mod h1:cXXWavvCnFF6McHTft3DWS9iic2Mftcz1Aq29pGcU5g=
github.com/pion/transport/v2 v2.2.4/go.mod h1:q2U/tf9FEfnSBGSW6w5Qp5PFWRLRj3NjLhCCgpRK4p0=
github.com/pion/transport/v2 v2.2.10 h1:ucLBLE8nuxiHfvkFKnkDQRYWYfp8ejf4YBOPfaQpw6Q=
github.com/pion/transport/v2 v2.2.10/go.mod h1:sq1kSLWs+cHW9E+2fJP95QudkzbK7wscs8yYgQToO5E=
github.com/pion/transport/v3 v3.0.7 h1:iRbMH05BzSNwhILHoBoAPxoB9xQgOaJk+591KC9P1o0=
github.com/pion/transport/v3 v3.0.7/go.mod h1:YleKiTZ4vqNxVwh77Z0zytYi7rXHl7j6uPLGhhz9rwo=
github.com/pion/turn/v4 v4.0.2 h1:ZqgQ3+MjP32ug30xAbD6Mn+/K4Sxi3SdNOTFf+7mpps=
github.com/pion/turn/v4 v4.0.2/go.mod h1:pMMKP/ieNAG/fN5cZiN4SDuyKsXtNTr0ccN7IToA1zs=
github.com/pion/webrtc/v4 v4.1.2 h1:mpuUo/EJ1zMNKGE79fAdYNFZBX790KE7kQQpLMjjR54=
github.com/pion/webrtc/v4 v4.1.2/go.mod h1:xsCXiNAmMEjIdFxAYU0MbB3RwRieJsegSB2JZsGN+8U=
github.com/pkg/errors v0.8.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pkg/errors v0.9.1 h1:FEBLx1zS214owpjy7qsBeixbURkuhQAwrK5UwLGTwt4=
github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
github.com/polydawn/refmt v0.89.0 h1:ADJTApkvkeBZsN0tBTx8QjpD9JkmxbKp0cxfr9qszm4=
github.com/polydawn/refmt v0.89.0/go.mod h1:/zvteZs/GwLtCgZ4BL6CBsk9IKIlexP43ObX9AxTqTw=
github.com/prometheus/client_golang v0.8.0/go.mod h1:7SWBe2y4D6OKWSNQJUaRYU/AaXPKyh/dDVn+NZz0KFw=
github.com/prometheus/client_golang v1.22.0 h1:rb93p9lokFEsctTys46VnV1kLCDpVZ0a/Y92Vm0Zc6Q=
github.com/prometheus/client_golang v1.22.0/go.mod h1:R7ljNsLXhuQXYZYtw6GAE9AZg8Y7vEW5scdCXrWRXC0=
github.com/prometheus/client_model v0.0.0-20180712105110-5c3871d89910/go.mod h1:MbSGuTsp3dbXC40dX6PRTWyKYBIrTGTE9sqQNg2J8bo=
github.com/prometheus/client_model v0.0.0-20190812154241-14fe0d1b01d4/go.mod h1:xMI15A0UPsDsEKsMN9yxemIoYk6Tm2C1GtYGdfGttqA=
github.com/prometheus/client_model v0.6.2 h1:oBsgwpGs7iVziMvrGhE53c/GrLUsZdHnqNwqPLxwZyk=
github.com/prometheus/client_model v0.6.2/go.mod h1:y3m2F6Gdpfy6Ut/GBsUqTWZqCUvMVzSfMLjcu6wAwpE=
github.com/prometheus/common v0.0.0-20180801064454-c7de2306084e/go.mod h1:daVV7qP5qjZbuso7PdcryaAu0sAZbrN9i7WWcTMWvro=
github.com/prometheus/common v0.64.0 h1:pdZeA+g617P7oGv1CzdTzyeShxAGrTBsolKNOLQPGO4=
github.com/prometheus/common v0.64.0/go.mod h1:0gZns+BLRQ3V6NdaerOhMbwwRbNh9hkGINtQAsP5GS8=
github.com/prometheus/procfs v0.0.0-20180725123919-05ee40e3a273/go.mod h1:c3At6R/oaqEKCNdg8wHV1ftS6bRYblBhIjjI8uT2IGk=
github.com/prometheus/procfs v0.16.1 h1:hZ15bTNuirocR6u0JZ6BAHHmwS1p8B4P6MRqxtzMyRg=
github.com/prometheus/procfs v0.16.1/go.mod h1:teAbpZRB1iIAJYREa1LsoWUXykVXA1KlTmWl8x/U+Is=
github.com/quic-go/qpack v0.5.1 h1:giqksBPnT/HDtZ6VhtFKgoLOWmlyo9Ei6u9PqzIMbhI=
github.com/quic-go/qpack v0.5.1/go.mod h1:+PC4XFrEskIVkcLzpEkbLqq1uCoxPhQuvK5rH1ZgaEg=
github.com/quic-go/quic-go v0.54.0 h1:6s1YB9QotYI6Ospeiguknbp2Znb/jZYjZLRXn9kMQBg=
github.com/quic-go/quic-go v0.54.0/go.mod h1:e68ZEaCdyviluZmy44P6Iey98v/Wfz6HCjQEm+l8zTY=
github.com/quic-go/webtransport-go v0.9.0 h1:jgys+7/wm6JarGDrW+lD/r9BGqBAmqY/ssklE09bA70=
github.com/quic-go/webtransport-go v0.9.0/go.mod h1:4FUYIiUc75XSsF6HShcLeXXYZJ9AGwo/xh3L8M/P1ao=
github.com/rogpeppe/go-internal v1.3.0/go.mod h1:M8bDsm7K2OlrFYOpmOWEs/qY81heoFRclV5y23lUDJ4=
github.com/rogpeppe/go-internal v1.14.1 h1:UQB4HGPB6osV0SQTLymcB4TgvyWu6ZyliaW0tI/otEQ=
github.com/rogpeppe/go-internal v1.14.1/go.mod h1:MaRKkUm5W0goXpeCfT7UZI6fk/L7L7so1lCWt35ZSgc=
github.com/russross/blackfriday v1.5.2/go.mod h1:JO/DiYxRf+HjHt06OyowR9PTA263kcR/rfWxYHBV53g=
github.com/russross/blackfriday/v2 v2.0.1/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM=
github.com/sergi/go-diff v1.0.0/go.mod h1:0CfEIISq7TuYL3j771MWULgwwjU+GofnZX9QAmXWZgo=
github.com/shurcooL/component v0.0.0-20170202220835-f88ec8f54cc4/go.mod h1:XhFIlyj5a1fBNx5aJTbKoIq0mNaPvOagO+HjB3EtxrY=
github.com/shurcooL/events v0.0.0-20181021180414-410e4ca65f48/go.mod h1:5u70Mqkb5O5cxEA8nxTsgrgLehJeAw6Oc4Ab1c/P1HM=
github.com/shurcooL/github_flavored_markdown v0.0.0-20181002035957-2122de532470/go.mod h1:2dOwnU2uBioM+SGy2aZoq1f/Sd1l9OkAeAUvjSyvgU0=
github.com/shurcooL/go v0.0.0-20180423040247-9e1955d9fb6e/go.mod h1:TDJrrUr11Vxrven61rcy3hJMUqaf/CLWYhHNPmT14Lk=
github.com/shurcooL/go-goon v0.0.0-20170922171312-37c2f522c041/go.mod h1:N5mDOmsrJOB+vfqUK+7DmDyjhSLIIBnXo9lvZJj3MWQ=
github.com/shurcooL/gofontwoff v0.0.0-20180329035133-29b52fc0a18d/go.mod h1:05UtEgK5zq39gLST6uB0cf3NEHjETfB4Fgr3Gx5R9Vw=
github.com/shurcooL/gopherjslib v0.0.0-20160914041154-feb6d3990c2c/go.mod h1:8d3azKNyqcHP1GaQE/c6dDgjkgSx2BZ4IoEi4F1reUI=
github.com/shurcooL/highlight_diff v0.0.0-20170515013008-09bb4053de1b/go.mod h1:ZpfEhSmds4ytuByIcDnOLkTHGUI6KNqRNPDLHDk+mUU=
github.com/shurcooL/highlight_go v0.0.0-20181028180052-98c3abbbae20/go.mod h1:UDKB5a1T23gOMUJrI+uSuH0VRDStOiUVSjBTRDVBVag=
github.com/shurcooL/home v0.0.0-20181020052607-80b7ffcb30f9/go.mod h1:+rgNQw2P9ARFAs37qieuu7ohDNQ3gds9msbT2yn85sg=
github.com/shurcooL/htmlg v0.0.0-20170918183704-d01228ac9e50/go.mod h1:zPn1wHpTIePGnXSHpsVPWEktKXHr6+SS6x/IKRb7cpw=
github.com/shurcooL/httperror v0.0.0-20170206035902-86b7830d14cc/go.mod h1:aYMfkZ6DWSJPJ6c4Wwz3QtW22G7mf/PEgaB9k/ik5+Y=
github.com/shurcooL/httpfs v0.0.0-20171119174359-809beceb2371/go.mod h1:ZY1cvUeJuFPAdZ/B6v7RHavJWZn2YPVFQ1OSXhCGOkg=
github.com/shurcooL/httpgzip v0.0.0-20180522190206-b1c53ac65af9/go.mod h1:919LwcH0M7/W4fcZ0/jy0qGght1GIhqyS/EgWGH2j5Q=
github.com/shurcooL/issues v0.0.0-20181008053335-6292fdc1e191/go.mod h1:e2qWDig5bLteJ4fwvDAc2NHzqFEthkqn7aOZAOpj+PQ=
github.com/shurcooL/issuesapp v0.0.0-20180602232740-048589ce2241/go.mod h1:NPpHK2TI7iSaM0buivtFUc9offApnI0Alt/K8hcHy0I=
github.com/shurcooL/notifications v0.0.0-20181007000457-627ab5aea122/go.mod h1:b5uSkrEVM1jQUspwbixRBhaIjIzL2xazXp6kntxYle0=
github.com/shurcooL/octicon v0.0.0-20181028054416-fa4f57f9efb2/go.mod h1:eWdoE5JD4R5UVWDucdOPg1g2fqQRq78IQa9zlOV1vpQ=
github.com/shurcooL/reactions v0.0.0-20181006231557-f2e0b4ca5b82/go.mod h1:TCR1lToEk4d2s07G3XGfz2QrgHXg4RJBvjrOozvoWfk=
github.com/shurcooL/sanitized_anchor_name v0.0.0-20170918181015-86672fcb3f95/go.mod h1:1NzhyTcUVG4SuEtjjoZeVRXNmyL/1OwPU0+IJeTBvfc=
github.com/shurcooL/sanitized_anchor_name v1.0.0/go.mod h1:1NzhyTcUVG4SuEtjjoZeVRXNmyL/1OwPU0+IJeTBvfc=
github.com/shurcooL/users v0.0.0-20180125191416-49c67e49c537/go.mod h1:QJTqeLYEDaXHZDBsXlPCDqdhQuJkuw4NOtaxYe3xii4=
github.com/shurcooL/webdavfs v0.0.0-20170829043945-18c3829fa133/go.mod h1:hKmq5kWdCj2z2KEozexVbfEZIWiTjhE0+UjmZgPqehw=
github.com/smartystreets/assertions v1.2.0 h1:42S6lae5dvLc7BrLu/0ugRtcFVjoJNMC/N3yZFZkDFs=
github.com/smartystreets/assertions v1.2.0/go.mod h1:tcbTF8ujkAEcZ8TElKY+i30BzYlVhC/LOxJk7iOWnoo=
github.com/smartystreets/goconvey v1.7.2 h1:9RBaZCeXEQ3UselpuwUQHltGVXvdwm6cv1hgR6gDIPg=
github.com/smartystreets/goconvey v1.7.2/go.mod h1:Vw0tHAZW6lzCRk3xgdin6fKYcG+G3Pg9vgXWeJpQFMM=
github.com/sourcegraph/annotate v0.0.0-20160123013949-f4cad6c6324d/go.mod h1:UdhH50NIW0fCiwBSr0co2m7BnFLdv4fQTgdqdJTHFeE=
github.com/sourcegraph/syntaxhighlight v0.0.0-20170531221838-bd320f5d308e/go.mod h1:HuIsMU8RRBOtsCgI77wP899iHVBQpCmg4ErYMZB+2IA=
github.com/spaolacci/murmur3 v1.1.0 h1:7c1g84S4BPRrfL5Xrdp6fOJ206sU9y293DDHaoy0bLI=
github.com/spaolacci/murmur3 v1.1.0/go.mod h1:JwIasOWyU6f++ZhiEuf87xNszmSA2myDM2Kzu9HwQUA=
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/objx v0.4.0/go.mod h1:YvHI0jy2hoMjB+UWwv71VJQ9isScKT/TqJzVSSt89Yw=
github.com/stretchr/objx v0.5.0/go.mod h1:Yh+to48EsGEfYuaHDzXPcE3xhTkx73EhmCGUpEOglKo=
github.com/stretchr/objx v0.5.2 h1:xuMeJ0Sdp5ZMRXx/aWO6RZxdr3beISkG5/G/aIRr3pY=
github.com/stretchr/objx v0.5.2/go.mod h1:FRsXN1f5AsAjCGJKqEizvkpNtU+EGNCLh3NxZ/8L+MA=
github.com/stretchr/testify v1.2.2/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs=
github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI=
github.com/stretchr/testify v1.4.0/go.mod h1:j7eGeouHqKxXV5pUuKE4zz7dFj8WfuZ+81PSLYec5m4=
github.com/stretchr/testify v1.7.0/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/stretchr/testify v1.7.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/stretchr/testify v1.8.0/go.mod h1:yNjHg4UonilssWZ8iaSj1OCr/vHnekPRkoO+kdMU+MU=
github.com/stretchr/testify v1.8.1/go.mod h1:w2LPCIKwWwSfY2zedu0+kehJoqGctiVI29o6fzry7u4=
github.com/stretchr/testify v1.8.3/go.mod h1:sz/lmYIOXD/1dqDmKjjqLyZ2RngseejIcXlSw2iwfAo=
github.com/stretchr/testify v1.8.4/go.mod h1:sz/lmYIOXD/1dqDmKjjqLyZ2RngseejIcXlSw2iwfAo=
github.com/stretchr/testify v1.11.1 h1:7s2iGBzp5EwR7/aIZr8ao5+dra3wiQyKjjFuvgVKu7U=
github.com/stretchr/testify v1.11.1/go.mod h1:wZwfW3scLgRK+23gO65QZefKpKQRnfz6sD981Nm4B6U=
github.com/tarm/serial v0.0.0-20180830185346-98f6abe2eb07/go.mod h1:kDXzergiv9cbyO7IOYJZWg1U88JhDg3PB6klq9Hg2pA=
github.com/urfave/cli v1.22.10/go.mod h1:Gos4lmkARVdJ6EkW0WaNv/tZAAMe9V7XWyB60NtXRu0=
github.com/viant/assertly v0.4.8/go.mod h1:aGifi++jvCrUaklKEKT0BU95igDNaqkvz+49uaYMPRU=
github.com/viant/toolbox v0.24.0/go.mod h1:OxMCG57V0PXuIP2HNQrtJf2CjqdmbrOx5EkMILuUhzM=
github.com/warpfork/go-wish v0.0.0-20220906213052-39a1cc7a02d0 h1:GDDkbFiaK8jsSDJfjId/PEGEShv6ugrt4kYsC5UIDaQ=
github.com/warpfork/go-wish v0.0.0-20220906213052-39a1cc7a02d0/go.mod h1:x6AKhvSSexNrVSrViXSHUEbICjmGXhtgABaHIySUSGw=
github.com/whyrusleeping/go-keyspace v0.0.0-20160322163242-5b898ac5add1 h1:EKhdznlJHPMoKr0XTrX+IlJs1LH3lyx2nfr1dOlZ79k=
github.com/whyrusleeping/go-keyspace v0.0.0-20160322163242-5b898ac5add1/go.mod h1:8UvriyWtv5Q5EOgjHaSseUEdkQfvwFv1I/In/O2M9gc=
github.com/wlynxg/anet v0.0.3/go.mod h1:eay5PRQr7fIVAMbTbchTnO9gG65Hg/uYGdc7mguHxoA=
github.com/wlynxg/anet v0.0.5 h1:J3VJGi1gvo0JwZ/P1/Yc/8p63SoW98B5dHkYDmpgvvU=
github.com/wlynxg/anet v0.0.5/go.mod h1:eay5PRQr7fIVAMbTbchTnO9gG65Hg/uYGdc7mguHxoA=
github.com/yuin/goldmark v1.1.27/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
github.com/yuin/goldmark v1.2.1/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
github.com/yuin/goldmark v1.3.5/go.mod h1:mwnBkeHKe2W/ZEtQ+71ViKU8L12m81fl3OWwC1Zlc8k=
github.com/yuin/goldmark v1.4.13/go.mod h1:6yULJ656Px+3vBD8DxQVa3kxgyrAnzto9xy5taEt/CY=
go.opencensus.io v0.18.0/go.mod h1:vKdFvxhtzZ9onBp9VKHK8z/sRpBMnKAsufL7wlDrCOA=
go.opencensus.io v0.24.0 h1:y73uSU6J157QMP2kn2r30vwW1A2W2WFwSCGnAVxeaD0=
go.opencensus.io v0.24.0/go.mod h1:vNK8G9p7aAivkbmorf4v+7Hgx+Zs0yY+0fOtgBfjQKo=
go.opentelemetry.io/auto/sdk v1.1.0 h1:cH53jehLUN6UFLY71z+NDOiNJqDdPRaXzTel0sJySYA=
go.opentelemetry.io/auto/sdk v1.1.0/go.mod h1:3wSPjt5PWp2RhlCcmmOial7AvC4DQqZb7a7wCow3W8A=
go.opentelemetry.io/otel v1.34.0 h1:zRLXxLCgL1WyKsPVrgbSdMN4c0FMkDAskSTQP+0hdUY=
go.opentelemetry.io/otel v1.34.0/go.mod h1:OWFPOQ+h4G8xpyjgqo4SxJYdDQ/qmRH+wivy7zzx9oI=
go.opentelemetry.io/otel/metric v1.34.0 h1:+eTR3U0MyfWjRDhmFMxe2SsW64QrZ84AOhvqS7Y+PoQ=
go.opentelemetry.io/otel/metric v1.34.0/go.mod h1:CEDrp0fy2D0MvkXE+dPV7cMi8tWZwX3dmaIhwPOaqHE=
go.opentelemetry.io/otel/sdk v1.34.0 h1:95zS4k/2GOy069d321O8jWgYsW3MzVV+KuSPKp7Wr1A=
go.opentelemetry.io/otel/sdk v1.34.0/go.mod h1:0e/pNiaMAqaykJGKbi+tSjWfNNHMTxoC9qANsCzbyxU=
go.opentelemetry.io/otel/sdk/metric v1.34.0 h1:5CeK9ujjbFVL5c1PhLuStg1wxA7vQv7ce1EK0Gyvahk=
go.opentelemetry.io/otel/sdk/metric v1.34.0/go.mod h1:jQ/r8Ze28zRKoNRdkjCZxfs6YvBTG1+YIqyFVFYec5w=
go.opentelemetry.io/otel/trace v1.34.0 h1:+ouXS2V8Rd4hp4580a8q23bg0azF2nI8cqLYnC8mh/k=
go.opentelemetry.io/otel/trace v1.34.0/go.mod h1:Svm7lSjQD7kG7KJ/MUHPVXSDGz2OX4h0M2jHBhmSfRE=
go.uber.org/atomic v1.6.0/go.mod h1:sABNBOSYdrvTF6hTgEIbc7YasKWGhgEQZyfxyTvoXHQ=
go.uber.org/atomic v1.7.0/go.mod h1:fEN4uk6kAWBTFdckzkM89CLk9XfWZrxpCo0nPH17wJc=
go.uber.org/atomic v1.11.0 h1:ZvwS0R+56ePWxUNi+Atn9dWONBPp/AUETXlHW0DxSjE=
go.uber.org/atomic v1.11.0/go.mod h1:LUxbIzbOniOlMKjJjyPfpl4v+PKK2cNJn91OQbhoJI0=
go.uber.org/dig v1.19.0 h1:BACLhebsYdpQ7IROQ1AGPjrXcP5dF80U3gKoFzbaq/4=
go.uber.org/dig v1.19.0/go.mod h1:Us0rSJiThwCv2GteUN0Q7OKvU7n5J4dxZ9JKUXozFdE=
go.uber.org/fx v1.24.0 h1:wE8mruvpg2kiiL1Vqd0CC+tr0/24XIB10Iwp2lLWzkg=
go.uber.org/fx v1.24.0/go.mod h1:AmDeGyS+ZARGKM4tlH4FY2Jr63VjbEDJHtqXTGP5hbo=
go.uber.org/goleak v1.1.11-0.20210813005559-691160354723/go.mod h1:cwTWslyiVhfpKIDGSZEM2HlOvcqm+tG4zioyIeLoqMQ=
go.uber.org/goleak v1.3.0 h1:2K3zAYmnTNqV73imy9J1T3WC+gmCePx2hEGkimedGto=
go.uber.org/goleak v1.3.0/go.mod h1:CoHD4mav9JJNrW/WLlf7HGZPjdw8EucARQHekz1X6bE=
go.uber.org/mock v0.5.2 h1:LbtPTcP8A5k9WPXj54PPPbjcI4Y6lhyOZXn+VS7wNko=
go.uber.org/mock v0.5.2/go.mod h1:wLlUxC2vVTPTaE3UD51E0BGOAElKrILxhVSDYQLld5o=
go.uber.org/multierr v1.5.0/go.mod h1:FeouvMocqHpRaaGuG9EjoKcStLC43Zu/fmqdUMPcKYU=
go.uber.org/multierr v1.6.0/go.mod h1:cdWPpRnG4AhwMwsgIHip0KRBQjJy5kYEpYjJxpXp9iU=
go.uber.org/multierr v1.11.0 h1:blXXJkSxSSfBVBlC76pxqeO+LN3aDfLQo+309xJstO0=
go.uber.org/multierr v1.11.0/go.mod h1:20+QtiLqy0Nd6FdQB9TLXag12DsQkrbs3htMFfDN80Y=
go.uber.org/tools v0.0.0-20190618225709-2cfd321de3ee/go.mod h1:vJERXedbb3MVM5f9Ejo0C68/HhF8uaILCdgjnY+goOA=
go.uber.org/zap v1.16.0/go.mod h1:MA8QOfq0BHJwdXa996Y4dYkAqRKB8/1K1QMMZVaNZjQ=
go.uber.org/zap v1.19.1/go.mod h1:j3DNczoxDZroyBnOT1L/Q79cfUMGZxlv/9dzN7SM1rI=
go.uber.org/zap v1.27.0 h1:aJMhYGrd5QSmlpLMr2MftRKl7t8J8PTZPA732ud/XR8=
go.uber.org/zap v1.27.0/go.mod h1:GB2qFLM7cTU87MWRP2mPIjqfIDnGu+VIO4V/SdhGo2E=
go4.org v0.0.0-20180809161055-417644f6feb5/go.mod h1:MkTOUMDaeVYJUOUsaDXIhWPZYa1yOyC1qaOBpL57BhE=
golang.org/x/build v0.0.0-20190111050920-041ab4dc3f9d/go.mod h1:OWs+y06UdEOHN4y+MfF/py+xQ/tYqIWW03b70/CG9Rw=
golang.org/x/crypto v0.0.0-20181030102418-4d3f4d9ffa16/go.mod h1:6SG95UA2DQfeDnfUPMdvaQW0Q7yPrPDi9nlGo2tz2b4=
golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
golang.org/x/crypto v0.0.0-20190313024323-a1f597ede03a/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
golang.org/x/crypto v0.0.0-20190510104115-cbcb75029529/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20190611184440-5c40567a22f8/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20191011191535-87dc89f01550/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
golang.org/x/crypto v0.0.0-20200602180216-279210d13fed/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
golang.org/x/crypto v0.0.0-20210322153248-0c34fe9e7dc2/go.mod h1:T9bdIzuCu7OtxOm1hfPfRQxPLYneinmdGuTeoZ9dtd4=
golang.org/x/crypto v0.0.0-20210921155107-089bfa567519/go.mod h1:GvvjBRRGRdwPK5ydBHafDWAxML/pGHZbMvKqRZ5+Abc=
golang.org/x/crypto v0.0.0-20220525230936-793ad666bf5e/go.mod h1:IxCIyHEi3zRg3s0A5j5BB6A9Jmi73HwBIUl50j+osU4=
golang.org/x/crypto v0.1.0/go.mod h1:RecgLatLF4+eUMCP1PoPZQb+cVrJcOPbHkTkbkB9sbw=
golang.org/x/crypto v0.8.0/go.mod h1:mRqEX+O9/h5TFCrQhkgjo2yKi0yYA+9ecGkdQoHrywE=
golang.org/x/crypto v0.12.0/go.mod h1:NF0Gs7EO5K4qLn+Ylc+fih8BSTeIjAP05siRnAh98yw=
golang.org/x/crypto v0.13.0/go.mod h1:y6Z2r+Rw4iayiXXAIxJIDAJ1zMW4yaTpebo8fPOliYc=
golang.org/x/crypto v0.18.0/go.mod h1:R0j02AL6hcrfOiy9T4ZYp/rcWeMxM3L6QYxlOuEG1mg=
golang.org/x/crypto v0.19.0/go.mod h1:Iy9bg/ha4yyC70EfRS8jz+B6ybOBKMaSxLj6P6oBDfU=
golang.org/x/crypto v0.23.0/go.mod h1:CKFgDieR+mRhux2Lsu27y0fO304Db0wZe70UKqHu0v8=
golang.org/x/crypto v0.31.0/go.mod h1:kDsLvtWBEx7MV9tJOj9bnXsPbxwJQ6csT/x4KIN4Ssk=
golang.org/x/crypto v0.39.0 h1:SHs+kF4LP+f+p14esP5jAoDpHU8Gu/v9lFRK6IT5imM=
golang.org/x/crypto v0.39.0/go.mod h1:L+Xg3Wf6HoL4Bn4238Z6ft6KfEpN0tJGo53AAPC632U=
golang.org/x/exp v0.0.0-20190121172915-509febef88a4/go.mod h1:CJ0aWSM057203Lf6IL+f9T1iT9GByDxfZKAQTCR3kQA=
golang.org/x/exp v0.0.0-20230725012225-302865e7556b/go.mod h1:FXUEEKJgO7OQYeo8N01OfiKP8RXMtf6e8aTskBGqWdc=
golang.org/x/exp v0.0.0-20250606033433-dcc06ee1d476 h1:bsqhLWFR6G6xiQcb+JoGqdKdRU6WzPWmK8E0jxTjzo4=
golang.org/x/exp v0.0.0-20250606033433-dcc06ee1d476/go.mod h1:3//PLf8L/X+8b4vuAfHzxeRUl04Adcb341+IGKfnqS8=
golang.org/x/lint v0.0.0-20180702182130-06c8688daad7/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE=
golang.org/x/lint v0.0.0-20181026193005-c67002cb31c3/go.mod h1:UVdnD1Gm6xHRNCYTkRU2/jEulfH38KcIWyp/GAMgvoE=
golang.org/x/lint v0.0.0-20190227174305-5b3e6a55c961/go.mod h1:wehouNa3lNwaWXcvxsM5YxQ5yQlVC4a0KAMCusXpPoU=
golang.org/x/lint v0.0.0-20190313153728-d0100b6bd8b3/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc=
golang.org/x/lint v0.0.0-20190930215403-16217165b5de/go.mod h1:6SW0HCj/g11FgYtHlgUYUwCkIfeOF89ocIRzGO/8vkc=
golang.org/x/lint v0.0.0-20200302205851-738671d3881b/go.mod h1:3xt1FjdF8hUf6vQPIChWIBhFzV8gjjsPE/fR3IyQdNY=
golang.org/x/mod v0.0.0-20190513183733-4bf6d317e70e/go.mod h1:mXi4GBBbnImb6dmsKGUJ2LatrhH/nqhxcFungHvyanc=
golang.org/x/mod v0.1.1-0.20191105210325-c90efee705ee/go.mod h1:QqPTAvyqsEbceGzBzNggFXnrqF1CaUcvgkdR5Ot7KZg=
golang.org/x/mod v0.2.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/mod v0.3.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/mod v0.4.2/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA=
golang.org/x/mod v0.6.0-dev.0.20220419223038-86c51ed26bb4/go.mod h1:jJ57K6gSWd91VN4djpZkiMVwK6gcyfeH4XE8wZrZaV4=
golang.org/x/mod v0.6.0/go.mod h1:4mET923SAdbXp2ki8ey+zGs1SLqsuM2Y0uvdZR/fUNI=
golang.org/x/mod v0.8.0/go.mod h1:iBbtSCu2XBx23ZKBPSOrRkjjQPZFPuis4dIYUhu/chs=
golang.org/x/mod v0.11.0/go.mod h1:iBbtSCu2XBx23ZKBPSOrRkjjQPZFPuis4dIYUhu/chs=
golang.org/x/mod v0.12.0/go.mod h1:iBbtSCu2XBx23ZKBPSOrRkjjQPZFPuis4dIYUhu/chs=
golang.org/x/mod v0.15.0/go.mod h1:hTbmBsO62+eylJbnUtE2MGJUyE7QWk4xUqPFrRgJ+7c=
golang.org/x/mod v0.17.0/go.mod h1:hTbmBsO62+eylJbnUtE2MGJUyE7QWk4xUqPFrRgJ+7c=
golang.org/x/mod v0.25.0 h1:n7a+ZbQKQA/Ysbyb0/6IbB1H/X41mKgbhfv7AfG/44w=
golang.org/x/mod v0.25.0/go.mod h1:IXM97Txy2VM4PJ3gI61r1YEk/gAj6zAHN3AdZt6S9Ww=
golang.org/x/net v0.0.0-20180724234803-3673e40ba225/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20180826012351-8a410e7b638d/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20180906233101-161cd47e91fd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20181029044818-c44066c5c816/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20181106065722-10aee1819953/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20190108225652-1e06a53dbb7e/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20190213061140-3a22650c66bd/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4=
golang.org/x/net v0.0.0-20190311183353-d8887717615a/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190313220215-9f648a60d977/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg=
golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20200226121028-0de0cce0169b/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s=
golang.org/x/net v0.0.0-20201021035429-f5854403a974/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
golang.org/x/net v0.0.0-20201110031124-69a78807bb2b/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU=
golang.org/x/net v0.0.0-20210119194325-5f4716e94777/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
golang.org/x/net v0.0.0-20210226172049-e18ecbb05110/go.mod h1:m0MpNAwzfU5UDzcl9v0D8zg8gWTRqZa9RBIspLL5mdg=
golang.org/x/net v0.0.0-20210405180319-a5a99cb37ef4/go.mod h1:p54w0d4576C0XHj96bSt6lcn1PtDYWL6XObtHCRCNQM=
golang.org/x/net v0.0.0-20211112202133-69e39bad7dc2/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
golang.org/x/net v0.0.0-20220722155237-a158d28d115b/go.mod h1:XRhObCWvk6IyKnWLug+ECip1KBveYUHfp+8e9klMJ9c=
golang.org/x/net v0.1.0/go.mod h1:Cx3nUiGt4eDBEyega/BKRp+/AlGL8hYe7U9odMt2Cco=
golang.org/x/net v0.6.0/go.mod h1:2Tu9+aMcznHK/AK1HMvgo6xiTLG5rD5rZLDS+rp2Bjs=
golang.org/x/net v0.9.0/go.mod h1:d48xBJpPfHeWQsugry2m+kC02ZBRGRgulfHnEXEuWns=
golang.org/x/net v0.10.0/go.mod h1:0qNGK6F8kojg2nk9dLZ2mShWaEBan6FAoqfSigmmuDg=
golang.org/x/net v0.14.0/go.mod h1:PpSgVXXLK0OxS0F31C1/tv6XNguvCrnXIDrFMspZIUI=
golang.org/x/net v0.15.0/go.mod h1:idbUs1IY1+zTqbi8yxTbhexhEEk5ur9LInksu6HrEpk=
golang.org/x/net v0.20.0/go.mod h1:z8BVo6PvndSri0LbOE3hAn0apkU+1YvI6E70E9jsnvY=
golang.org/x/net v0.21.0/go.mod h1:bIjVDfnllIU7BJ2DNgfnXvpSvtn8VRwhlsaeUTyUS44=
golang.org/x/net v0.25.0/go.mod h1:JkAGAh7GEvH74S6FOH42FLoXpXbE/aqXSrIQjXgsiwM=
golang.org/x/net v0.41.0 h1:vBTly1HeNPEn3wtREYfy4GZ/NECgw2Cnl+nK6Nz3uvw=
golang.org/x/net v0.41.0/go.mod h1:B/K4NNqkfmg07DQYrbwvSluqCJOOXwUjeb/5lOisjbA=
golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
golang.org/x/oauth2 v0.0.0-20181017192945-9dcd33a902f4/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
golang.org/x/oauth2 v0.0.0-20181203162652-d668ce993890/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
golang.org/x/oauth2 v0.0.0-20190226205417-e64efc72b421/go.mod h1:gOpvHmFTYa4IltrdGE7lF6nIHvwfUNPOp7c8zoXwtLw=
golang.org/x/perf v0.0.0-20180704124530-6e6d33e29852/go.mod h1:JLpeXjPJfIyPr5TlbXLkXWLhP8nz10XfvxElABhCtcw=
golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20181108010431-42b317875d0f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20181221193216-37e7f081c4d4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20190227155943-e225da77a7e6/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20190911185100-cd5d95a43a6e/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20201020160332-67f06af15bc9/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20210220032951-036812b2e83c/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20220722155255-886fb9371eb4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.1.0/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.3.0/go.mod h1:FU7BRWz2tNW+3quACPkgCx/L+uEAv1htQ0V83Z9Rj+Y=
golang.org/x/sync v0.6.0/go.mod h1:Czt+wKu1gCyEFDUtn0jG5QVvpJ6rzVqr5aXyt9drQfk=
golang.org/x/sync v0.7.0/go.mod h1:Czt+wKu1gCyEFDUtn0jG5QVvpJ6rzVqr5aXyt9drQfk=
golang.org/x/sync v0.10.0/go.mod h1:Czt+wKu1gCyEFDUtn0jG5QVvpJ6rzVqr5aXyt9drQfk=
golang.org/x/sync v0.17.0 h1:l60nONMj9l5drqw6jlhIELNv9I0A4OFgRsG9k2oT9Ug=
golang.org/x/sync v0.17.0/go.mod h1:9KTHXmSnoGruLpwFjVSX0lNNA75CykiMECbovNTZqGI=
golang.org/x/sys v0.0.0-20180830151530-49385e6e1522/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20180909124046-d0be0721c37e/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20181029174526-d69651ed3497/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
golang.org/x/sys v0.0.0-20190316082340-a2f829d7f35f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200602225109-6fdc65e7d980/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200930185726-fdedc70b468f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20201119102817-f84b799fce68/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210330210617-4fbd30eecc44/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210423082822-04245dca01da/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210510120138-977fb7262007/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20210615035016-665e8c7367d1/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20210630005230-0f9fa26af87c/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220520151302-bc2c85ada10a/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220704084225-05e143d24a9e/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220722155257-8c9f86f7a55f/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.1.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.5.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.7.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.8.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.11.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.12.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.16.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/sys v0.17.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/sys v0.20.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/sys v0.28.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/sys v0.36.0 h1:KVRy2GtZBrk1cBYA7MKu5bEZFxQk4NIDV6RLVcC8o0k=
golang.org/x/sys v0.36.0/go.mod h1:OgkHotnGiDImocRcuBABYBEXf8A9a87e/uXjp9XT3ks=
golang.org/x/telemetry v0.0.0-20240228155512-f48c80bd79b2/go.mod h1:TeRTkGYfJXctD9OcfyVLyj2J3IxLnKwHJR8f4D8a3YE=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
golang.org/x/term v0.0.0-20210927222741-03fcf44c2211/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=
golang.org/x/term v0.1.0/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=
golang.org/x/term v0.5.0/go.mod h1:jMB1sMXY+tzblOD4FWmEbocvup2/aLOaQEp7JmGp78k=
golang.org/x/term v0.7.0/go.mod h1:P32HKFT3hSsZrRxla30E9HqToFYAQPCMs/zFMBUFqPY=
golang.org/x/term v0.8.0/go.mod h1:xPskH00ivmX89bAKVGSKKtLOWNx2+17Eiy94tnKShWo=
golang.org/x/term v0.11.0/go.mod h1:zC9APTIj3jG3FdV/Ons+XE1riIZXG4aZ4GTHiPZJPIU=
golang.org/x/term v0.12.0/go.mod h1:owVbMEjm3cBLCHdkQu9b1opXd4ETQWc3BhuQGKgXgvU=
golang.org/x/term v0.16.0/go.mod h1:yn7UURbUtPyrVJPGPq404EukNFxcm/foM+bV/bfcDsY=
golang.org/x/term v0.17.0/go.mod h1:lLRBjIVuehSbZlaOtGMbcMncT+aqLLLmKrsjNrUguwk=
golang.org/x/term v0.20.0/go.mod h1:8UkIAJTvZgivsXaD6/pH6U9ecQzZ45awqEOzuCvwpFY=
golang.org/x/term v0.27.0/go.mod h1:iMsnZpn0cago0GOrHO2+Y7u7JPn5AylBrcoWkElMTSM=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.1-0.20180807135948-17ff2d5776d2/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.6/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.7/go.mod h1:u+2+/6zg+i71rQMx5EYifcz6MCKuco9NR6JIITiCfzQ=
golang.org/x/text v0.4.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8=
golang.org/x/text v0.7.0/go.mod h1:mrYo+phRRbMaCq/xk9113O4dZlRixOauAjOtrjsXDZ8=
golang.org/x/text v0.9.0/go.mod h1:e1OnstbJyHTd6l/uOt8jFFHp6TRDWZR/bV3emEE/zU8=
golang.org/x/text v0.12.0/go.mod h1:TvPlkZtksWOMsz7fbANvkp4WM8x/WCo/om8BMLbz+aE=
golang.org/x/text v0.13.0/go.mod h1:TvPlkZtksWOMsz7fbANvkp4WM8x/WCo/om8BMLbz+aE=
golang.org/x/text v0.14.0/go.mod h1:18ZOQIKpY8NJVqYksKHtTdi31H5itFRjB5/qKTNYzSU=
golang.org/x/text v0.15.0/go.mod h1:18ZOQIKpY8NJVqYksKHtTdi31H5itFRjB5/qKTNYzSU=
golang.org/x/text v0.21.0/go.mod h1:4IBbMaMmOPCJ8SecivzSH54+73PCFmPWxNTLm+vZkEQ=
golang.org/x/text v0.26.0 h1:P42AVeLghgTYr4+xUnTRKDMqpar+PtX7KWuNQL21L8M=
golang.org/x/text v0.26.0/go.mod h1:QK15LZJUUQVJxhz7wXgxSy/CJaTFjd0G+YLonydOVQA=
golang.org/x/time v0.0.0-20180412165947-fbb02b2291d2/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.0.0-20181108054448-85acf8d2951c/go.mod h1:tRJNPiyCQ0inRvYxbN9jk5I+vvW/OXSQhTDSoE431IQ=
golang.org/x/time v0.12.0 h1:ScB/8o8olJvc+CQPWrK3fPZNfh7qgwCrY0zJmoEQLSE=
golang.org/x/time v0.12.0/go.mod h1:CDIdPxbZBQxdj6cxyCIdrNogrJKMJ7pr37NYpMcMDSg=
golang.org/x/tools v0.0.0-20180828015842-6cd1fcedba52/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20181030000716-a0a13e073c7b/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20190114222345-bf090417da8b/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ=
golang.org/x/tools v0.0.0-20190226205152-f727befe758c/go.mod h1:9Yl7xja0Znq3iFh3HoIrodX9oNMXvdceNzlUR8zjMvY=
golang.org/x/tools v0.0.0-20190311212946-11955173bddd/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=
golang.org/x/tools v0.0.0-20190328211700-ab21143f2384/go.mod h1:LCzVGOaR6xXOjkQ3onu1FJEFr0SW1gC7cKk1uF8kGRs=
golang.org/x/tools v0.0.0-20190524140312-2c0ae7006135/go.mod h1:RgjU9mgBXZiqYHBnxXauZ1Gv1EHHAz9KjViQ78xBX0Q=
golang.org/x/tools v0.0.0-20190621195816-6e04913cbbac/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc=
golang.org/x/tools v0.0.0-20191029041327-9cc4af7d6b2c/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20191029190741-b9c20aec41a5/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20191119224855-298f0cb1881e/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo=
golang.org/x/tools v0.0.0-20200130002326-2f3ba24bd6e7/go.mod h1:TB2adYChydJhpapKDTa4BR/hXlZSLoq2Wpct/0txZ28=
golang.org/x/tools v0.0.0-20200619180055-7c47624df98f/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE=
golang.org/x/tools v0.0.0-20210106214847-113979e3529a/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA=
golang.org/x/tools v0.1.5/go.mod h1:o0xws9oXOQQZyjljx8fwUC0k7L1pTE6eaCbjGeHmOkk=
golang.org/x/tools v0.1.12/go.mod h1:hNGJHUnrk76NpqgfD5Aqm5Crs+Hm0VOH/i9J2+nxYbc=
golang.org/x/tools v0.2.0/go.mod h1:y4OqIKeOV/fWJetJ8bXPU1sEVniLMIyDAZWeHdV+NTA=
golang.org/x/tools v0.6.0/go.mod h1:Xwgl3UAJ/d3gWutnCtw505GrjyAbvKui8lOU390QaIU=
golang.org/x/tools v0.13.0/go.mod h1:HvlwmtVNQAhOuCjW7xxvovg8wbNq7LwfXh/k7wXUl58=
golang.org/x/tools v0.21.1-0.20240508182429-e35e4ccd0d2d/go.mod h1:aiJjzUbINMkxbQROHiO6hDPo2LHcIPhhQsa9DLh0yGk=
golang.org/x/tools v0.34.0 h1:qIpSLOxeCYGg9TrcJokLBG4KFA6d795g0xkBkiESGlo=
golang.org/x/tools v0.34.0/go.mod h1:pAP9OwEaY1CAW3HOmg3hLZC5Z0CCmzjAF2UQMSqNARg=
golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0=
gonum.org/v1/gonum v0.13.0 h1:a0T3bh+7fhRyqeNbiC3qVHYmkiQgit3wnNan/2c0HMM=
gonum.org/v1/gonum v0.13.0/go.mod h1:/WPYRckkfWrhWefxyYTfrTtQR0KH4iyHNuzxqXAKyAU=
google.golang.org/api v0.0.0-20180910000450-7ca32eb868bf/go.mod h1:4mhQ8q/RsB7i+udVvVy5NUi08OU8ZlA0gRVgrF7VFY0=
google.golang.org/api v0.0.0-20181030000543-1d582fd0359e/go.mod h1:4mhQ8q/RsB7i+udVvVy5NUi08OU8ZlA0gRVgrF7VFY0=
google.golang.org/api v0.1.0/go.mod h1:UGEZY7KEX120AnNLIHFMKIo4obdJhkp2tPbaPlQx13Y=
google.golang.org/appengine v1.1.0/go.mod h1:EbEs0AVv82hx2wNQdGPgUI5lhzA/G0D9YwlJXL52JkM=
google.golang.org/appengine v1.2.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4=
google.golang.org/appengine v1.3.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4=
google.golang.org/appengine v1.4.0/go.mod h1:xpcJRLb0r/rnEns0DIKYYv+WjYCduHsrkT7/EB5XEv4=
google.golang.org/genproto v0.0.0-20180817151627-c66870c02cf8/go.mod h1:JiN7NxoALGmiZfu7CAH4rXhgtRTLTxftemlI0sWmxmc=
google.golang.org/genproto v0.0.0-20180831171423-11092d34479b/go.mod h1:JiN7NxoALGmiZfu7CAH4rXhgtRTLTxftemlI0sWmxmc=
google.golang.org/genproto v0.0.0-20181029155118-b69ba1387ce2/go.mod h1:JiN7NxoALGmiZfu7CAH4rXhgtRTLTxftemlI0sWmxmc=
google.golang.org/genproto v0.0.0-20181202183823-bd91e49a0898/go.mod h1:7Ep/1NZk928CDR8SjdVbjWNpdIf6nzjE3BTgJDr2Atg=
google.golang.org/genproto v0.0.0-20190306203927-b5d61aea6440/go.mod h1:VzzqZJRnGkLBvHegQrXjBqPurQTc5/KpmUdxsrq26oE=
google.golang.org/genproto v0.0.0-20190819201941-24fa4b261c55/go.mod h1:DMBHOl98Agz4BDEuKkezgsaosCRResVns1a3J2ZsMNc=
google.golang.org/genproto v0.0.0-20200526211855-cb27e3aa2013/go.mod h1:NbSheEEYHJ7i3ixzK3sjbqSGDJWnxyFXZblF3eUsNvo=
google.golang.org/genproto/googleapis/api v0.0.0-20250303144028-a0af3efb3deb h1:p31xT4yrYrSM/G4Sn2+TNUkVhFCbG9y8itM2S6Th950=
google.golang.org/genproto/googleapis/api v0.0.0-20250303144028-a0af3efb3deb/go.mod h1:jbe3Bkdp+Dh2IrslsFCklNhweNTBgSYanP1UXhJDhKg=
google.golang.org/genproto/googleapis/rpc v0.0.0-20250303144028-a0af3efb3deb h1:TLPQVbx1GJ8VKZxz52VAxl1EBgKXXbTiU9Fc5fZeLn4=
google.golang.org/genproto/googleapis/rpc v0.0.0-20250303144028-a0af3efb3deb/go.mod h1:LuRYeWDFV6WOn90g357N17oMCaxpgCnbi/44qJvDn2I=
google.golang.org/grpc v1.14.0/go.mod h1:yo6s7OP7yaDglbqo1J04qKzAhqBH6lvTonzMVmEdcZw=
google.golang.org/grpc v1.16.0/go.mod h1:0JHn/cJsOMiMfNA9+DeHDlAU7KAAB5GDlYFpa9MZMio=
google.golang.org/grpc v1.17.0/go.mod h1:6QZJwpn2B+Zp71q/5VxRsJ6NXXVCE5NRUHRo+f3cWCs=
google.golang.org/grpc v1.19.0/go.mod h1:mqu4LbDTu4XGKhr4mRzUsmM4RtVoemTSY81AxZiDr8c=
google.golang.org/grpc v1.23.0/go.mod h1:Y5yQAOtifL1yxbo5wqy6BxZv8vAUGQwXBOALyacEbxg=
google.golang.org/grpc v1.25.1/go.mod h1:c3i+UQWmh7LiEpx4sFZnkU36qjEYZ0imhYfXVyQciAY=
google.golang.org/grpc v1.27.0/go.mod h1:qbnxyOmOxrQa7FizSgH+ReBfzJrCY1pSN7KXBS8abTk=
google.golang.org/grpc v1.33.2/go.mod h1:JMHMWHQWaTccqQQlmk3MJZS+GWXOdAesneDmEnv2fbc=
google.golang.org/grpc v1.72.0 h1:S7UkcVa60b5AAQTaO6ZKamFp1zMZSU0fGDK2WZLbBnM=
google.golang.org/grpc v1.72.0/go.mod h1:wH5Aktxcg25y1I3w7H69nHfXdOG3UiadoBtjh3izSDM=
google.golang.org/protobuf v0.0.0-20200109180630-ec00e32a8dfd/go.mod h1:DFci5gLYBciE7Vtevhsrf46CRTquxDuWsQurQQe4oz8=
google.golang.org/protobuf v0.0.0-20200221191635-4d8936d0db64/go.mod h1:kwYJMbMJ01Woi6D6+Kah6886xMZcty6N08ah7+eCXa0=
google.golang.org/protobuf v0.0.0-20200228230310-ab0ca4ff8a60/go.mod h1:cfTl7dwQJ+fmap5saPgwCLgHXTUD7jkjRqWcaiX5VyM=
google.golang.org/protobuf v1.20.1-0.20200309200217-e05f789c0967/go.mod h1:A+miEFZTKqfCUM6K7xSMQL9OKL/b6hQv+e19PK+JZNE=
google.golang.org/protobuf v1.21.0/go.mod h1:47Nbq4nVaFHyn7ilMalzfO3qCViNmqZ2kzikPIcrTAo=
google.golang.org/protobuf v1.22.0/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2l/sGQquU=
google.golang.org/protobuf v1.23.0/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2l/sGQquU=
google.golang.org/protobuf v1.23.1-0.20200526195155-81db48ad09cc/go.mod h1:EGpADcykh3NcUnDUJcl1+ZksZNG86OlYog2l/sGQquU=
google.golang.org/protobuf v1.25.0/go.mod h1:9JNX74DMeImyA3h4bdi1ymwjUzf21/xIlbajtzgsN7c=
google.golang.org/protobuf v1.36.6 h1:z1NpPI8ku2WgiWnf+t9wTPsn6eP1L7ksHUlkfLvd9xY=
google.golang.org/protobuf v1.36.6/go.mod h1:jduwjTPXsFjZGTmRluh+L6NjiWu7pchiJ2/5YcXBHnY=
gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c h1:Hei/4ADfdWqJk1ZMxUNpqntNwaWcugrBjAiHlqqRiVk=
gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c/go.mod h1:JHkPIbrfpd72SG/EVd6muEfDQjcINNoR0C8j2r3qZ4Q=
gopkg.in/errgo.v2 v2.1.0/go.mod h1:hNsd1EY+bozCKY1Ytp96fpM3vjJbqLJn88ws8XvfDNI=
gopkg.in/inf.v0 v0.9.1/go.mod h1:cWUDdTG/fYaXco+Dcufb5Vnc6Gp2YChqWtbxRZE0mXw=
gopkg.in/natefinch/lumberjack.v2 v2.2.1 h1:bBRl1b0OH9s/DuPhuXpNl+VtCaJXFZ5/uEFST95x9zc=
gopkg.in/natefinch/lumberjack.v2 v2.2.1/go.mod h1:YD8tP3GAjkrDg1eZH7EGmyESg/lsYskCTPBJVb9jqSc=
gopkg.in/yaml.v2 v2.2.1/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.2.8/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI=
gopkg.in/yaml.v2 v2.4.0 h1:D8xgwECY7CYvx+Y2n4sBz93Jn9JRvxdiyyo8CTfuKaY=
gopkg.in/yaml.v2 v2.4.0/go.mod h1:RDklbk79AGWmwhnvt/jBztapEOGDOx6ZbXqjP6csGnQ=
gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
gopkg.in/yaml.v3 v3.0.0-20210107192922-496545a6307b/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
gopkg.in/yaml.v3 v3.0.0/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=
gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
grpc.go4.org v0.0.0-20170609214715-11d0a25b4919/go.mod h1:77eQGdRu53HpSqPFJFmuJdjuHRquDANNeA4x7B8WQ9o=
honnef.co/go/tools v0.0.0-20180728063816-88497007e858/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
honnef.co/go/tools v0.0.0-20190102054323-c2f93a96b099/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
honnef.co/go/tools v0.0.0-20190106161140-3f1c8253044a/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
honnef.co/go/tools v0.0.0-20190523083050-ea95bdfd59fc/go.mod h1:rf3lG4BRIbNafJWhAfAdb/ePZxsR/4RtNHQocxwk9r4=
honnef.co/go/tools v0.0.1-2019.2.3/go.mod h1:a3bituU0lyd329TUQxRnasdCoJDkEUEAqEt0JzvZhAg=
lukechampine.com/blake3 v1.1.6/go.mod h1:tkKEOtDkNtklkXtLNEOGNq5tcV90tJiA1vAA12R78LA=
lukechampine.com/blake3 v1.2.1/go.mod h1:0OFRp7fBtAylGVCO40o87sbupkyIGgbpv1+M1k1LM6k=
lukechampine.com/blake3 v1.4.1 h1:I3Smz7gso8w4/TunLKec6K2fn+kyKtDxr/xcQEN84Wg=
lukechampine.com/blake3 v1.4.1/go.mod h1:QFosUxmjB8mnrWFSNwKmvxHpfY72bmD2tQ0kBMM3kwo=
pgregory.net/rapid v1.2.0 h1:keKAYRcjm+e1F0oAuU5F5+YPAWcyxNNRK2wud503Gnk=
pgregory.net/rapid v1.2.0/go.mod h1:PY5XlDGj0+V1FCq0o192FdRhpKHGTRIWBgqjDBTrq04=
sourcegraph.com/sourcegraph/go-diff v0.5.0/go.mod h1:kuch7UrkMzY0X+p9CRK03kfuPQ2zzQcaEFbx8wA8rck=
sourcegraph.com/sqs/pbtypes v0.0.0-20180604144634-d3ebe8f20ae4/go.mod h1:ketZ/q3QxT9HOBeFhu6RdvsftgpsbFHBF5Cas6cDKZ0=

conntest/main.go (new file, 70 lines)

@@ -0,0 +1,70 @@
package main

import (
	"bufio"
	"encoding/hex"
	"flag"
	"fmt"
	"os"
	"path/filepath"
	"strings"

	"github.com/libp2p/go-libp2p/core/peer"
	"go.uber.org/zap"
	"source.quilibrium.com/quilibrium/monorepo/config"
	"source.quilibrium.com/quilibrium/monorepo/go-libp2p-blossomsub/pb"
	"source.quilibrium.com/quilibrium/monorepo/node/p2p"
)

var (
	configDirectory = flag.String(
		"config",
		filepath.Join(".", ".config"),
		"the configuration directory",
	)
)

func main() {
	flag.Parse()

	cfg, err := config.LoadConfig(*configDirectory, "", false)
	if err != nil {
		panic(err)
	}

	logger, _ := zap.NewProduction()
	pubsub := p2p.NewBlossomSub(cfg.P2P, cfg.Engine, logger, 0, p2p.ConfigDir(*configDirectory))

	fmt.Print("Enter bitmask in hex (no 0x prefix): ")
	reader := bufio.NewReader(os.Stdin)
	bitmaskHex, _ := reader.ReadString('\n')
	bitmaskHex = strings.TrimRight(bitmaskHex, "\n")

	logger.Info("subscribing to bitmask")
	bitmask, err := hex.DecodeString(bitmaskHex)
	if err != nil {
		panic(err)
	}

	err = pubsub.Subscribe(bitmask, func(message *pb.Message) error {
		logger.Info(
			"received message",
			zap.String("bitmask", hex.EncodeToString(message.Bitmask)),
			zap.String("peer", peer.ID(message.From).String()),
			zap.String("message", string(message.Data)),
		)
		return nil
	})
	if err != nil {
		panic(err)
	}

	for {
		fmt.Print(peer.ID(pubsub.GetPeerID()).String() + "> ")
		message, _ := reader.ReadString('\n')
		message = strings.TrimRight(message, "\n")
		err = pubsub.PublishToBitmask(bitmask, []byte(message))
		if err != nil {
			logger.Error("error sending", zap.Error(err))
		}
	}
}

consensus/.mockery.yaml (new file, 18 lines)

@@ -0,0 +1,18 @@
dir: "{{.InterfaceDir}}/mock"
outpkg: "mock"
filename: "{{.InterfaceName | snakecase}}.go"
mockname: "{{.InterfaceName}}"
all: True
with-expecter: False
include-auto-generated: False
disable-func-mocks: True
fail-on-missing: True
disable-version-string: True
resolve-type-alias: False
packages:
  source.quilibrium.com/quilibrium/monorepo/consensus:
    config:
      dir: "mocks"
      outpkg: "mocks"


@@ -1,300 +1,689 @@
-# Consensus State Machine
-
-A generic, extensible state machine implementation for building Byzantine Fault
-Tolerant (BFT) consensus protocols. This library provides a framework for
-implementing round-based consensus algorithms with cryptographic proofs.
-
-## Overview
-
-The state machine manages consensus engine state transitions through a
-well-defined set of states and events. It supports generic type parameters to
-allow different implementations of state data, votes, peer identities, and
-collected mutations.
-
-## Features
-
-- **Generic Implementation**: Supports custom types for state data, votes, peer
-  IDs, and collected data
-- **Byzantine Fault Tolerance**: Provides BFT consensus with < 1/3 byzantine
-  nodes, flexible to other probabilistic BFT implementations
-- **Round-based Consensus**: Implements a round-based state transition pattern
-- **Pluggable Providers**: Extensible through provider interfaces for different
-  consensus behaviors
-- **Event-driven Architecture**: State transitions triggered by events with
-  optional guard conditions
-- **Concurrent Safe**: Thread-safe implementation with proper mutex usage
-- **Timeout Support**: Configurable timeouts for each state with automatic
-  transitions
-- **Transition Listeners**: Observable state transitions for monitoring and
-  debugging
-
-## Core Concepts
-
-### States
-
-The state machine progresses through the following states:
-
-1. **StateStopped**: Initial state, engine is not running
-2. **StateStarting**: Engine is initializing
-3. **StateLoading**: Loading data and syncing with network
-4. **StateCollecting**: Collecting data/mutations for consensus round
-5. **StateLivenessCheck**: Checking peer liveness before proving
-6. **StateProving**: Generating cryptographic proof (leader only)
-7. **StatePublishing**: Publishing proposed state
-8. **StateVoting**: Voting on proposals
-9. **StateFinalizing**: Finalizing consensus round
-10. **StateVerifying**: Verifying and publishing results
-11. **StateStopping**: Engine is shutting down
-
-### Events
-
-Events trigger state transitions:
-
-- `EventStart`, `EventStop`: Lifecycle events
-- `EventSyncComplete`: Synchronization finished
-- `EventCollectionDone`: Mutation collection complete
-- `EventLivenessCheckReceived`: Peer liveness confirmed
-- `EventProverSignal`: Leader selection complete
-- `EventProofComplete`: Proof generation finished
-- `EventProposalReceived`: New proposal received
-- `EventVoteReceived`: Vote received
-- `EventQuorumReached`: Voting quorum achieved
-- `EventConfirmationReceived`: State confirmation received
-- And more...
-
-### Type Constraints
-
-All generic type parameters must implement the `Unique` interface:
+# Consensus State Machine
+
+The Consensus State Machine has been swapped with a fork of the HotStuff implementation by Flow.
+
+At a high level, there are a few key differences:
+
+1. Terminology
+
+Flow uses view as the term for the monotonically incrementing consensus state changes, either by
+Quorum Certificate or Timeout Certificate. We use the term rank, following the existing
+terminology we maintained for the original CSM.
+
+Flow also refers to the incremental states as blocks. We use the generic term state, as our
+incremental states are named frames, but other projects may have a different state-bearing
+object.
+
+2. Generics
+
+Flow's core data structures leaked into the consensus package, resulting in state-bearing types
+and other types being hard-defined to the types Flow uses in the higher level of the protocol.
+We adopted generic parameters instead, StateT, VoteT, PeerIDT, and CollectedT, to refer to
+state-bearing types, vote types, peer identification types, and collected (via mempool or mixer)
+payloads.
+
+# License
+
+Flow and Quilibrium are license compatible, with their protocol also utilizing the AGPL.
+Regardless, we leave their license in full below for reference:
+
+GNU AFFERO GENERAL PUBLIC LICENSE
+Version 3, 19 November 2007
+
+Copyright (C) 2007 Free Software Foundation, Inc. <https://fsf.org/>
+Everyone is permitted to copy and distribute verbatim copies
+of this license document, but changing it is not allowed.
+
+Preamble
+
+The GNU Affero General Public License is a free, copyleft license for
+software and other kinds of works, specifically designed to ensure
+cooperation with the community in the case of network server software.
+
+The licenses for most software and other practical works are designed
to take away your freedom to share and change the works. By contrast,
our General Public Licenses are intended to guarantee your freedom to
share and change all versions of a program--to make sure it remains free
software for all its users.
```go When we speak of free software, we are referring to freedom, not
type Unique interface { price. Our General Public Licenses are designed to make sure that you
Identity() Identity // Returns a unique string identifier have the freedom to distribute copies of free software (and charge for
} them if you wish), that you receive source code or can get it if you
``` want it, that you can change the software or use pieces of it in new
free programs, and that you know you can do these things.
## Provider Interfaces Developers that use our General Public Licenses protect your rights
with two steps: (1) assert copyright on the software, and (2) offer
you this License which gives you legal permission to copy, distribute
and/or modify the software.
### SyncProvider A secondary benefit of defending all users' freedom is that
improvements made in alternate versions of the program, if they
receive widespread use, become available for other developers to
incorporate. Many developers of free software are heartened and
encouraged by the resulting cooperation. However, in the case of
software used on network servers, this result may fail to come about.
The GNU General Public License permits making a modified version and
letting the public access it on a server without ever releasing its
source code to the public.
Handles initial state synchronization: The GNU Affero General Public License is designed specifically to
ensure that, in such cases, the modified source code becomes available
to the community. It requires the operator of a network server to
provide the source code of the modified version running there to the
users of that server. Therefore, public use of a modified version, on
a publicly accessible server, gives the public access to the source
code of the modified version.
```go An older license, called the Affero General Public License and
type SyncProvider[StateT Unique] interface { published by Affero, was designed to accomplish similar goals. This is
Synchronize( a different license, not a version of the Affero GPL, but Affero has
existing *StateT, released a new version of the Affero GPL which permits relicensing under
ctx context.Context, this license.
) (<-chan *StateT, <-chan error)
}
```
### VotingProvider The precise terms and conditions for copying, distribution and
modification follow.
Manages the voting process: TERMS AND CONDITIONS
```go 0. Definitions.
type VotingProvider[StateT Unique, VoteT Unique, PeerIDT Unique] interface {
SendProposal(proposal *StateT, ctx context.Context) error
DecideAndSendVote(
proposals map[Identity]*StateT,
ctx context.Context,
) (PeerIDT, *VoteT, error)
IsQuorum(votes map[Identity]*VoteT, ctx context.Context) (bool, error)
FinalizeVotes(
proposals map[Identity]*StateT,
votes map[Identity]*VoteT,
ctx context.Context,
) (*StateT, PeerIDT, error)
SendConfirmation(finalized *StateT, ctx context.Context) error
}
```
### LeaderProvider "This License" refers to version 3 of the GNU Affero General Public License.
Handles leader selection and proof generation: "Copyright" also means copyright-like laws that apply to other kinds of
works, such as semiconductor masks.
```go "The Program" refers to any copyrightable work licensed under this
type LeaderProvider[ License. Each licensee is addressed as "you". "Licensees" and
StateT Unique, "recipients" may be individuals or organizations.
PeerIDT Unique,
CollectedT Unique,
] interface {
GetNextLeaders(prior *StateT, ctx context.Context) ([]PeerIDT, error)
ProveNextState(
prior *StateT,
collected CollectedT,
ctx context.Context,
) (*StateT, error)
}
```
### LivenessProvider To "modify" a work means to copy from or adapt all or part of the work
in a fashion requiring copyright permission, other than the making of an
exact copy. The resulting work is called a "modified version" of the
earlier work or a work "based on" the earlier work.
Manages peer liveness checks: A "covered work" means either the unmodified Program or a work based
on the Program.
```go To "propagate" a work means to do anything with it that, without
type LivenessProvider[ permission, would make you directly or secondarily liable for
StateT Unique, infringement under applicable copyright law, except executing it on a
PeerIDT Unique, computer or modifying a private copy. Propagation includes copying,
CollectedT Unique, distribution (with or without modification), making available to the
] interface { public, and in some countries other activities as well.
Collect(ctx context.Context) (CollectedT, error)
SendLiveness(prior *StateT, collected CollectedT, ctx context.Context) error
}
```
## Usage To "convey" a work means any kind of propagation that enables other
parties to make or receive copies. Mere interaction with a user through
a computer network, with no transfer of a copy, is not conveying.
### Basic Setup An interactive user interface displays "Appropriate Legal Notices"
to the extent that it includes a convenient and prominently visible
feature that (1) displays an appropriate copyright notice, and (2)
tells the user that there is no warranty for the work (except to the
extent that warranties are provided), that licensees may convey the
work under this License, and how to view a copy of this License. If
the interface presents a list of user commands or options, such as a
menu, a prominent item in the list meets this criterion.
```go 1. Source Code.
// Define your types implementing Unique
type MyState struct {
Round uint64
Hash string
}
func (s MyState) Identity() string { return s.Hash }
type MyVote struct { The "source code" for a work means the preferred form of the work
Voter string for making modifications to it. "Object code" means any non-source
Value bool form of a work.
}
func (v MyVote) Identity() string { return v.Voter }
type MyPeerID struct { A "Standard Interface" means an interface that either is an official
ID string standard defined by a recognized standards body, or, in the case of
} interfaces specified for a particular programming language, one that
func (p MyPeerID) Identity() string { return p.ID } is widely used among developers working in that language.
type MyCollected struct { The "System Libraries" of an executable work include anything, other
Data []byte than the work as a whole, that (a) is included in the normal form of
} packaging a Major Component, but which is not part of that Major
func (c MyCollected) Identity() string { return string(c.Data) } Component, and (b) serves only to enable use of the work with that
Major Component, or to implement a Standard Interface for which an
implementation is available to the public in source code form. A
"Major Component", in this context, means a major essential component
(kernel, window system, and so on) of the specific operating system
(if any) on which the executable work runs, or a compiler used to
produce the work, or an object code interpreter used to run it.
// Implement providers The "Corresponding Source" for a work in object code form means all
syncProvider := &MySyncProvider{} the source code needed to generate, install, and (for an executable
votingProvider := &MyVotingProvider{} work) run the object code and to modify the work, including scripts to
leaderProvider := &MyLeaderProvider{} control those activities. However, it does not include the work's
livenessProvider := &MyLivenessProvider{} System Libraries, or general-purpose tools or generally available free
programs which are used unmodified in performing those activities but
which are not part of the work. For example, Corresponding Source
includes interface definition files associated with source files for
the work, and the source code for shared libraries and dynamically
linked subprograms that the work is specifically designed to require,
such as by intimate data communication or control flow between those
subprograms and other parts of the work.
// Create state machine The Corresponding Source need not include anything that users
sm := consensus.NewStateMachine[MyState, MyVote, MyPeerID, MyCollected]( can regenerate automatically from other parts of the Corresponding
MyPeerID{ID: "node1"}, // This node's ID Source.
&MyState{Round: 0, Hash: "genesis"}, // Initial state
true, // shouldEmitReceiveEventsOnSends
3, // minimumProvers
syncProvider,
votingProvider,
leaderProvider,
livenessProvider,
nil, // Optional trace logger
)
// Add transition listener The Corresponding Source for a work in source code form is that
sm.AddListener(&MyTransitionListener{}) same work.
// Start the state machine 2. Basic Permissions.
if err := sm.Start(); err != nil {
log.Fatal(err)
}
// Receive external events All rights granted under this License are granted for the term of
sm.ReceiveProposal(peer, proposal) copyright on the Program, and are irrevocable provided the stated
sm.ReceiveVote(voter, vote) conditions are met. This License explicitly affirms your unlimited
sm.ReceiveLivenessCheck(peer, collected) permission to run the unmodified Program. The output from running a
sm.ReceiveConfirmation(peer, confirmation) covered work is covered by this License only if the output, given its
content, constitutes a covered work. This License acknowledges your
rights of fair use or other equivalent, as provided by copyright law.
// Stop the state machine You may make, run and propagate covered works that you do not
if err := sm.Stop(); err != nil { convey, without conditions so long as your license otherwise remains
log.Fatal(err) in force. You may convey covered works to others for the sole purpose
} of having them make modifications exclusively for you, or provide you
``` with facilities for running those works, provided that you comply with
the terms of this License in conveying all material for which you do
not control copyright. Those thus making or running the covered works
for you must do so exclusively on your behalf, under your direction
and control, on terms that prohibit them from making any copies of
your copyrighted material outside their relationship with you.
### Implementing Providers Conveying under any other circumstances is permitted solely under
the conditions stated below. Sublicensing is not allowed; section 10
makes it unnecessary.
See the `example/generic_consensus_example.go` for a complete working example 3. Protecting Users' Legal Rights From Anti-Circumvention Law.
with mock provider implementations.
## State Flow No covered work shall be deemed part of an effective technological
measure under any applicable law fulfilling obligations under article
11 of the WIPO copyright treaty adopted on 20 December 1996, or
similar laws prohibiting or restricting circumvention of such
measures.
The typical consensus flow: When you convey a covered work, you waive any legal power to forbid
circumvention of technological measures to the extent such circumvention
is effected by exercising rights under this License with respect to
the covered work, and you disclaim any intention to limit operation or
modification of the work as a means of enforcing, against the work's
users, your or third parties' legal rights to forbid circumvention of
technological measures.
1. **Start****Starting** → **Loading** 4. Conveying Verbatim Copies.
2. **Loading**: Synchronize with network
3. **Collecting**: Gather mutations/changes
4. **LivenessCheck**: Verify peer availability
5. **Proving**: Leader generates proof
6. **Publishing**: Leader publishes proposal
7. **Voting**: All nodes vote on proposals
8. **Finalizing**: Aggregate votes and determine outcome
9. **Verifying**: Confirm and apply state changes
10. Loop back to **Collecting** for next round
## Configuration You may convey verbatim copies of the Program's source code as you
receive it, in any medium, provided that you conspicuously and
appropriately publish on each copy an appropriate copyright notice;
keep intact all notices stating that this License and any
non-permissive terms added in accord with section 7 apply to the code;
keep intact all notices of the absence of any warranty; and give all
recipients a copy of this License along with the Program.
### Constructor Parameters You may charge any price or no price for each copy that you convey,
and you may offer support or warranty protection for a fee.
- `id`: This node's peer ID 5. Conveying Modified Source Versions.
- `initialState`: Starting state (can be nil)
- `shouldEmitReceiveEventsOnSends`: Whether to emit receive events for own
messages
- `minimumProvers`: Minimum number of active provers required
- `traceLogger`: Optional logger for debugging state transitions
### State Timeouts You may convey a work based on the Program, or the modifications to
produce it from the Program, in the form of source code under the
terms of section 4, provided that you also meet all of these conditions:
Each state can have a configured timeout that triggers an automatic transition: a) The work must carry prominent notices stating that you modified
it, and giving a relevant date.
- **Starting**: 1 second → `EventInitComplete` b) The work must carry prominent notices stating that it is
- **Loading**: 10 minutes → `EventSyncComplete` released under this License and any conditions added under section
- **Collecting**: 1 second → `EventCollectionDone` 7. This requirement modifies the requirement in section 4 to
- **LivenessCheck**: 1 second → `EventLivenessTimeout` "keep intact all notices".
- **Proving**: 120 seconds → `EventPublishTimeout`
- **Publishing**: 1 second → `EventPublishTimeout`
- **Voting**: 10 seconds → `EventVotingTimeout`
- **Finalizing**: 1 second → `EventAggregationDone`
- **Verifying**: 1 second → `EventVerificationDone`
- **Stopping**: 30 seconds → `EventCleanupComplete`
## Thread Safety c) You must license the entire work, as a whole, under this
License to anyone who comes into possession of a copy. This
License will therefore apply, along with any applicable section 7
additional terms, to the whole of the work, and all its parts,
regardless of how they are packaged. This License gives no
permission to license the work in any other way, but it does not
invalidate such permission if you have separately received it.
The state machine is thread-safe. All public methods properly handle concurrent d) If the work has interactive user interfaces, each must display
access through mutex locks. State behaviors run in separate goroutines with Appropriate Legal Notices; however, if the Program has interactive
proper cancellation support. interfaces that do not display Appropriate Legal Notices, your
work need not make them do so.
## Error Handling A compilation of a covered work with other separate and independent
works, which are not by their nature extensions of the covered work,
and which are not combined with it such as to form a larger program,
in or on a volume of a storage or distribution medium, is called an
"aggregate" if the compilation and its resulting copyright are not
used to limit the access or legal rights of the compilation's users
beyond what the individual works permit. Inclusion of a covered work
in an aggregate does not cause this License to apply to the other
parts of the aggregate.
- Provider errors are logged but don't crash the state machine 6. Conveying Non-Source Forms.
- The state machine continues operating and may retry operations
- Critical errors during state transitions are returned to callers
- Use the `TraceLogger` interface for debugging
## Best Practices You may convey a covered work in object code form under the terms
of sections 4 and 5, provided that you also convey the
machine-readable Corresponding Source under the terms of this License,
in one of these ways:
1. **Message Isolation**: When implementing providers, always deep-copy data a) Convey the object code in, or embodied in, a physical product
before sending to prevent shared state between state machine and other (including a physical distribution medium), accompanied by the
handlers Corresponding Source fixed on a durable physical medium
2. **Nil Handling**: Provider implementations should handle nil prior states customarily used for software interchange.
gracefully
3. **Context Usage**: Respect context cancellation in long-running operations
4. **Quorum Size**: Set appropriate quorum size based on your network (typically
2f+1 for f failures)
5. **Timeout Configuration**: Adjust timeouts based on network conditions and
proof generation time
## Example b) Convey the object code in, or embodied in, a physical product
(including a physical distribution medium), accompanied by a
written offer, valid for at least three years and valid for as
long as you offer spare parts or customer support for that product
model, to give anyone who possesses the object code either (1) a
copy of the Corresponding Source for all the software in the
product that is covered by this License, on a durable physical
medium customarily used for software interchange, for a price no
more than your reasonable cost of physically performing this
conveying of source, or (2) access to copy the
Corresponding Source from a network server at no charge.
See `example/generic_consensus_example.go` for a complete working example c) Convey individual copies of the object code with a copy of the
demonstrating: written offer to provide the Corresponding Source. This
- Mock provider implementations alternative is allowed only occasionally and noncommercially, and
- Multi-node consensus network only if you received the object code with such an offer, in accord
- Byzantine node behavior with subsection 6b.
- Message passing between nodes
- State transition monitoring
## Testing d) Convey the object code by offering access from a designated
place (gratis or for a charge), and offer equivalent access to the
Corresponding Source in the same way through the same place at no
further charge. You need not require recipients to copy the
Corresponding Source along with the object code. If the place to
copy the object code is a network server, the Corresponding Source
may be on a different server (operated by you or a third party)
that supports equivalent copying facilities, provided you maintain
clear directions next to the object code saying where to find the
Corresponding Source. Regardless of what server hosts the
Corresponding Source, you remain obligated to ensure that it is
available for as long as needed to satisfy these requirements.
The package includes comprehensive tests in `state_machine_test.go` covering: e) Convey the object code using peer-to-peer transmission, provided
- State transitions you inform other peers where the object code and Corresponding
- Event handling Source of the work are being offered to the general public at no
- Concurrent operations charge under subsection 6d.
- Byzantine scenarios
- Timeout behavior A separable portion of the object code, whose source code is excluded
from the Corresponding Source as a System Library, need not be
included in conveying the object code work.
A "User Product" is either (1) a "consumer product", which means any
tangible personal property which is normally used for personal, family,
or household purposes, or (2) anything designed or sold for incorporation
into a dwelling. In determining whether a product is a consumer product,
doubtful cases shall be resolved in favor of coverage. For a particular
product received by a particular user, "normally used" refers to a
typical or common use of that class of product, regardless of the status
of the particular user or of the way in which the particular user
actually uses, or expects or is expected to use, the product. A product
is a consumer product regardless of whether the product has substantial
commercial, industrial or non-consumer uses, unless such uses represent
the only significant mode of use of the product.
"Installation Information" for a User Product means any methods,
procedures, authorization keys, or other information required to install
and execute modified versions of a covered work in that User Product from
a modified version of its Corresponding Source. The information must
suffice to ensure that the continued functioning of the modified object
code is in no case prevented or interfered with solely because
modification has been made.
If you convey an object code work under this section in, or with, or
specifically for use in, a User Product, and the conveying occurs as
part of a transaction in which the right of possession and use of the
User Product is transferred to the recipient in perpetuity or for a
fixed term (regardless of how the transaction is characterized), the
Corresponding Source conveyed under this section must be accompanied
by the Installation Information. But this requirement does not apply
if neither you nor any third party retains the ability to install
modified object code on the User Product (for example, the work has
been installed in ROM).
The requirement to provide Installation Information does not include a
requirement to continue to provide support service, warranty, or updates
for a work that has been modified or installed by the recipient, or for
the User Product in which it has been modified or installed. Access to a
network may be denied when the modification itself materially and
adversely affects the operation of the network or violates the rules and
protocols for communication across the network.
Corresponding Source conveyed, and Installation Information provided,
in accord with this section must be in a format that is publicly
documented (and with an implementation available to the public in
source code form), and must require no special password or key for
unpacking, reading or copying.
7. Additional Terms.
"Additional permissions" are terms that supplement the terms of this
License by making exceptions from one or more of its conditions.
Additional permissions that are applicable to the entire Program shall
be treated as though they were included in this License, to the extent
that they are valid under applicable law. If additional permissions
apply only to part of the Program, that part may be used separately
under those permissions, but the entire Program remains governed by
this License without regard to the additional permissions.
When you convey a copy of a covered work, you may at your option
remove any additional permissions from that copy, or from any part of
it. (Additional permissions may be written to require their own
removal in certain cases when you modify the work.) You may place
additional permissions on material, added by you to a covered work,
for which you have or can give appropriate copyright permission.
Notwithstanding any other provision of this License, for material you
add to a covered work, you may (if authorized by the copyright holders of
that material) supplement the terms of this License with terms:
a) Disclaiming warranty or limiting liability differently from the
terms of sections 15 and 16 of this License; or
b) Requiring preservation of specified reasonable legal notices or
author attributions in that material or in the Appropriate Legal
Notices displayed by works containing it; or
c) Prohibiting misrepresentation of the origin of that material, or
requiring that modified versions of such material be marked in
reasonable ways as different from the original version; or
d) Limiting the use for publicity purposes of names of licensors or
authors of the material; or
e) Declining to grant rights under trademark law for use of some
trade names, trademarks, or service marks; or
f) Requiring indemnification of licensors and authors of that
material by anyone who conveys the material (or modified versions of
it) with contractual assumptions of liability to the recipient, for
any liability that these contractual assumptions directly impose on
those licensors and authors.
All other non-permissive additional terms are considered "further
restrictions" within the meaning of section 10. If the Program as you
received it, or any part of it, contains a notice stating that it is
governed by this License along with a term that is a further
restriction, you may remove that term. If a license document contains
a further restriction but permits relicensing or conveying under this
License, you may add to a covered work material governed by the terms
of that license document, provided that the further restriction does
not survive such relicensing or conveying.
If you add terms to a covered work in accord with this section, you
must place, in the relevant source files, a statement of the
additional terms that apply to those files, or a notice indicating
where to find the applicable terms.
Additional terms, permissive or non-permissive, may be stated in the
form of a separately written license, or stated as exceptions;
the above requirements apply either way.
8. Termination.
You may not propagate or modify a covered work except as expressly
provided under this License. Any attempt otherwise to propagate or
modify it is void, and will automatically terminate your rights under
this License (including any patent licenses granted under the third
paragraph of section 11).
However, if you cease all violation of this License, then your
license from a particular copyright holder is reinstated (a)
provisionally, unless and until the copyright holder explicitly and
finally terminates your license, and (b) permanently, if the copyright
holder fails to notify you of the violation by some reasonable means
prior to 60 days after the cessation.
Moreover, your license from a particular copyright holder is
reinstated permanently if the copyright holder notifies you of the
violation by some reasonable means, this is the first time you have
received notice of violation of this License (for any work) from that
copyright holder, and you cure the violation prior to 30 days after
your receipt of the notice.
Termination of your rights under this section does not terminate the
licenses of parties who have received copies or rights from you under
this License. If your rights have been terminated and not permanently
reinstated, you do not qualify to receive new licenses for the same
material under section 10.
9. Acceptance Not Required for Having Copies.
You are not required to accept this License in order to receive or
run a copy of the Program. Ancillary propagation of a covered work
occurring solely as a consequence of using peer-to-peer transmission
to receive a copy likewise does not require acceptance. However,
nothing other than this License grants you permission to propagate or
modify any covered work. These actions infringe copyright if you do
not accept this License. Therefore, by modifying or propagating a
covered work, you indicate your acceptance of this License to do so.
10. Automatic Licensing of Downstream Recipients.
Each time you convey a covered work, the recipient automatically
receives a license from the original licensors, to run, modify and
propagate that work, subject to this License. You are not responsible
for enforcing compliance by third parties with this License.
An "entity transaction" is a transaction transferring control of an
organization, or substantially all assets of one, or subdividing an
organization, or merging organizations. If propagation of a covered
work results from an entity transaction, each party to that
transaction who receives a copy of the work also receives whatever
licenses to the work the party's predecessor in interest had or could
give under the previous paragraph, plus a right to possession of the
Corresponding Source of the work from the predecessor in interest, if
the predecessor has it or can get it with reasonable efforts.
You may not impose any further restrictions on the exercise of the
rights granted or affirmed under this License. For example, you may
not impose a license fee, royalty, or other charge for exercise of
rights granted under this License, and you may not initiate litigation
(including a cross-claim or counterclaim in a lawsuit) alleging that
any patent claim is infringed by making, using, selling, offering for
sale, or importing the Program or any portion of it.
11. Patents.
A "contributor" is a copyright holder who authorizes use under this
License of the Program or a work on which the Program is based. The
work thus licensed is called the contributor's "contributor version".
A contributor's "essential patent claims" are all patent claims
owned or controlled by the contributor, whether already acquired or
hereafter acquired, that would be infringed by some manner, permitted
by this License, of making, using, or selling its contributor version,
but do not include claims that would be infringed only as a
consequence of further modification of the contributor version. For
purposes of this definition, "control" includes the right to grant
patent sublicenses in a manner consistent with the requirements of
this License.
Each contributor grants you a non-exclusive, worldwide, royalty-free
patent license under the contributor's essential patent claims, to
make, use, sell, offer for sale, import and otherwise run, modify and
propagate the contents of its contributor version.
In the following three paragraphs, a "patent license" is any express
agreement or commitment, however denominated, not to enforce a patent
(such as an express permission to practice a patent or covenant not to
sue for patent infringement). To "grant" such a patent license to a
party means to make such an agreement or commitment not to enforce a
patent against the party.
If you convey a covered work, knowingly relying on a patent license,
and the Corresponding Source of the work is not available for anyone
to copy, free of charge and under the terms of this License, through a
publicly available network server or other readily accessible means,
then you must either (1) cause the Corresponding Source to be so
available, or (2) arrange to deprive yourself of the benefit of the
patent license for this particular work, or (3) arrange, in a manner
consistent with the requirements of this License, to extend the patent
license to downstream recipients. "Knowingly relying" means you have
actual knowledge that, but for the patent license, your conveying the
covered work in a country, or your recipient's use of the covered work
in a country, would infringe one or more identifiable patents in that
country that you have reason to believe are valid.
If, pursuant to or in connection with a single transaction or
arrangement, you convey, or propagate by procuring conveyance of, a
covered work, and grant a patent license to some of the parties
receiving the covered work authorizing them to use, propagate, modify
or convey a specific copy of the covered work, then the patent license
you grant is automatically extended to all recipients of the covered
work and works based on it.
A patent license is "discriminatory" if it does not include within
the scope of its coverage, prohibits the exercise of, or is
conditioned on the non-exercise of one or more of the rights that are
specifically granted under this License. You may not convey a covered
work if you are a party to an arrangement with a third party that is
in the business of distributing software, under which you make payment
to the third party based on the extent of your activity of conveying
the work, and under which the third party grants, to any of the
parties who would receive the covered work from you, a discriminatory
patent license (a) in connection with copies of the covered work
conveyed by you (or copies made from those copies), or (b) primarily
for and in connection with specific products or compilations that
contain the covered work, unless you entered into that arrangement,
or that patent license was granted, prior to 28 March 2007.
Nothing in this License shall be construed as excluding or limiting
any implied license or other defenses to infringement that may
otherwise be available to you under applicable patent law.
12. No Surrender of Others' Freedom.
If conditions are imposed on you (whether by court order, agreement or
otherwise) that contradict the conditions of this License, they do not
excuse you from the conditions of this License. If you cannot convey a
covered work so as to satisfy simultaneously your obligations under this
License and any other pertinent obligations, then as a consequence you may
not convey it at all. For example, if you agree to terms that obligate you
to collect a royalty for further conveying from those to whom you convey
the Program, the only way you could satisfy both those terms and this
License would be to refrain entirely from conveying the Program.
13. Remote Network Interaction; Use with the GNU General Public License.
Notwithstanding any other provision of this License, if you modify the
Program, your modified version must prominently offer all users
interacting with it remotely through a computer network (if your version
supports such interaction) an opportunity to receive the Corresponding
Source of your version by providing access to the Corresponding Source
from a network server at no charge, through some standard or customary
means of facilitating copying of software. This Corresponding Source
shall include the Corresponding Source for any work covered by version 3
of the GNU General Public License that is incorporated pursuant to the
following paragraph.
Notwithstanding any other provision of this License, you have
permission to link or combine any covered work with a work licensed
under version 3 of the GNU General Public License into a single
combined work, and to convey the resulting work. The terms of this
License will continue to apply to the part which is the covered work,
but the work with which it is combined will remain governed by version
3 of the GNU General Public License.
14. Revised Versions of this License.
The Free Software Foundation may publish revised and/or new versions of
the GNU Affero General Public License from time to time. Such new versions
will be similar in spirit to the present version, but may differ in detail to
address new problems or concerns.
Each version is given a distinguishing version number. If the
Program specifies that a certain numbered version of the GNU Affero General
Public License "or any later version" applies to it, you have the
option of following the terms and conditions either of that numbered
version or of any later version published by the Free Software
Foundation. If the Program does not specify a version number of the
GNU Affero General Public License, you may choose any version ever published
by the Free Software Foundation.
If the Program specifies that a proxy can decide which future
versions of the GNU Affero General Public License can be used, that proxy's
public statement of acceptance of a version permanently authorizes you
to choose that version for the Program.
Later license versions may give you additional or different
permissions. However, no additional obligations are imposed on any
author or copyright holder as a result of your choosing to follow a
later version.
15. Disclaimer of Warranty.
THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY
APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT
HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY
OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO,
THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM
IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF
ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
16. Limitation of Liability.
IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING
WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS
THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY
GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE
USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF
DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD
PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS),
EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF
SUCH DAMAGES.
17. Interpretation of Sections 15 and 16.
If the disclaimer of warranty and limitation of liability provided
above cannot be given local legal effect according to their terms,
reviewing courts shall apply local law that most closely approximates
an absolute waiver of all civil liability in connection with the
Program, unless a warranty or assumption of liability accompanies a
copy of the Program in return for a fee.
END OF TERMS AND CONDITIONS
How to Apply These Terms to Your New Programs
If you develop a new program, and you want it to be of the greatest
possible use to the public, the best way to achieve this is to make it
free software which everyone can redistribute and change under these terms.
To do so, attach the following notices to the program. It is safest
to attach them to the start of each source file to most effectively
state the exclusion of warranty; and each file should have at least
the "copyright" line and a pointer to where the full notice is found.
<one line to give the program's name and a brief idea of what it does.>
Copyright (C) <year> <name of author>
This program is free software: you can redistribute it and/or modify
it under the terms of the GNU Affero General Public License as published by
the Free Software Foundation, either version 3 of the License, or
(at your option) any later version.
This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
GNU Affero General Public License for more details.
You should have received a copy of the GNU Affero General Public License
along with this program. If not, see <https://www.gnu.org/licenses/>.
Also add information on how to contact you by electronic and paper mail.
If your software can interact with users remotely through a computer
network, you should also make sure that it provides a way for users to
get its source. For example, if your program is a web application, its
interface could display a "Source" link that leads users to an archive
of the code. There are many ways you could offer source, and different
solutions will be better for different programs; see section 13 for the
specific requirements.
You should also get your employer (if you work as a programmer) or school,
if any, to sign a "copyright disclaimer" for the program, if necessary.
For more information on this, and how to apply and follow the GNU AGPL, see
<https://www.gnu.org/licenses/>.


@@ -0,0 +1,156 @@
package consensus
import "source.quilibrium.com/quilibrium/monorepo/consensus/models"
// A committee provides a subset of the protocol.State, which is restricted to
// exactly those nodes that participate in the current HotStuff instance: the
// state of all legitimate HotStuff participants for the specified rank.
// Legitimate HotStuff participants have NON-ZERO WEIGHT.
//
// For the purposes of validating votes, timeouts, quorum certificates, and
// timeout certificates we consider a committee which is static over the course
// of a rank. Although committee members may be ejected, or have their weight
// change during a rank, we ignore these changes. For these purposes we use
// the Replicas and *ByRank methods.
//
// When validating proposals, we take into account changes to the committee
// during the course of a rank. In particular, if a node is ejected, we will
// immediately reject all future proposals from that node. For these purposes we
// use the DynamicCommittee and *ByState methods.
// Replicas defines the consensus committee for the purposes of validating
// votes, timeouts, quorum certificates, and timeout certificates. Any consensus
// committee member who was authorized to contribute to consensus AT THE
// BEGINNING of the rank may produce valid votes and timeouts for the entire
// rank, even if they are later ejected. So for validating votes/timeouts we
// use *ByRank methods.
//
// Since the voter committee is considered static over a rank:
// - we can query identities by rank
// - we don't need the full state ancestry prior to validating messages
type Replicas interface {
// LeaderForRank returns the identity of the leader for a given rank.
// CAUTION: per liveness requirement of HotStuff, the leader must be
// fork-independent. Therefore, a node retains its proposer rank
// slots even if it is slashed. Its proposal is simply considered
// invalid, as it is not from a legitimate participant.
// Returns the following expected errors for invalid inputs:
// - model.ErrRankUnknown if the given rank is not known
LeaderForRank(rank uint64) (models.Identity, error)
// QuorumThresholdForRank returns the minimum total weight for a supermajority
// at the given rank. This weight threshold is computed using the total weight
// of the initial committee and is static over the course of a rank.
// Returns the following expected errors for invalid inputs:
// - model.ErrRankUnknown if no rank containing the given rank is
// known
QuorumThresholdForRank(rank uint64) (uint64, error)
// TimeoutThresholdForRank returns the minimum total weight of observed
// timeout states required to safely timeout for the given rank. This weight
// threshold is computed using the total weight of the initial committee and
// is static over the course of a rank.
// Returns the following expected errors for invalid inputs:
// - model.ErrRankUnknown if the given rank is not known
TimeoutThresholdForRank(rank uint64) (uint64, error)
// Self returns our own node identifier.
// TODO: ultimately, the own identity of the node is necessary for signing.
// Ideally, we would move the method for checking whether an Identifier
// refers to this node to the signer. This would require some
// refactoring of EventHandler (postponed to later)
Self() models.Identity
// IdentitiesByRank returns a list of the legitimate HotStuff participants
// for the given rank.
// The returned list of HotStuff participants:
// - contains nodes that are allowed to submit votes or timeouts within the
// given rank (un-ejected, non-zero weight at the beginning of the rank)
// - is ordered in the canonical order
// - contains no duplicates.
//
// CAUTION: DO NOT use this method for validating state proposals.
//
// Returns the following expected errors for invalid inputs:
// - model.ErrRankUnknown if the given rank is not known
//
IdentitiesByRank(
rank uint64,
) ([]models.WeightedIdentity, error)
// IdentityByRank returns the full Identity for the specified HotStuff
// participant. The node must be a legitimate HotStuff participant with
// NON-ZERO WEIGHT at the specified rank.
//
// ERROR conditions:
// - model.InvalidSignerError if participantID does NOT correspond to an
// authorized HotStuff participant at the specified rank.
//
// Returns the following expected errors for invalid inputs:
// - model.ErrRankUnknown if the given rank is not known
//
IdentityByRank(
rank uint64,
participantID models.Identity,
) (models.WeightedIdentity, error)
}
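As an illustration of the rank-static thresholds described above: assuming the usual HotStuff weight arithmetic (a quorum requires strictly more than 2/3 of the committee's total weight, a safe timeout strictly more than 1/3), the values a `QuorumThresholdForRank`/`TimeoutThresholdForRank` implementation might return can be sketched as follows. The helper names and formulas are assumptions for this sketch, not the repository's actual implementation:

```go
package main

import "fmt"

// quorumThreshold returns the smallest weight strictly greater than 2/3 of
// totalWeight (a supermajority). Hypothetical helper for illustration only.
func quorumThreshold(totalWeight uint64) uint64 {
	return totalWeight*2/3 + 1
}

// timeoutThreshold returns the smallest weight strictly greater than 1/3 of
// totalWeight (a superminority), which guarantees at least one honest
// participant contributed under the standard < 1/3 Byzantine assumption.
func timeoutThreshold(totalWeight uint64) uint64 {
	return totalWeight/3 + 1
}

func main() {
	fmt.Println(quorumThreshold(100))  // 67
	fmt.Println(timeoutThreshold(100)) // 34
}
```

Because both values depend only on the committee weight fixed at the beginning of the rank, they stay constant even if members are later ejected, which is exactly why vote/timeout validation can use the *ByRank methods.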
// DynamicCommittee extends Replicas to provide the consensus committee for the
// purposes of validating proposals. The proposer committee reflects
// state-to-state changes in the identity table to support immediately rejecting
// proposals from nodes after they are ejected. For validating proposals, we use
// *ByState methods.
//
// Since the proposer committee can change at any state:
// - we query by state ID
// - we must have incorporated the full state ancestry prior to validating
// messages
type DynamicCommittee interface {
Replicas
// IdentitiesByState returns a list of the legitimate HotStuff participants
// for the given state. The returned list of HotStuff participants:
// - contains nodes that are allowed to submit proposals, votes, and
// timeouts (un-ejected, non-zero weight at current state)
// - is ordered in the canonical order
// - contains no duplicates.
//
// ERROR conditions:
// - state.ErrUnknownSnapshotReference if the stateID is for an unknown state
IdentitiesByState(stateID models.Identity) ([]models.WeightedIdentity, error)
// IdentityByState returns the full Identity for the specified HotStuff
// participant. The node must be a legitimate HotStuff participant with
// NON-ZERO WEIGHT at the specified state.
// ERROR conditions:
// - model.InvalidSignerError if participantID does NOT correspond to an
// authorized HotStuff participant at the specified state.
// - state.ErrUnknownSnapshotReference if the stateID is for an unknown state
IdentityByState(
stateID models.Identity,
participantID models.Identity,
) (models.WeightedIdentity, error)
}
// StateSignerDecoder defines how to convert the ParentSignerIndices field
// within a particular state header to the identifiers of the nodes which signed
// the state.
type StateSignerDecoder[StateT models.Unique] interface {
// DecodeSignerIDs decodes the signer indices from the given state header into
// full node IDs.
// Note: A state header contains a quorum certificate for its parent, which
// proves that the consensus committee has reached agreement on the validity of
// the parent state. Consequently, the returned IdentifierList contains the
// consensus participants that signed the parent state.
// Expected Error returns during normal operations:
// - consensus.InvalidSignerIndicesError if signer indices included in the
// header do not encode a valid subset of the consensus committee
DecodeSignerIDs(
state *models.State[StateT],
) ([]models.WeightedIdentity, error)
}
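The signer-index decoding that StateSignerDecoder describes can be pictured as a bitvector lookup against the canonically ordered committee. The following is a hedged sketch; the bitvector layout, helper name, and error message are assumptions for illustration, not the repository's actual encoding:

```go
package main

import (
	"errors"
	"fmt"
)

// decodeSignerIndices interprets indices as a big-endian bitvector over the
// canonically ordered committee: bit i set means committee[i] signed the
// parent state. Hypothetical stand-in for what DecodeSignerIDs might do.
func decodeSignerIndices(indices []byte, committee []string) ([]string, error) {
	if len(indices)*8 < len(committee) {
		return nil, errors.New("signer indices shorter than committee")
	}
	var signers []string
	for i := range committee {
		if indices[i/8]&(1<<(7-uint(i%8))) != 0 {
			signers = append(signers, committee[i])
		}
	}
	return signers, nil
}

func main() {
	committee := []string{"alice", "bob", "carol", "dave"}
	// 0xA0 = 0b1010_0000: bits 0 and 2 set -> alice and carol signed.
	signers, _ := decodeSignerIndices([]byte{0xA0}, committee)
	fmt.Println(signers) // [alice carol]
}
```

A compact index encoding like this is why the interface can return full `WeightedIdentity` values only after mapping bits back through the rank's canonical committee ordering.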


@@ -0,0 +1,453 @@
package consensus
import (
"time"
"source.quilibrium.com/quilibrium/monorepo/consensus/models"
)
// ProposalViolationConsumer consumes outbound notifications about
// HotStuff-protocol violations. Such notifications are produced by the active
// consensus participants and consensus follower.
//
// Implementations must:
// - be concurrency safe
// - be non-blocking
// - handle repetition of the same events (with some processing overhead).
type ProposalViolationConsumer[
StateT models.Unique,
VoteT models.Unique,
] interface {
// OnInvalidStateDetected notifications are produced by components that have
// detected that a state proposal is invalid and need to report it. Most of
// the time such a state can be detected by calling Validator.ValidateProposal.
// Prerequisites:
// Implementation must be concurrency safe; Non-blocking;
// and must handle repetition of the same events (with some processing
// overhead).
OnInvalidStateDetected(err *models.InvalidProposalError[StateT, VoteT])
// OnDoubleProposeDetected notifications are produced by the Finalization
// Logic whenever a double state proposal (equivocation) was detected.
// Equivocation occurs when the same leader proposes two different states for
// the same rank.
// Prerequisites:
// Implementation must be concurrency safe; Non-blocking;
// and must handle repetition of the same events (with some processing
// overhead).
OnDoubleProposeDetected(*models.State[StateT], *models.State[StateT])
}
// VoteAggregationViolationConsumer consumes outbound notifications about
// HotStuff-protocol violations, specifically invalid votes detected during
// processing. Such notifications are produced by the Vote Aggregation logic.
//
// Implementations must:
// - be concurrency safe
// - be non-blocking
// - handle repetition of the same events (with some processing overhead).
type VoteAggregationViolationConsumer[
StateT models.Unique,
VoteT models.Unique,
] interface {
// OnDoubleVotingDetected notifications are produced by the Vote Aggregation
// logic whenever double voting (the same voter voting for different states at
// the same rank) was detected.
// Prerequisites:
// Implementation must be concurrency safe; Non-blocking; and must handle
// repetition of the same events (with some processing overhead).
OnDoubleVotingDetected(*VoteT, *VoteT)
// OnInvalidVoteDetected notifications are produced by the Vote Aggregation
// logic whenever an invalid vote was detected.
// Prerequisites:
// Implementation must be concurrency safe; Non-blocking; and must handle
// repetition of the same events (with some processing overhead).
OnInvalidVoteDetected(err models.InvalidVoteError[VoteT])
// OnVoteForInvalidStateDetected notifications are produced by the Vote
// Aggregation logic whenever a vote for an invalid proposal was detected.
// Prerequisites:
// Implementation must be concurrency safe; Non-blocking; and must handle
// repetition of the same events (with some processing overhead).
OnVoteForInvalidStateDetected(
vote *VoteT,
invalidProposal *models.SignedProposal[StateT, VoteT],
)
}
// TimeoutAggregationViolationConsumer consumes outbound notifications about
// Active Pacemaker violations, specifically invalid timeouts detected during
// processing. Such notifications are produced by the Timeout Aggregation logic.
//
// Implementations must:
// - be concurrency safe
// - be non-blocking
// - handle repetition of the same events (with some processing overhead).
type TimeoutAggregationViolationConsumer[VoteT models.Unique] interface {
// OnDoubleTimeoutDetected notifications are produced by the Timeout
// Aggregation logic whenever a double timeout (same replica producing two
// different timeouts at the same rank) was detected.
// Prerequisites:
// Implementation must be concurrency safe; Non-blocking; and must handle
// repetition of the same events (with some processing overhead).
OnDoubleTimeoutDetected(
*models.TimeoutState[VoteT],
*models.TimeoutState[VoteT],
)
// OnInvalidTimeoutDetected notifications are produced by the Timeout
// Aggregation logic whenever an invalid timeout was detected.
// Prerequisites:
// Implementation must be concurrency safe; Non-blocking; and must handle
// repetition of the same events (with some processing overhead).
OnInvalidTimeoutDetected(err models.InvalidTimeoutError[VoteT])
}
// FinalizationConsumer consumes outbound notifications produced by the logic
// tracking forks and finalization. Such notifications are produced by the
// active consensus participants, and are potentially relevant to the
// larger node. The notifications are emitted in the order in which the
// finalization algorithm makes the respective steps.
//
// Implementations must:
// - be concurrency safe
// - be non-blocking
// - handle repetition of the same events (with some processing overhead).
type FinalizationConsumer[StateT models.Unique] interface {
// OnStateIncorporated notifications are produced by the Finalization Logic
// whenever a state is incorporated into the consensus state.
// Prerequisites:
// Implementation must be concurrency safe; Non-blocking; and must handle
// repetition of the same events (with some processing overhead).
OnStateIncorporated(*models.State[StateT])
// OnFinalizedState notifications are produced by the Finalization Logic
// whenever a state has been finalized. They are emitted in the order the
// states are finalized.
// Prerequisites:
// Implementation must be concurrency safe; Non-blocking; and must handle
// repetition of the same events (with some processing overhead).
OnFinalizedState(*models.State[StateT])
}
// ParticipantConsumer consumes outbound notifications produced by consensus
// participants actively proposing states, voting, collecting & aggregating
// votes to QCs, and participating in the pacemaker (sending timeouts,
// collecting & aggregating timeouts to TCs).
// Implementations must:
// - be concurrency safe
// - be non-blocking
// - handle repetition of the same events (with some processing overhead).
type ParticipantConsumer[
StateT models.Unique,
VoteT models.Unique,
] interface {
// OnEventProcessed notifications are produced by the EventHandler when it is
// done processing and hands control back to the EventLoop to wait for the
// next event.
// Prerequisites:
// Implementation must be concurrency safe; Non-blocking; and must handle
// repetition of the same events (with some processing overhead).
OnEventProcessed()
// OnStart notifications are produced by the EventHandler when it starts
// state recovery and prepares for handling incoming events from the EventLoop.
// Prerequisites:
// Implementation must be concurrency safe; Non-blocking; and must handle
// repetition of the same events (with some processing overhead).
OnStart(currentRank uint64)
// OnReceiveProposal notifications are produced by the EventHandler when it
// starts processing a state.
// Prerequisites:
// Implementation must be concurrency safe; Non-blocking; and must handle
// repetition of the same events (with some processing overhead).
OnReceiveProposal(
currentRank uint64,
proposal *models.SignedProposal[StateT, VoteT],
)
// OnReceiveQuorumCertificate notifications are produced by the EventHandler
// when it starts processing a QuorumCertificate [QC] constructed by the
// node's internal vote aggregator.
// Prerequisites:
// Implementation must be concurrency safe; Non-blocking; and must handle
// repetition of the same events (with some processing overhead).
OnReceiveQuorumCertificate(currentRank uint64, qc models.QuorumCertificate)
// OnReceiveTimeoutCertificate notifications are produced by the EventHandler
// when it starts processing a TimeoutCertificate [TC] constructed by the
// node's internal timeout aggregator.
// Prerequisites:
// Implementation must be concurrency safe; Non-blocking; and must handle
// repetition of the same events (with some processing overhead).
OnReceiveTimeoutCertificate(currentRank uint64, tc models.TimeoutCertificate)
// OnPartialTimeoutCertificate notifications are produced by the EventHandler
// when it starts processing a partial TC constructed by the local timeout
// aggregator.
// Prerequisites:
// Implementation must be concurrency safe; Non-blocking; and must handle
// repetition of the same events (with some processing overhead).
OnPartialTimeoutCertificate(
currentRank uint64,
partialTimeoutCertificate *PartialTimeoutCertificateCreated,
)
// OnLocalTimeout notifications are produced by the EventHandler when it
// reacts to the expiry of the round duration timer. Such a notification
// that the Pacemaker's timeout was processed by the system.
// Prerequisites:
// Implementation must be concurrency safe; Non-blocking; and must handle
// repetition of the same events (with some processing overhead).
OnLocalTimeout(currentRank uint64)
// OnRankChange notifications are produced by Pacemaker when it transitions to
// a new rank based on processing a QC or TC. The arguments specify the
// oldRank (first argument), and the newRank to which the Pacemaker
// transitioned (second argument).
// Prerequisites:
// Implementation must be concurrency safe; Non-blocking; and must handle
// repetition of the same events (with some processing overhead).
OnRankChange(oldRank, newRank uint64)
// OnQuorumCertificateTriggeredRankChange notifications are produced by
// Pacemaker when it moves to a new rank based on processing a QC. The
// arguments specify the qc (first argument), which triggered the rank change,
// and the newRank to which the Pacemaker transitioned (second argument).
// Prerequisites:
// Implementation must be concurrency safe; Non-blocking;
// and must handle repetition of the same events (with some processing
// overhead).
OnQuorumCertificateTriggeredRankChange(
oldRank uint64,
newRank uint64,
qc models.QuorumCertificate,
)
// OnTimeoutCertificateTriggeredRankChange notifications are produced by
// Pacemaker when it moves to a new rank based on processing a TC. The
// arguments specify the tc (first argument), which triggered the rank change,
// and the newRank to which the Pacemaker transitioned (second argument).
// Prerequisites:
// Implementation must be concurrency safe; Non-blocking; and must handle
// repetition of the same events (with some processing overhead).
OnTimeoutCertificateTriggeredRankChange(
oldRank uint64,
newRank uint64,
tc models.TimeoutCertificate,
)
// OnStartingTimeout notifications are produced by Pacemaker. Such a
// notification indicates that the Pacemaker is now waiting for the system to
// (receive and) process states or votes. The specific timeout type is
// contained in the TimerInfo.
// Prerequisites:
// Implementation must be concurrency safe; Non-blocking; and must handle
// repetition of the same events (with some processing overhead).
OnStartingTimeout(startTime, endTime time.Time)
// OnCurrentRankDetails notifications are produced by the EventHandler during
// the course of a rank with auxiliary information. These notifications are
// generally not produced for all ranks (for example skipped ranks). These
// notifications are guaranteed to be produced for all ranks we enter after
// fully processing a message.
// Example 1:
// - We are in rank 8. We process a QC with rank 10, causing us to enter
// rank 11.
// - Then this notification will be produced for rank 11.
// Example 2:
// - We are in rank 8. We process a proposal with rank 10, which contains a
// TC for rank 9 and TC.NewestQC for rank 8.
// - The QC would allow us to enter rank 9 and the TC would allow us to
// enter rank 10, so after fully processing the message we are in rank 10.
// - Then this notification will be produced for rank 10, but not rank 9
// Prerequisites:
// Implementation must be concurrency safe; Non-blocking; and must handle
// repetition of the same events (with some processing overhead).
OnCurrentRankDetails(
currentRank, finalizedRank uint64,
currentLeader models.Identity,
)
}
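The two OnCurrentRankDetails examples above follow from one rule: a QC for rank r entitles the Pacemaker to enter rank r+1, and a TC for rank r likewise entitles it to enter r+1; after fully processing a message, the node lands in the highest rank any certificate justifies. A minimal sketch, with certificate types simplified to plain rank numbers (an assumption of this illustration):

```go
package main

import "fmt"

// advanceRank returns the rank a pacemaker would end in after processing the
// given certificates, starting from currentRank. qcRanks and tcRanks are the
// ranks certified by QCs and TCs respectively; each certificate for rank r
// justifies entering rank r+1.
func advanceRank(currentRank uint64, qcRanks, tcRanks []uint64) uint64 {
	newRank := currentRank
	for _, r := range append(append([]uint64{}, qcRanks...), tcRanks...) {
		if r+1 > newRank {
			newRank = r + 1
		}
	}
	return newRank
}

func main() {
	// Example 1: in rank 8, process a QC for rank 10 -> enter rank 11.
	fmt.Println(advanceRank(8, []uint64{10}, nil)) // 11
	// Example 2: in rank 8, a proposal carries a TC for rank 9 whose
	// NewestQC certifies rank 8 -> the TC wins and we enter rank 10.
	fmt.Println(advanceRank(8, []uint64{8}, []uint64{9})) // 10
}
```

This is also why the notification fires only for rank 10 in Example 2: rank 9 is passed through inside a single message-processing step and is never "entered" in the sense the consumer observes.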
// VoteCollectorConsumer consumes outbound notifications produced by HotStuff's
// vote aggregation component. These events are primarily intended for the
// HotStuff-internal state machine (EventHandler), but might also be relevant to
// the larger node in which HotStuff is running.
//
// Implementations must:
// - be concurrency safe
// - be non-blocking
// - handle repetition of the same events (with some processing overhead).
type VoteCollectorConsumer[VoteT models.Unique] interface {
// OnQuorumCertificateConstructedFromVotes notifications are produced by the
// VoteAggregator component, whenever it constructs a QC from votes.
// Prerequisites:
// Implementation must be concurrency safe; Non-blocking; and must handle
// repetition of the same events (with some processing overhead).
OnQuorumCertificateConstructedFromVotes(models.QuorumCertificate)
// OnVoteProcessed notifications are produced by the Vote Aggregation logic,
// each time we successfully ingest a valid vote.
// Prerequisites:
// Implementation must be concurrency safe; Non-blocking; and must handle
// repetition of the same events (with some processing overhead).
OnVoteProcessed(vote *VoteT)
}
// TimeoutCollectorConsumer consumes outbound notifications produced by
// HotStuff's timeout aggregation component. These events are primarily intended
// for the HotStuff-internal state machine (EventHandler), but might also be
// relevant to the larger node in which HotStuff is running.
//
// Caution: the events are not strictly ordered by increasing ranks! The
// notifications are emitted by concurrent processing logic. Over larger time
// scales, the emitted events are for statistically increasing ranks. However,
// on short time scales there are _no_ monotonicity guarantees w.r.t. the
// events' ranks.
//
// Implementations must:
// - be concurrency safe
// - be non-blocking
// - handle repetition of the same events (with some processing overhead).
type TimeoutCollectorConsumer[VoteT models.Unique] interface {
// OnTimeoutCertificateConstructedFromTimeouts notifications are produced by
// the TimeoutProcessor component, whenever it constructs a TC based on
// TimeoutStates from a supermajority of consensus participants.
// Prerequisites:
// Implementation must be concurrency safe; Non-blocking; and must handle
// repetition of the same events (with some processing overhead).
OnTimeoutCertificateConstructedFromTimeouts(
certificate models.TimeoutCertificate,
)
// OnPartialTimeoutCertificateCreated notifications are produced by the
// TimeoutProcessor component, whenever it collected TimeoutStates from a
// superminority of consensus participants for a specific rank. Along with the
// rank, it reports the newest QC and TC (for the previous rank) discovered in
// the process of timeout collection. Per convention, the newest QC is never
// nil,
// while the TC for the previous rank might be nil.
// Prerequisites:
// Implementation must be concurrency safe; Non-blocking; and must handle
// repetition of the same events (with some processing overhead).
OnPartialTimeoutCertificateCreated(
rank uint64,
newestQC models.QuorumCertificate,
lastRankTC models.TimeoutCertificate,
)
// OnNewQuorumCertificateDiscovered notifications are produced by the
// TimeoutCollector component, whenever it discovers a new QC included in a
// timeout state.
// Prerequisites:
// Implementation must be concurrency safe; Non-blocking; and must handle
// repetition of the same events (with some processing overhead).
OnNewQuorumCertificateDiscovered(certificate models.QuorumCertificate)
// OnNewTimeoutCertificateDiscovered notifications are produced by the
// TimeoutCollector component, whenever it discovers a new TC included in a
// timeout state.
// Prerequisites:
// Implementation must be concurrency safe; Non-blocking; and must handle
// repetition of the same events (with some processing overhead).
OnNewTimeoutCertificateDiscovered(certificate models.TimeoutCertificate)
// OnTimeoutProcessed notifications are produced by the Timeout Aggregation
// logic, each time we successfully ingest a valid timeout.
// Prerequisites:
// Implementation must be concurrency safe; Non-blocking; and must handle
// repetition of the same events (with some processing overhead).
OnTimeoutProcessed(timeout *models.TimeoutState[VoteT])
}
// CommunicatorConsumer consumes outbound notifications produced by HotStuff and
// its components. Notifications allow the HotStuff core algorithm to
// communicate with the other actors of the consensus process.
// Implementations must:
// - be concurrency safe
// - be non-blocking
// - handle repetition of the same events (with some processing overhead).
type CommunicatorConsumer[StateT models.Unique, VoteT models.Unique] interface {
// OnOwnVote notifies about intent to send a vote for the given parameters to
// the specified recipient.
// Prerequisites:
// Implementation must be concurrency safe; Non-blocking; and must handle
// repetition of the same events (with some processing overhead).
OnOwnVote(vote *VoteT, recipientID models.Identity)
// OnOwnTimeout notifies about intent to broadcast the given timeout
// state to all actors of the consensus process.
// Prerequisites:
// Implementation must be concurrency safe; Non-blocking; and must handle
// repetition of the same events (with some processing overhead).
OnOwnTimeout(timeout *models.TimeoutState[VoteT])
// OnOwnProposal notifies about intent to broadcast the given state proposal
// to all actors of the consensus process. delay is to hold the proposal
// before broadcasting it. Useful to control the state production rate.
// Prerequisites:
// Implementation must be concurrency safe; Non-blocking;
// and must handle repetition of the same events (with some processing
// overhead).
OnOwnProposal(
proposal *models.SignedProposal[StateT, VoteT],
targetPublicationTime time.Time,
)
}
// FollowerConsumer consumes outbound notifications produced by consensus
// followers. It is a subset of the notifications produced by consensus
// participants.
// Implementations must:
// - be concurrency safe
// - be non-blocking
// - handle repetition of the same events (with some processing overhead).
type FollowerConsumer[StateT models.Unique, VoteT models.Unique] interface {
ProposalViolationConsumer[StateT, VoteT]
FinalizationConsumer[StateT]
}
// Consumer consumes outbound notifications produced by consensus participants.
// Notifications are consensus-internal state changes which are potentially
// relevant to the larger node in which HotStuff is running. The notifications
// are emitted in the order in which the HotStuff algorithm makes the respective
// steps.
//
// Implementations must:
// - be concurrency safe
// - be non-blocking
// - handle repetition of the same events (with some processing overhead).
type Consumer[StateT models.Unique, VoteT models.Unique] interface {
FollowerConsumer[StateT, VoteT]
CommunicatorConsumer[StateT, VoteT]
ParticipantConsumer[StateT, VoteT]
}
// VoteAggregationConsumer consumes outbound notifications produced by Vote
// Aggregation logic. It is a subset of the notifications produced by consensus
// participants.
// Implementations must:
// - be concurrency safe
// - be non-blocking
// - handle repetition of the same events (with some processing overhead).
type VoteAggregationConsumer[
StateT models.Unique,
VoteT models.Unique,
] interface {
VoteAggregationViolationConsumer[StateT, VoteT]
VoteCollectorConsumer[VoteT]
}
// TimeoutAggregationConsumer consumes outbound notifications produced by
// Timeout Aggregation logic. It is a subset of the notifications produced by consensus
// participants.
// Implementations must:
// - be concurrency safe
// - be non-blocking
// - handle repetition of the same events (with some processing overhead).
type TimeoutAggregationConsumer[VoteT models.Unique] interface {
TimeoutAggregationViolationConsumer[VoteT]
TimeoutCollectorConsumer[VoteT]
}


@ -0,0 +1,84 @@
package consensus
import (
"context"
"time"
"source.quilibrium.com/quilibrium/monorepo/consensus/models"
"source.quilibrium.com/quilibrium/monorepo/lifecycle"
)
// PartialTimeoutCertificateCreated represents a notification emitted by the
// TimeoutProcessor component, whenever it has collected TimeoutStates from a
// superminority of consensus participants for a specific rank. Along with the
// rank, it reports the newest QuorumCertificate and TimeoutCertificate (for
// previous rank) discovered during timeout collection. Per convention, the
// newest QuorumCertificate is never nil, while the TimeoutCertificate for the
// previous rank might be nil.
type PartialTimeoutCertificateCreated struct {
Rank uint64
NewestQuorumCertificate models.QuorumCertificate
PriorRankTimeoutCertificate models.TimeoutCertificate
}
// EventHandler runs a state machine to process proposals, QuorumCertificate and
// local timeouts. Not concurrency safe.
type EventHandler[StateT models.Unique, VoteT models.Unique] interface {
// OnReceiveQuorumCertificate processes a valid quorumCertificate constructed
// by internal vote aggregator or discovered in TimeoutState. All inputs
// should be validated before feeding into this function. Assuming trusted
// data. No errors are expected during normal operation.
OnReceiveQuorumCertificate(quorumCertificate models.QuorumCertificate) error
// OnReceiveTimeoutCertificate processes a valid timeoutCertificate
// constructed by internal timeout aggregator, discovered in TimeoutState or
// broadcast over the network. All inputs should be validated before feeding
// into this function. Assuming trusted data. No errors are expected during
// normal operation.
OnReceiveTimeoutCertificate(
timeoutCertificate models.TimeoutCertificate,
) error
// OnReceiveProposal processes a state proposal received from another HotStuff
// consensus participant. All inputs should be validated before feeding into
// this function. Assuming trusted data. No errors are expected during normal
// operation.
OnReceiveProposal(proposal *models.SignedProposal[StateT, VoteT]) error
// OnLocalTimeout handles a local timeout event by creating a
// models.TimeoutState and broadcasting it. No errors are expected during
// normal operation.
OnLocalTimeout() error
// OnPartialTimeoutCertificateCreated handles notifications produced by the
// internal timeout aggregator. If the notification is for the current rank,
// a corresponding models.TimeoutState is broadcast to the consensus
// committee. No errors are expected during normal operation.
OnPartialTimeoutCertificateCreated(
partialTimeoutCertificate *PartialTimeoutCertificateCreated,
) error
// TimeoutChannel returns a channel that sends a signal on timeout.
TimeoutChannel() <-chan time.Time
// Start starts the event handler. No errors are expected during normal
// operation.
// CAUTION: EventHandler is not concurrency safe. The Start method must be
// executed by the same goroutine that also calls the other business logic
// methods, or concurrency safety has to be implemented externally.
Start(ctx context.Context) error
}
// EventLoop performs buffering and processing of incoming proposals and QCs.
type EventLoop[StateT models.Unique, VoteT models.Unique] interface {
lifecycle.Component
TimeoutCollectorConsumer[VoteT]
VoteCollectorConsumer[VoteT]
SubmitProposal(proposal *models.SignedProposal[StateT, VoteT])
}
// FollowerLoop only follows certified states; it does not actively process
// the collection of proposals and QC/TCs.
type FollowerLoop[StateT models.Unique, VoteT models.Unique] interface {
AddCertifiedState(certifiedState *models.CertifiedState[StateT])
}


@ -0,0 +1,23 @@
package consensus
import "source.quilibrium.com/quilibrium/monorepo/consensus/models"
// Finalizer is used by the consensus algorithm to inform other components
// (such as the protocol state) about finalization of states.
//
// Since we have two different protocol states, one for the main consensus and
// the other for the collection cluster consensus, the Finalizer interface
// allows the two protocol states to provide different implementations for
// updating their state when a state has been finalized.
//
// Updating the protocol state should always succeed when the data is
// consistent. However, in case the protocol state is corrupted, an error
// should be returned and the consensus algorithm should halt. The error
// returned from MakeFinal thus allows the protocol state to report exceptions.
type Finalizer interface {
// MakeFinal will declare a state and all of its ancestors as finalized, which
// makes it an immutable part of the time reel. Returning an error indicates
// some fatal condition and will cause the finalization logic to terminate.
MakeFinal(stateID models.Identity) error
}


@ -0,0 +1,106 @@
package consensus
import "source.quilibrium.com/quilibrium/monorepo/consensus/models"
// FinalityProof represents a finality proof for a State. By convention, a
// FinalityProof is immutable. Finality in Jolteon/HotStuff is determined by the
// 2-chain rule:
//
// There exists a _certified_ state C, such that State.Rank + 1 = C.Rank
type FinalityProof[StateT models.Unique] struct {
State *models.State[StateT]
CertifiedChild *models.CertifiedState[StateT]
}
// Forks maintains an in-memory data structure of all states whose rank number
// is greater than or equal to that of the latest finalized state. The latest
// finalized state
// is defined as the finalized state with the largest rank number. When adding
// states, Forks automatically updates its internal state (including finalized
// states). Furthermore, states whose rank number is smaller than the latest
// finalized state are pruned automatically.
//
// PREREQUISITES:
// Forks expects that only states are added that can be connected to its latest
// finalized state (without missing interim ancestors). If this condition is
// violated, Forks will raise an error and ignore the state.
type Forks[StateT models.Unique] interface {
// GetStatesForRank returns all known states for the given rank
GetStatesForRank(rank uint64) []*models.State[StateT]
// GetState returns (*models.State[StateT], true) if the state with the
// specified id was found and (nil, false) otherwise.
GetState(stateID models.Identity) (*models.State[StateT], bool)
// FinalizedRank returns the largest rank number where a finalized state is
// known
FinalizedRank() uint64
// FinalizedState returns the finalized state with the largest rank number
FinalizedState() *models.State[StateT]
// FinalityProof returns the latest finalized state and a certified child from
// the subsequent rank, which proves finality.
// CAUTION: method returns (nil, false), when Forks has not yet finalized any
// states beyond the finalized root state it was initialized with.
FinalityProof() (*FinalityProof[StateT], bool)
// AddValidatedState appends the validated state to the tree of pending
// states and updates the latest finalized state (if applicable). Unless the
// parent is below the pruning threshold (latest finalized rank), we require
// that the parent is already stored in Forks. Calling this method with
// previously processed states leaves the consensus state invariant (though,
// it will potentially cause some duplicate processing).
// Notes:
// - Method `AddCertifiedState(..)` should be used preferably, if a QC
// certifying `state` is already known. This is generally the case for the
// consensus follower.
// - Method `AddValidatedState` is intended for active consensus
// participants, which fully validate states (incl. payload), i.e. QCs are
// processed as part of validated proposals.
//
// Possible error returns:
// - model.MissingStateError if the parent does not exist in the forest (but
// is above the pruned rank). From the perspective of Forks, this error is
// benign (no-op).
// - model.InvalidStateError if the state is invalid (see
// `Forks.EnsureStateIsValidExtension` for details). From the perspective
// of Forks, this error is benign (no-op). However, we assume all states
// are fully verified, i.e. they should satisfy all consistency
// requirements. Hence, this error is likely an indicator of a bug in the
// compliance layer.
// - model.ByzantineThresholdExceededError if conflicting QCs or conflicting
// finalized states have been detected (violating foundational consensus
// guarantees). This indicates that there are 1/3+ Byzantine nodes
// (weighted by seniority) in the network, breaking the safety guarantees
// of HotStuff (or there is a critical bug / data corruption). Forks
// cannot recover from this exception.
// - All other errors are potential symptoms of bugs or state corruption.
AddValidatedState(proposal *models.State[StateT]) error
// AddCertifiedState appends the given certified state to the tree of pending
// states and updates the latest finalized state (if finalization progressed).
// Unless the parent is below the pruning threshold (latest finalized rank),
// we require that the parent is already stored in Forks. Calling this method
// with previously processed states leaves the consensus state invariant
// (though, it will potentially cause some duplicate processing).
//
// Possible error returns:
// - model.MissingStateError if the parent does not exist in the forest (but
// is above the pruned rank). From the perspective of Forks, this error is
// benign (no-op).
// - model.InvalidStateError if the state is invalid (see
// `Forks.EnsureStateIsValidExtension` for details). From the perspective
// of Forks, this error is benign (no-op). However, we assume all states
// are fully verified, i.e. they should satisfy all consistency
// requirements. Hence, this error is likely an indicator of a bug in the
// compliance layer.
// - model.ByzantineThresholdExceededError if conflicting QCs or conflicting
// finalized states have been detected (violating foundational consensus
// guarantees). This indicates that there are 1/3+ Byzantine nodes
// (weighted by seniority) in the network, breaking the safety guarantees
// of HotStuff (or there is a critical bug / data corruption). Forks
// cannot recover from this exception.
// - All other errors are potential symptoms of bugs or state corruption.
AddCertifiedState(certifiedState *models.CertifiedState[StateT]) error
}


@ -0,0 +1,30 @@
package consensus
import (
"context"
"source.quilibrium.com/quilibrium/monorepo/consensus/models"
)
// LeaderProvider handles leader selection. State is provided, if relevant to
// the upstream consensus engine.
type LeaderProvider[
StateT models.Unique,
PeerIDT models.Unique,
CollectedT models.Unique,
] interface {
// GetNextLeaders returns a list of node indices, in priority order. Note that
// if no error is returned, GetNextLeaders is assumed to produce a non-empty
// list. If the returned list is smaller than minimumProvers, the liveness
// check will loop until the list reaches that size.
GetNextLeaders(ctx context.Context, prior *StateT) ([]PeerIDT, error)
// ProveNextState prepares a non-finalized new state from the prior, to be
// proposed and voted upon. The provided context may be canceled and should be
// used to halt long-running prover operations.
ProveNextState(
ctx context.Context,
rank uint64,
filter []byte,
priorState models.Identity,
) (*StateT, error)
}


@ -0,0 +1,29 @@
package consensus
import (
"context"
"source.quilibrium.com/quilibrium/monorepo/consensus/models"
)
// LivenessProvider handles liveness announcements ahead of proving, to
// pre-emptively choose the next prover. In expected leader scenarios, this
// enables a peer to determine if an honest next prover is offline, so that it
// can publish the next state without waiting.
type LivenessProvider[
StateT models.Unique,
PeerIDT models.Unique,
CollectedT models.Unique,
] interface {
// Collect returns the collected mutation operations ahead of liveness
// announcements.
Collect(
ctx context.Context,
frameNumber uint64,
rank uint64,
) (CollectedT, error)
// SendLiveness announces liveness ahead of the next prover determination and
// subsequent proving. Provides the prior state and collected mutation
// operations if relevant.
SendLiveness(ctx context.Context, prior *StateT, collected CollectedT) error
}


@ -0,0 +1,65 @@
package consensus
import (
"context"
"time"
"source.quilibrium.com/quilibrium/monorepo/consensus/models"
)
// Pacemaker defines a standard set of methods for handling pacemaker behaviors
// in the consensus engine.
type Pacemaker interface {
ProposalDurationProvider
// CurrentRank returns the current rank
CurrentRank() uint64
// LatestQuorumCertificate returns the latest quorum certificate seen.
LatestQuorumCertificate() models.QuorumCertificate
// PriorRankTimeoutCertificate returns the prior rank's timeout certificate,
// if it exists.
PriorRankTimeoutCertificate() models.TimeoutCertificate
// ReceiveQuorumCertificate handles an incoming quorum certificate, advancing
// to a new rank if applicable.
ReceiveQuorumCertificate(
quorumCertificate models.QuorumCertificate,
) (*models.NextRank, error)
// ReceiveTimeoutCertificate handles an incoming timeout certificate,
// advancing to a new rank if applicable.
ReceiveTimeoutCertificate(
timeoutCertificate models.TimeoutCertificate,
) (*models.NextRank, error)
// TimeoutCh provides a channel for timing out on the current rank.
TimeoutCh() <-chan time.Time
// Start starts the pacemaker, takes a cancellable context.
Start(ctx context.Context)
}
// ProposalDurationProvider generates the target publication time for state
// proposals.
type ProposalDurationProvider interface {
// TargetPublicationTime is intended to be called by the EventHandler,
// whenever it wants to publish a new proposal. The event handler inputs
// - proposalRank: the rank it is proposing for,
// - timeRankEntered: the time when the EventHandler entered this rank
// - parentStateId: the ID of the parent state, which the EventHandler is
// building on
// TargetPublicationTime returns the timestamp when the new proposal should
// be broadcast. For a given rank where we are the primary, suppose the
// actual time we are done building our proposal is P:
// - if P < TargetPublicationTime(..), then the EventHandler should wait
// until `TargetPublicationTime` to broadcast the proposal
// - if P >= TargetPublicationTime(..), then the EventHandler should
// immediately broadcast the proposal
//
// Note: Technically, our metrics capture the publication delay relative to
// this function's _latest_ call. Currently, the EventHandler is the only
// caller of this function, and only calls it once.
//
// Concurrency safe.
TargetPublicationTime(
proposalRank uint64,
timeRankEntered time.Time,
parentStateId models.Identity,
) time.Time
}


@ -0,0 +1,25 @@
package consensus
import "source.quilibrium.com/quilibrium/monorepo/consensus/models"
// StateProducer is responsible for producing new state proposals. It is a
// service component to HotStuff's main state machine (implemented in the
// EventHandler). The StateProducer's central purpose is to mediate concurrent
// signing requests to its embedded `hotstuff.SafetyRules` during state
// production. The actual work of producing a state proposal is delegated to the
// embedded `consensus.LeaderProvider`.
type StateProducer[StateT models.Unique, VoteT models.Unique] interface {
// MakeStateProposal builds a new HotStuff state proposal using the given
// rank, the given quorum certificate for its parent and [optionally] a
// timeout certificate for last rank (could be nil).
// Error Returns:
// - model.NoVoteError if it is not safe for us to vote (our proposal
// includes our vote) for this rank. This can happen if we have already
// proposed or timed out this rank.
// - generic error in case of unexpected failure
MakeStateProposal(
rank uint64,
qc models.QuorumCertificate,
lastRankTC models.TimeoutCertificate,
) (*models.SignedProposal[StateT, VoteT], error)
}


@ -0,0 +1,73 @@
package consensus
import "source.quilibrium.com/quilibrium/monorepo/consensus/models"
// SafetyRules enforces all consensus rules that guarantee safety. It produces
// votes for the given states or TimeoutState for the given ranks, only if all
// safety rules are satisfied. In particular, SafetyRules guarantees a
// foundational security theorem for HotStuff, which we utilize also outside of
// consensus (e.g. queuing pending states for execution, verification, sealing
// etc):
//
// THEOREM: For each rank, there can be at most 1 certified state.
//
// Implementations are generally *not* concurrency safe.
type SafetyRules[StateT models.Unique, VoteT models.Unique] interface {
// ProduceVote takes a state proposal and current rank, and decides whether to
// vote for the state. Voting is deterministic, i.e. voting for same proposal
// will always result in the same vote.
// Returns:
// * (vote, nil): On the _first_ state for the current rank that is safe to
// vote for. Subsequently, voter does _not_ vote for any _other_ state with
// the same (or lower) rank. SafetyRules internally caches and persists its
// latest vote. As long as the SafetyRules' internal state remains
// unchanged, ProduceVote will return its cached vote for identical inputs.
// * (nil, model.NoVoteError): If the safety module decides that it is not
// safe to vote for the given state. This is a sentinel error and
// _expected_ during normal operation.
// All other errors are unexpected and potential symptoms of uncovered edge
// cases or corrupted internal state (fatal).
ProduceVote(
proposal *models.SignedProposal[StateT, VoteT],
curRank uint64,
) (*VoteT, error)
// ProduceTimeout takes current rank, highest locally known QC and TC
// (optional, must be nil if and only if QC is for previous rank) and decides
// whether to produce timeout for current rank.
// Returns:
// * (timeout, nil): It is safe to timeout for current rank using newestQC
// and lastRankTC.
// * (nil, model.NoTimeoutError): If replica is not part of the authorized
// consensus committee (anymore) and therefore is not authorized to produce
// a valid timeout state. This sentinel error is _expected_ during normal
// operation, e.g. during the grace-period after Rank switchover or after
// the replica self-ejected.
// All other errors are unexpected and potential symptoms of uncovered edge
// cases or corrupted internal state (fatal).
ProduceTimeout(
curRank uint64,
newestQC models.QuorumCertificate,
lastRankTC models.TimeoutCertificate,
) (*models.TimeoutState[VoteT], error)
// SignOwnProposal takes an unsigned state proposal and produces a vote for
// it. Vote is a cryptographic commitment to the proposal. By adding the vote
// to an unsigned proposal, the caller constructs a signed state proposal.
// This method must be used only by the leader, which must be the proposer
// of the state (otherwise an exception is returned).
// Implementors must guarantee that:
// - vote on the proposal satisfies safety rules
// - maximum one proposal is signed per rank
// Returns:
// * (vote, nil): the passed unsigned proposal is a valid one, and it's safe
// to make a proposal. Subsequently, leader does _not_ produce any _other_
// proposal with the same (or lower) rank.
// * (nil, model.NoVoteError): according to HotStuff's Safety Rules, it is
// not safe to sign the given proposal. This could happen because we have
// already proposed or timed out for the given rank. This is a sentinel
// error and _expected_ during normal operation.
// All other errors are unexpected and potential symptoms of uncovered edge
// cases or corrupted internal state (fatal).
SignOwnProposal(unsignedProposal *models.Proposal[StateT]) (*VoteT, error)
}


@ -0,0 +1,161 @@
package consensus
import (
"source.quilibrium.com/quilibrium/monorepo/consensus/models"
)
// WeightedSignatureAggregator aggregates signatures of the same signature
// scheme and the same message from different signers. The public keys and
// message are agreed upon upfront. It is also recommended to only aggregate
// signatures generated with keys representing equivalent security-bit level.
// Furthermore, a weight [unsigned int64] is assigned to each signer ID. The
// WeightedSignatureAggregator internally tracks the total weight of all
// collected signatures. Implementations must be concurrency safe.
type WeightedSignatureAggregator interface {
// Verify verifies the signature under the stored public keys and message.
// Expected errors during normal operations:
// - model.InvalidSignerError if signerID is invalid (not a consensus
// participant)
// - model.ErrInvalidSignature if signerID is valid but signature is
// cryptographically invalid
Verify(signerID models.Identity, sig []byte) error
// TrustedAdd adds a signature to the internal set of signatures and adds the
// signer's weight to the total collected weight, iff the signature is _not_ a
// duplicate. The total weight of all collected signatures (excluding
// duplicates) is returned regardless of any returned error.
// Expected errors during normal operations:
// - model.InvalidSignerError if signerID is invalid (not a consensus
// participant)
// - model.DuplicatedSignerError if the signer has been already added
TrustedAdd(signerID models.Identity, sig []byte) (
totalWeight uint64,
exception error,
)
// TotalWeight returns the total weight presented by the collected signatures.
TotalWeight() uint64
// Aggregate aggregates the signatures and returns the aggregated signature.
// The function performs a final verification and errors if the aggregated
// signature is invalid. This is required for the function safety since
// `TrustedAdd` allows adding invalid signatures.
// The function errors with:
// - model.InsufficientSignaturesError if no signatures have been added yet
// - model.InvalidSignatureIncludedError if:
// -- some signature(s), included via TrustedAdd, fail to deserialize
// (regardless of the aggregated public key)
// -- or all signatures deserialize correctly but some signature(s),
// included via TrustedAdd, are invalid (while aggregated public key is
// valid)
// - model.InvalidAggregatedKeyError if all signatures deserialize correctly
// but the signer's proving public keys sum up to an invalid key (BLS
// identity public key). Any aggregated signature would fail the
// cryptographic verification under the identity public key and therefore
// such signature is considered invalid. Such scenario can only happen if
// proving public keys of signers were forged to add up to the identity
// public key. Under the assumption that all proving key PoPs are valid,
// this error case can only happen if all signers are malicious and
// colluding. If there is at least one honest signer, there is a
// negligible probability that the aggregated key is identity.
//
// The function is thread-safe.
Aggregate() ([]models.WeightedIdentity, models.AggregatedSignature, error)
}
// TimeoutSignatureAggregator aggregates timeout signatures for one particular
// rank. When instantiating a TimeoutSignatureAggregator, the following
// information is supplied:
// - The rank for which the aggregator collects timeouts.
// - For each replica that is authorized to send a timeout at this particular
// rank: the node ID, public proving keys, and weight
//
// Timeouts for other ranks or from non-authorized replicas are rejected.
// In their TimeoutStates, replicas include a signature over the pair (rank,
// newestQCRank), where `rank` is the rank number the timeout is for and
// `newestQCRank` is the rank of the newest QC known to the replica.
// TimeoutSignatureAggregator collects these signatures and internally tracks
// the total weight of all collected signatures. Note that in general the
// signed messages are different, which makes the aggregation a comparatively
// expensive operation. Upon calling `Aggregate`, the TimeoutSignatureAggregator
// aggregates all valid signatures collected up to this point. The aggregate
// signature is guaranteed to be correct, as only valid signatures are accepted
// as inputs. Implementations must be concurrency safe.
type TimeoutSignatureAggregator interface {
// VerifyAndAdd verifies the signature under the stored public keys and adds
// the signature and the corresponding highest QC to the internal set.
// Internal set and collected weight are modified iff the signature _is_ valid.
// The total weight of all collected signatures (excluding duplicates) is
// returned regardless of any returned error.
// Expected errors during normal operations:
// - model.InvalidSignerError if signerID is invalid (not a consensus
// participant)
// - model.DuplicatedSignerError if the signer has been already added
// - model.ErrInvalidSignature if signerID is valid but signature is
// cryptographically invalid
VerifyAndAdd(
signerID models.Identity,
sig []byte,
newestQCRank uint64,
) (totalWeight uint64, exception error)
// TotalWeight returns the total weight presented by the collected signatures.
TotalWeight() uint64
// Rank returns the rank that this instance is aggregating signatures for.
Rank() uint64
// Aggregate aggregates the signatures and returns them with additional data.
// The aggregated signature will be returned as the SigData of a timeout
// certificate. The caller can be sure that the resulting signature is valid.
// Expected errors during normal operations:
// - model.InsufficientSignaturesError if no signatures have been added yet
Aggregate() (
signersInfo []TimeoutSignerInfo,
aggregatedSig models.AggregatedSignature,
exception error,
)
}
// TimeoutSignerInfo is a helper structure that stores the QC ranks that each
// signer contributed to a TC. Used as result of
// TimeoutSignatureAggregator.Aggregate()
type TimeoutSignerInfo struct {
NewestQCRank uint64
Signer models.Identity
}
// StateSignatureData is an intermediate struct for Packer to pack the
// aggregated signature data into raw bytes or unpack from raw bytes.
type StateSignatureData struct {
Signers []models.WeightedIdentity
Signature []byte
}
// Packer packs aggregated signature data into raw bytes to be used in state
// header.
type Packer interface {
// Pack serializes the provided StateSignatureData into a precursor format of
// a QC. rank is the rank of the state that the aggregated signature is for.
// sig is the aggregated signature data.
// Expected error returns during normal operations:
// * none; all errors are symptoms of inconsistent input data or corrupted
// internal state.
Pack(rank uint64, sig *StateSignatureData) (
signerIndices []byte,
sigData []byte,
err error,
)
// Unpack de-serializes the provided signature data.
// sig is the aggregated signature data
// It returns:
// - (sigData, nil) if successfully unpacked the signature data
// - (nil, model.InvalidFormatError) if failed to unpack the signature data
Unpack(signerIdentities []models.WeightedIdentity, sigData []byte) (
*StateSignatureData,
error,
)
}


@ -0,0 +1,39 @@
package consensus
import (
"source.quilibrium.com/quilibrium/monorepo/consensus/models"
)
// Signer is responsible for creating votes, proposals for a given state.
type Signer[StateT models.Unique, VoteT models.Unique] interface {
// CreateVote creates a vote for the given state. No error returns are
// expected during normal operations (incl. presence of byz. actors).
CreateVote(state *models.State[StateT]) (*VoteT, error)
// CreateTimeout creates a timeout for the given rank. No error returns are
// expected during normal operations (incl. presence of byz. actors).
CreateTimeout(
curRank uint64,
newestQC models.QuorumCertificate,
previousRankTimeoutCert models.TimeoutCertificate,
) (*models.TimeoutState[VoteT], error)
}
type SignatureAggregator interface {
VerifySignatureMultiMessage(
publicKeys [][]byte,
signature []byte,
messages [][]byte,
context []byte,
) bool
VerifySignatureRaw(
publicKey []byte,
signature []byte,
message []byte,
context []byte,
) bool
Aggregate(
publicKeys [][]byte,
signatures [][]byte,
) (models.AggregatedSignature, error)
}

View File

@ -0,0 +1,18 @@
package consensus
import "source.quilibrium.com/quilibrium/monorepo/consensus/models"
// ConsensusStore defines the methods required for internal state that should
// persist between restarts of the consensus engine.
type ConsensusStore[VoteT models.Unique] interface {
ReadOnlyConsensusStore[VoteT]
PutConsensusState(state *models.ConsensusState[VoteT]) error
PutLivenessState(state *models.LivenessState) error
}
// ReadOnlyConsensusStore defines the methods required for reading internal
// state persisted between restarts of the consensus engine.
type ReadOnlyConsensusStore[VoteT models.Unique] interface {
GetConsensusState(filter []byte) (*models.ConsensusState[VoteT], error)
GetLivenessState(filter []byte) (*models.LivenessState, error)
}

View File

@ -0,0 +1,29 @@
package consensus
import (
"context"
"source.quilibrium.com/quilibrium/monorepo/consensus/models"
)
// SyncProvider handles synchronization management
type SyncProvider[StateT models.Unique] interface {
// Synchronize performs synchronization to set internal state. Note that it is
// assumed that errors are transient and synchronization should be reattempted
// on failure. If some other process for synchronization is used and this
// should be bypassed, send nil on the error channel. The provided context may
// be canceled and should be used to halt long-running sync operations.
Synchronize(
ctx context.Context,
existing *StateT,
) (<-chan *StateT, <-chan error)
// AddState enqueues state information to begin synchronization with a given
// peer. If expectedIdentity is provided, it may be used to determine whether
// the initial frameNumber at which synchronization begins is on the correct
// fork.
AddState(
sourcePeerID []byte,
frameNumber uint64,
expectedIdentity []byte,
)
}

View File

@ -0,0 +1,127 @@
package consensus
import (
"source.quilibrium.com/quilibrium/monorepo/consensus/models"
"source.quilibrium.com/quilibrium/monorepo/lifecycle"
)
// TimeoutAggregator verifies and aggregates timeout states to build timeout
// certificates [TCs]. When enough timeout states are collected, it builds a TC
// and sends it to the EventLoop. TimeoutAggregator also detects protocol
// violations, including invalid timeouts, double timeouts, etc., and notifies a
// HotStuff consumer for slashing.
type TimeoutAggregator[VoteT models.Unique] interface {
lifecycle.Component
// AddTimeout verifies and aggregates a timeout state.
// This method can be called concurrently, timeouts will be queued and
// processed asynchronously.
AddTimeout(timeoutState *models.TimeoutState[VoteT])
// PruneUpToRank deletes all `TimeoutCollector`s _below_ the given rank, as
// well as related indices. We only retain and process `TimeoutCollector`s,
// whose rank is equal or larger than `lowestRetainedRank`. If
// `lowestRetainedRank` is smaller than the previous value, the previous value
// is kept and the method call is a NoOp. This value should be set to the
// latest active rank maintained by `Pacemaker`.
PruneUpToRank(lowestRetainedRank uint64)
}
// TimeoutCollector collects all timeout states for a specified rank. On the
// happy path, it generates a TimeoutCertificate when enough timeouts have been
// collected. The TimeoutCollector is a higher-level structure that orchestrates
// deduplication, caching and processing of timeouts, delegating those tasks to
// underlying modules (such as TimeoutProcessor). Implementations of
// TimeoutCollector must be concurrency safe.
type TimeoutCollector[VoteT models.Unique] interface {
// AddTimeout adds a Timeout State to the collector. When TSs from
// strictly more than 1/3 of consensus participants (measured by weight) were
// collected, the callback for partial TC will be triggered. After collecting
// TSs from a supermajority, a TC will be created and passed to the EventLoop.
// Expected error returns during normal operations:
// * timeoutcollector.ErrTimeoutForIncompatibleRank - submitted timeout for
// incompatible rank
// All other exceptions are symptoms of potential state corruption.
AddTimeout(timeoutState *models.TimeoutState[VoteT]) error
// Rank returns the rank that this instance is collecting timeouts for.
// This method is useful when adding the newly created timeout collector to
// timeout collectors map.
Rank() uint64
}
// TimeoutProcessor ingests Timeout States for a particular rank. It
// implements the algorithms for validating TSs, orchestrates their low-level
// aggregation and emits `OnPartialTimeoutCertificateCreated` and `OnTimeoutCertificateConstructedFromTimeouts`
// notifications. TimeoutProcessor cannot deduplicate TSs (this should be
// handled by the higher-level TimeoutCollector) and instead returns an error
// on duplicates. Depending
// on their implementation, a TimeoutProcessor might drop timeouts or attempt to
// construct a TC.
type TimeoutProcessor[VoteT models.Unique] interface {
// Process performs processing of single timeout state. This function is safe
// to call from multiple goroutines. Expected error returns during normal
// operations:
// * timeoutcollector.ErrTimeoutForIncompatibleRank - submitted timeout for
// incompatible rank
// * models.InvalidTimeoutError - submitted invalid timeout (invalid structure
// or invalid signature)
// * models.DuplicatedSignerError if a timeout from the same signer was
// previously already added. It does _not necessarily_ imply that the
// timeout is invalid or the sender is equivocating.
// All other errors should be treated as exceptions.
Process(timeout *models.TimeoutState[VoteT]) error
}
// TimeoutCollectorFactory performs creation of TimeoutCollector for a given
// rank
type TimeoutCollectorFactory[VoteT models.Unique] interface {
// Create is a factory method to generate a TimeoutCollector for a given rank
// Expected error returns during normal operations:
// * models.ErrRankUnknown - no rank containing the given rank is known
// All other errors should be treated as exceptions.
Create(rank uint64) (TimeoutCollector[VoteT], error)
}
// TimeoutProcessorFactory performs creation of TimeoutProcessor for a given
// rank
type TimeoutProcessorFactory[VoteT models.Unique] interface {
// Create is a factory method to generate a TimeoutProcessor for a given rank
// Expected error returns during normal operations:
// * models.ErrRankUnknown - no rank containing the given rank is known
// All other errors should be treated as exceptions.
Create(rank uint64) (TimeoutProcessor[VoteT], error)
}
// TimeoutCollectors encapsulates the functionality to generate, store and prune
// `TimeoutCollector` instances (one per rank). Its main purpose is to provide a
// higher-level API to `TimeoutAggregator` for managing and interacting with the
// rank-specific `TimeoutCollector` instances. Implementations are concurrency
// safe.
type TimeoutCollectors[VoteT models.Unique] interface {
// GetOrCreateCollector retrieves the TimeoutCollector for the specified
// rank or creates one if none exists. When creating a timeout collector,
// the rank is used to query the consensus committee for the participants of
// that rank.
// It returns:
// - (collector, true, nil) if no collector can be found by the rank, and a
// new collector was created.
// - (collector, false, nil) if the collector can be found by the rank.
// - (nil, false, error) if running into any exception creating the timeout
// collector.
// Expected error returns during normal operations:
// * models.BelowPrunedThresholdError if rank is below the pruning threshold
// * models.ErrRankUnknown if rank is not yet pruned but no rank containing
// the given rank is known
GetOrCreateCollector(rank uint64) (
collector TimeoutCollector[VoteT],
created bool,
err error,
)
// PruneUpToRank prunes the timeout collectors with ranks _below_ the given
// value, i.e. we only retain and process timeout collectors, whose ranks are
// equal or larger than `lowestRetainedRank`. If `lowestRetainedRank` is
// smaller than the previous value, the previous value is kept and the method
// call is a NoOp.
PruneUpToRank(lowestRetainedRank uint64)
}
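The get-or-create plus prune contract described above is commonly realized with a mutex-guarded map and a pruning threshold. The sketch below is illustrative (`timeoutCollectors` and `collector` are hypothetical names, not the package's implementation):

```go
package main

import (
	"fmt"
	"sync"
)

// collector is a minimal stand-in for a rank-scoped TimeoutCollector.
type collector struct{ rank uint64 }

// timeoutCollectors sketches the concurrency-safe get-or-create pattern.
type timeoutCollectors struct {
	mu             sync.Mutex
	byRank         map[uint64]*collector
	lowestRetained uint64
}

// GetOrCreateCollector returns (collector, true, nil) on creation,
// (collector, false, nil) on a cache hit, and an error below the threshold.
func (t *timeoutCollectors) GetOrCreateCollector(rank uint64) (*collector, bool, error) {
	t.mu.Lock()
	defer t.mu.Unlock()
	if rank < t.lowestRetained {
		return nil, false, fmt.Errorf("rank %d below pruned threshold %d", rank, t.lowestRetained)
	}
	if c, ok := t.byRank[rank]; ok {
		return c, false, nil // already cached
	}
	c := &collector{rank: rank}
	t.byRank[rank] = c
	return c, true, nil
}

// PruneUpToRank drops collectors below lowestRetainedRank; lowering the
// threshold is a no-op, matching the documented contract.
func (t *timeoutCollectors) PruneUpToRank(lowestRetainedRank uint64) {
	t.mu.Lock()
	defer t.mu.Unlock()
	if lowestRetainedRank <= t.lowestRetained {
		return
	}
	t.lowestRetained = lowestRetainedRank
	for rank := range t.byRank {
		if rank < lowestRetainedRank {
			delete(t.byRank, rank)
		}
	}
}

func main() {
	tc := &timeoutCollectors{byRank: map[uint64]*collector{}}
	_, created, _ := tc.GetOrCreateCollector(5)
	fmt.Println("created:", created)
	_, created, _ = tc.GetOrCreateCollector(5)
	fmt.Println("created:", created)
	tc.PruneUpToRank(6)
	_, _, err := tc.GetOrCreateCollector(5)
	fmt.Println("after prune, error:", err != nil)
}
```

Holding a single mutex across lookup and insertion guarantees exactly one collector per rank even under concurrent callers.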

View File

@ -0,0 +1,102 @@
package consensus
import (
"encoding/hex"
"time"
"source.quilibrium.com/quilibrium/monorepo/consensus/models"
)
// TraceLogger defines a simple tracing interface
type TraceLogger interface {
Trace(message string, params ...LogParam)
Error(message string, err error, params ...LogParam)
With(params ...LogParam) TraceLogger
}
type LogParam struct {
key string
value any
kind string
}
func StringParam(key string, value string) LogParam {
return LogParam{
key: key,
value: value,
kind: "string",
}
}
func Uint64Param(key string, value uint64) LogParam {
return LogParam{
key: key,
value: value,
kind: "uint64",
}
}
func Uint32Param(key string, value uint32) LogParam {
return LogParam{
key: key,
value: value,
kind: "uint32",
}
}
func Int64Param(key string, value int64) LogParam {
return LogParam{
key: key,
value: value,
kind: "int64",
}
}
func Int32Param(key string, value int32) LogParam {
return LogParam{
key: key,
value: value,
kind: "int32",
}
}
func IdentityParam(key string, value models.Identity) LogParam {
return LogParam{
key: key,
value: hex.EncodeToString([]byte(value)),
kind: "string",
}
}
func HexParam(key string, value []byte) LogParam {
return LogParam{
key: key,
value: hex.EncodeToString(value),
kind: "string",
}
}
func TimeParam(key string, value time.Time) LogParam {
return LogParam{
key: key,
value: value,
kind: "time",
}
}
func (l LogParam) GetKey() string {
return l.key
}
func (l LogParam) GetValue() any {
return l.value
}
func (l LogParam) GetKind() string {
return l.kind
}
type nilTracer struct{}
func (nilTracer) Trace(message string, params ...LogParam) {}
func (nilTracer) Error(message string, err error, params ...LogParam) {}
func (nilTracer) With(params ...LogParam) TraceLogger { return nilTracer{} }
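A concrete TraceLogger would typically render params as key=value pairs and let `With` bind context for child loggers. The `consoleTracer` below is an illustrative sketch, not part of the package; `LogParam` is reproduced minimally so the example compiles standalone:

```go
package main

import (
	"fmt"
	"strings"
)

// LogParam mirrors the shape of the param struct defined in the package.
type LogParam struct {
	key   string
	value any
}

// consoleTracer is an illustrative logger: With returns a child logger
// carrying bound params, and Trace/Error render all params as key=value.
type consoleTracer struct {
	bound []LogParam
}

func (c consoleTracer) Trace(message string, params ...LogParam) {
	fmt.Println("TRACE", message, render(append(c.bound, params...)))
}

func (c consoleTracer) Error(message string, err error, params ...LogParam) {
	fmt.Println("ERROR", message, err, render(append(c.bound, params...)))
}

func (c consoleTracer) With(params ...LogParam) consoleTracer {
	// copy to avoid sharing the backing array between parent and child
	return consoleTracer{bound: append(append([]LogParam{}, c.bound...), params...)}
}

func render(params []LogParam) string {
	parts := make([]string, 0, len(params))
	for _, p := range params {
		parts = append(parts, fmt.Sprintf("%s=%v", p.key, p.value))
	}
	return strings.Join(parts, " ")
}

func main() {
	log := consoleTracer{}.With(LogParam{key: "node", value: "a1"})
	log.Trace("received QC", LogParam{key: "rank", value: uint64(9)})
}
```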

View File

@ -0,0 +1,32 @@
package consensus
import (
"source.quilibrium.com/quilibrium/monorepo/consensus/models"
)
// Validator provides functions to validate QuorumCertificate, proposals and
// votes.
type Validator[StateT models.Unique, VoteT models.Unique] interface {
// ValidateQuorumCertificate checks the validity of a QuorumCertificate.
// During normal operations, the following error returns are expected:
// * models.InvalidQuorumCertificateError if the QuorumCertificate is invalid
ValidateQuorumCertificate(qc models.QuorumCertificate) error
// ValidateTimeoutCertificate checks the validity of a TimeoutCertificate.
// During normal operations, the following error returns are expected:
// * models.InvalidTimeoutCertificateError if the TimeoutCertificate is
// invalid
ValidateTimeoutCertificate(tc models.TimeoutCertificate) error
// ValidateProposal checks the validity of a proposal.
// During normal operations, the following error returns are expected:
// * models.InvalidProposalError if the state is invalid
ValidateProposal(proposal *models.SignedProposal[StateT, VoteT]) error
// ValidateVote checks the validity of a vote.
// Returns the full entity for the voter. During normal operations,
// the following errors are expected:
// * models.InvalidVoteError for invalid votes
ValidateVote(vote *VoteT) (*models.WeightedIdentity, error)
}

View File

@ -0,0 +1,45 @@
package consensus
import "source.quilibrium.com/quilibrium/monorepo/consensus/models"
// Verifier is the component responsible for the cryptographic integrity of
// votes, proposals and QCs against the state they are signing.
type Verifier[VoteT models.Unique] interface {
// VerifyVote checks the cryptographic validity of a vote's `SigData` w.r.t.
// the rank and stateID. It is the responsibility of the calling code to
// ensure that `voter` is authorized to vote.
// Return values:
// * nil if `sigData` is cryptographically valid
// * models.InvalidFormatError if the signature has an incompatible format.
// * models.ErrInvalidSignature if the signature is invalid
// * unexpected errors should be treated as symptoms of bugs or uncovered
// edge cases in the logic (i.e. as fatal)
VerifyVote(vote *VoteT) error
// VerifyQuorumCertificate checks the cryptographic validity of a QC's
// `SigData` w.r.t. the given rank and stateID. It is the responsibility of the
// calling code to ensure that all `signers` are authorized, without duplicates.
// Return values:
// * nil if `sigData` is cryptographically valid
// * models.InvalidFormatError if `sigData` has an incompatible format
// * models.InsufficientSignaturesError if `signers` is empty.
// Depending on the order of checks in the higher-level logic this error
// might be an indicator of an external byzantine input or an internal bug.
// * models.ErrInvalidSignature if a signature is invalid
// * unexpected errors should be treated as symptoms of bugs or uncovered
// edge cases in the logic (i.e. as fatal)
VerifyQuorumCertificate(quorumCertificate models.QuorumCertificate) error
// VerifyTimeoutCertificate checks cryptographic validity of the TC's
// `sigData` w.r.t. the given rank. It is the responsibility of the calling
// code to ensure that all `signers` are authorized, without duplicates.
// Return values:
// * nil if `sigData` is cryptographically valid
// * models.InsufficientSignaturesError if `signers` is empty.
// * models.InvalidFormatError if `signers`/`highQCRanks` have differing
// lengths
// * models.ErrInvalidSignature if a signature is invalid
// * unexpected errors should be treated as symptoms of bugs or uncovered
// edge cases in the logic (i.e. as fatal)
VerifyTimeoutCertificate(timeoutCertificate models.TimeoutCertificate) error
}

View File

@ -0,0 +1,44 @@
package consensus
import (
"context"
"source.quilibrium.com/quilibrium/monorepo/consensus/models"
)
// VotingProvider handles voting logic by deferring decisions, collection, and
// state finalization to an outside implementation.
type VotingProvider[
StateT models.Unique,
VoteT models.Unique,
PeerIDT models.Unique,
] interface {
// SignVote signs a proposal, produces an output vote for aggregation and
// broadcasting.
SignVote(
ctx context.Context,
state *models.State[StateT],
) (*VoteT, error)
// SignTimeoutVote signs a timeout for the current rank, producing an output
// vote for aggregation and broadcasting.
SignTimeoutVote(
ctx context.Context,
filter []byte,
currentRank uint64,
newestQuorumCertificateRank uint64,
) (*VoteT, error)
FinalizeQuorumCertificate(
ctx context.Context,
state *models.State[StateT],
aggregatedSignature models.AggregatedSignature,
) (models.QuorumCertificate, error)
// FinalizeTimeout produces a timeout certificate. Assumes the VotingProvider
// will reorganize latestQuorumCertificateRanks in signer order.
FinalizeTimeout(
ctx context.Context,
rank uint64,
latestQuorumCertificate models.QuorumCertificate,
latestQuorumCertificateRanks []TimeoutSignerInfo,
aggregatedSignature models.AggregatedSignature,
) (models.TimeoutCertificate, error)
}

View File

@ -0,0 +1,10 @@
package consensus
// WeightProvider defines the methods for handling weighted differentiation of
// voters, such as seniority or stake.
type WeightProvider interface {
// GetWeightForBitmask returns the total weight of the given bitmask for the
// prover set under the filter. Bitmask is expected to be in ascending ring
// order.
GetWeightForBitmask(filter []byte, bitmask []byte) uint64
}
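One plausible reading of this contract is summing the weights of provers whose bits are set. The helper below is an illustrative sketch; the LSB-first bit layout within each byte is an assumption, not taken from the source:

```go
package main

import "fmt"

// sumWeightForBitmask sums the weights selected by the bitmask: bit i
// (LSB-first within each byte, an assumed layout) selects prover i in
// ascending ring order.
func sumWeightForBitmask(weights []uint64, bitmask []byte) uint64 {
	var total uint64
	for i, w := range weights {
		if i/8 < len(bitmask) && bitmask[i/8]&(1<<uint(i%8)) != 0 {
			total += w
		}
	}
	return total
}

func main() {
	weights := []uint64{10, 20, 30, 40}
	// bits 0 and 3 set -> provers with weights 10 and 40
	fmt.Println(sumWeightForBitmask(weights, []byte{0b00001001}))
}
```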

View File

@ -0,0 +1,50 @@
package counters
import "sync/atomic"
// StrictMonotonicCounter is a helper struct which implements a strictly
// monotonic counter. It doesn't allow setting a value which is lower than or
// equal to the one already stored. The counter is implemented solely with
// non-blocking atomic operations for concurrency safety.
type StrictMonotonicCounter struct {
atomicCounter uint64
}
// NewMonotonicCounter creates a new counter with the given initial value
func NewMonotonicCounter(initialValue uint64) StrictMonotonicCounter {
return StrictMonotonicCounter{
atomicCounter: initialValue,
}
}
// Set updates the value of the counter if and only if the new value is
// strictly larger than the value already stored. Returns true if the update
// was successful, or false if the stored value is greater than or equal.
func (c *StrictMonotonicCounter) Set(newValue uint64) bool {
for {
oldValue := c.Value()
if newValue <= oldValue {
return false
}
if atomic.CompareAndSwapUint64(&c.atomicCounter, oldValue, newValue) {
return true
}
}
}
// Value returns the value stored in the atomic variable
func (c *StrictMonotonicCounter) Value() uint64 {
return atomic.LoadUint64(&c.atomicCounter)
}
// Increment atomically increments the counter and returns the updated value
func (c *StrictMonotonicCounter) Increment() uint64 {
for {
oldValue := c.Value()
newValue := oldValue + 1
if atomic.CompareAndSwapUint64(&c.atomicCounter, oldValue, newValue) {
return newValue
}
}
}
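A short usage example makes the Set semantics and the CAS-loop concurrency safety concrete. The counter type is reproduced here so the sketch compiles standalone:

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// StrictMonotonicCounter as defined above, reproduced for a standalone sketch.
type StrictMonotonicCounter struct{ atomicCounter uint64 }

func (c *StrictMonotonicCounter) Value() uint64 {
	return atomic.LoadUint64(&c.atomicCounter)
}

func (c *StrictMonotonicCounter) Set(newValue uint64) bool {
	for {
		oldValue := c.Value()
		if newValue <= oldValue {
			return false // strictly monotonic: never move backwards or stay put
		}
		if atomic.CompareAndSwapUint64(&c.atomicCounter, oldValue, newValue) {
			return true
		}
	}
}

func (c *StrictMonotonicCounter) Increment() uint64 {
	for {
		oldValue := c.Value()
		if atomic.CompareAndSwapUint64(&c.atomicCounter, oldValue, oldValue+1) {
			return oldValue + 1
		}
	}
}

func main() {
	c := StrictMonotonicCounter{atomicCounter: 5}
	fmt.Println(c.Set(3)) // false: 3 <= 5
	fmt.Println(c.Set(8)) // true: advances to 8
	var wg sync.WaitGroup
	for i := 0; i < 100; i++ { // CAS retry loop keeps Increment lock-free
		wg.Add(1)
		go func() { defer wg.Done(); c.Increment() }()
	}
	wg.Wait()
	fmt.Println(c.Value()) // 108
}
```

The retry loop re-reads the current value on every CAS failure, so concurrent writers never lose an update, they just retry.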

View File

@ -0,0 +1,825 @@
package eventhandler
import (
"context"
"errors"
"fmt"
"time"
"source.quilibrium.com/quilibrium/monorepo/consensus"
"source.quilibrium.com/quilibrium/monorepo/consensus/models"
)
// EventHandler is the main handler for individual events that trigger state
// transition. It exposes API to handle one event at a time synchronously.
// EventHandler is *not concurrency safe*. Please use the EventLoop to ensure
// that only a single go-routine executes the EventHandler's algorithms.
// EventHandler is implemented in event-driven way, it reacts to incoming events
// and performs certain actions. It doesn't perform any actions on its own.
// There are 3 main responsibilities of EventHandler: vote, propose, timeout.
// There are specific scenarios that lead to each of those actions.
// - create vote: voting logic is triggered by OnReceiveProposal, after
// receiving proposal we have all required information to create a valid
// vote. Compliance engine makes sure that we receive proposals, whose
// parents are known. Creating a vote can be triggered ONLY by receiving
// proposal.
// - create timeout: creating models.TimeoutState is triggered by
// OnLocalTimeout, after reaching deadline for current round. EventHandler
// gets notified about it and has to create a models.TimeoutState and
// broadcast it to other replicas. Creating a TO can be triggered by
// reaching round deadline or triggered as part of Bracha broadcast when
// superminority of replicas have contributed to TC creation and created a
// partial TC.
// - create a proposal: proposing logic is more complicated. Creating a
// proposal is triggered by the EventHandler receiving a QC or TC that
// induces a rank change to a rank where the replica is primary. As an edge
// case, the EventHandler can receive a QC or TC that triggers the rank
// change, but we can't create a proposal in case we are missing parent
// state the newest QC refers to. In case we already have the QC, but are
// still missing the respective parent, OnReceiveProposal can trigger the
// proposing logic as well, but only when receiving proposal for rank lower
// than active rank. To summarize, to make a valid proposal for rank N we
// need to have a QC or TC for N-1 and know the proposal with stateID
// NewestQC.Identifier.
//
// Not concurrency safe.
type EventHandler[
StateT models.Unique,
VoteT models.Unique,
PeerIDT models.Unique,
CollectedT models.Unique,
] struct {
tracer consensus.TraceLogger
paceMaker consensus.Pacemaker
stateProducer consensus.StateProducer[StateT, VoteT]
forks consensus.Forks[StateT]
store consensus.ConsensusStore[VoteT]
committee consensus.Replicas
safetyRules consensus.SafetyRules[StateT, VoteT]
notifier consensus.Consumer[StateT, VoteT]
}
var _ consensus.EventHandler[*nilUnique, *nilUnique] = (*EventHandler[
*nilUnique, *nilUnique, *nilUnique, *nilUnique,
])(nil)
// NewEventHandler creates an EventHandler instance with initial components.
func NewEventHandler[
StateT models.Unique,
VoteT models.Unique,
PeerIDT models.Unique,
CollectedT models.Unique,
](
paceMaker consensus.Pacemaker,
stateProducer consensus.StateProducer[StateT, VoteT],
forks consensus.Forks[StateT],
store consensus.ConsensusStore[VoteT],
committee consensus.Replicas,
safetyRules consensus.SafetyRules[StateT, VoteT],
notifier consensus.Consumer[StateT, VoteT],
tracer consensus.TraceLogger,
) (*EventHandler[StateT, VoteT, PeerIDT, CollectedT], error) {
e := &EventHandler[StateT, VoteT, PeerIDT, CollectedT]{
paceMaker: paceMaker,
stateProducer: stateProducer,
forks: forks,
store: store,
safetyRules: safetyRules,
committee: committee,
notifier: notifier,
tracer: tracer,
}
return e, nil
}
// OnReceiveQuorumCertificate processes a valid qc constructed by internal vote
// aggregator or discovered in TimeoutState. All inputs should be validated
// before feeding into this function. Assuming trusted data. No errors are
// expected during normal operation.
func (e *EventHandler[
StateT,
VoteT,
PeerIDT,
CollectedT,
]) OnReceiveQuorumCertificate(qc models.QuorumCertificate) error {
curRank := e.paceMaker.CurrentRank()
e.tracer.Trace(
"received QC",
consensus.Uint64Param("current_rank", curRank),
consensus.Uint64Param("qc_rank", qc.GetRank()),
consensus.IdentityParam("state_id", qc.Identity()),
)
e.notifier.OnReceiveQuorumCertificate(curRank, qc)
defer e.notifier.OnEventProcessed()
newRankEvent, err := e.paceMaker.ReceiveQuorumCertificate(qc)
if err != nil {
return fmt.Errorf("could not process QC: %w", err)
}
if newRankEvent == nil {
e.tracer.Trace("QC didn't trigger rank change, nothing to do")
return nil
}
// current rank has changed, go to new rank
e.tracer.Trace("QC triggered rank change, starting new rank now")
return e.proposeForNewRankIfPrimary()
}
// OnReceiveTimeoutCertificate processes a valid tc constructed by internal
// timeout aggregator, discovered in TimeoutState or broadcast over the network.
// All inputs should be validated before feeding into this function. Assuming
// trusted data. No errors are expected during normal operation.
func (e *EventHandler[
StateT,
VoteT,
PeerIDT,
CollectedT,
]) OnReceiveTimeoutCertificate(tc models.TimeoutCertificate) error {
curRank := e.paceMaker.CurrentRank()
e.tracer.Trace(
"received TC",
consensus.Uint64Param("current_rank", curRank),
consensus.Uint64Param("tc_rank", tc.GetRank()),
consensus.Uint64Param(
"tc_newest_qc_rank",
tc.GetLatestQuorumCert().GetRank(),
),
consensus.IdentityParam(
"tc_newest_qc_state_id",
tc.GetLatestQuorumCert().Identity(),
),
)
e.notifier.OnReceiveTimeoutCertificate(curRank, tc)
defer e.notifier.OnEventProcessed()
newRankEvent, err := e.paceMaker.ReceiveTimeoutCertificate(tc)
if err != nil {
return fmt.Errorf("could not process TC for rank %d: %w", tc.GetRank(), err)
}
if newRankEvent == nil {
e.tracer.Trace("TC didn't trigger rank change, nothing to do",
consensus.Uint64Param("current_rank", curRank),
consensus.Uint64Param("tc_rank", tc.GetRank()),
consensus.Uint64Param(
"tc_newest_qc_rank",
tc.GetLatestQuorumCert().GetRank(),
),
consensus.IdentityParam(
"tc_newest_qc_state_id",
tc.GetLatestQuorumCert().Identity(),
))
return nil
}
// current rank has changed, go to new rank
e.tracer.Trace("TC triggered rank change, starting new rank now",
consensus.Uint64Param("current_rank", curRank),
consensus.Uint64Param("tc_rank", tc.GetRank()),
consensus.Uint64Param(
"tc_newest_qc_rank",
tc.GetLatestQuorumCert().GetRank(),
),
consensus.IdentityParam(
"tc_newest_qc_state_id",
tc.GetLatestQuorumCert().Identity(),
))
return e.proposeForNewRankIfPrimary()
}
// OnReceiveProposal processes a state proposal received from another HotStuff
// consensus participant.
// All inputs should be validated before feeding into this function. Assuming
// trusted data. No errors are expected during normal operation.
func (e *EventHandler[
StateT,
VoteT,
PeerIDT,
CollectedT,
]) OnReceiveProposal(proposal *models.SignedProposal[StateT, VoteT]) error {
state := proposal.State
curRank := e.paceMaker.CurrentRank()
e.tracer.Trace(
"proposal received from compliance engine",
consensus.Uint64Param("current_rank", curRank),
consensus.Uint64Param("state_rank", state.Rank),
consensus.IdentityParam("state_id", state.Identifier),
consensus.Uint64Param("qc_rank", state.ParentQuorumCertificate.GetRank()),
consensus.IdentityParam("proposer_id", state.ProposerID),
)
e.notifier.OnReceiveProposal(curRank, proposal)
defer e.notifier.OnEventProcessed()
// ignore stale proposals
if (*state).Rank < e.forks.FinalizedRank() {
e.tracer.Trace(
"stale proposal",
consensus.Uint64Param("current_rank", curRank),
consensus.Uint64Param("state_rank", state.Rank),
consensus.IdentityParam("state_id", state.Identifier),
consensus.Uint64Param("qc_rank", state.ParentQuorumCertificate.GetRank()),
consensus.IdentityParam("proposer_id", state.ProposerID),
)
return nil
}
// store the state.
err := e.forks.AddValidatedState(proposal.State)
if err != nil {
return fmt.Errorf(
"cannot add proposal to forks (%x): %w",
state.Identifier,
err,
)
}
_, err = e.paceMaker.ReceiveQuorumCertificate(
proposal.State.ParentQuorumCertificate,
)
if err != nil {
return fmt.Errorf(
"could not process QC for state %x: %w",
state.Identifier,
err,
)
}
_, err = e.paceMaker.ReceiveTimeoutCertificate(
proposal.PreviousRankTimeoutCertificate,
)
if err != nil {
return fmt.Errorf(
"could not process TC for state %x: %w",
state.Identifier,
err,
)
}
// if the state is for the current rank, then try voting for this state
err = e.processStateForCurrentRank(proposal)
if err != nil {
return fmt.Errorf("failed processing current state: %w", err)
}
e.tracer.Trace(
"proposal processed from compliance engine",
consensus.Uint64Param("current_rank", curRank),
consensus.Uint64Param("state_rank", state.Rank),
consensus.IdentityParam("state_id", state.Identifier),
consensus.Uint64Param("qc_rank", state.ParentQuorumCertificate.GetRank()),
consensus.IdentityParam("proposer_id", state.ProposerID),
)
// nothing to do if this proposal is for current rank
if proposal.State.Rank == e.paceMaker.CurrentRank() {
return nil
}
return e.proposeForNewRankIfPrimary()
}
// TimeoutChannel returns the channel for subscribing to the timeout while
// waiting to receive a state or votes for the current rank.
func (e *EventHandler[
StateT,
VoteT,
PeerIDT,
CollectedT,
]) TimeoutChannel() <-chan time.Time {
return e.paceMaker.TimeoutCh()
}
// OnLocalTimeout handles a local timeout event by creating a
// models.TimeoutState and broadcasting it. No errors are expected during normal
// operation.
func (e *EventHandler[
StateT,
VoteT,
PeerIDT,
CollectedT,
]) OnLocalTimeout() error {
curRank := e.paceMaker.CurrentRank()
e.tracer.Trace(
"timeout received from event loop",
consensus.Uint64Param("current_rank", curRank),
)
e.notifier.OnLocalTimeout(curRank)
defer e.notifier.OnEventProcessed()
err := e.broadcastTimeoutStateIfAuthorized()
if err != nil {
return fmt.Errorf(
"unexpected exception while processing timeout in rank %d: %w",
curRank,
err,
)
}
return nil
}
// OnPartialTimeoutCertificateCreated handles a notification produced by the
// internal timeout aggregator. If the notification is for the current rank, a
// corresponding models.TimeoutState is broadcast to the consensus committee. No
// errors are expected during normal operation.
func (e *EventHandler[
StateT,
VoteT,
PeerIDT,
CollectedT,
]) OnPartialTimeoutCertificateCreated(
partialTC *consensus.PartialTimeoutCertificateCreated,
) error {
curRank := e.paceMaker.CurrentRank()
previousRankTimeoutCert := partialTC.PriorRankTimeoutCertificate
e.tracer.Trace(
"constructed partial TC",
consensus.Uint64Param("current_rank", curRank),
consensus.Uint64Param(
"qc_rank",
partialTC.NewestQuorumCertificate.GetRank(),
),
)
e.notifier.OnPartialTimeoutCertificate(curRank, partialTC)
defer e.notifier.OnEventProcessed()
// process QC, this might trigger rank change
_, err := e.paceMaker.ReceiveQuorumCertificate(
partialTC.NewestQuorumCertificate,
)
if err != nil {
return fmt.Errorf("could not process newest QC: %w", err)
}
// process TC, this might trigger rank change
_, err = e.paceMaker.ReceiveTimeoutCertificate(previousRankTimeoutCert)
if err != nil {
return fmt.Errorf(
"could not process TC for rank %d: %w",
previousRankTimeoutCert.GetRank(),
err,
)
}
// NOTE: in other cases when we have observed a rank change we will trigger
// proposing logic, this is desired logic for handling proposal, QC and TC.
// However, observing a partial TC means that superminority have timed out and
// there was at least one honest replica in that set. Honest replicas will
// never vote after timing out for current rank meaning we won't be able to
// collect supermajority of votes for a proposal made after observing partial
// TC.
// by definition, we are allowed to produce timeout state if we have received
// partial TC for current rank
if e.paceMaker.CurrentRank() != partialTC.Rank {
return nil
}
e.tracer.Trace(
"partial TC generated for current rank, broadcasting timeout",
consensus.Uint64Param("current_rank", curRank),
consensus.Uint64Param(
"qc_rank",
partialTC.NewestQuorumCertificate.GetRank(),
),
)
err = e.broadcastTimeoutStateIfAuthorized()
if err != nil {
return fmt.Errorf(
"unexpected exception while processing partial TC in rank %d: %w",
partialTC.Rank,
err,
)
}
return nil
}
// Start starts the event handler. No errors are expected during normal
// operation. CAUTION: EventHandler is not concurrency safe. The Start method
// must be executed by the same goroutine that also calls the other business
// logic methods, or concurrency safety has to be implemented externally.
func (e *EventHandler[
StateT,
VoteT,
PeerIDT,
CollectedT,
]) Start(ctx context.Context) error {
e.notifier.OnStart(e.paceMaker.CurrentRank())
defer e.notifier.OnEventProcessed()
e.paceMaker.Start(ctx)
err := e.proposeForNewRankIfPrimary()
if err != nil {
return fmt.Errorf("could not start new rank: %w", err)
}
return nil
}
// broadcastTimeoutStateIfAuthorized attempts to generate a
// models.TimeoutState, adds it to `timeoutAggregator` and broadcasts it to the
// consensus committee. We check whether this node, at the current rank, is
// part of the consensus committee; otherwise, this method is functionally a
// no-op. For example, right after a rank switchover a consensus node might
// still be online but not part of the _active_ consensus committee anymore.
// Consequently, it should not broadcast timeouts anymore. No errors are
// expected during normal operation.
func (e *EventHandler[
StateT,
VoteT,
PeerIDT,
CollectedT,
]) broadcastTimeoutStateIfAuthorized() error {
curRank := e.paceMaker.CurrentRank()
newestQC := e.paceMaker.LatestQuorumCertificate()
previousRankTimeoutCert := e.paceMaker.PriorRankTimeoutCertificate()
if newestQC.GetRank()+1 == curRank {
// in case last rank has ended with QC and TC, make sure that only QC is
// included otherwise such timeout is invalid. This case is possible if TC
// has included QC with the same rank as the TC itself, meaning that
// newestQC.Rank == previousRankTimeoutCert.Rank
previousRankTimeoutCert = nil
}
timeout, err := e.safetyRules.ProduceTimeout(
curRank,
newestQC,
previousRankTimeoutCert,
)
if err != nil {
if models.IsNoTimeoutError(err) {
e.tracer.Error(
"not generating timeout as this node is not part of the active committee",
err,
consensus.Uint64Param("current_rank", curRank),
)
return nil
}
return fmt.Errorf("could not produce timeout: %w", err)
}
// raise a notification to broadcast timeout
e.notifier.OnOwnTimeout(timeout)
e.tracer.Trace(
"broadcast TimeoutState done",
consensus.Uint64Param("current_rank", curRank),
)
return nil
}
// proposeForNewRankIfPrimary will only be called when we may be able to propose
// a state, after processing a new event.
// - after entering a new rank as a result of processing a QC or TC, then we
// may propose for the newly entered rank
// - after receiving a proposal (but not changing rank), if that proposal is
// referenced by our highest known QC, and the proposal was previously
// unknown, then we can propose a state in the current rank
//
// Enforced INVARIANTS:
// - There will be at most one `OnOwnProposal` notification emitted for ranks
// where this node is the leader, and none if another node is the leader.
// This holds irrespective of restarts. Formally, this prevents proposal
// equivocation.
//
// It reads the current rank, and generates a proposal if we are the leader.
// No errors are expected during normal operation.
func (e *EventHandler[
StateT,
VoteT,
PeerIDT,
CollectedT,
]) proposeForNewRankIfPrimary() error {
start := time.Now() // track the start time
curRank := e.paceMaker.CurrentRank()
e.tracer.Trace(
"deciding to propose",
consensus.Uint64Param("current_rank", curRank),
consensus.IdentityParam("self", e.committee.Self()),
)
currentLeader, err := e.committee.LeaderForRank(curRank)
if err != nil {
return fmt.Errorf(
"failed to determine primary for new rank %d: %w",
curRank,
err,
)
}
finalizedRank := e.forks.FinalizedRank()
e.notifier.OnCurrentRankDetails(curRank, finalizedRank, currentLeader)
// check that I am the primary for this rank
if e.committee.Self() != currentLeader {
e.tracer.Trace(
"not current leader, waiting",
consensus.Uint64Param("current_rank", curRank),
consensus.Uint64Param("finalized_rank", finalizedRank),
consensus.IdentityParam("leader_id", currentLeader),
)
return nil
}
// attempt to generate proposal:
newestQC := e.paceMaker.LatestQuorumCertificate()
previousRankTimeoutCert := e.paceMaker.PriorRankTimeoutCertificate()
_, found := e.forks.GetState(newestQC.Identity())
if !found {
// we don't know anything about the state referenced by our newest QC; in
// this case we can't create a valid proposal, since we can't guarantee the
// validity of the state payload.
e.tracer.Trace(
"haven't synced the latest state yet; can't propose",
consensus.Uint64Param("current_rank", curRank),
consensus.Uint64Param("finalized_rank", finalizedRank),
consensus.IdentityParam("leader_id", currentLeader),
)
return nil
}
e.tracer.Trace(
"generating proposal as leader",
consensus.Uint64Param("current_rank", curRank),
consensus.Uint64Param("finalized_rank", finalizedRank),
consensus.IdentityParam("leader_id", currentLeader),
)
// Sanity checks to make sure that resulting proposal is valid:
// In its proposal, the leader for rank N needs to present evidence that it
// has legitimately entered rank N. As evidence, we include a QC or TC for
// rank N-1, which should always be available as the PaceMaker advances to
// rank N only after observing a QC or TC from rank N-1. Moreover, QC and TC
// are always processed together. As EventHandler is strictly single-threaded
// without reentrancy, we must have a QC or TC for the prior rank (curRank-1).
// Failing one of these sanity checks is a symptom of state corruption or a
// severe implementation bug.
if newestQC.GetRank()+1 != curRank {
if previousRankTimeoutCert == nil {
return fmt.Errorf("possible state corruption, expected previousRankTimeoutCert to be not nil")
}
if previousRankTimeoutCert.GetRank()+1 != curRank {
return fmt.Errorf(
"possible state corruption, don't have QC(rank=%d) and TC(rank=%d) for previous rank(currentRank=%d)",
newestQC.GetRank(),
previousRankTimeoutCert.GetRank(),
curRank,
)
}
} else {
// In case the last rank ended with both a QC and a TC, make sure that only
// the QC is included, otherwise such a proposal is invalid. This case is
// possible if the TC includes a QC with the same rank as the TC itself,
// meaning that newestQC.Rank == previousRankTimeoutCert.Rank
previousRankTimeoutCert = nil
}
// Construct Own SignedProposal
// CAUTION, design constraints:
// (i) We cannot process our own proposal within the `EventHandler` right
// away.
// (ii) We cannot add our own proposal to Forks here right away.
// (iii) Metrics for the PaceMaker/CruiseControl assume that the EventHandler
// is the only caller of `TargetPublicationTime`. Technically,
// `TargetPublicationTime` records the publication delay relative to
// its _latest_ call.
//
// To satisfy all constraints, we construct the proposal here and query
// (once!) its `TargetPublicationTime`. Though, we do _not_ process our own
// states right away and instead ingest them into the EventHandler the same
// way as proposals from other consensus participants. Specifically, on the
// path through the HotStuff state machine leading to state construction, the
// node's own proposal is largely ephemeral. The proposal is handed to the
// `MessageHub` (via the `OnOwnProposal` notification including the
// `TargetPublicationTime`). The `MessageHub` waits until
// `TargetPublicationTime` and only then broadcasts the proposal and puts it
// into the EventLoop's queue for inbound states. This is exactly the same way
// as proposals from other nodes are ingested by the `EventHandler`, except
// that we are skipping the ComplianceEngine (assuming that our own proposals
// are protocol-compliant).
//
// Context:
// • On constraint (i): We want to support consensus committees only
// consisting of a *single* node. If the EventHandler internally processed
// the state right away via a direct message call, the call-stack would be
// ever-growing and the node would crash eventually (we experienced this
// with a very early HotStuff implementation). Specifically, if we wanted
// to process the state directly without taking a detour through the
// EventLoop's inbound queue, we would call `OnReceiveProposal` here. The
// function `OnReceiveProposal` would then end up calling
// `proposeForNewRankIfPrimary` (this function) to generate the next
// proposal, which again would result in calling `OnReceiveProposal` and so
// on and so forth until the call stack or memory limit is reached and the node
// crashes. This is only a problem for consensus committees of size 1.
// • On constraint (ii): When adding a proposal to Forks, Forks emits a
// `StateIncorporatedEvent` notification, which is observed by Cruise
// Control and would change its state. However, note that Cruise Control
// is trying to estimate the point in time when _other_ nodes are observing
// the proposal. The time when we broadcast the proposal (i.e.
// `TargetPublicationTime`) is a reasonably good estimator, but *not* the
// time the proposer constructed the state (because there is potentially
// still a significant wait until `TargetPublicationTime`).
//
// The current approach is for a node to process its own proposals at the same
// time and through the same code path as proposals from other nodes. This
// satisfies constraints (i) and (ii) and provides very strong consistency
// from a software design perspective.
// Just hypothetically, if we changed Cruise Control to be notified about
// own state proposals _only_ when they are broadcast (satisfying constraint
// (ii) without relying on the EventHandler), then we could add a proposal to
// Forks here right away. Nevertheless, the restriction remains that we cannot
// process that proposal right away within the EventHandler and instead need
// to put it into the EventLoop's inbound queue to support consensus
// committees of size 1.
stateProposal, err := e.stateProducer.MakeStateProposal(
curRank,
newestQC,
previousRankTimeoutCert,
)
if err != nil {
if models.IsNoVoteError(err) {
e.tracer.Error(
"aborting state proposal to prevent equivocation (likely re-entered proposal logic due to crash)",
err,
consensus.Uint64Param("current_rank", curRank),
consensus.Uint64Param("finalized_rank", finalizedRank),
consensus.IdentityParam("leader_id", currentLeader),
)
return nil
}
return fmt.Errorf(
"can not make state proposal for curRank %d: %w",
curRank,
err,
)
}
targetPublicationTime := e.paceMaker.TargetPublicationTime(
stateProposal.State.Rank,
start,
stateProposal.State.ParentQuorumCertificate.Identity(),
) // determine target publication time
e.tracer.Trace(
"forwarding proposal to communicator for broadcasting",
consensus.Uint64Param("state_rank", stateProposal.State.Rank),
consensus.TimeParam("target_publication", targetPublicationTime),
consensus.IdentityParam("state_id", stateProposal.State.Identifier),
consensus.Uint64Param("parent_rank", newestQC.GetRank()),
consensus.IdentityParam("parent_id", newestQC.Identity()),
consensus.IdentityParam("signer", stateProposal.State.ProposerID),
)
// emit notification with own proposal (also triggers broadcast)
e.notifier.OnOwnProposal(stateProposal, targetPublicationTime)
return nil
}
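The committee-of-size-1 concern behind constraint (i) can be sketched with a hypothetical, minimal model (`runRanks` and its channel are illustrative, not the real EventLoop): enqueueing our own proposal and consuming it from a loop keeps the stack flat, whereas a direct `OnReceiveProposal` call from within proposal generation would add one stack frame per rank until the node crashes.

```go
package main

import "fmt"

// runRanks models a single-node committee that queues its own proposals
// instead of processing them via direct (recursive) calls: each iteration
// consumes the previously enqueued proposal and enqueues the next one, so
// the call stack depth stays constant no matter how many ranks we advance.
func runRanks(n int) uint64 {
	proposals := make(chan uint64, 1)
	proposals <- 0 // bootstrap with the first rank's proposal
	var last uint64
	for i := 0; i < n; i++ {
		rank := <-proposals   // analogous to receiving our own proposal
		last = rank
		proposals <- rank + 1 // analogous to OnOwnProposal: enqueue, don't recurse
	}
	return last
}

func main() {
	// A million ranks complete with constant stack depth.
	fmt.Println(runRanks(1_000_000))
}
```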
// processStateForCurrentRank processes the state for the current rank.
// It is called AFTER the state has been stored or found in Forks
// It checks whether to vote for this state.
// No errors are expected during normal operation.
func (e *EventHandler[
StateT,
VoteT,
PeerIDT,
CollectedT,
]) processStateForCurrentRank(
proposal *models.SignedProposal[StateT, VoteT],
) error {
// sanity check that state is really for the current rank:
curRank := e.paceMaker.CurrentRank()
state := proposal.State
if state.Rank != curRank {
// ignore outdated proposals in case we have moved forward
return nil
}
// leader (node ID) for next rank
nextLeader, err := e.committee.LeaderForRank(curRank + 1)
if errors.Is(err, models.ErrRankUnknown) {
// We are attempting to process a state in an unknown rank
// This should never happen, because:
// * the compliance layer ensures proposals are passed to the event loop
// strictly after their parent
// * the protocol state ensures that, before incorporating the first state
// of a rank R, either R is known or we have triggered fallback mode - in
// either case the current rank is known
return fmt.Errorf("attempting to process a state for unknown rank")
}
if err != nil {
return fmt.Errorf(
"failed to determine primary for next rank %d: %w",
curRank+1,
err,
)
}
// safetyRules performs all the checks to decide whether to vote for this
// state or not.
err = e.ownVote(proposal, curRank, nextLeader)
if err != nil {
return fmt.Errorf("unexpected error in voting logic: %w", err)
}
return nil
}
// ownVote generates and forwards the own vote, if we decide to vote.
// Any errors are potential symptoms of uncovered edge cases or corrupted
// internal state (fatal). No errors are expected during normal operation.
func (e *EventHandler[
StateT,
VoteT,
PeerIDT,
CollectedT,
]) ownVote(
proposal *models.SignedProposal[StateT, VoteT],
curRank uint64,
nextLeader models.Identity,
) error {
_, found := e.forks.GetState(
proposal.State.ParentQuorumCertificate.Identity(),
)
if !found {
// we don't have the parent for this proposal, so we can't vote since we
// can't guarantee the validity of the proposal's payload. Strictly speaking
// this should never happen, because the compliance engine makes sure that we
// receive proposals with valid parents.
return fmt.Errorf(
"won't vote for proposal, no parent state for this proposal",
)
}
// safetyRules performs all the checks to decide whether to vote for this
// state or not.
ownVote, err := e.safetyRules.ProduceVote(proposal, curRank)
if err != nil {
if !models.IsNoVoteError(err) {
// unknown error, exit the event loop
return fmt.Errorf("could not produce vote: %w", err)
}
e.tracer.Trace(
"should not vote for this state",
consensus.Uint64Param("state_rank", proposal.State.Rank),
consensus.IdentityParam("state_id", proposal.State.Identifier),
consensus.Uint64Param(
"parent_rank",
proposal.State.ParentQuorumCertificate.GetRank(),
),
consensus.IdentityParam(
"parent_id",
proposal.State.ParentQuorumCertificate.Identity(),
),
consensus.IdentityParam("signer", proposal.State.ProposerID[:]),
)
return nil
}
e.tracer.Trace(
"forwarding vote to compliance engine",
consensus.Uint64Param("state_rank", proposal.State.Rank),
consensus.IdentityParam("state_id", proposal.State.Identifier),
consensus.Uint64Param(
"parent_rank",
proposal.State.ParentQuorumCertificate.GetRank(),
),
consensus.IdentityParam(
"parent_id",
proposal.State.ParentQuorumCertificate.Identity(),
),
consensus.IdentityParam("signer", proposal.State.ProposerID[:]),
)
e.notifier.OnOwnVote(ownVote, nextLeader)
return nil
}
// Type used to satisfy generic arguments in compile-time type assertion check
type nilUnique struct{}
// GetSignature implements models.Unique.
func (n *nilUnique) GetSignature() []byte {
panic("unimplemented")
}
// GetTimestamp implements models.Unique.
func (n *nilUnique) GetTimestamp() uint64 {
panic("unimplemented")
}
// Source implements models.Unique.
func (n *nilUnique) Source() models.Identity {
panic("unimplemented")
}
// Clone implements models.Unique.
func (n *nilUnique) Clone() models.Unique {
panic("unimplemented")
}
// GetRank implements models.Unique.
func (n *nilUnique) GetRank() uint64 {
panic("unimplemented")
}
// Identity implements models.Unique.
func (n *nilUnique) Identity() models.Identity {
panic("unimplemented")
}
var _ models.Unique = (*nilUnique)(nil)
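The `var _ models.Unique = (*nilUnique)(nil)` line above uses Go's standard compile-time interface-satisfaction idiom. A self-contained sketch of the pattern (with an illustrative `Unique` interface and `stub` type, not the ones from this repository):

```go
package main

import "fmt"

// Unique is an illustrative interface standing in for models.Unique.
type Unique interface {
	GetRank() uint64
	Identity() string
}

// stub plays the role of nilUnique: methods exist only to satisfy the
// interface for a compile-time assertion, never to be called.
type stub struct{}

func (s *stub) GetRank() uint64  { return 0 }
func (s *stub) Identity() string { return "" }

// The zero-cost assertion: if stub stops implementing Unique (e.g. a method
// is removed or its signature drifts), this line becomes a build error
// instead of a runtime surprise.
var _ Unique = (*stub)(nil)

func main() {
	fmt.Println("stub satisfies Unique")
}
```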

File diff suppressed because it is too large


@ -0,0 +1,382 @@
package eventloop
import (
"context"
"fmt"
"time"
"source.quilibrium.com/quilibrium/monorepo/consensus"
"source.quilibrium.com/quilibrium/monorepo/consensus/models"
"source.quilibrium.com/quilibrium/monorepo/consensus/tracker"
"source.quilibrium.com/quilibrium/monorepo/lifecycle"
)
// queuedProposal is a helper structure that is used to transmit a proposal
// over a channel. It carries an insertionTime that is used to measure how
// long the proposal waited between being queued and actually being processed
// by the `EventHandler`.
type queuedProposal[StateT models.Unique, VoteT models.Unique] struct {
proposal *models.SignedProposal[StateT, VoteT]
insertionTime time.Time
}
// EventLoop buffers all incoming events to the hotstuff EventHandler, and feeds
// EventHandler one event at a time.
type EventLoop[StateT models.Unique, VoteT models.Unique] struct {
*lifecycle.ComponentManager
eventHandler consensus.EventHandler[StateT, VoteT]
proposals chan queuedProposal[StateT, VoteT]
newestSubmittedTimeoutCertificate *tracker.NewestTCTracker
newestSubmittedQc *tracker.NewestQCTracker
newestSubmittedPartialTimeoutCertificate *tracker.NewestPartialTimeoutCertificateTracker
tcSubmittedNotifier chan struct{}
qcSubmittedNotifier chan struct{}
partialTimeoutCertificateCreatedNotifier chan struct{}
startTime time.Time
tracer consensus.TraceLogger
}
var _ consensus.EventLoop[*nilUnique, *nilUnique] = (*EventLoop[*nilUnique, *nilUnique])(nil)
// NewEventLoop creates an instance of EventLoop.
func NewEventLoop[StateT models.Unique, VoteT models.Unique](
tracer consensus.TraceLogger,
eventHandler consensus.EventHandler[StateT, VoteT],
startTime time.Time,
) (*EventLoop[StateT, VoteT], error) {
// We use a buffered channel to avoid blocking the caller. We can't afford
// to drop messages, since that undermines liveness, but we also want to
// avoid blocking the compliance engine. We assume that we can process
// proposals faster than the compliance engine feeds them; in the worst case
// we fill the buffer and stall the compliance engine worker, but that should
// only happen if the compliance engine receives a large number of states in
// a short period of time (when catching up, for instance).
proposals := make(chan queuedProposal[StateT, VoteT], 1000)
el := &EventLoop[StateT, VoteT]{
tracer: tracer,
eventHandler: eventHandler,
proposals: proposals,
tcSubmittedNotifier: make(chan struct{}, 1),
qcSubmittedNotifier: make(chan struct{}, 1),
partialTimeoutCertificateCreatedNotifier: make(chan struct{}, 1),
newestSubmittedTimeoutCertificate: tracker.NewNewestTCTracker(),
newestSubmittedQc: tracker.NewNewestQCTracker(),
newestSubmittedPartialTimeoutCertificate: tracker.NewNewestPartialTimeoutCertificateTracker(),
startTime: startTime,
}
componentBuilder := lifecycle.NewComponentManagerBuilder()
componentBuilder.AddWorker(func(
ctx lifecycle.SignalerContext,
ready lifecycle.ReadyFunc,
) {
ready()
// launch when scheduled by el.startTime
el.tracer.Trace(fmt.Sprintf("event loop will start at: %v", el.startTime))
select {
case <-ctx.Done():
return
case <-time.After(time.Until(el.startTime)):
el.tracer.Trace("starting event loop")
err := el.loop(ctx)
if err != nil {
el.tracer.Error("irrecoverable event loop error", err)
ctx.Throw(err)
}
}
})
el.ComponentManager = componentBuilder.Build()
return el, nil
}
// loop executes the core HotStuff logic in a single thread. It picks inputs
// from the various inbound channels and executes the EventHandler's respective
// method for processing this input. During normal operations, the EventHandler
// is not expected to return any errors, as all inputs are assumed to be fully
// validated (or produced by trusted components within the node). Therefore,
// any error is a symptom of state corruption, bugs or violation of API
// contracts. In all cases, continuing operations is not an option, i.e. we exit
// the event loop and return an exception.
func (el *EventLoop[StateT, VoteT]) loop(ctx context.Context) error {
err := el.eventHandler.Start(ctx)
if err != nil {
return fmt.Errorf("could not start event handler: %w", err)
}
shutdownSignaled := ctx.Done()
timeoutCertificates := el.tcSubmittedNotifier
quorumCertificates := el.qcSubmittedNotifier
partialTCs := el.partialTimeoutCertificateCreatedNotifier
for {
// Give timeout events priority to be processed first. This prevents
// attacks from malicious nodes that attempt to block honest nodes'
// pacemakers from progressing by sending other events.
timeoutChannel := el.eventHandler.TimeoutChannel()
// the first select makes sure we process timeouts with priority
select {
// if we receive the shutdown signal, exit the loop
case <-shutdownSignaled:
el.tracer.Trace("shutting down event loop")
return nil
// processing a timeout or a partial TC event is top priority, since they
// allow the node to contribute to TC aggregation when replicas can't make
// progress on the happy path
case <-timeoutChannel:
el.tracer.Trace("received timeout")
err = el.eventHandler.OnLocalTimeout()
if err != nil {
return fmt.Errorf("could not process timeout: %w", err)
}
// At this point, we have received and processed an event from the timeout
// channel. A timeout also means that we have made progress: a new timeout
// will have been started and el.eventHandler.TimeoutChannel() will be a
// NEW channel (for the just-started timeout). It is very important to
// restart the for loop from the beginning, to continue with the new
// timeout channel!
continue
case <-partialTCs:
el.tracer.Trace("received partial timeout")
err = el.eventHandler.OnPartialTimeoutCertificateCreated(
el.newestSubmittedPartialTimeoutCertificate.NewestPartialTimeoutCertificate(),
)
if err != nil {
return fmt.Errorf("could not process partial created TC event: %w", err)
}
// At this point, we have received and processed a partial TC event. It
// could have resulted in several scenarios:
// 1. a rank change with potential voting or proposal creation
// 2. a created and broadcast timeout state
// 3. QC and TC didn't result in a rank change and no timeout was created,
// since we have already timed out or the partial TC was created for a rank
// different from the current one.
continue
default:
el.tracer.Trace("non-priority event")
// fall through to non-priority events
}
// select for state headers/QCs here
select {
// same as before
case <-shutdownSignaled:
el.tracer.Trace("shutting down event loop")
return nil
// same as before
case <-timeoutChannel:
el.tracer.Trace("received timeout")
err = el.eventHandler.OnLocalTimeout()
if err != nil {
return fmt.Errorf("could not process timeout: %w", err)
}
// if we have a new proposal, process it
case queuedItem := <-el.proposals:
el.tracer.Trace("received proposal")
proposal := queuedItem.proposal
err = el.eventHandler.OnReceiveProposal(proposal)
if err != nil {
return fmt.Errorf(
"could not process proposal %x: %w",
proposal.State.Identifier,
err,
)
}
el.tracer.Trace(
"state proposal has been processed successfully",
consensus.Uint64Param("rank", proposal.State.Rank),
)
// if we have a new QC, process it
case <-quorumCertificates:
el.tracer.Trace("received quorum certificate")
err = el.eventHandler.OnReceiveQuorumCertificate(
*el.newestSubmittedQc.NewestQC(),
)
if err != nil {
return fmt.Errorf("could not process QC: %w", err)
}
// if we have a new TC, process it
case <-timeoutCertificates:
el.tracer.Trace("received timeout certificate")
err = el.eventHandler.OnReceiveTimeoutCertificate(
*el.newestSubmittedTimeoutCertificate.NewestTC(),
)
if err != nil {
return fmt.Errorf("could not process TC: %w", err)
}
case <-partialTCs:
el.tracer.Trace("received partial timeout certificate")
err = el.eventHandler.OnPartialTimeoutCertificateCreated(
el.newestSubmittedPartialTimeoutCertificate.NewestPartialTimeoutCertificate(),
)
if err != nil {
return fmt.Errorf("could not process partial created TC event: %w", err)
}
}
}
}
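The two-stage select in `loop` (a non-blocking select with `default` for priority events, then a blocking select over all inputs) is a standard Go pattern for prioritized channels. A hedged, minimal sketch with illustrative names (`nextEvent`, string payloads) rather than the real event types:

```go
package main

import "fmt"

// nextEvent mirrors the structure of loop(): stage 1 drains priority events
// (timeouts) without blocking; only if none is pending does stage 2 block
// on the full set of event sources.
func nextEvent(timeouts, proposals chan string) string {
	// stage 1: priority events only; the default case means we never block here
	select {
	case t := <-timeouts:
		return t
	default:
	}
	// stage 2: block until any event arrives
	select {
	case t := <-timeouts:
		return t
	case p := <-proposals:
		return p
	}
}

func main() {
	timeouts := make(chan string, 1)
	proposals := make(chan string, 1)
	proposals <- "proposal"
	timeouts <- "timeout"
	// even though the proposal was queued first, the timeout is served first
	fmt.Println(nextEvent(timeouts, proposals))
	fmt.Println(nextEvent(timeouts, proposals))
}
```

Note that a single select over both channels would pick pseudo-randomly among ready cases; the extra non-blocking stage is what guarantees timeouts win whenever one is pending.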
// SubmitProposal pushes the received state to the proposals channel
func (el *EventLoop[StateT, VoteT]) SubmitProposal(
proposal *models.SignedProposal[StateT, VoteT],
) {
queueItem := queuedProposal[StateT, VoteT]{
proposal: proposal,
insertionTime: time.Now(),
}
select {
case el.proposals <- queueItem:
case <-el.ComponentManager.ShutdownSignal():
return
}
}
// onTrustedQC pushes the received QC (which MUST be validated) to the
// quorumCertificates channel
func (el *EventLoop[StateT, VoteT]) onTrustedQC(qc *models.QuorumCertificate) {
if el.newestSubmittedQc.Track(qc) {
select {
case el.qcSubmittedNotifier <- struct{}{}:
default:
}
}
}
// onTrustedTC pushes the received TC (which MUST be validated) to the
// timeoutCertificates channel
func (el *EventLoop[StateT, VoteT]) onTrustedTC(tc *models.TimeoutCertificate) {
if el.newestSubmittedTimeoutCertificate.Track(tc) {
select {
case el.tcSubmittedNotifier <- struct{}{}:
default:
}
} else {
qc := (*tc).GetLatestQuorumCert()
if el.newestSubmittedQc.Track(&qc) {
select {
case el.qcSubmittedNotifier <- struct{}{}:
default:
}
}
}
}
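`onTrustedQC` and `onTrustedTC` both combine a newest-value tracker with a non-blocking send into a capacity-1 notifier channel, so repeated submissions coalesce into at most one pending wake-up while the consumer always reads the latest value from the tracker. A minimal sketch under those assumptions (`tracker` here is illustrative, not the real `tracker` package):

```go
package main

import "fmt"

// tracker keeps only the newest rank seen and a capacity-1 signal channel.
type tracker struct {
	newestRank uint64
	notify     chan struct{}
}

func newTracker() *tracker {
	return &tracker{notify: make(chan struct{}, 1)}
}

// track updates the newest rank and signals the consumer; stale ranks are
// ignored, and the non-blocking send coalesces repeated signals into at
// most one pending notification.
func (t *tracker) track(rank uint64) bool {
	if t.newestRank != 0 && rank <= t.newestRank {
		return false // stale: an equal-or-newer value is already tracked
	}
	t.newestRank = rank
	select {
	case t.notify <- struct{}{}: // wake the consumer
	default: // a wake-up is already pending; coalesce
	}
	return true
}

func main() {
	tr := newTracker()
	fmt.Println(tr.track(5))    // newer: tracked and signaled
	fmt.Println(tr.track(3))    // stale: ignored
	fmt.Println(tr.track(7))    // newer again; the signal coalesces
	fmt.Println(len(tr.notify)) // at most one pending notification
}
```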
// OnTimeoutCertificateConstructedFromTimeouts pushes the received TC to the
// timeoutCertificates channel
func (el *EventLoop[StateT, VoteT]) OnTimeoutCertificateConstructedFromTimeouts(
tc models.TimeoutCertificate,
) {
el.onTrustedTC(&tc)
}
// OnPartialTimeoutCertificateCreated creates a
// consensus.PartialTimeoutCertificateCreated payload and pushes it into the
// partialTimeoutCertificateCreated buffered channel for further processing
// by the EventHandler. The send is non-blocking: if a notification is
// already pending, the new event is coalesced with it.
func (el *EventLoop[StateT, VoteT]) OnPartialTimeoutCertificateCreated(
rank uint64,
newestQC models.QuorumCertificate,
previousRankTimeoutCert models.TimeoutCertificate,
) {
event := &consensus.PartialTimeoutCertificateCreated{
Rank: rank,
NewestQuorumCertificate: newestQC,
PriorRankTimeoutCertificate: previousRankTimeoutCert,
}
if el.newestSubmittedPartialTimeoutCertificate.Track(event) {
select {
case el.partialTimeoutCertificateCreatedNotifier <- struct{}{}:
default:
}
}
}
// OnNewQuorumCertificateDiscovered pushes already validated QCs that were
// submitted from TimeoutAggregator to the event handler
func (el *EventLoop[StateT, VoteT]) OnNewQuorumCertificateDiscovered(
qc models.QuorumCertificate,
) {
el.onTrustedQC(&qc)
}
// OnNewTimeoutCertificateDiscovered pushes already validated TCs that were
// submitted from TimeoutAggregator to the event handler
func (el *EventLoop[StateT, VoteT]) OnNewTimeoutCertificateDiscovered(
tc models.TimeoutCertificate,
) {
el.onTrustedTC(&tc)
}
// OnQuorumCertificateConstructedFromVotes implements
// consensus.VoteCollectorConsumer and pushes received qc into processing
// pipeline.
func (el *EventLoop[StateT, VoteT]) OnQuorumCertificateConstructedFromVotes(
qc models.QuorumCertificate,
) {
el.onTrustedQC(&qc)
}
// OnTimeoutProcessed implements consensus.TimeoutCollectorConsumer and is a
// no-op
func (el *EventLoop[StateT, VoteT]) OnTimeoutProcessed(
timeout *models.TimeoutState[VoteT],
) {
}
// OnVoteProcessed implements consensus.VoteCollectorConsumer and is a no-op
func (el *EventLoop[StateT, VoteT]) OnVoteProcessed(vote *VoteT) {}
// Type used to satisfy generic arguments in compile-time type assertion check
type nilUnique struct{}
// GetSignature implements models.Unique.
func (n *nilUnique) GetSignature() []byte {
panic("unimplemented")
}
// GetTimestamp implements models.Unique.
func (n *nilUnique) GetTimestamp() uint64 {
panic("unimplemented")
}
// Source implements models.Unique.
func (n *nilUnique) Source() models.Identity {
panic("unimplemented")
}
// Clone implements models.Unique.
func (n *nilUnique) Clone() models.Unique {
panic("unimplemented")
}
// GetRank implements models.Unique.
func (n *nilUnique) GetRank() uint64 {
panic("unimplemented")
}
// Identity implements models.Unique.
func (n *nilUnique) Identity() models.Identity {
panic("unimplemented")
}
var _ models.Unique = (*nilUnique)(nil)


@ -0,0 +1,262 @@
package eventloop
import (
"context"
"sync"
"testing"
"time"
"github.com/stretchr/testify/mock"
"github.com/stretchr/testify/require"
"github.com/stretchr/testify/suite"
"go.uber.org/atomic"
"source.quilibrium.com/quilibrium/monorepo/consensus"
"source.quilibrium.com/quilibrium/monorepo/consensus/helper"
"source.quilibrium.com/quilibrium/monorepo/consensus/mocks"
"source.quilibrium.com/quilibrium/monorepo/consensus/models"
"source.quilibrium.com/quilibrium/monorepo/lifecycle/unittest"
)
// TestEventLoop performs unit testing of event loop, checks if submitted events are propagated
// to event handler as well as handling of timeouts.
func TestEventLoop(t *testing.T) {
suite.Run(t, new(EventLoopTestSuite))
}
type EventLoopTestSuite struct {
suite.Suite
eh *mocks.EventHandler[*helper.TestState, *helper.TestVote]
cancel context.CancelFunc
eventLoop *EventLoop[*helper.TestState, *helper.TestVote]
}
func (s *EventLoopTestSuite) SetupTest() {
s.eh = mocks.NewEventHandler[*helper.TestState, *helper.TestVote](s.T())
s.eh.On("Start", mock.Anything).Return(nil).Maybe()
s.eh.On("TimeoutChannel").Return(make(<-chan time.Time, 1)).Maybe()
s.eh.On("OnLocalTimeout").Return(nil).Maybe()
eventLoop, err := NewEventLoop(helper.Logger(), s.eh, time.Time{})
require.NoError(s.T(), err)
s.eventLoop = eventLoop
ctx, cancel := context.WithCancel(context.Background())
s.cancel = cancel
signalerCtx := unittest.NewMockSignalerContext(s.T(), ctx)
s.eventLoop.Start(signalerCtx)
unittest.RequireCloseBefore(s.T(), s.eventLoop.Ready(), 100*time.Millisecond, "event loop not started")
}
func (s *EventLoopTestSuite) TearDownTest() {
s.cancel()
unittest.RequireCloseBefore(s.T(), s.eventLoop.Done(), 100*time.Millisecond, "event loop not stopped")
}
// TestReadyDone tests if event loop stops internal worker thread
func (s *EventLoopTestSuite) TestReadyDone() {
time.Sleep(1 * time.Second)
go func() {
s.cancel()
}()
unittest.RequireCloseBefore(s.T(), s.eventLoop.Done(), 100*time.Millisecond, "event loop not stopped")
}
// Test_SubmitProposal tests that a submitted proposal is eventually sent to the event handler for processing
func (s *EventLoopTestSuite) Test_SubmitProposal() {
proposal := helper.MakeSignedProposal[*helper.TestState, *helper.TestVote]()
processed := atomic.NewBool(false)
s.eh.On("OnReceiveProposal", proposal).Run(func(args mock.Arguments) {
processed.Store(true)
}).Return(nil).Once()
s.eventLoop.SubmitProposal(proposal)
require.Eventually(s.T(), processed.Load, time.Millisecond*100, time.Millisecond*10)
}
// Test_SubmitQC tests that submitted QC is eventually sent to `EventHandler.OnReceiveQuorumCertificate` for processing
func (s *EventLoopTestSuite) Test_SubmitQC() {
// qcIngestionFunction is the archetype for EventLoop.OnQuorumCertificateConstructedFromVotes and EventLoop.OnNewQuorumCertificateDiscovered
type qcIngestionFunction func(models.QuorumCertificate)
testQCIngestionFunction := func(f qcIngestionFunction, qcRank uint64) {
qc := helper.MakeQC(helper.WithQCRank(qcRank))
processed := atomic.NewBool(false)
s.eh.On("OnReceiveQuorumCertificate", qc).Run(func(args mock.Arguments) {
processed.Store(true)
}).Return(nil).Once()
f(qc)
require.Eventually(s.T(), processed.Load, time.Millisecond*100, time.Millisecond*10)
}
s.Run("QCs handed to EventLoop.OnQuorumCertificateConstructedFromVotes are forwarded to EventHandler", func() {
testQCIngestionFunction(s.eventLoop.OnQuorumCertificateConstructedFromVotes, 100)
})
s.Run("QCs handed to EventLoop.OnNewQuorumCertificateDiscovered are forwarded to EventHandler", func() {
testQCIngestionFunction(s.eventLoop.OnNewQuorumCertificateDiscovered, 101)
})
}
// Test_SubmitTC tests that submitted TC is eventually sent to `EventHandler.OnReceiveTimeoutCertificate` for processing
func (s *EventLoopTestSuite) Test_SubmitTC() {
// tcIngestionFunction is the archetype for EventLoop.OnTimeoutCertificateConstructedFromTimeouts and EventLoop.OnNewTimeoutCertificateDiscovered
type tcIngestionFunction func(models.TimeoutCertificate)
testTCIngestionFunction := func(f tcIngestionFunction, tcRank uint64) {
tc := helper.MakeTC(helper.WithTCRank(tcRank))
processed := atomic.NewBool(false)
s.eh.On("OnReceiveTimeoutCertificate", tc).Run(func(args mock.Arguments) {
processed.Store(true)
}).Return(nil).Once()
f(tc)
require.Eventually(s.T(), processed.Load, time.Millisecond*100, time.Millisecond*10)
}
s.Run("TCs handed to EventLoop.OnTimeoutCertificateConstructedFromTimeouts are forwarded to EventHandler", func() {
testTCIngestionFunction(s.eventLoop.OnTimeoutCertificateConstructedFromTimeouts, 100)
})
s.Run("TCs handed to EventLoop.OnNewTimeoutCertificateDiscovered are forwarded to EventHandler", func() {
testTCIngestionFunction(s.eventLoop.OnNewTimeoutCertificateDiscovered, 101)
})
}
// Test_SubmitTC_IngestNewestQC tests that included QC in TC is eventually sent to `EventHandler.OnReceiveQuorumCertificate` for processing
func (s *EventLoopTestSuite) Test_SubmitTC_IngestNewestQC() {
// tcIngestionFunction is the archetype for EventLoop.OnTimeoutCertificateConstructedFromTimeouts and EventLoop.OnNewTimeoutCertificateDiscovered
type tcIngestionFunction func(models.TimeoutCertificate)
testTCIngestionFunction := func(f tcIngestionFunction, tcRank, qcRank uint64) {
tc := helper.MakeTC(helper.WithTCRank(tcRank),
helper.WithTCNewestQC(helper.MakeQC(helper.WithQCRank(qcRank))))
processed := atomic.NewBool(false)
s.eh.On("OnReceiveQuorumCertificate", tc.GetLatestQuorumCert()).Run(func(args mock.Arguments) {
processed.Store(true)
}).Return(nil).Once()
f(tc)
require.Eventually(s.T(), processed.Load, time.Millisecond*100, time.Millisecond*10)
}
// process initial TC, this will track the newest TC
s.eh.On("OnReceiveTimeoutCertificate", mock.Anything).Return(nil).Once()
s.eventLoop.OnTimeoutCertificateConstructedFromTimeouts(helper.MakeTC(
helper.WithTCRank(100),
helper.WithTCNewestQC(
helper.MakeQC(
helper.WithQCRank(80),
),
),
))
s.Run("QCs handed to EventLoop.OnTimeoutCertificateConstructedFromTimeouts are forwarded to EventHandler", func() {
testTCIngestionFunction(s.eventLoop.OnTimeoutCertificateConstructedFromTimeouts, 100, 99)
})
s.Run("QCs handed to EventLoop.OnNewTimeoutCertificateDiscovered are forwarded to EventHandler", func() {
testTCIngestionFunction(s.eventLoop.OnNewTimeoutCertificateDiscovered, 100, 100)
})
}
// Test_OnPartialTimeoutCertificateCreated tests that event loop delivers partialTimeoutCertificateCreated events to event handler.
func (s *EventLoopTestSuite) Test_OnPartialTimeoutCertificateCreated() {
rank := uint64(1000)
newestQC := helper.MakeQC(helper.WithQCRank(rank - 10))
previousRankTimeoutCert := helper.MakeTC(helper.WithTCRank(rank-1), helper.WithTCNewestQC(newestQC))
processed := atomic.NewBool(false)
partialTimeoutCertificateCreated := &consensus.PartialTimeoutCertificateCreated{
Rank: rank,
NewestQuorumCertificate: newestQC,
PriorRankTimeoutCertificate: previousRankTimeoutCert,
}
s.eh.On("OnPartialTimeoutCertificateCreated", partialTimeoutCertificateCreated).Run(func(args mock.Arguments) {
processed.Store(true)
}).Return(nil).Once()
s.eventLoop.OnPartialTimeoutCertificateCreated(rank, newestQC, previousRankTimeoutCert)
require.Eventually(s.T(), processed.Load, time.Millisecond*100, time.Millisecond*10)
}
// TestEventLoop_Timeout tests that event loop delivers timeout events to event handler under pressure
func TestEventLoop_Timeout(t *testing.T) {
eh := &mocks.EventHandler[*helper.TestState, *helper.TestVote]{}
processed := atomic.NewBool(false)
eh.On("Start", mock.Anything).Return(nil).Once()
eh.On("OnReceiveQuorumCertificate", mock.Anything).Return(nil).Maybe()
eh.On("OnReceiveProposal", mock.Anything).Return(nil).Maybe()
eh.On("OnLocalTimeout").Run(func(args mock.Arguments) {
processed.Store(true)
}).Return(nil).Once()
eventLoop, err := NewEventLoop(helper.Logger(), eh, time.Time{})
require.NoError(t, err)
eh.On("TimeoutChannel").Return(time.After(100 * time.Millisecond))
ctx, cancel := context.WithCancel(context.Background())
signalerCtx := unittest.NewMockSignalerContext(t, ctx)
eventLoop.Start(signalerCtx)
unittest.RequireCloseBefore(t, eventLoop.Ready(), 100*time.Millisecond, "event loop not started")
time.Sleep(10 * time.Millisecond)
var wg sync.WaitGroup
wg.Add(2)
// spam with proposals and QCs
go func() {
defer wg.Done()
for !processed.Load() {
qc := helper.MakeQC()
eventLoop.OnQuorumCertificateConstructedFromVotes(qc)
}
}()
go func() {
defer wg.Done()
for !processed.Load() {
eventLoop.SubmitProposal(helper.MakeSignedProposal[*helper.TestState, *helper.TestVote]())
}
}()
require.Eventually(t, processed.Load, time.Millisecond*200, time.Millisecond*10)
unittest.AssertReturnsBefore(t, func() { wg.Wait() }, time.Millisecond*200)
cancel()
unittest.RequireCloseBefore(t, eventLoop.Done(), 100*time.Millisecond, "event loop not stopped")
}
// TestReadyDoneWithStartTime tests that the event loop starts correctly and
// delays processing until the provided startTime when that argument is used
func TestReadyDoneWithStartTime(t *testing.T) {
eh := &mocks.EventHandler[*helper.TestState, *helper.TestVote]{}
eh.On("Start", mock.Anything).Return(nil)
eh.On("TimeoutChannel").Return(make(<-chan time.Time, 1))
eh.On("OnLocalTimeout").Return(nil)
startTimeDuration := 2 * time.Second
startTime := time.Now().Add(startTimeDuration)
eventLoop, err := NewEventLoop(helper.Logger(), eh, startTime)
require.NoError(t, err)
done := make(chan struct{})
eh.On("OnReceiveProposal", mock.Anything).Run(func(args mock.Arguments) {
require.True(t, time.Now().After(startTime))
close(done)
}).Return(nil).Once()
ctx, cancel := context.WithCancel(context.Background())
signalerCtx := unittest.NewMockSignalerContext(t, ctx)
eventLoop.Start(signalerCtx)
unittest.RequireCloseBefore(t, eventLoop.Ready(), 100*time.Millisecond, "event loop not started")
eventLoop.SubmitProposal(helper.MakeSignedProposal[*helper.TestState, *helper.TestVote]())
unittest.RequireCloseBefore(t, done, startTimeDuration+100*time.Millisecond, "proposal wasn't received")
cancel()
unittest.RequireCloseBefore(t, eventLoop.Done(), 100*time.Millisecond, "event loop not stopped")
}

File diff suppressed because it is too large.

(new file, +394 lines)
package forest
import (
"fmt"
"source.quilibrium.com/quilibrium/monorepo/consensus/models"
)
// LevelledForest contains multiple trees (which is a potentially disconnected
// planar graph). Each vertex in the graph has a level and a hash. A vertex can
// only have one parent, which must have strictly smaller level. A vertex can
// have multiple children, all with strictly larger level.
// A LevelledForest provides the ability to prune all vertices up to a specific
// level. A tree whose root is below the pruning threshold might decompose into
// multiple disconnected subtrees as a result of pruning.
// By design, the LevelledForest does _not_ touch the parent information for
// vertices that are on the lowest retained level. Thereby, it is possible to
// initialize the LevelledForest with a root vertex at the lowest retained
// level, without this root needing to have a parent. Furthermore, the root
// vertex can be at level 0 and in absence of a parent still satisfy the
// condition that any parent must be of lower level (mathematical principle of
// vacuous truth) without the implementation needing to worry about unsigned
// integer underflow.
//
// LevelledForest is NOT safe for concurrent use by multiple goroutines.
type LevelledForest struct {
vertices VertexSet
verticesAtLevel map[uint64]VertexList
size uint64
LowestLevel uint64
}
type VertexList []*vertexContainer
type VertexSet map[models.Identity]*vertexContainer
// vertexContainer holds information about a tree vertex. Internally, we
// distinguish between
// - FULL container: has non-nil value for vertex.
// Used for vertices, which have been added to the tree.
// - EMPTY container: has NIL value for vertex.
// Used for vertices, which have NOT been added to the tree, but are
// referenced by vertices in the tree. An empty container is converted to a
// full container when the respective vertex is added to the tree
type vertexContainer struct {
id models.Identity
level uint64
children VertexList
// the following are only set if the state is actually known
vertex Vertex
}
// NewLevelledForest initializes a LevelledForest
func NewLevelledForest(lowestLevel uint64) *LevelledForest {
return &LevelledForest{
vertices: make(VertexSet),
verticesAtLevel: make(map[uint64]VertexList),
LowestLevel: lowestLevel,
}
}
// PruneUpToLevel prunes all vertices UP TO but NOT INCLUDING `level`.
func (f *LevelledForest) PruneUpToLevel(level uint64) error {
if level < f.LowestLevel {
return fmt.Errorf(
"new lowest level %d cannot be smaller than previous last retained level %d",
level,
f.LowestLevel,
)
}
if len(f.vertices) == 0 {
f.LowestLevel = level
return nil
}
elementsPruned := 0
// to optimize pruning of large level-ranges, we compare:
// * the number of levels for which we have stored vertex containers:
// len(f.verticesAtLevel)
// * the number of levels that need to be pruned: level-f.LowestLevel
// We iterate over the dimension which is smaller.
if uint64(len(f.verticesAtLevel)) < level-f.LowestLevel {
for l, vertices := range f.verticesAtLevel {
if l < level {
for _, v := range vertices {
if !f.isEmptyContainer(v) {
elementsPruned++
}
delete(f.vertices, v.id)
}
delete(f.verticesAtLevel, l)
}
}
} else {
for l := f.LowestLevel; l < level; l++ {
verticesAtLevel := f.verticesAtLevel[l]
for _, v := range verticesAtLevel {
if !f.isEmptyContainer(v) {
elementsPruned++
}
delete(f.vertices, v.id)
}
delete(f.verticesAtLevel, l)
}
}
f.LowestLevel = level
f.size -= uint64(elementsPruned)
return nil
}
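PruneUpToLevel's branch on the smaller dimension (stored levels vs. levels in the prune range) generalizes to any level-indexed map. A standalone sketch with toy types (`pruneBelow` and the string payloads are illustrative, not the forest's API):

```go
package main

import "fmt"

// pruneBelow deletes all entries of byLevel with level < cutoff, iterating
// over whichever dimension is smaller: the set of stored levels or the
// [lowest, cutoff) range being pruned. It returns the new lowest level.
func pruneBelow(byLevel map[uint64][]string, lowest, cutoff uint64) uint64 {
	if uint64(len(byLevel)) < cutoff-lowest {
		// fewer stored levels than levels to prune: scan the map
		for l := range byLevel {
			if l < cutoff {
				delete(byLevel, l)
			}
		}
	} else {
		// the prune range is smaller: walk it directly
		for l := lowest; l < cutoff; l++ {
			delete(byLevel, l)
		}
	}
	return cutoff
}

func main() {
	m := map[uint64][]string{1: {"a"}, 5: {"b"}, 9: {"c"}}
	lowest := pruneBelow(m, 0, 6) // levels 1 and 5 are pruned; 9 survives
	fmt.Println(lowest, len(m))
}
```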
// HasVertex returns true iff full vertex exists.
func (f *LevelledForest) HasVertex(id models.Identity) bool {
container, exists := f.vertices[id]
return exists && !f.isEmptyContainer(container)
}
// isEmptyContainer returns true iff the vertex container is empty, i.e. the
// full vertex itself has not been added
func (f *LevelledForest) isEmptyContainer(
vertexContainer *vertexContainer,
) bool {
return vertexContainer.vertex == nil
}
// GetVertex returns (<full vertex>, true) if the vertex with `id` was found,
// and (nil, false) if the full vertex is unknown.
func (f *LevelledForest) GetVertex(id models.Identity) (Vertex, bool) {
container, exists := f.vertices[id]
if !exists || f.isEmptyContainer(container) {
return nil, false
}
return container.vertex, true
}
// GetSize returns the total number of vertices above the lowest pruned level.
// Note this call is not concurrent-safe, caller is responsible to ensure
// concurrency safety.
func (f *LevelledForest) GetSize() uint64 {
return f.size
}
// GetChildren returns a VertexIterator to iterate over the children.
// An empty VertexIterator is returned if no vertices are known whose parent
// is `id`.
func (f *LevelledForest) GetChildren(id models.Identity) VertexIterator {
// if vertex does not exist, container will be nil
if container, ok := f.vertices[id]; ok {
return newVertexIterator(container.children)
}
return newVertexIterator(nil) // VertexIterator gracefully handles nil slices
}
// GetNumberOfChildren returns number of children of given vertex
func (f *LevelledForest) GetNumberOfChildren(id models.Identity) int {
// if vertex does not exist, container is the default zero value for
// vertexContainer, which contains a nil-slice for its children
container := f.vertices[id]
num := 0
for _, child := range container.children {
if child.vertex != nil {
num++
}
}
return num
}
// GetVerticesAtLevel returns a VertexIterator to iterate over the Vertices at
// the specified level. An empty VertexIterator is returned, if no vertices are
// known at the specified level. If `level` is already pruned, an empty
// VertexIterator is returned.
func (f *LevelledForest) GetVerticesAtLevel(level uint64) VertexIterator {
return newVertexIterator(f.verticesAtLevel[level])
}
// GetNumberOfVerticesAtLevel returns the number of full vertices at given
// level. A full vertex is a vertex that was explicitly added to the forest. In
// contrast, an empty vertex container represents a vertex that is _referenced_
// as parent by one or more full vertices, but has not been added itself to the
// forest. We only count vertices that have been explicitly added to the forest
// and not yet pruned. (In comparison, we do _not_ count vertices that are
// _referenced_ as parent by vertices, but have not been added themselves).
func (f *LevelledForest) GetNumberOfVerticesAtLevel(level uint64) int {
num := 0
for _, container := range f.verticesAtLevel[level] {
if !f.isEmptyContainer(container) {
num++
}
}
return num
}
// AddVertex adds vertex to forest if vertex is within non-pruned levels
// Handles repeated addition of same vertex (keeps first added vertex).
// If vertex is at or below pruning level: method is NoOp.
// UNVALIDATED:
// requires that vertex would pass validity check LevelledForest.VerifyVertex(vertex).
func (f *LevelledForest) AddVertex(vertex Vertex) {
if vertex.Level() < f.LowestLevel {
return
}
container := f.getOrCreateVertexContainer(vertex.VertexID(), vertex.Level())
if !f.isEmptyContainer(container) { // the vertex was already stored
return
}
// container is empty, i.e. full vertex is new and should be stored in container
container.vertex = vertex // add vertex to container
f.registerWithParent(container)
f.size += 1
}
// registerWithParent retrieves the parent and registers the given vertex as a
// child. For a state whose level is equal to the pruning threshold, we do not
// inspect the parent at all. Thereby, this implementation can gracefully handle
// the corner case where the tree has a defined end vertex (distinct root). This
// is commonly the case in statechain (genesis, or spork root state).
// Mathematically, this means that this library can also represent bounded
// trees.
func (f *LevelledForest) registerWithParent(vertexContainer *vertexContainer) {
// caution, necessary for handling bounded trees:
// For root vertex (genesis state) the rank is _exactly_ at LowestLevel. For
// these states, a parent does not exist. In the implementation, we
// deliberately do not call the `Parent()` method, as its output is
// conceptually undefined. Thereby, we can gracefully handle the corner case
// of
// vertex.level = vertex.Parent().Level = LowestLevel = 0
if vertexContainer.level <= f.LowestLevel { // check (a)
return
}
_, parentRank := vertexContainer.vertex.Parent()
if parentRank < f.LowestLevel {
return
}
parentContainer := f.getOrCreateVertexContainer(
vertexContainer.vertex.Parent(),
)
parentContainer.children = append(parentContainer.children, vertexContainer)
}
// getOrCreateVertexContainer returns the vertexContainer for `id` if one
// exists, or creates a new vertexContainer and adds it to the internal data
// structures. The caller must ensure that no container (empty or full) with
// the same id but a different level already exists.
func (f *LevelledForest) getOrCreateVertexContainer(
id models.Identity,
level uint64,
) *vertexContainer {
container, exists := f.vertices[id]
if !exists {
container = &vertexContainer{
id: id,
level: level,
}
f.vertices[container.id] = container
vertices := f.verticesAtLevel[container.level]
f.verticesAtLevel[container.level] = append(vertices, container)
}
return container
}
// VerifyVertex verifies that adding vertex `v` would yield a valid Levelled
// Forest. Specifically, we verify that _all_ of the following conditions are
// satisfied:
//
// 1. `v.Level()` must be strictly larger than the level that `v` reports
// for its parent (maintains an acyclic graph).
//
// 2. If a vertex with the same ID as `v.VertexID()` exists in the graph or is
// referenced by another vertex within the graph, the level must be
// identical. (In other words, we don't have vertices with the same ID but
// different level)
//
// 3. Let `ParentLevel`, `ParentID` denote the level, ID that `v` reports for
// its parent. If a vertex with `ParentID` exists (or is referenced by other
// vertices as their parent), we require that the respective level is
// identical to `ParentLevel`.
//
// Notes:
// - If `v.Level()` has already been pruned, adding it to the forest is a
// NoOp. Hence, any vertex with level below the pruning threshold
// automatically passes.
// - By design, the LevelledForest does _not_ touch the parent information for
// vertices that are on the lowest retained level. Thereby, it is possible
// to initialize the LevelledForest with a root vertex at the lowest
// retained level, without this root needing to have a parent. Furthermore,
// the root vertex can be at level 0 and in absence of a parent still
// satisfy the condition that any parent must be of lower level
// (mathematical principle of vacuous truth) without the implementation
// needing to worry about unsigned integer underflow.
//
// Error returns:
// - InvalidVertexError if the input vertex is invalid for insertion to the
// forest.
func (f *LevelledForest) VerifyVertex(v Vertex) error {
if v.Level() < f.LowestLevel {
return nil
}
storedContainer, haveVertexContainer := f.vertices[v.VertexID()]
if !haveVertexContainer { // have no vertex with same id stored
// the only thing remaining to check is the parent information
return f.ensureConsistentParent(v)
}
// Found a vertex container, i.e. `v` already exists, or it is referenced by
// some other vertex. In all cases, `v.Level()` should match the
// vertexContainer's information
if v.Level() != storedContainer.level {
return NewInvalidVertexErrorf(
v,
"level conflicts with stored vertex with same id (%d!=%d)",
v.Level(),
storedContainer.level,
)
}
// vertex container is empty, i.e. `v` is referenced by some other vertex as
// its parent:
if f.isEmptyContainer(storedContainer) {
// the only thing remaining to check is the parent information
return f.ensureConsistentParent(v)
}
// vertex container holds a vertex with the same ID as `v`:
// The parent information from vertexContainer has already been checked for
// consistency. So we simply compare with the existing vertex for
// inconsistencies
// the vertex is at or below the lowest retained level, so we can't check the
// parent (it's pruned)
if v.Level() == f.LowestLevel {
return nil
}
newParentId, newParentLevel := v.Parent()
storedParentId, storedParentLevel := storedContainer.vertex.Parent()
if newParentId != storedParentId {
return NewInvalidVertexErrorf(
v,
"parent ID conflicts with stored parent (%x!=%x)",
newParentId,
storedParentId,
)
}
if newParentLevel != storedParentLevel {
return NewInvalidVertexErrorf(
v,
"parent level conflicts with stored parent (%d!=%d)",
newParentLevel,
storedParentLevel,
)
}
// all _relevant_ fields identical
return nil
}
// ensureConsistentParent verifies that vertex.Parent() is consistent with
// current forest.
// Returns InvalidVertexError if:
// * there is a parent with the same ID but different level;
// * the parent's level is _not_ smaller than the vertex's level
func (f *LevelledForest) ensureConsistentParent(vertex Vertex) error {
if vertex.Level() <= f.LowestLevel {
// the vertex is at or below the lowest retained level, so we can't check
// the parent (it's pruned)
return nil
}
// verify parent
parentID, parentLevel := vertex.Parent()
if !(vertex.Level() > parentLevel) {
return NewInvalidVertexErrorf(
vertex,
"vertex parent level (%d) must be smaller than proposed vertex level (%d)",
parentLevel,
vertex.Level(),
)
}
storedParent, haveParentStored := f.GetVertex(parentID)
if !haveParentStored {
return nil
}
if storedParent.Level() != parentLevel {
return NewInvalidVertexErrorf(
vertex,
"parent level conflicts with stored parent (%d!=%d)",
parentLevel,
storedParent.Level(),
)
}
return nil
}

consensus/forest/vertex.go (new file, +103 lines)
package forest
import (
"errors"
"fmt"
"source.quilibrium.com/quilibrium/monorepo/consensus/models"
)
type Vertex interface {
// VertexID returns the vertex's ID (in most cases its hash)
VertexID() models.Identity
// Level returns the vertex's level
Level() uint64
// Parent returns the parent's (ID, level)
Parent() (models.Identity, uint64)
}
// VertexToString returns a string representation of the vertex.
func VertexToString(v Vertex) string {
parentID, parentLevel := v.Parent()
return fmt.Sprintf(
"<id=%x level=%d parent_id=%s parent_level=%d>",
v.VertexID(),
v.Level(),
parentID,
parentLevel,
)
}
// VertexIterator is a stateful iterator for VertexList.
// Internally operates directly on the Vertex Containers
// It has one-element look ahead for skipping empty vertex containers.
type VertexIterator struct {
data VertexList
idx int
next Vertex
}
func (it *VertexIterator) preLoad() {
for it.idx < len(it.data) {
v := it.data[it.idx].vertex
it.idx++
if v != nil {
it.next = v
return
}
}
it.next = nil
}
// NextVertex returns the next Vertex or nil if there is none
func (it *VertexIterator) NextVertex() Vertex {
res := it.next
it.preLoad()
return res
}
// HasNext returns true if and only if there is a next Vertex
func (it *VertexIterator) HasNext() bool {
return it.next != nil
}
func newVertexIterator(vertexList VertexList) VertexIterator {
it := VertexIterator{
data: vertexList,
}
it.preLoad()
return it
}
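The one-element look-ahead that `preLoad` implements is a small reusable idiom: `HasNext`/`NextVertex` never expose an empty container because the next full element is fetched eagerly. A self-contained sketch with toy types (`slot` and `lookAheadIter` are illustrative, not the package's own):

```go
package main

import "fmt"

type slot struct{ value *string }

// lookAheadIter skips slots whose value is nil by pre-loading the next
// non-empty element, so HasNext/Next never expose an empty slot.
type lookAheadIter struct {
	data []slot
	idx  int
	next *string
}

func (it *lookAheadIter) preLoad() {
	for it.idx < len(it.data) {
		v := it.data[it.idx].value
		it.idx++
		if v != nil {
			it.next = v
			return
		}
	}
	it.next = nil
}

// HasNext reports whether another non-empty element remains.
func (it *lookAheadIter) HasNext() bool { return it.next != nil }

// Next returns the pre-loaded element and advances the look-ahead.
func (it *lookAheadIter) Next() *string {
	res := it.next
	it.preLoad()
	return res
}

func newIter(data []slot) *lookAheadIter {
	it := &lookAheadIter{data: data}
	it.preLoad()
	return it
}

func main() {
	a, b := "a", "b"
	it := newIter([]slot{{nil}, {&a}, {nil}, {&b}})
	for it.HasNext() {
		fmt.Print(*it.Next())
	}
	fmt.Println()
}
```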
// InvalidVertexError indicates that a proposed vertex is invalid for insertion
// to the forest.
type InvalidVertexError struct {
// Vertex is the invalid vertex
Vertex Vertex
// msg provides additional context
msg string
}
func (err InvalidVertexError) Error() string {
return fmt.Sprintf(
"invalid vertex %s: %s",
VertexToString(err.Vertex),
err.msg,
)
}
func IsInvalidVertexError(err error) bool {
var target InvalidVertexError
return errors.As(err, &target)
}
func NewInvalidVertexErrorf(
vertex Vertex,
msg string,
args ...interface{},
) InvalidVertexError {
return InvalidVertexError{
Vertex: vertex,
msg: fmt.Sprintf(msg, args...),
}
}

consensus/forks/forks.go (new file, +657 lines)
package forks
import (
"fmt"
"source.quilibrium.com/quilibrium/monorepo/consensus"
"source.quilibrium.com/quilibrium/monorepo/consensus/forest"
"source.quilibrium.com/quilibrium/monorepo/consensus/models"
)
// Forks enforces structural validity of the consensus state and implements
// finalization rules as defined in Jolteon consensus
// (https://arxiv.org/abs/2106.10362). The same approach was later adopted by
// the Diem team, resulting in DiemBFT v4:
// https://developers.diem.com/papers/diem-consensus-state-machine-replication-in-the-diem-blockchain/2021-08-17.pdf
// Forks is NOT safe for concurrent use by multiple goroutines.
type Forks[StateT models.Unique, VoteT models.Unique] struct {
finalizationCallback consensus.Finalizer
notifier consensus.FollowerConsumer[StateT, VoteT]
forest forest.LevelledForest
trustedRoot *models.CertifiedState[StateT]
// finalityProof holds the latest finalized state including the certified
// child as proof of finality. CAUTION: is nil, when Forks has not yet
// finalized any states beyond the finalized root state it was initialized
// with
finalityProof *consensus.FinalityProof[StateT]
}
var _ consensus.Forks[*nilUnique] = (*Forks[*nilUnique, *nilUnique])(nil)
func NewForks[StateT models.Unique, VoteT models.Unique](
trustedRoot *models.CertifiedState[StateT],
finalizationCallback consensus.Finalizer,
notifier consensus.FollowerConsumer[StateT, VoteT],
) (*Forks[StateT, VoteT], error) {
if trustedRoot == nil {
return nil,
models.NewConfigurationErrorf("invalid root: root is nil")
}
if (trustedRoot.State.Identifier != trustedRoot.CertifyingQuorumCertificate.Identity()) ||
(trustedRoot.State.Rank != trustedRoot.CertifyingQuorumCertificate.GetRank()) {
return nil,
models.NewConfigurationErrorf(
"invalid root: root QC is not pointing to root state",
)
}
forks := Forks[StateT, VoteT]{
finalizationCallback: finalizationCallback,
notifier: notifier,
forest: *forest.NewLevelledForest(trustedRoot.State.Rank),
trustedRoot: trustedRoot,
finalityProof: nil,
}
// verify and add root state to levelled forest
err := forks.EnsureStateIsValidExtension(trustedRoot.State)
if err != nil {
return nil, fmt.Errorf(
"invalid root state %x: %w",
trustedRoot.Identifier(),
err,
)
}
forks.forest.AddVertex(ToStateContainer2[StateT](trustedRoot.State))
return &forks, nil
}
// FinalizedRank returns the largest rank number where a finalized state is
// known
func (f *Forks[StateT, VoteT]) FinalizedRank() uint64 {
if f.finalityProof == nil {
return f.trustedRoot.State.Rank
}
return f.finalityProof.State.Rank
}
// FinalizedState returns the finalized state with the largest rank number
func (f *Forks[StateT, VoteT]) FinalizedState() *models.State[StateT] {
if f.finalityProof == nil {
return f.trustedRoot.State
}
return f.finalityProof.State
}
// FinalityProof returns the latest finalized state and a certified child from
// the subsequent rank, which proves finality.
// CAUTION: method returns (nil, false), when Forks has not yet finalized any
// states beyond the finalized root state it was initialized with.
func (f *Forks[StateT, VoteT]) FinalityProof() (
*consensus.FinalityProof[StateT],
bool,
) {
return f.finalityProof, f.finalityProof != nil
}
// GetState returns (*models.State, true) if the state with the specified
// id was found and (nil, false) otherwise.
func (f *Forks[StateT, VoteT]) GetState(stateID models.Identity) (
*models.State[StateT],
bool,
) {
stateContainer, hasState := f.forest.GetVertex(stateID)
if !hasState {
return nil, false
}
return stateContainer.(*StateContainer[StateT]).GetState(), true
}
// GetStatesForRank returns all known states for the given rank
func (f *Forks[StateT, VoteT]) GetStatesForRank(
rank uint64,
) []*models.State[StateT] {
vertexIterator := f.forest.GetVerticesAtLevel(rank)
// in the vast majority of cases, there will only be one proposal for a
// particular rank
states := make([]*models.State[StateT], 0, 1)
for vertexIterator.HasNext() {
v := vertexIterator.NextVertex()
states = append(states, v.(*StateContainer[StateT]).GetState())
}
return states
}
// IsKnownState checks whether state is known.
func (f *Forks[StateT, VoteT]) IsKnownState(stateID models.Identity) bool {
_, hasState := f.forest.GetVertex(stateID)
return hasState
}
// IsProcessingNeeded determines whether the given state needs processing,
// based on the state's rank and hash.
// Returns false if any of the following conditions applies
// - state rank is _below_ the most recently finalized state
// - the state already exists in the consensus state
//
// UNVALIDATED: expects state to pass Forks.EnsureStateIsValidExtension(state)
func (f *Forks[StateT, VoteT]) IsProcessingNeeded(state *models.State[StateT]) bool {
if state.Rank < f.FinalizedRank() || f.IsKnownState(state.Identifier) {
return false
}
return true
}
// EnsureStateIsValidExtension checks that the given state is a valid extension
// to the tree of states already stored (no state modifications). Specifically,
// the following conditions are enforced, which are critical to the correctness
// of Forks:
//
// 1. If a state with the same ID is already stored, their ranks must be
// identical.
// 2. The state's rank must be strictly larger than the rank of its parent.
// 3. The parent must already be stored (or below the pruning height).
//
// Exclusions to these rules (by design):
// Let W denote the rank of state's parent (i.e. W := state.QC.Rank) and F the
// latest finalized rank.
//
// (i) If state.Rank < F, adding the state would be a no-op. Such states are
// considered compatible (principle of vacuous truth), i.e. we skip
// checking 1, 2, 3.
// (ii) If state.Rank == F, we do not inspect the QC / parent at all (skip 2
// and 3). This exception is important for compatibility with genesis or
// spork-root states, which do not contain a QC.
// (iii) If state.Rank > F, but state.QC.Rank < F the parent has already been
// pruned. In this case, we omit rule 3. (principle of vacuous truth
// applied to the parent)
//
// We assume that all states are fully verified. A valid state must satisfy all
// consistency requirements; otherwise we have a bug in the compliance layer.
//
// Error returns:
// - models.MissingStateError if the parent of the input proposal does not
// exist in the forest (but is above the pruned rank). Represents violation
// of condition 3.
// - models.InvalidStateError if the state violates condition 1. or 2.
// - generic error in case of unexpected bug or internal state corruption
func (f *Forks[StateT, VoteT]) EnsureStateIsValidExtension(
state *models.State[StateT],
) error {
if state.Rank < f.forest.LowestLevel { // exclusion (i)
return nil
}
// LevelledForest enforces conditions 1. and 2. including the respective
// exclusions (ii) and (iii).
stateContainer := ToStateContainer2[StateT](state)
err := f.forest.VerifyVertex(stateContainer)
if err != nil {
if forest.IsInvalidVertexError(err) {
return models.NewInvalidStateErrorf(
state,
"not a valid vertex for state tree: %w",
err,
)
}
return fmt.Errorf(
"state tree generated unexpected error validating vertex: %w",
err,
)
}
// Condition 3:
// LevelledForest implements a more generalized algorithm that also works for
// disjoint graphs. Therefore, LevelledForest does _not_ enforce condition 3.
// Here,
// we additionally require that the pending states form a tree (connected
// graph), i.e. we need to enforce condition 3
if (state.Rank == f.forest.LowestLevel) ||
(state.ParentQuorumCertificate.GetRank() < f.forest.LowestLevel) { // exclusion (ii) and (iii)
return nil
}
// For a state whose parent is _not_ below the pruning height, we expect the
// parent to be known.
_, isParentKnown := f.forest.GetVertex(
state.ParentQuorumCertificate.Identity(),
)
if !isParentKnown { // missing parent
return models.MissingStateError{
Rank: state.ParentQuorumCertificate.GetRank(),
Identifier: state.ParentQuorumCertificate.Identity(),
}
}
return nil
}
// AddCertifiedState appends the given certified state to the tree of
// pending states and updates the latest finalized state (if finalization
// progressed). Unless the parent is below the pruning threshold (latest
// finalized rank), we require that the parent is already stored in Forks.
// Calling this method with previously processed states leaves the consensus
// state invariant (though, it will potentially cause some duplicate
// processing).
//
// Possible error returns:
// - models.MissingStateError if the parent does not exist in the forest (but
// is above the pruned rank). From the perspective of Forks, this error is
// benign (no-op).
// - models.InvalidStateError if the state is invalid (see
// `Forks.EnsureStateIsValidExtension` for details). From the perspective of
// Forks, this error is benign (no-op). However, we assume all states are
// fully verified, i.e. they should satisfy all consistency requirements.
// Hence, this error is likely an indicator of a bug in the compliance
// layer.
// - models.ByzantineThresholdExceededError if conflicting QCs or conflicting
// finalized states have been detected (violating foundational consensus
// guarantees). This indicates that there are 1/3+ Byzantine nodes (weighted
// by seniority) in the network, breaking the safety guarantees of HotStuff
// (or there is a critical bug / data corruption). Forks cannot recover from
// this exception.
// - All other errors are potential symptoms of bugs or state corruption.
func (f *Forks[StateT, VoteT]) AddCertifiedState(
certifiedState *models.CertifiedState[StateT],
) error {
if !f.IsProcessingNeeded(certifiedState.State) {
return nil
}
// Check proposal for byzantine evidence, store it and emit
// `OnStateIncorporated` notification. Note: `checkForByzantineEvidence` only
// inspects the state, but _not_ its certifying QC. Hence, we have to
// additionally check here, whether the certifying QC conflicts with any known
// QCs.
err := f.checkForByzantineEvidence(certifiedState.State)
if err != nil {
return fmt.Errorf(
"cannot check for Byzantine evidence in certified state %x: %w",
certifiedState.State.Identifier,
err,
)
}
err = f.checkForConflictingQCs(&certifiedState.CertifyingQuorumCertificate)
if err != nil {
return fmt.Errorf(
"certifying QC for state %x failed check for conflicts: %w",
certifiedState.State.Identifier,
err,
)
}
f.forest.AddVertex(ToStateContainer2[StateT](certifiedState.State))
f.notifier.OnStateIncorporated(certifiedState.State)
// Update finality status:
err = f.checkForAdvancingFinalization(certifiedState)
if err != nil {
return fmt.Errorf("updating finalization failed: %w", err)
}
return nil
}
// AddValidatedState appends the validated state to the tree of pending
// states and updates the latest finalized state (if applicable). Unless the
// parent is below the pruning threshold (latest finalized rank), we require
// that the parent is already stored in Forks. Calling this method with
// previously processed states leaves the consensus state invariant (though, it
// will potentially cause some duplicate processing).
// Notes:
// - Method `AddCertifiedState(..)` should be used preferably, if a QC
// certifying `state` is already known. This is generally the case for the
// consensus follower. Method `AddValidatedState` is intended for active
// consensus participants, which fully validate states (incl. payload), i.e.
// QCs are processed as part of validated proposals.
//
// Possible error returns:
// - models.MissingStateError if the parent does not exist in the forest (but
// is above the pruned rank). From the perspective of Forks, this error is
// benign (no-op).
// - models.InvalidStateError if the state is invalid (see
// `Forks.EnsureStateIsValidExtension` for details). From the perspective of
// Forks, this error is benign (no-op). However, we assume all states are
// fully verified, i.e. they should satisfy all consistency requirements.
// Hence, this error is likely an indicator of a bug in the compliance
// layer.
// - models.ByzantineThresholdExceededError if conflicting QCs or conflicting
// finalized states have been detected (violating foundational consensus
// guarantees). This indicates that there are 1/3+ Byzantine nodes (weighted
// by seniority) in the network, breaking the safety guarantees of HotStuff
// (or there is a critical bug / data corruption). Forks cannot recover from
// this exception.
// - All other errors are potential symptoms of bugs or state corruption.
func (f *Forks[StateT, VoteT]) AddValidatedState(
proposal *models.State[StateT],
) error {
if !f.IsProcessingNeeded(proposal) {
return nil
}
// Check proposal for byzantine evidence, store it and emit
// `OnStateIncorporated` notification:
err := f.checkForByzantineEvidence(proposal)
if err != nil {
return fmt.Errorf(
"cannot check Byzantine evidence for state %x: %w",
proposal.Identifier,
err,
)
}
f.forest.AddVertex(ToStateContainer2[StateT](proposal))
f.notifier.OnStateIncorporated(proposal)
// Update finality status: In the implementation, our notion of finality is
// based on certified states.
// The certified parent essentially combines the parent, with the QC contained
// in state, to drive finalization.
parent, found := f.GetState(proposal.ParentQuorumCertificate.Identity())
if !found {
// Not finding the parent means it is already pruned; hence this state does
// not change the finalization state.
return nil
}
certifiedParent, err := models.NewCertifiedState[StateT](
parent,
proposal.ParentQuorumCertificate,
)
if err != nil {
return fmt.Errorf(
"mismatching QC with parent (corrupted Forks state):%w",
err,
)
}
err = f.checkForAdvancingFinalization(certifiedParent)
if err != nil {
return fmt.Errorf("updating finalization failed: %w", err)
}
return nil
}
// checkForByzantineEvidence inspects whether the given `state` together with
// the already known information yields evidence of byzantine behaviour.
// Furthermore, the method enforces that `state` is a valid extension of the
// tree of pending states. If the state is a double proposal, we emit an
// `OnDoubleProposeDetected` notification. However, provided the state is a
// valid extension of the state tree by itself, it passes this method without
// an error.
//
// Possible error returns:
// - models.MissingStateError if the parent does not exist in the forest (but
// is above the pruned rank). From the perspective of Forks, this error is
// benign (no-op).
// - models.InvalidStateError if the state is invalid (see
// `Forks.EnsureStateIsValidExtension` for details). From the perspective of
// Forks, this error is benign (no-op). However, we assume all states are
// fully verified, i.e. they should satisfy all consistency requirements.
// Hence, this error is likely an indicator of a bug in the compliance
// layer.
// - models.ByzantineThresholdExceededError if conflicting QCs have been
// detected. Forks cannot recover from this exception.
// - All other errors are potential symptoms of bugs or state corruption.
func (f *Forks[StateT, VoteT]) checkForByzantineEvidence(
state *models.State[StateT],
) error {
err := f.EnsureStateIsValidExtension(state)
if err != nil {
return fmt.Errorf("consistency check on state failed: %w", err)
}
err = f.checkForConflictingQCs(&state.ParentQuorumCertificate)
if err != nil {
return fmt.Errorf("checking QC for conflicts failed: %w", err)
}
f.checkForDoubleProposal(state)
return nil
}
// checkForConflictingQCs checks if QC conflicts with a stored Quorum
// Certificate. In case a conflicting QC is found, a
// ByzantineThresholdExceededError is returned. Two Quorum Certificates q1 and
// q2 are defined as conflicting iff:
//
// q1.Rank == q2.Rank AND q1.Identifier ≠ q2.Identifier
//
// This means there are two Quorums for conflicting states at the same rank.
// Per 'Observation 1' from the Jolteon paper https://arxiv.org/pdf/2106.10362v1.pdf,
// two conflicting QCs can exist if and only if the Byzantine threshold is
// exceeded.
// Error returns:
// - models.ByzantineThresholdExceededError if conflicting QCs have been
// detected. Forks cannot recover from this exception.
// - All other errors are potential symptoms of bugs or state corruption.
func (f *Forks[StateT, VoteT]) checkForConflictingQCs(
qc *models.QuorumCertificate,
) error {
it := f.forest.GetVerticesAtLevel((*qc).GetRank())
for it.HasNext() {
otherState := it.NextVertex() // by construction, must have same rank as qc.Rank
if (*qc).Identity() != otherState.VertexID() {
// * we have just found another state at the same rank number as qc.Rank
// but with different hash
// * if this state has a child c, this child will have
//     c.qc.rank == qc.Rank
//     c.qc.ID != qc.Identity()
// => conflicting qc
otherChildren := f.forest.GetChildren(otherState.VertexID())
if otherChildren.HasNext() {
otherChild := otherChildren.NextVertex().(*StateContainer[StateT]).GetState()
conflictingQC := otherChild.ParentQuorumCertificate
return models.ByzantineThresholdExceededError{Evidence: fmt.Sprintf(
"conflicting QCs at rank %d: %x and %x",
(*qc).GetRank(), (*qc).Identity(), conflictingQC.Identity(),
)}
}
}
}
return nil
}
// checkForDoubleProposal checks if the input proposal is a double proposal.
// A double proposal occurs when two proposals with the same rank exist in
// Forks. If there is a double proposal, notifier.OnDoubleProposeDetected is
// triggered.
func (f *Forks[StateT, VoteT]) checkForDoubleProposal(
state *models.State[StateT],
) {
it := f.forest.GetVerticesAtLevel(state.Rank)
for it.HasNext() {
otherVertex := it.NextVertex() // by construction, must have same rank as state
otherState := otherVertex.(*StateContainer[StateT]).GetState()
if state.Identifier != otherState.Identifier {
f.notifier.OnDoubleProposeDetected(state, otherState)
}
}
}
// checkForAdvancingFinalization checks whether observing certifiedState leads
// to progress of finalization. This function should be called every time a new
// state is added to Forks. If the new state is the head of a 2-chain satisfying
// the finalization rule, we update `Forks.finalityProof` to the new latest
// finalized state. Calling this method with previously-processed states leaves
// the consensus state invariant.
// UNVALIDATED: assumes that relevant state properties are consistent with
// previous states
// Error returns:
// - models.MissingStateError if the parent does not exist in the forest (but
// is above the pruned rank). From the perspective of Forks, this error is
// benign (no-op).
// - models.ByzantineThresholdExceededError in case we detect a finalization
// fork (violating a foundational consensus guarantee). This indicates that
// there are 1/3+ Byzantine nodes (weighted by seniority) in the network,
// breaking the safety guarantees of HotStuff (or there is a critical bug /
// data corruption). Forks cannot recover from this exception.
// - generic error in case of unexpected bug or internal state corruption
func (f *Forks[StateT, VoteT]) checkForAdvancingFinalization(
certifiedState *models.CertifiedState[StateT],
) error {
// We prune all states in forest which are below the most recently finalized
// state. Hence, we have a pruned ancestry if and only if either of the
// following conditions applies:
// (a) If a state's parent rank (i.e. state.QC.Rank) is below the most
// recently finalized state.
// (b) If a state's rank is equal to the most recently finalized state.
// Caution:
// * Under normal operation, case (b) is covered by the logic for case (a)
// * However, the existence of a genesis state requires handling case (b)
// explicitly:
// The root state is specified and trusted by the node operator. If the root
// state is the genesis state, it might not contain a QC pointing to a
// parent (as there is no parent). In this case, condition (a) cannot be
// evaluated.
lastFinalizedRank := f.FinalizedRank()
if (certifiedState.Rank() <= lastFinalizedRank) ||
(certifiedState.State.ParentQuorumCertificate.GetRank() < lastFinalizedRank) {
// Repeated states are expected during normal operations. We enter this code
// path if and only if the parent's rank is _below_ the last finalized
// state. It is straightforward to show:
// Lemma: Let B be a state whose 2-chain reaches beyond the last finalized
// state => B will not update the locked or finalized state
return nil
}
// retrieve parent; always expected to succeed, because we passed the checks
// above
qcForParent := certifiedState.State.ParentQuorumCertificate
parentVertex, parentStateKnown := f.forest.GetVertex(
qcForParent.Identity(),
)
if !parentStateKnown {
return models.MissingStateError{
Rank: qcForParent.GetRank(),
Identifier: qcForParent.Identity(),
}
}
parentState := parentVertex.(*StateContainer[StateT]).GetState()
// Note: we assume that all stored states pass
// Forks.EnsureStateIsValidExtension(state); specifically, that state's
// RankNumber is strictly monotonically increasing which is enforced by
// LevelledForest.VerifyVertex(...)
// We denote:
// * a DIRECT 1-chain as '<-'
// * a general 1-chain as '<~' (direct or indirect)
// Jolteon's rule for finalizing `parentState` is
// parentState <- State <~ certifyingQC (i.e. a DIRECT 1-chain PLUS
// ╰─────────────────────╯ any 1-chain)
// certifiedState
// Hence, we can finalize `parentState` as head of a 2-chain,
// if and only if `certifiedState.Rank()` is exactly 1 higher than the rank
// of `parentState`
if parentState.Rank+1 != certifiedState.Rank() {
return nil
}
// `parentState` is now finalized:
// * While Forks is single-threaded, there is still the possibility of
// reentrancy. Specifically, the consumers of our finalization events are
// served by the goroutine executing Forks. It is conceivable that a
// consumer might access Forks and query the latest finalization proof.
// This would be legal, if the component supplying the goroutine to Forks
// also consumes the notifications.
// * Therefore, for API safety, we want to first update Fork's
// `finalityProof` before we emit any notifications.
// Advancing finalization step (i): we collect all states for finalization (no
// notifications are emitted)
statesToBeFinalized, err := f.collectStatesForFinalization(&qcForParent)
if err != nil {
return fmt.Errorf(
"advancing finalization to state %x from rank %d failed: %w",
qcForParent.Identity(),
qcForParent.GetRank(),
err,
)
}
// Advancing finalization step (ii): update `finalityProof` and prune
// `LevelledForest`
f.finalityProof = &consensus.FinalityProof[StateT]{
State: parentState,
CertifiedChild: certifiedState,
}
err = f.forest.PruneUpToLevel(f.FinalizedRank())
if err != nil {
return fmt.Errorf("pruning levelled forest failed unexpectedly: %w", err)
}
// Advancing finalization step (iii): iterate over the states from (i) and
// emit finalization events
for _, b := range statesToBeFinalized {
// first notify other critical components about finalized state - all errors
// returned here are fatal exceptions
err = f.finalizationCallback.MakeFinal(b.Identifier)
if err != nil {
return fmt.Errorf("finalization error in other component: %w", err)
}
// notify less important components about finalized state
f.notifier.OnFinalizedState(b)
}
return nil
}
// collectStatesForFinalization collects and returns all newly finalized states
// up to (and including) the state pointed to by `qc`. The states are listed in
// order of increasing rank.
// Error returns:
// - models.ByzantineThresholdExceededError in case we detect a finalization
// fork (violating a foundational consensus guarantee). This indicates that
// there are 1/3+ Byzantine nodes (weighted by seniority) in the network,
// breaking the safety guarantees of HotStuff (or there is a critical bug /
// data corruption). Forks cannot recover from this exception.
// - generic error in case of bug or internal state corruption
func (f *Forks[StateT, VoteT]) collectStatesForFinalization(
qc *models.QuorumCertificate,
) ([]*models.State[StateT], error) {
lastFinalized := f.FinalizedState()
if (*qc).GetRank() < lastFinalized.Rank {
return nil, models.ByzantineThresholdExceededError{Evidence: fmt.Sprintf(
"finalizing state with rank %d which is lower than previously finalized state at rank %d",
(*qc).GetRank(), lastFinalized.Rank,
)}
}
if (*qc).GetRank() == lastFinalized.Rank { // no new states to be finalized
return nil, nil
}
// Collect all states that are pending finalization in slice. While we crawl
// the states starting from the newest finalized state backwards (decreasing
// ranks), we would like to return them in order of _increasing_ rank.
// Therefore, we fill the slice starting with the highest index.
l := (*qc).GetRank() - lastFinalized.Rank // l is an upper limit to the number of states that can be maximally finalized
statesToBeFinalized := make([]*models.State[StateT], l)
for (*qc).GetRank() > lastFinalized.Rank {
b, ok := f.GetState((*qc).Identity())
if !ok {
return nil, fmt.Errorf(
"failed to get state (rank=%d, stateID=%x) for finalization",
(*qc).GetRank(),
(*qc).Identity(),
)
}
l--
statesToBeFinalized[l] = b
qc = &b.ParentQuorumCertificate // move to parent
}
// Now, `l` is the index where we stored the oldest state that should be
// finalized. Note that `l` might be larger than zero, if some ranks have no
// finalized states. Hence, `statesToBeFinalized` might start with nil
// entries, which we remove:
statesToBeFinalized = statesToBeFinalized[l:]
// qc should now point to the latest finalized state. Otherwise, the
// consensus committee is compromised (or we have a critical internal bug).
if (*qc).GetRank() < lastFinalized.Rank {
return nil, models.ByzantineThresholdExceededError{Evidence: fmt.Sprintf(
"finalizing state with rank %d which is lower than previously finalized state at rank %d",
(*qc).GetRank(), lastFinalized.Rank,
)}
}
if (*qc).GetRank() == lastFinalized.Rank &&
lastFinalized.Identifier != (*qc).Identity() {
return nil, models.ByzantineThresholdExceededError{Evidence: fmt.Sprintf(
"finalizing states with rank %d at conflicting forks: %x and %x",
(*qc).GetRank(), (*qc).Identity(), lastFinalized.Identifier,
)}
}
return statesToBeFinalized, nil
}

package forks
import (
"fmt"
"testing"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/mock"
"github.com/stretchr/testify/require"
"source.quilibrium.com/quilibrium/monorepo/consensus"
"source.quilibrium.com/quilibrium/monorepo/consensus/helper"
"source.quilibrium.com/quilibrium/monorepo/consensus/mocks"
"source.quilibrium.com/quilibrium/monorepo/consensus/models"
)
/*****************************************************************************
 * NOTATION: *
 * A state is denoted as [◄(<qc_rank>) <state_rank>]. *
 * For example, [◄(1) 2] means: a state of rank 2 that has a QC for rank 1. *
 *****************************************************************************/
// TestInitialization verifies that at initialization, Forks reports:
// - the root / genesis state as finalized
// - it has no finalization proof for the root / genesis state (state and its finalization is trusted)
func TestInitialization(t *testing.T) {
forks, _ := newForks(t)
requireOnlyGenesisStateFinalized(t, forks)
_, hasProof := forks.FinalityProof()
require.False(t, hasProof)
}
// TestFinalize_Direct1Chain tests adding a direct 1-chain on top of the genesis state:
// - receives [◄(1) 2] [◄(2) 3]
//
// Expected behaviour:
// - On the one hand, Forks should not finalize any _additional_ states, because there is
// no finalizable 2-chain for [◄(1) 2]. Hence, no finalization events should be emitted.
// - On the other hand, after adding the two states, Forks has enough knowledge to construct
// a FinalityProof for the genesis state.
func TestFinalize_Direct1Chain(t *testing.T) {
builder := NewStateBuilder().
Add(1, 2).
Add(2, 3)
states, err := builder.States()
require.NoError(t, err)
t.Run("consensus participant mode: ingest validated states", func(t *testing.T) {
forks, _ := newForks(t)
// adding state [◄(1) 2] should not finalize anything
// as the genesis state is trusted, there should be no FinalityProof available for it
require.NoError(t, forks.AddValidatedState(states[0]))
requireOnlyGenesisStateFinalized(t, forks)
_, hasProof := forks.FinalityProof()
require.False(t, hasProof)
// After adding state [◄(2) 3], Forks has enough knowledge to construct a FinalityProof for the
// genesis state. However, finalization remains at the genesis state, so no events should be emitted.
expectedFinalityProof := makeFinalityProof(t, builder.GenesisState().State, states[0], states[1].ParentQuorumCertificate)
require.NoError(t, forks.AddValidatedState(states[1]))
requireLatestFinalizedState(t, forks, builder.GenesisState().State)
requireFinalityProof(t, forks, expectedFinalityProof)
})
t.Run("consensus follower mode: ingest certified states", func(t *testing.T) {
forks, _ := newForks(t)
// After adding CertifiedState [◄(1) 2] ◄(2), Forks has enough knowledge to construct a FinalityProof for
// the genesis state. However, finalization remains at the genesis state, so no events should be emitted.
expectedFinalityProof := makeFinalityProof(t, builder.GenesisState().State, states[0], states[1].ParentQuorumCertificate)
c, err := models.NewCertifiedState(states[0], states[1].ParentQuorumCertificate)
require.NoError(t, err)
require.NoError(t, forks.AddCertifiedState(c))
requireLatestFinalizedState(t, forks, builder.GenesisState().State)
requireFinalityProof(t, forks, expectedFinalityProof)
})
}
// TestFinalize_Direct2Chain tests adding a direct 1-chain on a direct 1-chain (direct 2-chain).
// - receives [◄(1) 2] [◄(2) 3] [◄(3) 4]
// - Forks should finalize [◄(1) 2]
func TestFinalize_Direct2Chain(t *testing.T) {
states, err := NewStateBuilder().
Add(1, 2).
Add(2, 3).
Add(3, 4).
States()
require.NoError(t, err)
expectedFinalityProof := makeFinalityProof(t, states[0], states[1], states[2].ParentQuorumCertificate)
t.Run("consensus participant mode: ingest validated states", func(t *testing.T) {
forks, _ := newForks(t)
require.Nil(t, addValidatedStateToForks(forks, states))
requireLatestFinalizedState(t, forks, states[0])
requireFinalityProof(t, forks, expectedFinalityProof)
})
t.Run("consensus follower mode: ingest certified states", func(t *testing.T) {
forks, _ := newForks(t)
require.Nil(t, addCertifiedStatesToForks(forks, states))
requireLatestFinalizedState(t, forks, states[0])
requireFinalityProof(t, forks, expectedFinalityProof)
})
}
// TestFinalize_DirectIndirect2Chain tests adding an indirect 1-chain on a direct 1-chain.
// receives [◄(1) 2] [◄(2) 3] [◄(3) 5]
// it should finalize [◄(1) 2]
func TestFinalize_DirectIndirect2Chain(t *testing.T) {
states, err := NewStateBuilder().
Add(1, 2).
Add(2, 3).
Add(3, 5).
States()
require.NoError(t, err)
expectedFinalityProof := makeFinalityProof(t, states[0], states[1], states[2].ParentQuorumCertificate)
t.Run("consensus participant mode: ingest validated states", func(t *testing.T) {
forks, _ := newForks(t)
require.Nil(t, addValidatedStateToForks(forks, states))
requireLatestFinalizedState(t, forks, states[0])
requireFinalityProof(t, forks, expectedFinalityProof)
})
t.Run("consensus follower mode: ingest certified states", func(t *testing.T) {
forks, _ := newForks(t)
require.Nil(t, addCertifiedStatesToForks(forks, states))
requireLatestFinalizedState(t, forks, states[0])
requireFinalityProof(t, forks, expectedFinalityProof)
})
}
// TestFinalize_IndirectDirect2Chain tests adding a direct 1-chain on an indirect 1-chain.
// - Forks receives [◄(1) 3] [◄(3) 5] [◄(5) 7]
// - it should not finalize any states because there is no finalizable 2-chain.
func TestFinalize_IndirectDirect2Chain(t *testing.T) {
states, err := NewStateBuilder().
Add(1, 3).
Add(3, 5).
Add(5, 7).
States()
require.NoError(t, err)
t.Run("consensus participant mode: ingest validated states", func(t *testing.T) {
forks, _ := newForks(t)
require.Nil(t, addValidatedStateToForks(forks, states))
requireOnlyGenesisStateFinalized(t, forks)
_, hasProof := forks.FinalityProof()
require.False(t, hasProof)
})
t.Run("consensus follower mode: ingest certified states", func(t *testing.T) {
forks, _ := newForks(t)
require.Nil(t, addCertifiedStatesToForks(forks, states))
requireOnlyGenesisStateFinalized(t, forks)
_, hasProof := forks.FinalityProof()
require.False(t, hasProof)
})
}
// TestFinalize_Direct2ChainOnIndirect tests adding a direct 2-chain on an indirect 2-chain:
// - ingesting [◄(1) 3] [◄(3) 5] [◄(5) 6] [◄(6) 7] [◄(7) 8]
// - should result in finalization of [◄(5) 6]
func TestFinalize_Direct2ChainOnIndirect(t *testing.T) {
states, err := NewStateBuilder().
Add(1, 3).
Add(3, 5).
Add(5, 6).
Add(6, 7).
Add(7, 8).
States()
require.NoError(t, err)
expectedFinalityProof := makeFinalityProof(t, states[2], states[3], states[4].ParentQuorumCertificate)
t.Run("consensus participant mode: ingest validated states", func(t *testing.T) {
forks, _ := newForks(t)
require.Nil(t, addValidatedStateToForks(forks, states))
requireLatestFinalizedState(t, forks, states[2])
requireFinalityProof(t, forks, expectedFinalityProof)
})
t.Run("consensus follower mode: ingest certified states", func(t *testing.T) {
forks, _ := newForks(t)
require.Nil(t, addCertifiedStatesToForks(forks, states))
requireLatestFinalizedState(t, forks, states[2])
requireFinalityProof(t, forks, expectedFinalityProof)
})
}
// TestFinalize_Direct2ChainOnDirect tests adding a sequence of direct 2-chains:
// - ingesting [◄(1) 2] [◄(2) 3] [◄(3) 4] [◄(4) 5] [◄(5) 6]
// - should result in finalization of [◄(3) 4]
func TestFinalize_Direct2ChainOnDirect(t *testing.T) {
states, err := NewStateBuilder().
Add(1, 2).
Add(2, 3).
Add(3, 4).
Add(4, 5).
Add(5, 6).
States()
require.NoError(t, err)
expectedFinalityProof := makeFinalityProof(t, states[2], states[3], states[4].ParentQuorumCertificate)
t.Run("consensus participant mode: ingest validated states", func(t *testing.T) {
forks, _ := newForks(t)
require.Nil(t, addValidatedStateToForks(forks, states))
requireLatestFinalizedState(t, forks, states[2])
requireFinalityProof(t, forks, expectedFinalityProof)
})
t.Run("consensus follower mode: ingest certified states", func(t *testing.T) {
forks, _ := newForks(t)
require.Nil(t, addCertifiedStatesToForks(forks, states))
requireLatestFinalizedState(t, forks, states[2])
requireFinalityProof(t, forks, expectedFinalityProof)
})
}
// TestFinalize_Multiple2Chains tests the case where a state can be finalized by different 2-chains.
// - ingesting [◄(1) 2] [◄(2) 3] [◄(3) 5] [◄(3) 6] [◄(3) 7]
// - should result in finalization of [◄(1) 2]
func TestFinalize_Multiple2Chains(t *testing.T) {
states, err := NewStateBuilder().
Add(1, 2).
Add(2, 3).
Add(3, 5).
Add(3, 6).
Add(3, 7).
States()
require.NoError(t, err)
expectedFinalityProof := makeFinalityProof(t, states[0], states[1], states[2].ParentQuorumCertificate)
t.Run("consensus participant mode: ingest validated states", func(t *testing.T) {
forks, _ := newForks(t)
require.Nil(t, addValidatedStateToForks(forks, states))
requireLatestFinalizedState(t, forks, states[0])
requireFinalityProof(t, forks, expectedFinalityProof)
})
t.Run("consensus follower mode: ingest certified states", func(t *testing.T) {
forks, _ := newForks(t)
require.Nil(t, addCertifiedStatesToForks(forks, states))
requireLatestFinalizedState(t, forks, states[0])
requireFinalityProof(t, forks, expectedFinalityProof)
})
}
// TestFinalize_OrphanedFork tests that we can finalize a state which causes a conflicting fork to be orphaned.
// We ingest the following state tree:
//
// [◄(1) 2] [◄(2) 3]
// [◄(2) 4] [◄(4) 5] [◄(5) 6]
//
// which should result in finalization of [◄(2) 4] and pruning of [◄(2) 3]
func TestFinalize_OrphanedFork(t *testing.T) {
states, err := NewStateBuilder().
Add(1, 2). // [◄(1) 2]
Add(2, 3). // [◄(2) 3], should eventually be pruned
Add(2, 4). // [◄(2) 4], should eventually be finalized
Add(4, 5). // [◄(4) 5]
Add(5, 6). // [◄(5) 6]
States()
require.NoError(t, err)
expectedFinalityProof := makeFinalityProof(t, states[2], states[3], states[4].ParentQuorumCertificate)
t.Run("consensus participant mode: ingest validated states", func(t *testing.T) {
forks, _ := newForks(t)
require.Nil(t, addValidatedStateToForks(forks, states))
require.False(t, forks.IsKnownState(states[1].Identifier))
requireLatestFinalizedState(t, forks, states[2])
requireFinalityProof(t, forks, expectedFinalityProof)
})
t.Run("consensus follower mode: ingest certified states", func(t *testing.T) {
forks, _ := newForks(t)
require.Nil(t, addCertifiedStatesToForks(forks, states))
require.False(t, forks.IsKnownState(states[1].Identifier))
requireLatestFinalizedState(t, forks, states[2])
requireFinalityProof(t, forks, expectedFinalityProof)
})
}
// TestDuplication tests that delivering the same state/qc multiple times has
// the same end state as delivering the state/qc once.
// - Forks receives [◄(1) 2] [◄(2) 3] [◄(2) 3] [◄(3) 4] [◄(3) 4] [◄(4) 5] [◄(4) 5]
// - it should finalize [◄(2) 3]
func TestDuplication(t *testing.T) {
states, err := NewStateBuilder().
Add(1, 2).
Add(2, 3).
Add(2, 3).
Add(3, 4).
Add(3, 4).
Add(4, 5).
Add(4, 5).
States()
require.NoError(t, err)
expectedFinalityProof := makeFinalityProof(t, states[1], states[3], states[5].ParentQuorumCertificate)
t.Run("consensus participant mode: ingest validated states", func(t *testing.T) {
forks, _ := newForks(t)
require.Nil(t, addValidatedStateToForks(forks, states))
requireLatestFinalizedState(t, forks, states[1])
requireFinalityProof(t, forks, expectedFinalityProof)
})
t.Run("consensus follower mode: ingest certified states", func(t *testing.T) {
forks, _ := newForks(t)
require.Nil(t, addCertifiedStatesToForks(forks, states))
requireLatestFinalizedState(t, forks, states[1])
requireFinalityProof(t, forks, expectedFinalityProof)
})
}
// TestIgnoreStatesBelowFinalizedRank tests that states below finalized rank are ignored.
// - Forks receives [◄(1) 2] [◄(2) 3] [◄(3) 4] [◄(1) 5]
// - it should finalize [◄(1) 2]
func TestIgnoreStatesBelowFinalizedRank(t *testing.T) {
builder := NewStateBuilder().
Add(1, 2). // [◄(1) 2]
Add(2, 3). // [◄(2) 3]
Add(3, 4). // [◄(3) 4]
Add(1, 5) // [◄(1) 5]
states, err := builder.States()
require.NoError(t, err)
expectedFinalityProof := makeFinalityProof(t, states[0], states[1], states[2].ParentQuorumCertificate)
t.Run("consensus participant mode: ingest validated states", func(t *testing.T) {
// initialize forks and add first 3 states:
// * state [◄(1) 2] should then be finalized
// * and state [1] should be pruned
forks, _ := newForks(t)
require.Nil(t, addValidatedStateToForks(forks, states[:3]))
// sanity checks to confirm correct test setup
requireLatestFinalizedState(t, forks, states[0])
requireFinalityProof(t, forks, expectedFinalityProof)
require.False(t, forks.IsKnownState(builder.GenesisState().Identifier()))
// adding state [◄(1) 5]: note that QC is _below_ the pruning threshold, i.e. cannot resolve the parent
// * Forks should store state, despite the parent already being pruned
// * finalization should not change
orphanedState := states[3]
require.Nil(t, forks.AddValidatedState(orphanedState))
require.True(t, forks.IsKnownState(orphanedState.Identifier))
requireLatestFinalizedState(t, forks, states[0])
requireFinalityProof(t, forks, expectedFinalityProof)
})
t.Run("consensus follower mode: ingest certified states", func(t *testing.T) {
// initialize forks and add first 3 states:
// * state [◄(1) 2] should then be finalized
// * and state [1] should be pruned
forks, _ := newForks(t)
require.Nil(t, addCertifiedStatesToForks(forks, states[:3]))
// sanity checks to confirm correct test setup
requireLatestFinalizedState(t, forks, states[0])
requireFinalityProof(t, forks, expectedFinalityProof)
require.False(t, forks.IsKnownState(builder.GenesisState().Identifier()))
// adding state [◄(1) 5]: note that QC is _below_ the pruning threshold, i.e. cannot resolve the parent
// * Forks should store state, despite the parent already being pruned
// * finalization should not change
certStateWithUnknownParent := toCertifiedState(t, states[3])
require.Nil(t, forks.AddCertifiedState(certStateWithUnknownParent))
require.True(t, forks.IsKnownState(certStateWithUnknownParent.State.Identifier))
requireLatestFinalizedState(t, forks, states[0])
requireFinalityProof(t, forks, expectedFinalityProof)
})
}
// TestDoubleProposal tests that the DoubleProposal notification is emitted when two different
// states for the same rank are added. We ingest the following state tree:
//
// / [◄(1) 2]
// [1]
// \ [◄(1) 2']
//
// which should result in a DoubleProposal event referencing the states [◄(1) 2] and [◄(1) 2']
func TestDoubleProposal(t *testing.T) {
states, err := NewStateBuilder().
Add(1, 2). // [◄(1) 2]
AddVersioned(1, 2, 0, 1). // [◄(1) 2']
States()
require.NoError(t, err)
t.Run("consensus participant mode: ingest validated states", func(t *testing.T) {
forks, notifier := newForks(t)
notifier.On("OnDoubleProposeDetected", states[1], states[0]).Once()
err = addValidatedStateToForks(forks, states)
require.NoError(t, err)
})
t.Run("consensus follower mode: ingest certified states", func(t *testing.T) {
forks, notifier := newForks(t)
notifier.On("OnDoubleProposeDetected", states[1], states[0]).Once()
err = forks.AddCertifiedState(toCertifiedState(t, states[0])) // add [◄(1) 2] as certified state
require.NoError(t, err)
err = forks.AddCertifiedState(toCertifiedState(t, states[1])) // add [◄(1) 2'] as certified state
require.NoError(t, err)
})
}
// TestConflictingQCs checks that adding 2 conflicting QCs should return models.ByzantineThresholdExceededError
// We ingest the following state tree:
//
// [◄(1) 2] [◄(2) 3] [◄(3) 4] [◄(4) 6]
// [◄(2) 3'] [◄(3') 5]
//
// which should result in a `ByzantineThresholdExceededError`, because conflicting states 3 and 3' both have QCs
func TestConflictingQCs(t *testing.T) {
states, err := NewStateBuilder().
Add(1, 2). // [◄(1) 2]
Add(2, 3). // [◄(2) 3]
AddVersioned(2, 3, 0, 1). // [◄(2) 3']
Add(3, 4). // [◄(3) 4]
Add(4, 6). // [◄(4) 6]
AddVersioned(3, 5, 1, 0). // [◄(3') 5]
States()
require.NoError(t, err)
t.Run("consensus participant mode: ingest validated states", func(t *testing.T) {
forks, notifier := newForks(t)
notifier.On("OnDoubleProposeDetected", states[2], states[1]).Return(nil)
err = addValidatedStateToForks(forks, states)
assert.True(t, models.IsByzantineThresholdExceededError(err))
})
t.Run("consensus follower mode: ingest certified states", func(t *testing.T) {
forks, notifier := newForks(t)
notifier.On("OnDoubleProposeDetected", states[2], states[1]).Return(nil)
// As [◄(3') 5] is not certified, it will not be added to Forks. However, its QC ◄(3') is
// delivered to Forks as part of the *certified* state [◄(2) 3'].
err = addCertifiedStatesToForks(forks, states)
assert.True(t, models.IsByzantineThresholdExceededError(err))
})
}
// TestConflictingFinalizedForks checks that finalizing 2 conflicting forks should return models.ByzantineThresholdExceededError
// We ingest the following state tree:
//
// [◄(1) 2] [◄(2) 3] [◄(3) 4] [◄(4) 5]
// [◄(2) 6] [◄(6) 7] [◄(7) 8]
//
// Here, both states [◄(2) 3] and [◄(2) 6] satisfy the finalization condition, i.e. we have a fork
// in the finalized states, which should result in a models.ByzantineThresholdExceededError exception.
func TestConflictingFinalizedForks(t *testing.T) {
states, err := NewStateBuilder().
Add(1, 2).
Add(2, 3).
Add(3, 4).
Add(4, 5). // finalizes [◄(2) 3]
Add(2, 6).
Add(6, 7).
Add(7, 8). // finalizes [◄(2) 6], conflicting with [◄(2) 3]
States()
require.NoError(t, err)
t.Run("consensus participant mode: ingest validated states", func(t *testing.T) {
forks, _ := newForks(t)
err = addValidatedStateToForks(forks, states)
assert.True(t, models.IsByzantineThresholdExceededError(err))
})
t.Run("consensus follower mode: ingest certified states", func(t *testing.T) {
forks, _ := newForks(t)
err = addCertifiedStatesToForks(forks, states)
assert.True(t, models.IsByzantineThresholdExceededError(err))
})
}
// TestAddDisconnectedState checks that adding a state which does not connect to the
// latest finalized state returns a `models.MissingStateError`
// - receives [◄(2) 3]
// - should return `models.MissingStateError`, because the parent is above the pruning
// threshold, but Forks does not know its parent
func TestAddDisconnectedState(t *testing.T) {
states, err := NewStateBuilder().
Add(1, 2). // we will skip this state [◄(1) 2]
Add(2, 3). // [◄(2) 3]
States()
require.NoError(t, err)
t.Run("consensus participant mode: ingest validated states", func(t *testing.T) {
forks, _ := newForks(t)
err := forks.AddValidatedState(states[1])
require.Error(t, err)
assert.True(t, models.IsMissingStateError(err))
})
t.Run("consensus follower mode: ingest certified states", func(t *testing.T) {
forks, _ := newForks(t)
err := forks.AddCertifiedState(toCertifiedState(t, states[1]))
require.Error(t, err)
assert.True(t, models.IsMissingStateError(err))
})
}
// TestGetState tests that we can retrieve stored states. Here, we test that
// attempting to retrieve nonexistent or pruned states fails without causing an exception.
// - Forks receives [◄(1) 2] [◄(2) 3] [◄(3) 4], then [◄(4) 5]
// - should finalize [◄(1) 2], then [◄(2) 3]
func TestGetState(t *testing.T) {
states, err := NewStateBuilder().
Add(1, 2). // [◄(1) 2]
Add(2, 3). // [◄(2) 3]
Add(3, 4). // [◄(3) 4]
Add(4, 5). // [◄(4) 5]
States()
require.NoError(t, err)
t.Run("consensus participant mode: ingest validated states", func(t *testing.T) {
statesAddedFirst := states[:3] // [◄(1) 2] [◄(2) 3] [◄(3) 4]
remainingState := states[3] // [◄(4) 5]
forks, _ := newForks(t)
// should be unable to retrieve a state before it is added
_, ok := forks.GetState(states[0].Identifier)
assert.False(t, ok)
// add first 3 states - should finalize [◄(1) 2]
err = addValidatedStateToForks(forks, statesAddedFirst)
require.NoError(t, err)
// should be able to retrieve all stored states
for _, state := range statesAddedFirst {
b, ok := forks.GetState(state.Identifier)
assert.True(t, ok)
assert.Equal(t, state, b)
}
// add remaining state [◄(4) 5] - should finalize [◄(2) 3] and prune [◄(1) 2]
require.Nil(t, forks.AddValidatedState(remainingState))
// should be able to retrieve just added state
b, ok := forks.GetState(remainingState.Identifier)
assert.True(t, ok)
assert.Equal(t, remainingState, b)
// should be unable to retrieve pruned state
_, ok = forks.GetState(statesAddedFirst[0].Identifier)
assert.False(t, ok)
})
// Caution: finalization is driven by QCs. Therefore, we include the QC for state 3
// in the first batch of states that we add. This is analogous to the previous test case,
// except that we are delivering the QC ◄(3) as part of the certified state
// [◄(2) 3] ◄(3),
// while in the previous sub-test, the QC ◄(3) was delivered as part of state [◄(3) 4].
t.Run("consensus follower mode: ingest certified states", func(t *testing.T) {
statesAddedFirst := toCertifiedStates(t, states[:2]...) // [◄(1) 2] [◄(2) 3] ◄(3)
remainingState := toCertifiedState(t, states[2]) // [◄(3) 4] ◄(4)
forks, _ := newForks(t)
// should be unable to retrieve a state before it is added
_, ok := forks.GetState(states[0].Identifier)
assert.False(t, ok)
// add first states - should finalize [◄(1) 2]
err := forks.AddCertifiedState(statesAddedFirst[0])
require.NoError(t, err)
err = forks.AddCertifiedState(statesAddedFirst[1])
require.NoError(t, err)
// should be able to retrieve all stored states
for _, state := range statesAddedFirst {
b, ok := forks.GetState(state.State.Identifier)
assert.True(t, ok)
assert.Equal(t, state.State, b)
}
// add remaining certified state [◄(3) 4] ◄(4) - should finalize [◄(2) 3] and prune [◄(1) 2]
require.Nil(t, forks.AddCertifiedState(remainingState))
// should be able to retrieve just added state
b, ok := forks.GetState(remainingState.State.Identifier)
assert.True(t, ok)
assert.Equal(t, remainingState.State, b)
// should be unable to retrieve pruned state
_, ok = forks.GetState(statesAddedFirst[0].State.Identifier)
assert.False(t, ok)
})
}
// TestGetStatesForRank tests retrieving states for a rank (also including double proposals).
// - Forks receives [◄(1) 2] [◄(2) 4] [◄(2) 4'],
// where [◄(2) 4'] is a double proposal, because it has the same rank as [◄(2) 4]
//
// Expected behaviour:
// - Forks should store all the states
// - Forks should emit a `OnDoubleProposeDetected` notification
// - we can retrieve all states, including the double proposals
func TestGetStatesForRank(t *testing.T) {
states, err := NewStateBuilder().
Add(1, 2). // [◄(1) 2]
Add(2, 4). // [◄(2) 4]
AddVersioned(2, 4, 0, 1). // [◄(2) 4']
States()
require.NoError(t, err)
t.Run("consensus participant mode: ingest validated states", func(t *testing.T) {
forks, notifier := newForks(t)
notifier.On("OnDoubleProposeDetected", states[2], states[1]).Once()
err = addValidatedStateToForks(forks, states)
require.NoError(t, err)
// expect 1 state at rank 2
storedStates := forks.GetStatesForRank(2)
assert.Len(t, storedStates, 1)
assert.Equal(t, states[0], storedStates[0])
// expect 2 states at rank 4
storedStates = forks.GetStatesForRank(4)
assert.Len(t, storedStates, 2)
assert.ElementsMatch(t, states[1:], storedStates)
// expect 0 states at rank 3
storedStates = forks.GetStatesForRank(3)
assert.Len(t, storedStates, 0)
})
t.Run("consensus follower mode: ingest certified states", func(t *testing.T) {
forks, notifier := newForks(t)
notifier.On("OnDoubleProposeDetected", states[2], states[1]).Once()
err := forks.AddCertifiedState(toCertifiedState(t, states[0]))
require.NoError(t, err)
err = forks.AddCertifiedState(toCertifiedState(t, states[1]))
require.NoError(t, err)
err = forks.AddCertifiedState(toCertifiedState(t, states[2]))
require.NoError(t, err)
// expect 1 state at rank 2
storedStates := forks.GetStatesForRank(2)
assert.Len(t, storedStates, 1)
assert.Equal(t, states[0], storedStates[0])
// expect 2 states at rank 4
storedStates = forks.GetStatesForRank(4)
assert.Len(t, storedStates, 2)
assert.ElementsMatch(t, states[1:], storedStates)
// expect 0 states at rank 3
storedStates = forks.GetStatesForRank(3)
assert.Len(t, storedStates, 0)
})
}
// TestNotifications tests that Forks emits the expected events:
// - Forks receives [◄(1) 2] [◄(2) 3] [◄(3) 4]
//
// Expected Behaviour:
// - Each of the ingested states should result in an `OnStateIncorporated` notification
// - Forks should finalize [◄(1) 2], resulting in a `MakeFinal` event and an `OnFinalizedState` event
func TestNotifications(t *testing.T) {
builder := NewStateBuilder().
Add(1, 2).
Add(2, 3).
Add(3, 4)
states, err := builder.States()
require.NoError(t, err)
t.Run("consensus participant mode: ingest validated states", func(t *testing.T) {
notifier := &mocks.Consumer[*helper.TestState, *helper.TestVote]{}
// 4 states including the genesis are incorporated
notifier.On("OnStateIncorporated", mock.Anything).Return(nil).Times(4)
notifier.On("OnFinalizedState", states[0]).Once()
finalizationCallback := mocks.NewFinalizer(t)
finalizationCallback.On("MakeFinal", states[0].Identifier).Return(nil).Once()
forks, err := NewForks(builder.GenesisState(), finalizationCallback, notifier)
require.NoError(t, err)
require.NoError(t, addValidatedStateToForks(forks, states))
})
t.Run("consensus follower mode: ingest certified states", func(t *testing.T) {
notifier := &mocks.Consumer[*helper.TestState, *helper.TestVote]{}
// 4 states including the genesis are incorporated
notifier.On("OnStateIncorporated", mock.Anything).Return(nil).Times(4)
notifier.On("OnFinalizedState", states[0]).Once()
finalizationCallback := mocks.NewFinalizer(t)
finalizationCallback.On("MakeFinal", states[0].Identifier).Return(nil).Once()
forks, err := NewForks(builder.GenesisState(), finalizationCallback, notifier)
require.NoError(t, err)
require.NoError(t, addCertifiedStatesToForks(forks, states))
})
}
// TestFinalizingMultipleStates tests that `OnFinalizedState` notifications are emitted in correct order
// when there are multiple states finalized by adding a _single_ state.
// - receiving [◄(1) 3] [◄(3) 5] [◄(5) 7] [◄(7) 11] [◄(11) 12] should not finalize any states,
// because there is no 2-chain with the first chain link being a _direct_ 1-chain
// - adding [◄(12) 22] should finalize up to state [◄(7) 11]
//
// This test verifies the following expected properties:
// 1. Safety under reentrancy:
// While Forks is single-threaded, there is still the possibility of reentrancy. Specifically, the
// consumers of our finalization events are served by the goroutine executing Forks. It is conceivable
// that a consumer might access Forks and query the latest finalization proof. This would be legal, if
// the component supplying the goroutine to Forks also consumes the notifications. Therefore, for API
// safety, we require forks to _first update_ its `FinalityProof()` before it emits _any_ events.
// 2. For each finalized state, the `finalizationCallback` is executed _before_ the `OnFinalizedState` notification.
// 3. States are finalized in order of increasing rank (without skipping any states).
func TestFinalizingMultipleStates(t *testing.T) {
builder := NewStateBuilder().
Add(1, 3). // index 0: [◄(1) 3]
Add(3, 5). // index 1: [◄(3) 5]
Add(5, 7). // index 2: [◄(5) 7]
Add(7, 11). // index 3: [◄(7) 11] -- expected to be finalized
Add(11, 12). // index 4: [◄(11) 12]
Add(12, 22) // index 5: [◄(12) 22]
states, err := builder.States()
require.NoError(t, err)
// The finality proof should immediately point to the _latest_ finalized state. Subsequently emitting
// Finalization events for lower states is fine, because notifications are guaranteed to be
// _eventually_ arriving. I.e. consumers expect notifications / events to be potentially lagging behind.
expectedFinalityProof := makeFinalityProof(t, states[3], states[4], states[5].ParentQuorumCertificate)
setupForksAndAssertions := func() (*Forks[*helper.TestState, *helper.TestVote], *mocks.Finalizer, *mocks.Consumer[*helper.TestState, *helper.TestVote]) {
// initialize Forks with custom event consumers so we can check order of emitted events
notifier := &mocks.Consumer[*helper.TestState, *helper.TestVote]{}
finalizationCallback := mocks.NewFinalizer(t)
notifier.On("OnStateIncorporated", mock.Anything).Return(nil)
forks, err := NewForks(builder.GenesisState(), finalizationCallback, notifier)
require.NoError(t, err)
// expecting finalization of [◄(1) 3] [◄(3) 5] [◄(5) 7] [◄(7) 11] in this order
statesAwaitingFinalization := toStateAwaitingFinalization(states[:4])
finalizationCallback.On("MakeFinal", mock.Anything).Run(func(args mock.Arguments) {
requireFinalityProof(t, forks, expectedFinalityProof) // Requirement 1: forks should _first update_ its `FinalityProof()` before it emits _any_ events
// Requirement 3: states are finalized in order of increasing rank (without skipping any states).
expectedNextFinalizationEvents := statesAwaitingFinalization[0]
require.Equal(t, expectedNextFinalizationEvents.State.Identifier, args[0])
// Requirement 2: for each finalized state, the `finalizationCallback` is executed _before_ the `OnFinalizedState` notification.
// no duplication of events under normal operations expected
require.False(t, expectedNextFinalizationEvents.MakeFinalCalled)
require.False(t, expectedNextFinalizationEvents.OnFinalizedStateEmitted)
expectedNextFinalizationEvents.MakeFinalCalled = true
}).Return(nil).Times(4)
notifier.On("OnFinalizedState", mock.Anything).Run(func(args mock.Arguments) {
requireFinalityProof(t, forks, expectedFinalityProof) // Requirement 1: forks should _first update_ its `FinalityProof()` before it emits _any_ events
// Requirement 3: states are finalized in order of increasing rank (without skipping any states).
expectedNextFinalizationEvents := statesAwaitingFinalization[0]
require.Equal(t, expectedNextFinalizationEvents.State, args[0])
// Requirement 2: for each finalized state, the `finalizationCallback` is executed _before_ the `OnFinalizedState` notification.
// no duplication of events under normal operations expected
require.True(t, expectedNextFinalizationEvents.MakeFinalCalled)
require.False(t, expectedNextFinalizationEvents.OnFinalizedStateEmitted)
expectedNextFinalizationEvents.OnFinalizedStateEmitted = true
// At this point, `MakeFinal` and `OnFinalizedState` have both been emitted for the state, so we are done with it
statesAwaitingFinalization = statesAwaitingFinalization[1:]
}).Times(4)
return forks, finalizationCallback, notifier
}
t.Run("consensus participant mode: ingest validated states", func(t *testing.T) {
forks, finalizationCallback, notifier := setupForksAndAssertions()
err = addValidatedStateToForks(forks, states[:5]) // adding [◄(1) 3] [◄(3) 5] [◄(5) 7] [◄(7) 11] [◄(11) 12]
require.NoError(t, err)
requireOnlyGenesisStateFinalized(t, forks) // finalization should still be at the genesis state
require.NoError(t, forks.AddValidatedState(states[5])) // adding [◄(12) 22] should trigger finalization events
requireFinalityProof(t, forks, expectedFinalityProof)
finalizationCallback.AssertExpectations(t)
notifier.AssertExpectations(t)
})
t.Run("consensus follower mode: ingest certified states", func(t *testing.T) {
forks, finalizationCallback, notifier := setupForksAndAssertions()
// adding [◄(1) 3] [◄(3) 5] [◄(5) 7] [◄(7) 11] ◄(11)
require.NoError(t, forks.AddCertifiedState(toCertifiedState(t, states[0])))
require.NoError(t, forks.AddCertifiedState(toCertifiedState(t, states[1])))
require.NoError(t, forks.AddCertifiedState(toCertifiedState(t, states[2])))
require.NoError(t, forks.AddCertifiedState(toCertifiedState(t, states[3])))
require.NoError(t, err)
requireOnlyGenesisStateFinalized(t, forks) // finalization should still be at the genesis state
// adding certified state [◄(11) 12] ◄(12) should trigger finalization events
require.NoError(t, forks.AddCertifiedState(toCertifiedState(t, states[4])))
requireFinalityProof(t, forks, expectedFinalityProof)
finalizationCallback.AssertExpectations(t)
notifier.AssertExpectations(t)
})
}
/* ************************************* internal functions ************************************* */
func newForks(t *testing.T) (*Forks[*helper.TestState, *helper.TestVote], *mocks.Consumer[*helper.TestState, *helper.TestVote]) {
notifier := mocks.NewConsumer[*helper.TestState, *helper.TestVote](t)
notifier.On("OnStateIncorporated", mock.Anything).Return(nil).Maybe()
notifier.On("OnFinalizedState", mock.Anything).Maybe()
finalizationCallback := mocks.NewFinalizer(t)
finalizationCallback.On("MakeFinal", mock.Anything).Return(nil).Maybe()
genesisBQ := makeGenesis()
forks, err := NewForks(genesisBQ, finalizationCallback, notifier)
require.NoError(t, err)
return forks, notifier
}
// addValidatedStateToForks adds all the given states to Forks, in order.
// If any errors occur, returns the first one.
func addValidatedStateToForks(forks *Forks[*helper.TestState, *helper.TestVote], states []*models.State[*helper.TestState]) error {
for _, state := range states {
err := forks.AddValidatedState(state)
if err != nil {
return fmt.Errorf("test failed to add state for rank %d: %w", state.Rank, err)
}
}
return nil
}
// addCertifiedStatesToForks iterates over all states, caches them locally in a map,
// constructs certified states whenever possible, and adds the certified states to forks.
//
// Note: if `states` is a single fork, the _last state_ in the slice will not be added,
// because there is no QC for it.
//
// If any errors occur, returns the first one.
func addCertifiedStatesToForks(forks *Forks[*helper.TestState, *helper.TestVote], states []*models.State[*helper.TestState]) error {
uncertifiedStates := make(map[models.Identity]*models.State[*helper.TestState])
for _, b := range states {
uncertifiedStates[b.Identifier] = b
parentID := b.ParentQuorumCertificate.Identity()
parent, found := uncertifiedStates[parentID]
if !found {
continue
}
delete(uncertifiedStates, parentID)
certParent, err := models.NewCertifiedState(parent, b.ParentQuorumCertificate)
if err != nil {
return fmt.Errorf("test failed to create certified state for rank %d: %w", certParent.State.Rank, err)
}
err = forks.AddCertifiedState(certParent)
if err != nil {
return fmt.Errorf("test failed to add certified state for rank %d: %w", certParent.State.Rank, err)
}
}
return nil
}
// requireLatestFinalizedState asserts that the latest finalized state matches the expected state, including its rank.
func requireLatestFinalizedState(t *testing.T, forks *Forks[*helper.TestState, *helper.TestVote], expectedFinalized *models.State[*helper.TestState]) {
require.Equal(t, expectedFinalized, forks.FinalizedState(), "finalized state is not as expected")
require.Equal(t, forks.FinalizedRank(), expectedFinalized.Rank, "FinalizedRank returned wrong value")
}
// requireOnlyGenesisStateFinalized asserts that no states have been finalized beyond the genesis state.
// Caution: does not inspect output of `forks.FinalityProof()`
func requireOnlyGenesisStateFinalized(t *testing.T, forks *Forks[*helper.TestState, *helper.TestVote]) {
genesis := makeGenesis()
require.Equal(t, forks.FinalizedState(), genesis.State, "finalized state is not the genesis state")
require.Equal(t, forks.FinalizedState().Rank, genesis.State.Rank)
require.Equal(t, forks.FinalizedState().Rank, genesis.CertifyingQuorumCertificate.GetRank())
require.Equal(t, forks.FinalizedRank(), genesis.State.Rank, "FinalizedRank returned wrong value")
finalityProof, isKnown := forks.FinalityProof()
require.Nil(t, finalityProof, "expecting finality proof to be nil for genesis state at initialization")
require.False(t, isKnown, "no finality proof should be known for genesis state at initialization")
}
// requireFinalityProof asserts that `forks.FinalityProof()` returns the expected finality proof
// and that the finalized state and rank are consistent with it.
func requireFinalityProof(t *testing.T, forks *Forks[*helper.TestState, *helper.TestVote], expectedFinalityProof *consensus.FinalityProof[*helper.TestState]) {
finalityProof, isKnown := forks.FinalityProof()
require.True(t, isKnown)
require.Equal(t, expectedFinalityProof, finalityProof)
require.Equal(t, forks.FinalizedState(), expectedFinalityProof.State)
require.Equal(t, forks.FinalizedRank(), expectedFinalityProof.State.Rank)
}
// toCertifiedState generates a QC for the given state and returns their combination as a certified state
func toCertifiedState(t *testing.T, state *models.State[*helper.TestState]) *models.CertifiedState[*helper.TestState] {
qc := &helper.TestQuorumCertificate{
Rank: state.Rank,
Selector: state.Identifier,
}
cb, err := models.NewCertifiedState(state, qc)
require.NoError(t, err)
return cb
}
// toCertifiedStates generates a QC for each of the given states and returns the combinations as certified states
func toCertifiedStates(t *testing.T, states ...*models.State[*helper.TestState]) []*models.CertifiedState[*helper.TestState] {
certStates := make([]*models.CertifiedState[*helper.TestState], 0, len(states))
for _, b := range states {
certStates = append(certStates, toCertifiedState(t, b))
}
return certStates
}
func makeFinalityProof(t *testing.T, state *models.State[*helper.TestState], directChild *models.State[*helper.TestState], qcCertifyingChild models.QuorumCertificate) *consensus.FinalityProof[*helper.TestState] {
c, err := models.NewCertifiedState(directChild, qcCertifyingChild) // certified child of FinalizedState
require.NoError(t, err)
return &consensus.FinalityProof[*helper.TestState]{State: state, CertifiedChild: c}
}
// stateAwaitingFinalization is intended for tracking finalization events and their order for a specific state
type stateAwaitingFinalization struct {
State *models.State[*helper.TestState]
MakeFinalCalled bool // indicates whether `Finalizer.MakeFinal` was called
OnFinalizedStateEmitted bool // indicates whether the `OnFinalizedState` notification was emitted
}
// toStateAwaitingFinalization creates a `stateAwaitingFinalization` tracker for each input state
func toStateAwaitingFinalization(states []*models.State[*helper.TestState]) []*stateAwaitingFinalization {
trackers := make([]*stateAwaitingFinalization, 0, len(states))
for _, b := range states {
tracker := &stateAwaitingFinalization{b, false, false}
trackers = append(trackers, tracker)
}
return trackers
}


@ -0,0 +1,165 @@
package forks
import (
"fmt"
"source.quilibrium.com/quilibrium/monorepo/consensus/helper"
"source.quilibrium.com/quilibrium/monorepo/consensus/models"
)
// StateRank specifies the data to create a state
type StateRank struct {
// Rank is the rank of the state to be created
Rank uint64
// StateVersion is the version of the state for that rank.
// Useful for creating conflicting states at the same rank.
StateVersion int
// QCRank is the rank of the QC embedded in this state (also: the rank of the state's parent)
QCRank uint64
// QCVersion is the version of the QC for that rank.
QCVersion int
}
// QCIndex returns a unique identifier for the state's QC.
func (bv *StateRank) QCIndex() string {
return fmt.Sprintf("%v-%v", bv.QCRank, bv.QCVersion)
}
// StateIndex returns a unique identifier for the state.
func (bv *StateRank) StateIndex() string {
return fmt.Sprintf("%v-%v", bv.Rank, bv.StateVersion)
}
// StateBuilder is a test utility for creating state structure fixtures.
type StateBuilder struct {
stateRanks []*StateRank
}
func NewStateBuilder() *StateBuilder {
return &StateBuilder{
stateRanks: make([]*StateRank, 0),
}
}
// Add adds a state with the given qcRank and stateRank. Returns self-reference for chaining.
func (bb *StateBuilder) Add(qcRank uint64, stateRank uint64) *StateBuilder {
bb.stateRanks = append(bb.stateRanks, &StateRank{
Rank: stateRank,
QCRank: qcRank,
})
return bb
}
// GenesisState returns the genesis state, which is always finalized.
func (bb *StateBuilder) GenesisState() *models.CertifiedState[*helper.TestState] {
return makeGenesis()
}
// AddVersioned adds a state with the given qcRank and stateRank.
// In addition, the version identifier of the QC embedded within the state
// is specified by `qcVersion`. The version identifier for the state itself
// (primarily for emulating different state ID) is specified by `stateVersion`.
// [◄(3) 4] denotes a state of rank 4, with a QC for rank 3
// [◄(3) 4'] denotes a state of rank 4 that is different from [◄(3) 4], with a QC for rank 3
// [◄(3) 4'] can be created by AddVersioned(3, 4, 0, 1)
// [◄(3') 4] can be created by AddVersioned(3, 4, 1, 0)
// Returns self-reference for chaining.
func (bb *StateBuilder) AddVersioned(qcRank uint64, stateRank uint64, qcVersion int, stateVersion int) *StateBuilder {
bb.stateRanks = append(bb.stateRanks, &StateRank{
Rank: stateRank,
QCRank: qcRank,
StateVersion: stateVersion,
QCVersion: qcVersion,
})
return bb
}
// Proposals returns a list of all proposals added to the StateBuilder.
// Returns an error if the states do not form a connected tree rooted at genesis.
func (bb *StateBuilder) Proposals() ([]*models.Proposal[*helper.TestState], error) {
states := make([]*models.Proposal[*helper.TestState], 0, len(bb.stateRanks))
genesisState := makeGenesis()
genesisBV := &StateRank{
Rank: genesisState.State.Rank,
QCRank: genesisState.CertifyingQuorumCertificate.GetRank(),
}
qcs := make(map[string]models.QuorumCertificate)
qcs[genesisBV.QCIndex()] = genesisState.CertifyingQuorumCertificate
for _, bv := range bb.stateRanks {
qc, ok := qcs[bv.QCIndex()]
if !ok {
return nil, fmt.Errorf("test failed: no qc found for qc index: %v", bv.QCIndex())
}
var previousRankTimeoutCert models.TimeoutCertificate
if qc.GetRank()+1 != bv.Rank {
previousRankTimeoutCert = helper.MakeTC(helper.WithTCRank(bv.Rank - 1))
}
proposal := &models.Proposal[*helper.TestState]{
State: &models.State[*helper.TestState]{
Rank: bv.Rank,
ParentQuorumCertificate: qc,
},
PreviousRankTimeoutCertificate: previousRankTimeoutCert,
}
proposal.State.Identifier = makeIdentifier(proposal.State, bv.StateVersion)
states = append(states, proposal)
// generate QC for the new proposal
qcs[bv.StateIndex()] = &helper.TestQuorumCertificate{
Rank: proposal.State.Rank,
Selector: proposal.State.Identifier,
AggregatedSignature: nil,
}
}
return states, nil
}
// States returns a list of all states added to the StateBuilder.
// Returns an error if the states do not form a connected tree rooted at genesis.
func (bb *StateBuilder) States() ([]*models.State[*helper.TestState], error) {
proposals, err := bb.Proposals()
if err != nil {
return nil, fmt.Errorf("StateBuilder failed to generate proposals: %w", err)
}
return toStates(proposals), nil
}
// makeIdentifier creates a state identifier based on the state's rank, QC, and state version.
// This is used to identify states uniquely, in this specific test setup.
// ATTENTION: this should not be confused with the state ID used in production code which is a collision-resistant hash
// of the full state content.
func makeIdentifier(state *models.State[*helper.TestState], stateVersion int) models.Identity {
return fmt.Sprintf("%d-%s-%d", state.Rank, state.Identifier, stateVersion)
}
// constructs the genesis state (identical for all calls)
func makeGenesis() *models.CertifiedState[*helper.TestState] {
genesis := &models.State[*helper.TestState]{
Rank: 1,
}
genesis.Identifier = makeIdentifier(genesis, 0)
genesisQC := &helper.TestQuorumCertificate{
Rank: 1,
Selector: genesis.Identifier,
}
certifiedGenesisState, err := models.NewCertifiedState(genesis, genesisQC)
if err != nil {
panic(fmt.Sprintf("combining genesis state and genesis QC to certified state failed: %s", err.Error()))
}
return certifiedGenesisState
}
// toStates converts the given proposals to slice of states
func toStates(proposals []*models.Proposal[*helper.TestState]) []*models.State[*helper.TestState] {
states := make([]*models.State[*helper.TestState], 0, len(proposals))
for _, b := range proposals {
states = append(states, b.State)
}
return states
}


@ -0,0 +1,77 @@
package forks
import (
"source.quilibrium.com/quilibrium/monorepo/consensus/forest"
"source.quilibrium.com/quilibrium/monorepo/consensus/models"
)
// StateContainer wraps a state proposal to implement forest.Vertex
// so the proposal can be stored in forest.LevelledForest
type StateContainer[StateT models.Unique] models.State[StateT]
var _ forest.Vertex = (*StateContainer[*nilUnique])(nil)
func ToStateContainer2[StateT models.Unique](
state *models.State[StateT],
) *StateContainer[StateT] {
return (*StateContainer[StateT])(state)
}
func (b *StateContainer[StateT]) GetState() *models.State[StateT] {
return (*models.State[StateT])(b)
}
// Functions implementing forest.Vertex
func (b *StateContainer[StateT]) VertexID() models.Identity {
return b.Identifier
}
func (b *StateContainer[StateT]) Level() uint64 {
return b.Rank
}
func (b *StateContainer[StateT]) Parent() (models.Identity, uint64) {
// Caution: not all states have a QC for the parent, such as the spork root
// states. Per API contract, we are obliged to return a value to prevent
// panics during logging (see the `forest.VertexToString` method).
if b.ParentQuorumCertificate == nil {
return "", 0
}
return b.ParentQuorumCertificate.Identity(),
b.ParentQuorumCertificate.GetRank()
}
// nilUnique is a stub type used to instantiate the generic arguments in the compile-time type assertion check
type nilUnique struct{}
// GetSignature implements models.Unique.
func (n *nilUnique) GetSignature() []byte {
panic("unimplemented")
}
// GetTimestamp implements models.Unique.
func (n *nilUnique) GetTimestamp() uint64 {
panic("unimplemented")
}
// Source implements models.Unique.
func (n *nilUnique) Source() models.Identity {
panic("unimplemented")
}
// Clone implements models.Unique.
func (n *nilUnique) Clone() models.Unique {
panic("unimplemented")
}
// GetRank implements models.Unique.
func (n *nilUnique) GetRank() uint64 {
panic("unimplemented")
}
// Identity implements models.Unique.
func (n *nilUnique) Identity() models.Identity {
panic("unimplemented")
}
var _ models.Unique = (*nilUnique)(nil)


@ -1,16 +1,8 @@
 module source.quilibrium.com/quilibrium/monorepo/consensus
-go 1.23.0
+go 1.24.0
-toolchain go1.23.4
+toolchain go1.24.9
-replace source.quilibrium.com/quilibrium/monorepo/protobufs => ../protobufs
-replace source.quilibrium.com/quilibrium/monorepo/types => ../types
-replace source.quilibrium.com/quilibrium/monorepo/config => ../config
-replace source.quilibrium.com/quilibrium/monorepo/utils => ../utils
 replace github.com/multiformats/go-multiaddr => ../go-multiaddr
@ -20,13 +12,24 @@ replace github.com/libp2p/go-libp2p => ../go-libp2p
 replace github.com/libp2p/go-libp2p-kad-dht => ../go-libp2p-kad-dht
-replace source.quilibrium.com/quilibrium/monorepo/go-libp2p-blossomsub => ../go-libp2p-blossomsub
+replace source.quilibrium.com/quilibrium/monorepo/lifecycle => ../lifecycle
-require go.uber.org/zap v1.27.0
+require github.com/gammazero/workerpool v1.1.3
 require (
-	github.com/stretchr/testify v1.10.0 // indirect
-	go.uber.org/multierr v1.11.0 // indirect
+	github.com/davecgh/go-spew v1.1.1 // indirect
+	github.com/gammazero/deque v0.2.0 // indirect
+	github.com/kr/pretty v0.3.1 // indirect
+	github.com/pmezard/go-difflib v1.0.0 // indirect
+	github.com/stretchr/objx v0.5.2 // indirect
+	go.uber.org/goleak v1.3.0 // indirect
+	gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c // indirect
+	gopkg.in/yaml.v3 v3.0.1 // indirect
 )
-require github.com/pkg/errors v0.9.1
+require (
+	github.com/stretchr/testify v1.11.1
+	go.uber.org/atomic v1.11.0
+	golang.org/x/sync v0.17.0
+	source.quilibrium.com/quilibrium/monorepo/lifecycle v0.0.0-00010101000000-000000000000
+)


@ -1,16 +1,40 @@
+github.com/creack/pty v1.1.9/go.mod h1:oKZEueFk5CKHvIhNR5MUki03XCEU+Q6VDXinZuGJ33E=
 github.com/davecgh/go-spew v1.1.1 h1:vj9j/u1bqnvCEfJOwUhtlOARqs3+rkHYY13jYWTU97c=
 github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
-github.com/pkg/errors v0.9.1 h1:FEBLx1zS214owpjy7qsBeixbURkuhQAwrK5UwLGTwt4=
-github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
+github.com/gammazero/deque v0.2.0 h1:SkieyNB4bg2/uZZLxvya0Pq6diUlwx7m2TeT7GAIWaA=
+github.com/gammazero/deque v0.2.0/go.mod h1:LFroj8x4cMYCukHJDbxFCkT+r9AndaJnFMuZDV34tuU=
+github.com/gammazero/workerpool v1.1.3 h1:WixN4xzukFoN0XSeXF6puqEqFTl2mECI9S6W44HWy9Q=
+github.com/gammazero/workerpool v1.1.3/go.mod h1:wPjyBLDbyKnUn2XwwyD3EEwo9dHutia9/fwNmSHWACc=
+github.com/hashicorp/errwrap v1.0.0 h1:hLrqtEDnRye3+sgx6z4qVLNuviH3MR5aQ0ykNJa/UYA=
+github.com/hashicorp/errwrap v1.0.0/go.mod h1:YH+1FKiLXxHSkmPseP+kNlulaMuP3n2brvKWEqk/Jc4=
+github.com/hashicorp/go-multierror v1.1.1 h1:H5DkEtf6CXdFp0N0Em5UCwQpXMWke8IA0+lD48awMYo=
+github.com/hashicorp/go-multierror v1.1.1/go.mod h1:iw975J/qwKPdAO1clOe2L8331t/9/fmwbPZ6JB6eMoM=
+github.com/kr/pretty v0.2.1/go.mod h1:ipq/a2n7PKx3OHsz4KJII5eveXtPO4qwEXGdVfWzfnI=
+github.com/kr/pretty v0.3.1 h1:flRD4NNwYAUpkphVc1HcthR4KEIFJ65n8Mw5qdRn3LE=
+github.com/kr/pretty v0.3.1/go.mod h1:hoEshYVHaxMs3cyo3Yncou5ZscifuDolrwPKZanG3xk=
+github.com/kr/pty v1.1.1/go.mod h1:pFQYn66WHrOpPYNljwOMqo10TkYh1fy3cYio2l3bCsQ=
+github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI=
+github.com/kr/text v0.2.0 h1:5Nx0Ya0ZqY2ygV366QzturHI13Jq95ApcVaJBhpS+AY=
+github.com/kr/text v0.2.0/go.mod h1:eLer722TekiGuMkidMxC/pM04lWEeraHUUmBw8l2grE=
+github.com/pkg/diff v0.0.0-20210226163009-20ebb0f2a09e/go.mod h1:pJLUxLENpZxwdsKMEsNbx1VGcRFpLqf3715MtcvvzbA=
 github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
 github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
-github.com/stretchr/testify v1.10.0 h1:Xv5erBjTwe/5IxqUQTdXv5kgmIvbHo3QQyRwhJsOfJA=
-github.com/stretchr/testify v1.10.0/go.mod h1:r2ic/lqez/lEtzL7wO/rwa5dbSLXVDPFyf8C91i36aY=
+github.com/rogpeppe/go-internal v1.9.0 h1:73kH8U+JUqXU8lRuOHeVHaa/SZPifC7BkcraZVejAe8=
+github.com/rogpeppe/go-internal v1.9.0/go.mod h1:WtVeX8xhTBvf0smdhujwtBcq4Qrzq/fJaraNFVN+nFs=
+github.com/stretchr/objx v0.5.2 h1:xuMeJ0Sdp5ZMRXx/aWO6RZxdr3beISkG5/G/aIRr3pY=
+github.com/stretchr/objx v0.5.2/go.mod h1:FRsXN1f5AsAjCGJKqEizvkpNtU+EGNCLh3NxZ/8L+MA=
+github.com/stretchr/testify v1.11.1 h1:7s2iGBzp5EwR7/aIZr8ao5+dra3wiQyKjjFuvgVKu7U=
+github.com/stretchr/testify v1.11.1/go.mod h1:wZwfW3scLgRK+23gO65QZefKpKQRnfz6sD981Nm4B6U=
+go.uber.org/atomic v1.11.0 h1:ZvwS0R+56ePWxUNi+Atn9dWONBPp/AUETXlHW0DxSjE=
+go.uber.org/atomic v1.11.0/go.mod h1:LUxbIzbOniOlMKjJjyPfpl4v+PKK2cNJn91OQbhoJI0=
 go.uber.org/goleak v1.3.0 h1:2K3zAYmnTNqV73imy9J1T3WC+gmCePx2hEGkimedGto=
 go.uber.org/goleak v1.3.0/go.mod h1:CoHD4mav9JJNrW/WLlf7HGZPjdw8EucARQHekz1X6bE=
-go.uber.org/multierr v1.11.0 h1:blXXJkSxSSfBVBlC76pxqeO+LN3aDfLQo+309xJstO0=
-go.uber.org/multierr v1.11.0/go.mod h1:20+QtiLqy0Nd6FdQB9TLXag12DsQkrbs3htMFfDN80Y=
-go.uber.org/zap v1.27.0 h1:aJMhYGrd5QSmlpLMr2MftRKl7t8J8PTZPA732ud/XR8=
-go.uber.org/zap v1.27.0/go.mod h1:GB2qFLM7cTU87MWRP2mPIjqfIDnGu+VIO4V/SdhGo2E=
+golang.org/x/sync v0.17.0 h1:l60nONMj9l5drqw6jlhIELNv9I0A4OFgRsG9k2oT9Ug=
+golang.org/x/sync v0.17.0/go.mod h1:9KTHXmSnoGruLpwFjVSX0lNNA75CykiMECbovNTZqGI=
+gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0=
+gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c h1:Hei/4ADfdWqJk1ZMxUNpqntNwaWcugrBjAiHlqqRiVk=
+gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c/go.mod h1:JHkPIbrfpd72SG/EVd6muEfDQjcINNoR0C8j2r3qZ4Q=
 gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=
 gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
+pgregory.net/rapid v1.2.0 h1:keKAYRcjm+e1F0oAuU5F5+YPAWcyxNNRK2wud503Gnk=
+pgregory.net/rapid v1.2.0/go.mod h1:PY5XlDGj0+V1FCq0o192FdRhpKHGTRIWBgqjDBTrq04=


@@ -0,0 +1,122 @@
package helper
import (
"bytes"
crand "crypto/rand"
"fmt"
"math/rand"
"time"
"source.quilibrium.com/quilibrium/monorepo/consensus/models"
)
type TestAggregatedSignature struct {
Signature []byte
PublicKey []byte
Bitmask []byte
}
func (t *TestAggregatedSignature) GetSignature() []byte {
return t.Signature
}
func (t *TestAggregatedSignature) GetPubKey() []byte {
return t.PublicKey
}
func (t *TestAggregatedSignature) GetBitmask() []byte {
return t.Bitmask
}
type TestQuorumCertificate struct {
Filter []byte
Rank uint64
FrameNumber uint64
Selector models.Identity
Timestamp uint64
AggregatedSignature models.AggregatedSignature
}
func (t *TestQuorumCertificate) GetFilter() []byte {
return t.Filter
}
func (t *TestQuorumCertificate) GetRank() uint64 {
return t.Rank
}
func (t *TestQuorumCertificate) GetFrameNumber() uint64 {
return t.FrameNumber
}
func (t *TestQuorumCertificate) Identity() models.Identity {
return t.Selector
}
func (t *TestQuorumCertificate) GetTimestamp() uint64 {
return t.Timestamp
}
func (t *TestQuorumCertificate) GetAggregatedSignature() models.AggregatedSignature {
return t.AggregatedSignature
}
func (t *TestQuorumCertificate) Equals(other models.QuorumCertificate) bool {
return bytes.Equal(t.Filter, other.GetFilter()) &&
t.Rank == other.GetRank() &&
t.FrameNumber == other.GetFrameNumber() &&
t.Selector == other.Identity() &&
t.Timestamp == other.GetTimestamp() &&
bytes.Equal(
t.AggregatedSignature.GetBitmask(),
other.GetAggregatedSignature().GetBitmask(),
) &&
bytes.Equal(
t.AggregatedSignature.GetPubKey(),
other.GetAggregatedSignature().GetPubKey(),
) &&
bytes.Equal(
t.AggregatedSignature.GetSignature(),
other.GetAggregatedSignature().GetSignature(),
)
}
func MakeQC(options ...func(*TestQuorumCertificate)) models.QuorumCertificate {
s := make([]byte, 32)
crand.Read(s)
qc := &TestQuorumCertificate{
Rank: rand.Uint64(),
FrameNumber: rand.Uint64() + 1,
Selector: string(s),
Timestamp: uint64(time.Now().UnixMilli()),
AggregatedSignature: &TestAggregatedSignature{
PublicKey: make([]byte, 585),
Signature: make([]byte, 74),
Bitmask: []byte{0x01},
},
}
for _, option := range options {
option(qc)
}
return qc
}
func WithQCState[StateT models.Unique](state *models.State[StateT]) func(*TestQuorumCertificate) {
return func(qc *TestQuorumCertificate) {
qc.Rank = state.Rank
qc.Selector = state.Identifier
}
}
func WithQCSigners(signerIndices []byte) func(*TestQuorumCertificate) {
return func(qc *TestQuorumCertificate) {
qc.AggregatedSignature.(*TestAggregatedSignature).Bitmask = signerIndices // buildutils:allow-slice-alias
}
}
func WithQCRank(rank uint64) func(*TestQuorumCertificate) {
return func(qc *TestQuorumCertificate) {
qc.Rank = rank
qc.Selector = fmt.Sprintf("%d", rank)
}
}

consensus/helper/state.go Normal file

@@ -0,0 +1,467 @@
package helper
import (
crand "crypto/rand"
"fmt"
"math/rand"
"slices"
"strings"
"time"
"source.quilibrium.com/quilibrium/monorepo/consensus"
"source.quilibrium.com/quilibrium/monorepo/consensus/models"
)
type TestWeightedIdentity struct {
ID string
}
// Identity implements models.WeightedIdentity.
func (t *TestWeightedIdentity) Identity() models.Identity {
return t.ID
}
// PublicKey implements models.WeightedIdentity.
func (t *TestWeightedIdentity) PublicKey() []byte {
return make([]byte, 585)
}
// Weight implements models.WeightedIdentity.
func (t *TestWeightedIdentity) Weight() uint64 {
return 1000
}
var _ models.WeightedIdentity = (*TestWeightedIdentity)(nil)
type TestState struct {
Rank uint64
Signature []byte
Timestamp uint64
ID models.Identity
Prover models.Identity
}
// Clone implements models.Unique.
func (t *TestState) Clone() models.Unique {
return &TestState{
Rank: t.Rank,
Signature: slices.Clone(t.Signature),
Timestamp: t.Timestamp,
ID: t.ID,
Prover: t.Prover,
}
}
// GetRank implements models.Unique.
func (t *TestState) GetRank() uint64 {
return t.Rank
}
// GetSignature implements models.Unique.
func (t *TestState) GetSignature() []byte {
return t.Signature
}
// GetTimestamp implements models.Unique.
func (t *TestState) GetTimestamp() uint64 {
return t.Timestamp
}
// Identity implements models.Unique.
func (t *TestState) Identity() models.Identity {
return t.ID
}
// Source implements models.Unique.
func (t *TestState) Source() models.Identity {
return t.Prover
}
type TestVote struct {
Rank uint64
Signature []byte
Timestamp uint64
ID models.Identity
StateID models.Identity
}
// Clone implements models.Unique.
func (t *TestVote) Clone() models.Unique {
return &TestVote{
Rank: t.Rank,
Signature: slices.Clone(t.Signature),
Timestamp: t.Timestamp,
ID: t.ID,
StateID: t.StateID,
}
}
// GetRank implements models.Unique.
func (t *TestVote) GetRank() uint64 {
return t.Rank
}
// GetSignature implements models.Unique.
func (t *TestVote) GetSignature() []byte {
return t.Signature
}
// GetTimestamp implements models.Unique.
func (t *TestVote) GetTimestamp() uint64 {
return t.Timestamp
}
// Identity implements models.Unique.
func (t *TestVote) Identity() models.Identity {
return t.ID
}
// Source implements models.Unique.
func (t *TestVote) Source() models.Identity {
return t.StateID
}
type TestPeer struct {
PeerID string
}
// Clone implements models.Unique.
func (t *TestPeer) Clone() models.Unique {
return &TestPeer{
PeerID: t.PeerID,
}
}
// GetRank implements models.Unique.
func (t *TestPeer) GetRank() uint64 {
return 0
}
// GetSignature implements models.Unique.
func (t *TestPeer) GetSignature() []byte {
return []byte{}
}
// GetTimestamp implements models.Unique.
func (t *TestPeer) GetTimestamp() uint64 {
return 0
}
// Identity implements models.Unique.
func (t *TestPeer) Identity() models.Identity {
return t.PeerID
}
// Source implements models.Unique.
func (t *TestPeer) Source() models.Identity {
return t.PeerID
}
type TestCollected struct {
Rank uint64
TXs [][]byte
}
// Clone implements models.Unique.
func (t *TestCollected) Clone() models.Unique {
return &TestCollected{
Rank: t.Rank,
TXs: slices.Clone(t.TXs),
}
}
// GetRank implements models.Unique.
func (t *TestCollected) GetRank() uint64 {
return t.Rank
}
// GetSignature implements models.Unique.
func (t *TestCollected) GetSignature() []byte {
return []byte{}
}
// GetTimestamp implements models.Unique.
func (t *TestCollected) GetTimestamp() uint64 {
return 0
}
// Identity implements models.Unique.
func (t *TestCollected) Identity() models.Identity {
return fmt.Sprintf("%d", t.Rank)
}
// Source implements models.Unique.
func (t *TestCollected) Source() models.Identity {
return ""
}
var _ models.Unique = (*TestState)(nil)
var _ models.Unique = (*TestVote)(nil)
var _ models.Unique = (*TestPeer)(nil)
var _ models.Unique = (*TestCollected)(nil)
func MakeIdentity() models.Identity {
s := make([]byte, 32)
crand.Read(s)
return models.Identity(s)
}
func MakeState[StateT models.Unique](options ...func(*models.State[StateT])) *models.State[StateT] {
rank := rand.Uint64()
state := models.State[StateT]{
Rank: rank,
Identifier: MakeIdentity(),
ProposerID: MakeIdentity(),
Timestamp: uint64(time.Now().UnixMilli()),
ParentQuorumCertificate: MakeQC(WithQCRank(rank - 1)),
}
for _, option := range options {
option(&state)
}
return &state
}
func WithStateRank[StateT models.Unique](rank uint64) func(*models.State[StateT]) {
return func(state *models.State[StateT]) {
state.Rank = rank
}
}
func WithStateProposer[StateT models.Unique](proposerID models.Identity) func(*models.State[StateT]) {
return func(state *models.State[StateT]) {
state.ProposerID = proposerID
}
}
func WithParentState[StateT models.Unique](parent *models.State[StateT]) func(*models.State[StateT]) {
return func(state *models.State[StateT]) {
state.ParentQuorumCertificate.(*TestQuorumCertificate).Selector = parent.Identifier
state.ParentQuorumCertificate.(*TestQuorumCertificate).Rank = parent.Rank
}
}
func WithParentSigners[StateT models.Unique](signerIndices []byte) func(*models.State[StateT]) {
return func(state *models.State[StateT]) {
state.ParentQuorumCertificate.(*TestQuorumCertificate).AggregatedSignature.(*TestAggregatedSignature).Bitmask = signerIndices // buildutils:allow-slice-alias
}
}
func WithStateQC[StateT models.Unique](qc models.QuorumCertificate) func(*models.State[StateT]) {
return func(state *models.State[StateT]) {
state.ParentQuorumCertificate = qc
}
}
func MakeVote[VoteT models.Unique]() *VoteT {
return new(VoteT)
}
func MakeSignedProposal[StateT models.Unique, VoteT models.Unique](options ...func(*models.SignedProposal[StateT, VoteT])) *models.SignedProposal[StateT, VoteT] {
proposal := &models.SignedProposal[StateT, VoteT]{
Proposal: *MakeProposal[StateT](),
Vote: MakeVote[VoteT](),
}
for _, option := range options {
option(proposal)
}
return proposal
}
func MakeProposal[StateT models.Unique](options ...func(*models.Proposal[StateT])) *models.Proposal[StateT] {
proposal := &models.Proposal[StateT]{
State: MakeState[StateT](),
PreviousRankTimeoutCertificate: nil,
}
for _, option := range options {
option(proposal)
}
return proposal
}
func WithProposal[StateT models.Unique, VoteT models.Unique](proposal *models.Proposal[StateT]) func(*models.SignedProposal[StateT, VoteT]) {
return func(signedProposal *models.SignedProposal[StateT, VoteT]) {
signedProposal.Proposal = *proposal
}
}
func WithState[StateT models.Unique](state *models.State[StateT]) func(*models.Proposal[StateT]) {
return func(proposal *models.Proposal[StateT]) {
proposal.State = state
}
}
func WithVote[StateT models.Unique, VoteT models.Unique](vote *VoteT) func(*models.SignedProposal[StateT, VoteT]) {
return func(proposal *models.SignedProposal[StateT, VoteT]) {
proposal.Vote = vote
}
}
func WithPreviousRankTimeoutCertificate[StateT models.Unique](previousRankTimeoutCert models.TimeoutCertificate) func(*models.Proposal[StateT]) {
return func(proposal *models.Proposal[StateT]) {
proposal.PreviousRankTimeoutCertificate = previousRankTimeoutCert
}
}
func WithWeightedIdentityList(count int) []models.WeightedIdentity {
wi := []models.WeightedIdentity{}
for range count {
wi = append(wi, &TestWeightedIdentity{
ID: MakeIdentity(),
})
}
return wi
}
func VoteForStateFixture(state *models.State[*TestState], ops ...func(vote **TestVote)) *TestVote {
v := &TestVote{
Rank: state.Rank,
ID: MakeIdentity(),
StateID: state.Identifier,
Signature: make([]byte, 74),
}
for _, op := range ops {
op(&v)
}
return v
}
func VoteFixture(op func(vote **TestVote)) *TestVote {
v := &TestVote{
Rank: rand.Uint64(),
ID: MakeIdentity(),
StateID: MakeIdentity(),
Signature: make([]byte, 74),
}
op(&v)
return v
}
type FmtLog struct {
params []consensus.LogParam
}
// Error implements consensus.TraceLogger.
func (n *FmtLog) Error(message string, err error, params ...consensus.LogParam) {
b := strings.Builder{}
b.WriteString(fmt.Sprintf("ERROR: %s: %v\n", message, err))
for _, param := range n.params {
b.WriteString(fmt.Sprintf(
"\t%s: %s\n",
param.GetKey(),
stringFromValue(param),
))
}
for _, param := range params {
b.WriteString(fmt.Sprintf(
"\t%s: %s\n",
param.GetKey(),
stringFromValue(param),
))
}
fmt.Println(b.String())
}
// Trace implements consensus.TraceLogger.
func (n *FmtLog) Trace(message string, params ...consensus.LogParam) {
b := strings.Builder{}
b.WriteString(fmt.Sprintf("TRACE: %s\n", message))
b.WriteString(fmt.Sprintf("\t[%s]\n", time.Now().String()))
for _, param := range n.params {
b.WriteString(fmt.Sprintf(
"\t%s: %s\n",
param.GetKey(),
stringFromValue(param),
))
}
for _, param := range params {
b.WriteString(fmt.Sprintf(
"\t%s: %s\n",
param.GetKey(),
stringFromValue(param),
))
}
fmt.Println(b.String())
}
func (n *FmtLog) With(params ...consensus.LogParam) consensus.TraceLogger {
return &FmtLog{
params: slices.Concat(n.params, params),
}
}
func stringFromValue(param consensus.LogParam) string {
switch param.GetKind() {
case "string":
return param.GetValue().(string)
case "time":
return param.GetValue().(time.Time).String()
default:
return fmt.Sprintf("%v", param.GetValue())
}
}
func Logger() *FmtLog {
return &FmtLog{}
}
type BufferLog struct {
params []consensus.LogParam
b *strings.Builder
}
// Error implements consensus.TraceLogger.
func (n *BufferLog) Error(message string, err error, params ...consensus.LogParam) {
n.b.WriteString(fmt.Sprintf("ERROR: %s: %v\n", message, err))
for _, param := range n.params {
n.b.WriteString(fmt.Sprintf(
"\t%s: %s\n",
param.GetKey(),
stringFromValue(param),
))
}
for _, param := range params {
n.b.WriteString(fmt.Sprintf(
"\t%s: %s\n",
param.GetKey(),
stringFromValue(param),
))
}
}
// Trace implements consensus.TraceLogger.
func (n *BufferLog) Trace(message string, params ...consensus.LogParam) {
n.b.WriteString(fmt.Sprintf("TRACE: %s\n", message))
n.b.WriteString(fmt.Sprintf("\t[%s]\n", time.Now().String()))
for _, param := range n.params {
n.b.WriteString(fmt.Sprintf(
"\t%s: %s\n",
param.GetKey(),
stringFromValue(param),
))
}
for _, param := range params {
n.b.WriteString(fmt.Sprintf(
"\t%s: %s\n",
param.GetKey(),
stringFromValue(param),
))
}
}
func (n *BufferLog) Flush() {
fmt.Println(n.b.String())
}
func (n *BufferLog) With(params ...consensus.LogParam) consensus.TraceLogger {
return &BufferLog{
params: slices.Concat(n.params, params),
b: n.b,
}
}
func BufferLogger() *BufferLog {
return &BufferLog{
b: &strings.Builder{},
}
}
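The fixture constructors above (`MakeQC`, `MakeState`, `MakeProposal`) all follow the functional-options pattern: build a randomized default, then apply caller-supplied mutators. A minimal, self-contained sketch of that pattern, using a simplified `QC` stand-in rather than the real `models` types:

```go
package main

import "fmt"

// QC is a simplified stand-in for TestQuorumCertificate (hypothetical).
type QC struct {
	Rank     uint64
	Selector string
}

// Option mutates a QC under construction, mirroring MakeQC's option funcs.
type Option func(*QC)

// WithRank overrides the default rank, like WithQCRank above.
func WithRank(r uint64) Option {
	return func(qc *QC) { qc.Rank = r }
}

// MakeQC builds a default fixture, then applies each option in order.
func MakeQC(opts ...Option) *QC {
	qc := &QC{Rank: 1, Selector: "default"}
	for _, o := range opts {
		o(qc)
	}
	return qc
}

func main() {
	qc := MakeQC(WithRank(42))
	fmt.Println(qc.Rank, qc.Selector) // 42 default
}
```

Because options run after the defaults are set, later options win, which is what lets tests compose fixtures like `MakeState(WithStateRank(5), WithStateProposer(id))`.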


@@ -0,0 +1,171 @@
package helper
import (
"bytes"
crand "crypto/rand"
"math/rand"
"slices"
"source.quilibrium.com/quilibrium/monorepo/consensus/models"
)
type TestTimeoutCertificate struct {
Filter []byte
Rank uint64
LatestRanks []uint64
LatestQuorumCert models.QuorumCertificate
AggregatedSignature models.AggregatedSignature
}
func (t *TestTimeoutCertificate) GetFilter() []byte {
return t.Filter
}
func (t *TestTimeoutCertificate) GetRank() uint64 {
return t.Rank
}
func (t *TestTimeoutCertificate) GetLatestRanks() []uint64 {
return t.LatestRanks
}
func (t *TestTimeoutCertificate) GetLatestQuorumCert() models.QuorumCertificate {
return t.LatestQuorumCert
}
func (t *TestTimeoutCertificate) GetAggregatedSignature() models.AggregatedSignature {
return t.AggregatedSignature
}
func (t *TestTimeoutCertificate) Equals(other models.TimeoutCertificate) bool {
return bytes.Equal(t.Filter, other.GetFilter()) &&
t.Rank == other.GetRank() &&
slices.Equal(t.LatestRanks, other.GetLatestRanks()) &&
t.LatestQuorumCert.Equals(other.GetLatestQuorumCert()) &&
bytes.Equal(
t.AggregatedSignature.GetBitmask(),
other.GetAggregatedSignature().GetBitmask(),
) &&
bytes.Equal(
t.AggregatedSignature.GetPubKey(),
other.GetAggregatedSignature().GetPubKey(),
) &&
bytes.Equal(
t.AggregatedSignature.GetSignature(),
other.GetAggregatedSignature().GetSignature(),
)
}
func MakeTC(options ...func(*TestTimeoutCertificate)) models.TimeoutCertificate {
tcRank := rand.Uint64()
s := make([]byte, 32)
crand.Read(s)
qc := MakeQC(WithQCRank(tcRank - 1))
highQCRanks := make([]uint64, 3)
for i := range highQCRanks {
highQCRanks[i] = qc.GetRank()
}
tc := &TestTimeoutCertificate{
Rank: tcRank,
LatestQuorumCert: qc,
LatestRanks: highQCRanks,
AggregatedSignature: &TestAggregatedSignature{
Signature: make([]byte, 74),
PublicKey: make([]byte, 585),
Bitmask: []byte{0x01},
},
}
for _, option := range options {
option(tc)
}
return tc
}
func WithTCNewestQC(qc models.QuorumCertificate) func(*TestTimeoutCertificate) {
return func(tc *TestTimeoutCertificate) {
tc.LatestQuorumCert = qc
tc.LatestRanks = []uint64{qc.GetRank()}
}
}
func WithTCSigners(signerIndices []byte) func(*TestTimeoutCertificate) {
return func(tc *TestTimeoutCertificate) {
tc.AggregatedSignature.(*TestAggregatedSignature).Bitmask = signerIndices // buildutils:allow-slice-alias
}
}
func WithTCRank(rank uint64) func(*TestTimeoutCertificate) {
return func(tc *TestTimeoutCertificate) {
tc.Rank = rank
}
}
func WithTCHighQCRanks(highQCRanks []uint64) func(*TestTimeoutCertificate) {
return func(tc *TestTimeoutCertificate) {
tc.LatestRanks = highQCRanks // buildutils:allow-slice-alias
}
}
func TimeoutStateFixture[VoteT models.Unique](
opts ...func(TimeoutState *models.TimeoutState[VoteT]),
) *models.TimeoutState[VoteT] {
timeoutRank := uint64(rand.Uint32())
newestQC := MakeQC(WithQCRank(timeoutRank - 10))
timeout := &models.TimeoutState[VoteT]{
Rank: timeoutRank,
LatestQuorumCertificate: newestQC,
PriorRankTimeoutCertificate: MakeTC(
WithTCRank(timeoutRank-1),
WithTCNewestQC(MakeQC(WithQCRank(newestQC.GetRank()))),
),
}
for _, opt := range opts {
opt(timeout)
}
if timeout.Vote == nil {
panic("WithTimeoutVote must be called")
}
return timeout
}
func WithTimeoutVote[VoteT models.Unique](
vote VoteT,
) func(*models.TimeoutState[VoteT]) {
return func(state *models.TimeoutState[VoteT]) {
state.Vote = &vote
}
}
func WithTimeoutNewestQC[VoteT models.Unique](
newestQC models.QuorumCertificate,
) func(*models.TimeoutState[VoteT]) {
return func(timeout *models.TimeoutState[VoteT]) {
timeout.LatestQuorumCertificate = newestQC
}
}
func WithTimeoutPreviousRankTimeoutCertificate[VoteT models.Unique](
previousRankTimeoutCert models.TimeoutCertificate,
) func(*models.TimeoutState[VoteT]) {
return func(timeout *models.TimeoutState[VoteT]) {
timeout.PriorRankTimeoutCertificate = previousRankTimeoutCert
}
}
func WithTimeoutStateRank[VoteT models.Unique](
rank uint64,
) func(*models.TimeoutState[VoteT]) {
return func(timeout *models.TimeoutState[VoteT]) {
timeout.Rank = rank
if timeout.LatestQuorumCertificate != nil {
timeout.LatestQuorumCertificate.(*TestQuorumCertificate).Rank = rank
}
if timeout.PriorRankTimeoutCertificate != nil {
timeout.PriorRankTimeoutCertificate.(*TestTimeoutCertificate).Rank = rank - 1
}
}
}


@@ -0,0 +1,40 @@
package integration
import (
"source.quilibrium.com/quilibrium/monorepo/consensus/helper"
"source.quilibrium.com/quilibrium/monorepo/consensus/models"
)
func FinalizedStates(in *Instance) []*models.State[*helper.TestState] {
finalized := make([]*models.State[*helper.TestState], 0)
lastFinalID := in.forks.FinalizedState().Identifier
in.updatingStates.RLock()
finalizedState, found := in.headers[lastFinalID]
defer in.updatingStates.RUnlock()
if !found {
return finalized
}
for {
finalized = append(finalized, finalizedState)
if finalizedState.ParentQuorumCertificate == nil {
break
}
finalizedState, found =
in.headers[finalizedState.ParentQuorumCertificate.Identity()]
if !found {
break
}
}
return finalized
}
func FinalizedRanks(in *Instance) []uint64 {
finalizedStates := FinalizedStates(in)
ranks := make([]uint64, 0, len(finalizedStates))
for _, b := range finalizedStates {
ranks = append(ranks, b.Rank)
}
return ranks
}
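`FinalizedStates` walks backwards from the latest finalized state through parent quorum certificates until it reaches the root or a header it does not know. A simplified stand-alone sketch of that traversal, using plain string IDs and a `node` stand-in instead of `models.State`:

```go
package main

import "fmt"

// node is a simplified stand-in for a state header: an ID, a parent
// link (empty at the root), and a rank.
type node struct {
	id, parent string
	rank       uint64
}

// finalizedRanks walks from head back toward the root via parent
// links, mirroring the FinalizedStates loop: append, then follow the
// parent; stop at the root or at a missing header.
func finalizedRanks(headers map[string]node, head string) []uint64 {
	ranks := []uint64{}
	cur, ok := headers[head]
	for ok {
		ranks = append(ranks, cur.rank)
		if cur.parent == "" {
			break // reached the root
		}
		cur, ok = headers[cur.parent]
	}
	return ranks
}

func main() {
	h := map[string]node{
		"c": {"c", "b", 3},
		"b": {"b", "a", 2},
		"a": {"a", "", 1},
	}
	fmt.Println(finalizedRanks(h, "c")) // [3 2 1]
}
```

As in the real helper, an unknown head yields an empty slice rather than an error, since tests only care about the reachable finalized prefix.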


@@ -0,0 +1,19 @@
package integration
type Condition func(*Instance) bool
func RightAway(*Instance) bool {
return true
}
func RankFinalized(rank uint64) Condition {
return func(in *Instance) bool {
return in.forks.FinalizedRank() >= rank
}
}
func RankReached(rank uint64) Condition {
return func(in *Instance) bool {
return in.pacemaker.CurrentRank() >= rank
}
}
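Stop conditions are just predicates over an `Instance`, polled by the test loop. A self-contained sketch of the same shape, with a simplified `Instance` that exposes only the finalized rank (the field name is illustrative, not the real struct):

```go
package main

import "fmt"

// Instance is a simplified stand-in for the integration Instance.
type Instance struct{ finalizedRank uint64 }

// Condition mirrors the integration package's stop-condition type.
type Condition func(*Instance) bool

// RightAway stops immediately, regardless of instance state.
func RightAway(*Instance) bool { return true }

// RankFinalized stops once the given rank has been finalized.
func RankFinalized(rank uint64) Condition {
	return func(in *Instance) bool { return in.finalizedRank >= rank }
}

func main() {
	in := &Instance{finalizedRank: 5}
	fmt.Println(RankFinalized(3)(in)) // true
	fmt.Println(RankFinalized(9)(in)) // false
}
```

Returning closures keeps each condition parameterizable while the event loop only ever sees the uniform `Condition` signature.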


@@ -0,0 +1,114 @@
package integration
import (
"testing"
"github.com/stretchr/testify/mock"
"github.com/stretchr/testify/require"
"source.quilibrium.com/quilibrium/monorepo/consensus/helper"
"source.quilibrium.com/quilibrium/monorepo/consensus/models"
)
func Connect(t *testing.T, instances []*Instance) {
// first, create a map of all instances and a queue for each
lookup := make(map[models.Identity]*Instance)
for _, in := range instances {
lookup[in.localID] = in
}
// then, for each instance, initialize a wired up communicator
for _, sender := range instances {
sender := sender // avoid capturing loop variable in closure
*sender.notifier = *NewMockedCommunicatorConsumer()
sender.notifier.CommunicatorConsumer.On("OnOwnProposal", mock.Anything, mock.Anything).Run(
func(args mock.Arguments) {
proposal, ok := args[0].(*models.SignedProposal[*helper.TestState, *helper.TestVote])
require.True(t, ok)
// sender should always have the parent
sender.updatingStates.RLock()
_, exists := sender.headers[proposal.State.ParentQuorumCertificate.Identity()]
sender.updatingStates.RUnlock()
if !exists {
t.Fatalf("parent for proposal not found (sender: %x, parent: %x)", sender.localID, proposal.State.ParentQuorumCertificate.Identity())
}
// store locally and loop back to engine for processing
sender.ProcessState(proposal)
// check if we should drop the outgoing proposal
if sender.dropPropOut(proposal) {
return
}
// iterate through potential receivers
for _, receiver := range instances {
// we should skip ourselves always
if receiver.localID == sender.localID {
continue
}
// check if we should drop the incoming proposal
if receiver.dropPropIn(proposal) {
continue
}
receiver.ProcessState(proposal)
}
},
)
sender.notifier.CommunicatorConsumer.On("OnOwnVote", mock.Anything, mock.Anything).Run(
func(args mock.Arguments) {
vote, ok := args[0].(**helper.TestVote)
require.True(t, ok)
recipientID, ok := args[1].(models.Identity)
require.True(t, ok)
// get the receiver
receiver, exists := lookup[recipientID]
if !exists {
t.Fatalf("recipient doesn't exist (sender: %x, receiver: %x)", sender.localID, recipientID)
}
// if we are next leader we should be receiving our own vote
if recipientID != sender.localID {
// check if we should drop the outgoing vote
if sender.dropVoteOut(*vote) {
return
}
// check if we should drop the incoming vote
if receiver.dropVoteIn(*vote) {
return
}
}
// submit the vote to the receiving event loop (non-dropping)
receiver.queue <- *vote
},
)
sender.notifier.CommunicatorConsumer.On("OnOwnTimeout", mock.Anything).Run(
func(args mock.Arguments) {
timeoutState, ok := args[0].(*models.TimeoutState[*helper.TestVote])
require.True(t, ok)
// iterate through potential receivers
for _, receiver := range instances {
// we should skip ourselves always
if receiver.localID == sender.localID {
continue
}
// check if we should drop the outgoing value
if sender.dropTimeoutStateOut(timeoutState) {
continue
}
// check if we should drop the incoming value
if receiver.dropTimeoutStateIn(timeoutState) {
continue
}
receiver.queue <- timeoutState
}
})
}
}
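`Connect` wires every instance so that its own outgoing messages loop back to itself and fan out to all other instances, subject to sender-side and receiver-side drop filters. A minimal sketch of that delivery topology, with string messages and a simplified `instance` stand-in in place of the mocked communicator:

```go
package main

import "fmt"

// instance is a simplified stand-in for the integration Instance:
// an ID, an inbox, and a drop filter for incoming messages.
type instance struct {
	id     string
	inbox  []string
	dropIn func(msg string) bool
}

// broadcast mirrors the OnOwnProposal wiring: the sender always
// processes its own message first (loop-back), then each other
// instance receives it unless its incoming filter drops it.
func broadcast(sender *instance, all []*instance, msg string) {
	sender.inbox = append(sender.inbox, msg)
	for _, r := range all {
		if r.id == sender.id {
			continue // never deliver twice to ourselves
		}
		if r.dropIn(msg) {
			continue // receiver-side drop filter
		}
		r.inbox = append(r.inbox, msg)
	}
}

func main() {
	keep := func(string) bool { return false }
	dropAll := func(string) bool { return true }
	a := &instance{id: "a", dropIn: keep}
	b := &instance{id: "b", dropIn: keep}
	c := &instance{id: "c", dropIn: dropAll}
	broadcast(a, []*instance{a, b, c}, "proposal-1")
	fmt.Println(len(a.inbox), len(b.inbox), len(c.inbox)) // 1 1 0
}
```

The real helper additionally applies a sender-side outgoing filter before fanning out, which is what lets tests simulate a partitioned or silent proposer.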


@@ -0,0 +1,27 @@
package integration
import (
"time"
"source.quilibrium.com/quilibrium/monorepo/consensus/helper"
"source.quilibrium.com/quilibrium/monorepo/consensus/models"
)
func DefaultRoot() *models.State[*helper.TestState] {
ts := uint64(time.Now().UnixMilli())
id := helper.MakeIdentity()
s := &helper.TestState{
Rank: 0,
Signature: make([]byte, 0),
Timestamp: ts,
ID: id,
Prover: "",
}
header := &models.State[*helper.TestState]{
Rank: 0,
State: &s,
Identifier: id,
Timestamp: ts,
}
return header
}


@@ -0,0 +1,76 @@
package integration
import (
"math/rand"
"source.quilibrium.com/quilibrium/monorepo/consensus/helper"
"source.quilibrium.com/quilibrium/monorepo/consensus/models"
)
// VoteFilter is a filter function for dropping Votes.
// Return value `true` implies that the given Vote should be
// dropped, while `false` indicates that the Vote should be received.
type VoteFilter func(*helper.TestVote) bool
func DropNoVotes(*helper.TestVote) bool {
return false
}
func DropAllVotes(*helper.TestVote) bool {
return true
}
// DropVoteRandomly drops votes randomly with a probability of `dropProbability` ∈ [0,1]
func DropVoteRandomly(dropProbability float64) VoteFilter {
return func(*helper.TestVote) bool {
return rand.Float64() < dropProbability
}
}
func DropVotesBy(voterID models.Identity) VoteFilter {
return func(vote *helper.TestVote) bool {
return vote.ID == voterID
}
}
// ProposalFilter is a filter function for dropping Proposals.
// Return value `true` implies that the given SignedProposal should be
// dropped, while `false` indicates that the SignedProposal should be received.
type ProposalFilter func(*models.SignedProposal[*helper.TestState, *helper.TestVote]) bool
func DropNoProposals(*models.SignedProposal[*helper.TestState, *helper.TestVote]) bool {
return false
}
func DropAllProposals(*models.SignedProposal[*helper.TestState, *helper.TestVote]) bool {
return true
}
// DropProposalRandomly drops proposals randomly with a probability of `dropProbability` ∈ [0,1]
func DropProposalRandomly(dropProbability float64) ProposalFilter {
return func(*models.SignedProposal[*helper.TestState, *helper.TestVote]) bool {
return rand.Float64() < dropProbability
}
}
// DropProposalsBy drops all proposals originating from the specified `proposerID`
func DropProposalsBy(proposerID models.Identity) ProposalFilter {
return func(proposal *models.SignedProposal[*helper.TestState, *helper.TestVote]) bool {
return proposal.State.ProposerID == proposerID
}
}
// TimeoutStateFilter is a filter function for dropping TimeoutStates.
// Return value `true` implies that the given TimeoutState should be
// dropped, while `false` indicates that the TimeoutState should be received.
type TimeoutStateFilter func(*models.TimeoutState[*helper.TestVote]) bool
// DropAllTimeoutStates always returns `true`, i.e. drops all TimeoutStates
func DropAllTimeoutStates(*models.TimeoutState[*helper.TestVote]) bool {
return true
}
// DropNoTimeoutStates always returns `false`, i.e. it lets all TimeoutStates pass.
func DropNoTimeoutStates(*models.TimeoutState[*helper.TestVote]) bool {
return false
}
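Each filter family follows the same convention: `true` means drop, `false` means deliver, with constructors for parameterized filters. A self-contained sketch of the vote filters, using a simplified `Vote` stand-in instead of `helper.TestVote`:

```go
package main

import (
	"fmt"
	"math/rand"
)

// Vote is a simplified stand-in for helper.TestVote (hypothetical).
type Vote struct{ ID string }

// VoteFilter returns true when the given vote should be dropped.
type VoteFilter func(*Vote) bool

func DropNoVotes(*Vote) bool  { return false }
func DropAllVotes(*Vote) bool { return true }

// DropVotesBy drops all votes cast by the given voter.
func DropVotesBy(id string) VoteFilter {
	return func(v *Vote) bool { return v.ID == id }
}

// DropVoteRandomly drops votes with probability p ∈ [0,1].
func DropVoteRandomly(p float64) VoteFilter {
	return func(*Vote) bool { return rand.Float64() < p }
}

func main() {
	f := DropVotesBy("byzantine")
	fmt.Println(f(&Vote{ID: "byzantine"})) // true: dropped
	fmt.Println(f(&Vote{ID: "honest"}))    // false: delivered
}
```

Keeping the drop semantics on `true` lets the network wiring express all fault injection as `if filter(msg) { continue }`.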


@@ -0,0 +1,738 @@
package integration
import (
"context"
"fmt"
"reflect"
"sync"
"testing"
"time"
"github.com/gammazero/workerpool"
"github.com/stretchr/testify/mock"
"github.com/stretchr/testify/require"
"go.uber.org/atomic"
"source.quilibrium.com/quilibrium/monorepo/consensus"
"source.quilibrium.com/quilibrium/monorepo/consensus/counters"
"source.quilibrium.com/quilibrium/monorepo/consensus/eventhandler"
"source.quilibrium.com/quilibrium/monorepo/consensus/forks"
"source.quilibrium.com/quilibrium/monorepo/consensus/helper"
"source.quilibrium.com/quilibrium/monorepo/consensus/mocks"
"source.quilibrium.com/quilibrium/monorepo/consensus/models"
"source.quilibrium.com/quilibrium/monorepo/consensus/notifications"
"source.quilibrium.com/quilibrium/monorepo/consensus/notifications/pubsub"
"source.quilibrium.com/quilibrium/monorepo/consensus/pacemaker"
"source.quilibrium.com/quilibrium/monorepo/consensus/pacemaker/timeout"
"source.quilibrium.com/quilibrium/monorepo/consensus/safetyrules"
"source.quilibrium.com/quilibrium/monorepo/consensus/stateproducer"
"source.quilibrium.com/quilibrium/monorepo/consensus/timeoutaggregator"
"source.quilibrium.com/quilibrium/monorepo/consensus/timeoutcollector"
"source.quilibrium.com/quilibrium/monorepo/consensus/validator"
"source.quilibrium.com/quilibrium/monorepo/consensus/voteaggregator"
"source.quilibrium.com/quilibrium/monorepo/consensus/votecollector"
"source.quilibrium.com/quilibrium/monorepo/lifecycle"
"source.quilibrium.com/quilibrium/monorepo/lifecycle/unittest"
)
type Instance struct {
// instance parameters
logger consensus.TraceLogger
participants []models.WeightedIdentity
localID models.Identity
dropVoteIn VoteFilter
dropVoteOut VoteFilter
dropPropIn ProposalFilter
dropPropOut ProposalFilter
dropTimeoutStateIn TimeoutStateFilter
dropTimeoutStateOut TimeoutStateFilter
stop Condition
// instance data
queue chan interface{}
updatingStates sync.RWMutex
headers map[models.Identity]*models.State[*helper.TestState]
pendings map[models.Identity]*models.SignedProposal[*helper.TestState, *helper.TestVote] // indexed by parent ID
// mocked dependencies
committee *mocks.DynamicCommittee
builder *mocks.LeaderProvider[*helper.TestState, *helper.TestPeer, *helper.TestCollected]
finalizer *mocks.Finalizer
persist *mocks.ConsensusStore[*helper.TestVote]
signer *mocks.Signer[*helper.TestState, *helper.TestVote]
verifier *mocks.Verifier[*helper.TestVote]
notifier *MockedCommunicatorConsumer
voting *mocks.VotingProvider[*helper.TestState, *helper.TestVote, *helper.TestPeer]
// real dependencies
pacemaker consensus.Pacemaker
producer *stateproducer.StateProducer[*helper.TestState, *helper.TestVote, *helper.TestPeer, *helper.TestCollected]
forks *forks.Forks[*helper.TestState, *helper.TestVote]
voteAggregator *voteaggregator.VoteAggregator[*helper.TestState, *helper.TestVote]
timeoutAggregator *timeoutaggregator.TimeoutAggregator[*helper.TestVote]
safetyRules *safetyrules.SafetyRules[*helper.TestState, *helper.TestVote]
validator *validator.Validator[*helper.TestState, *helper.TestVote]
// main logic
handler *eventhandler.EventHandler[*helper.TestState, *helper.TestVote, *helper.TestPeer, *helper.TestCollected]
}
type MockedCommunicatorConsumer struct {
notifications.NoopProposalViolationConsumer[*helper.TestState, *helper.TestVote]
notifications.NoopParticipantConsumer[*helper.TestState, *helper.TestVote]
notifications.NoopFinalizationConsumer[*helper.TestState]
*mocks.CommunicatorConsumer[*helper.TestState, *helper.TestVote]
}
func NewMockedCommunicatorConsumer() *MockedCommunicatorConsumer {
return &MockedCommunicatorConsumer{
CommunicatorConsumer: &mocks.CommunicatorConsumer[*helper.TestState, *helper.TestVote]{},
}
}
var _ consensus.Consumer[*helper.TestState, *helper.TestVote] = (*MockedCommunicatorConsumer)(nil)
var _ consensus.TimeoutCollectorConsumer[*helper.TestVote] = (*Instance)(nil)
func NewInstance(t *testing.T, options ...Option) *Instance {
// generate random default identity
identity := helper.MakeIdentity()
// initialize the default configuration
cfg := Config{
Logger: helper.Logger(),
Root: DefaultRoot(),
Participants: []models.WeightedIdentity{&helper.TestWeightedIdentity{
ID: identity,
}},
LocalID: identity,
Timeouts: timeout.DefaultConfig,
IncomingVotes: DropNoVotes,
OutgoingVotes: DropNoVotes,
IncomingProposals: DropNoProposals,
OutgoingProposals: DropNoProposals,
IncomingTimeoutStates: DropNoTimeoutStates,
OutgoingTimeoutStates: DropNoTimeoutStates,
StopCondition: RightAway,
}
// apply the custom options
for _, option := range options {
option(&cfg)
}
// check the local ID is a participant
takesPart := false
for _, participant := range cfg.Participants {
if participant.Identity() == cfg.LocalID {
takesPart = true
break
}
}
require.True(t, takesPart)
// initialize the instance
in := Instance{
// instance parameters
logger: cfg.Logger,
participants: cfg.Participants,
localID: cfg.LocalID,
dropVoteIn: cfg.IncomingVotes,
dropVoteOut: cfg.OutgoingVotes,
dropPropIn: cfg.IncomingProposals,
dropPropOut: cfg.OutgoingProposals,
dropTimeoutStateIn: cfg.IncomingTimeoutStates,
dropTimeoutStateOut: cfg.OutgoingTimeoutStates,
stop: cfg.StopCondition,
// instance data
pendings: make(map[models.Identity]*models.SignedProposal[*helper.TestState, *helper.TestVote]),
headers: make(map[models.Identity]*models.State[*helper.TestState]),
queue: make(chan interface{}, 1024),
// instance mocks
committee: &mocks.DynamicCommittee{},
builder: &mocks.LeaderProvider[*helper.TestState, *helper.TestPeer, *helper.TestCollected]{},
persist: &mocks.ConsensusStore[*helper.TestVote]{},
signer: &mocks.Signer[*helper.TestState, *helper.TestVote]{},
verifier: &mocks.Verifier[*helper.TestVote]{},
notifier: NewMockedCommunicatorConsumer(),
finalizer: &mocks.Finalizer{},
voting: &mocks.VotingProvider[*helper.TestState, *helper.TestVote, *helper.TestPeer]{},
}
// insert root state into headers register
in.headers[cfg.Root.Identifier] = cfg.Root
// program the hotstuff committee state
in.committee.On("IdentitiesByRank", mock.Anything).Return(
func(_ uint64) []models.WeightedIdentity {
return in.participants
},
nil,
)
in.committee.On("IdentitiesByState", mock.Anything).Return(
func(_ models.Identity) []models.WeightedIdentity {
return in.participants
},
nil,
)
for _, participant := range in.participants {
in.committee.On("IdentityByState", mock.Anything, participant.Identity()).Return(participant, nil)
in.committee.On("IdentityByRank", mock.Anything, participant.Identity()).Return(participant, nil)
}
in.committee.On("Self").Return(in.localID)
in.committee.On("LeaderForRank", mock.Anything).Return(
func(rank uint64) models.Identity {
return in.participants[int(rank)%len(in.participants)].Identity()
}, nil,
)
in.committee.On("QuorumThresholdForRank", mock.Anything).Return(uint64(len(in.participants)*2000/3), nil)
in.committee.On("TimeoutThresholdForRank", mock.Anything).Return(uint64(len(in.participants)*2000/3), nil)
// program the builder module behaviour
in.builder.On("ProveNextState", mock.Anything, mock.Anything, mock.Anything, mock.Anything).Return(
func(ctx context.Context, rank uint64, filter []byte, parentID models.Identity) **helper.TestState {
in.updatingStates.Lock()
defer in.updatingStates.Unlock()
_, ok := in.headers[parentID]
if !ok {
return nil
}
s := &helper.TestState{
Rank: rank,
Signature: []byte{},
Timestamp: uint64(time.Now().UnixMilli()),
ID: helper.MakeIdentity(),
Prover: in.localID,
}
return &s
},
func(ctx context.Context, rank uint64, filter []byte, parentID models.Identity) error {
in.updatingStates.RLock()
_, ok := in.headers[parentID]
in.updatingStates.RUnlock()
if !ok {
return fmt.Errorf("parent state not found (parent: %x)", parentID)
}
return nil
},
)
// program the hotstuff persister behaviour
in.persist.On("PutConsensusState", mock.Anything).Return(nil)
in.persist.On("PutLivenessState", mock.Anything).Return(nil)
// program the hotstuff signer behaviour
in.signer.On("CreateVote", mock.Anything).Return(
func(state *models.State[*helper.TestState]) **helper.TestVote {
vote := &helper.TestVote{
Rank: state.Rank,
StateID: state.Identifier,
ID: in.localID,
Signature: make([]byte, 74),
}
return &vote
},
nil,
)
in.signer.On("CreateTimeout", mock.Anything, mock.Anything, mock.Anything).Return(
func(curRank uint64, newestQC models.QuorumCertificate, previousRankTimeoutCert models.TimeoutCertificate) *models.TimeoutState[*helper.TestVote] {
v := &helper.TestVote{
Rank: curRank,
Signature: make([]byte, 74),
Timestamp: uint64(time.Now().UnixMilli()),
ID: in.localID,
}
timeoutState := &models.TimeoutState[*helper.TestVote]{
Rank: curRank,
LatestQuorumCertificate: newestQC,
PriorRankTimeoutCertificate: previousRankTimeoutCert,
Vote: &v,
}
return timeoutState
},
nil,
)
in.signer.On("CreateQuorumCertificate", mock.Anything).Return(
func(votes []*helper.TestVote) models.QuorumCertificate {
voterIDs := make([]models.Identity, 0, len(votes))
bitmask := []byte{0, 0}
for i, vote := range votes {
bitmask[i/8] |= 1 << (i % 8)
voterIDs = append(voterIDs, vote.ID)
}
qc := &helper.TestQuorumCertificate{
Rank: votes[0].Rank,
FrameNumber: votes[0].Rank,
Selector: votes[0].StateID,
Timestamp: uint64(time.Now().UnixMilli()),
AggregatedSignature: &helper.TestAggregatedSignature{
Signature: make([]byte, 74),
Bitmask: bitmask,
PublicKey: make([]byte, 585),
},
}
return qc
},
nil,
)
// program the hotstuff verifier behaviour
in.verifier.On("VerifyVote", mock.Anything, mock.Anything, mock.Anything, mock.Anything).Return(nil)
in.verifier.On("VerifyQuorumCertificate", mock.Anything, mock.Anything, mock.Anything, mock.Anything).Return(nil)
in.verifier.On("VerifyTimeoutCertificate", mock.Anything, mock.Anything, mock.Anything, mock.Anything).Return(nil)
// program the hotstuff communicator behaviour
in.notifier.CommunicatorConsumer.On("OnOwnProposal", mock.Anything, mock.Anything).Run(
func(args mock.Arguments) {
proposal, ok := args[0].(*models.SignedProposal[*helper.TestState, *helper.TestVote])
require.True(t, ok)
// sender should always have the parent
in.updatingStates.RLock()
_, exists := in.headers[proposal.State.ParentQuorumCertificate.Identity()]
in.updatingStates.RUnlock()
if !exists {
t.Fatalf("parent for proposal not found parent: %x", proposal.State.ParentQuorumCertificate.Identity())
}
// store locally and loop back to engine for processing
in.ProcessState(proposal)
},
)
in.notifier.CommunicatorConsumer.On("OnOwnTimeout", mock.Anything).Run(func(args mock.Arguments) {
timeoutState, ok := args[0].(*models.TimeoutState[*helper.TestVote])
require.True(t, ok)
in.queue <- timeoutState
},
)
// in case of single node setup we should just forward vote to our own node
// for multi-node setup this method will be overridden
in.notifier.CommunicatorConsumer.On("OnOwnVote", mock.Anything, mock.Anything).Run(func(args mock.Arguments) {
vote, ok := args[0].(**helper.TestVote)
require.True(t, ok)
in.queue <- *vote
})
// program the finalizer module behaviour
in.finalizer.On("MakeFinal", mock.Anything).Return(
func(stateID models.Identity) error {
// as we don't use mocks to assert expectations, but only to
// simulate behaviour, we should drop the call data regularly
in.updatingStates.RLock()
state, found := in.headers[stateID]
in.updatingStates.RUnlock()
if !found {
return fmt.Errorf("cannot finalize unknown state (id: %x)", stateID)
}
if state.Rank%100 == 0 {
in.committee.Calls = nil
in.builder.Calls = nil
in.signer.Calls = nil
in.verifier.Calls = nil
in.notifier.CommunicatorConsumer.Calls = nil
in.finalizer.Calls = nil
}
return nil
},
)
// initialize error handling and logging
var err error
notifier := pubsub.NewDistributor[*helper.TestState, *helper.TestVote]()
notifier.AddConsumer(in.notifier)
logConsumer := notifications.NewLogConsumer[*helper.TestState, *helper.TestVote](in.logger)
notifier.AddConsumer(logConsumer)
// initialize the finalizer
var rootState *models.State[*helper.TestState]
if cfg.Root.ParentQuorumCertificate != nil {
rootState = models.StateFrom(cfg.Root.State, cfg.Root.ParentQuorumCertificate)
} else {
rootState = models.GenesisStateFrom(cfg.Root.State)
}
rootQC := &helper.TestQuorumCertificate{
Rank: rootState.Rank,
FrameNumber: rootState.Rank,
Selector: rootState.Identifier,
Timestamp: uint64(time.Now().UnixMilli()),
AggregatedSignature: &helper.TestAggregatedSignature{
Signature: make([]byte, 74),
Bitmask: []byte{0b11111111, 0b00000000},
PublicKey: make([]byte, 585),
},
}
certifiedRootState, err := models.NewCertifiedState(rootState, rootQC)
require.NoError(t, err)
livenessData := &models.LivenessState{
CurrentRank: rootQC.Rank + 1,
LatestQuorumCertificate: rootQC,
}
in.persist.On("GetLivenessState", mock.Anything).Return(livenessData, nil).Once()
// initialize the pacemaker
controller := timeout.NewController(cfg.Timeouts)
in.pacemaker, err = pacemaker.NewPacemaker[*helper.TestState, *helper.TestVote](nil, controller, pacemaker.NoProposalDelay(), notifier, in.persist, in.logger)
require.NoError(t, err)
// initialize the forks handler
in.forks, err = forks.NewForks(certifiedRootState, in.finalizer, notifier)
require.NoError(t, err)
// initialize the validator
in.validator = validator.NewValidator[*helper.TestState, *helper.TestVote](in.committee, in.verifier)
packer := &mocks.Packer{}
packer.On("Pack", mock.Anything, mock.Anything).Return(
func(rank uint64, sig *consensus.StateSignatureData) ([]byte, []byte, error) {
indices := []byte{0, 0}
for i := range sig.Signers {
indices[i/8] |= 1 << (i % 8)
}
return indices, make([]byte, 74), nil
},
).Maybe()
onQCCreated := func(qc models.QuorumCertificate) {
in.queue <- qc
}
voteProcessorFactory := mocks.NewVoteProcessorFactory[*helper.TestState, *helper.TestVote, *helper.TestPeer](t)
voteProcessorFactory.On("Create", mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything).Return(
func(tracer consensus.TraceLogger, filter []byte, proposal *models.SignedProposal[*helper.TestState, *helper.TestVote], dsTag []byte, aggregator consensus.SignatureAggregator, votingProvider consensus.VotingProvider[*helper.TestState, *helper.TestVote, *helper.TestPeer]) consensus.VerifyingVoteProcessor[*helper.TestState, *helper.TestVote] {
processor, err := votecollector.NewBootstrapVoteProcessor[*helper.TestState, *helper.TestVote, *helper.TestPeer](
in.logger,
filter,
in.committee,
proposal.State,
onQCCreated,
[]byte{},
aggregator,
in.voting,
)
require.NoError(t, err)
vote, err := proposal.ProposerVote()
require.NoError(t, err)
err = processor.Process(vote)
if err != nil {
t.Fatalf("invalid vote for own proposal: %v", err)
}
return processor
}, nil).Maybe()
in.voting.On("FinalizeQuorumCertificate", mock.Anything, mock.Anything, mock.Anything).Return(
func(
ctx context.Context,
state *models.State[*helper.TestState],
aggregatedSignature models.AggregatedSignature,
) (models.QuorumCertificate, error) {
return &helper.TestQuorumCertificate{
Rank: state.Rank,
Timestamp: state.Timestamp,
FrameNumber: state.Rank,
Selector: state.Identifier,
AggregatedSignature: aggregatedSignature,
}, nil
},
)
in.voting.On("FinalizeTimeout", mock.Anything, mock.Anything, mock.Anything, mock.Anything, mock.Anything).Return(
func(ctx context.Context, rank uint64, latestQuorumCertificate models.QuorumCertificate, latestQuorumCertificateRanks []consensus.TimeoutSignerInfo, aggregatedSignature models.AggregatedSignature) (models.TimeoutCertificate, error) {
ranks := []uint64{}
for _, i := range latestQuorumCertificateRanks {
ranks = append(ranks, i.NewestQCRank)
}
return &helper.TestTimeoutCertificate{
Filter: nil,
Rank: rank,
LatestRanks: ranks,
LatestQuorumCert: latestQuorumCertificate,
AggregatedSignature: aggregatedSignature,
}, nil
},
)
voteAggregationDistributor := pubsub.NewVoteAggregationDistributor[*helper.TestState, *helper.TestVote]()
sigAgg := mocks.NewSignatureAggregator(t)
sigAgg.On("Aggregate", mock.Anything, mock.Anything).Return(
func(publicKeys [][]byte, signatures [][]byte) (models.AggregatedSignature, error) {
bitmask := []byte{0, 0}
for i := range publicKeys {
bitmask[i/8] |= 1 << (i % 8)
}
return &helper.TestAggregatedSignature{
Signature: make([]byte, 74),
Bitmask: bitmask,
PublicKey: make([]byte, 585),
}, nil
}).Maybe()
sigAgg.On("VerifySignatureRaw", mock.Anything, mock.Anything, mock.Anything, mock.Anything).Return(true, nil).Maybe()
createCollectorFactoryMethod := votecollector.NewStateMachineFactory(in.logger, []byte{}, voteAggregationDistributor, voteProcessorFactory.Create, []byte{}, sigAgg, in.voting)
voteCollectors := voteaggregator.NewVoteCollectors[*helper.TestState, *helper.TestVote](in.logger, livenessData.CurrentRank, workerpool.New(2), createCollectorFactoryMethod)
// initialize the vote aggregator
in.voteAggregator, err = voteaggregator.NewVoteAggregator[*helper.TestState, *helper.TestVote](
in.logger,
voteAggregationDistributor,
livenessData.CurrentRank,
voteCollectors,
)
require.NoError(t, err)
// initialize factories for timeout collector and timeout processor
timeoutAggregationDistributor := pubsub.NewTimeoutAggregationDistributor[*helper.TestVote]()
timeoutProcessorFactory := mocks.NewTimeoutProcessorFactory[*helper.TestVote](t)
timeoutProcessorFactory.On("Create", mock.Anything).Return(
func(rank uint64) consensus.TimeoutProcessor[*helper.TestVote] {
// mock signature aggregator which doesn't perform any crypto operations and just tracks total weight
aggregator := &mocks.TimeoutSignatureAggregator{}
totalWeight := atomic.NewUint64(0)
newestRank := counters.NewMonotonicCounter(0)
bits := counters.NewMonotonicCounter(0)
aggregator.On("Rank").Return(rank).Maybe()
aggregator.On("TotalWeight").Return(func() uint64 {
return totalWeight.Load()
}).Maybe()
aggregator.On("VerifyAndAdd", mock.Anything, mock.Anything, mock.Anything).Return(
func(signerID models.Identity, _ []byte, newestQCRank uint64) uint64 {
newestRank.Set(newestQCRank)
var signer models.WeightedIdentity
for _, p := range in.participants {
if p.Identity() == signerID {
signer = p
}
}
require.NotNil(t, signer)
bits.Increment()
return totalWeight.Add(signer.Weight())
}, nil,
).Maybe()
aggregator.On("Aggregate").Return(
func() []consensus.TimeoutSignerInfo {
signersData := make([]consensus.TimeoutSignerInfo, 0, len(in.participants))
newestQCRank := newestRank.Value()
for _, signer := range in.participants {
signersData = append(signersData, consensus.TimeoutSignerInfo{
NewestQCRank: newestQCRank,
Signer: signer.Identity(),
})
}
return signersData
},
func() models.AggregatedSignature {
bitCount := bits.Value()
bitmask := []byte{0, 0}
for i := range bitCount {
pos := i / 8
bitmask[pos] |= 1 << (i % 8)
}
return &helper.TestAggregatedSignature{
Signature: make([]byte, 74),
Bitmask: bitmask,
PublicKey: make([]byte, 585),
}
},
nil,
).Maybe()
p, err := timeoutcollector.NewTimeoutProcessor[*helper.TestState, *helper.TestVote, *helper.TestPeer](
in.logger,
in.committee,
in.validator,
aggregator,
timeoutAggregationDistributor,
in.voting,
)
require.NoError(t, err)
return p
}, nil).Maybe()
timeoutCollectorFactory := timeoutcollector.NewTimeoutCollectorFactory(
in.logger,
timeoutAggregationDistributor,
timeoutProcessorFactory,
)
timeoutCollectors := timeoutaggregator.NewTimeoutCollectors(
in.logger,
livenessData.CurrentRank,
timeoutCollectorFactory,
)
// initialize the timeout aggregator
in.timeoutAggregator, err = timeoutaggregator.NewTimeoutAggregator(
in.logger,
livenessData.CurrentRank,
timeoutCollectors,
)
require.NoError(t, err)
safetyData := &models.ConsensusState[*helper.TestVote]{
FinalizedRank: rootState.Rank,
LatestAcknowledgedRank: rootState.Rank,
}
in.persist.On("GetConsensusState", mock.Anything).Return(safetyData, nil).Once()
// initialize the safety rules
in.safetyRules, err = safetyrules.NewSafetyRules(nil, in.signer, in.persist, in.committee)
require.NoError(t, err)
// initialize the state producer
in.producer, err = stateproducer.NewStateProducer[*helper.TestState, *helper.TestVote, *helper.TestPeer, *helper.TestCollected](in.safetyRules, in.committee, in.builder)
require.NoError(t, err)
// initialize the event handler
in.handler, err = eventhandler.NewEventHandler[*helper.TestState, *helper.TestVote, *helper.TestPeer, *helper.TestCollected](
in.pacemaker,
in.producer,
in.forks,
in.persist,
in.committee,
in.safetyRules,
notifier,
in.logger,
)
require.NoError(t, err)
timeoutAggregationDistributor.AddTimeoutCollectorConsumer(logConsumer)
timeoutAggregationDistributor.AddTimeoutCollectorConsumer(&in)
voteAggregationDistributor.AddVoteCollectorConsumer(logConsumer)
return &in
}
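The committee mock above programs `LeaderForRank` as a plain round-robin over the participant list. A minimal, self-contained sketch of that rotation (the string identities here are illustrative placeholders, not the `models.Identity` type from the source):

```go
package main

import "fmt"

// leaderForRank mirrors the selection programmed into the committee mock:
// participants[int(rank) % len(participants)].
func leaderForRank(rank uint64, participants []string) string {
	return participants[int(rank)%len(participants)]
}

func main() {
	participants := []string{"A", "B", "C"}
	order := ""
	for rank := uint64(0); rank < 6; rank++ {
		order += leaderForRank(rank, participants)
	}
	fmt.Println(order) // ABCABC
}
```

Because the rotation depends only on the rank, every honest replica derives the same leader for a given rank without communication.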
func (in *Instance) Run(t *testing.T) error {
ctx, cancel := context.WithCancel(context.Background())
defer func() {
cancel()
<-lifecycle.AllDone(in.voteAggregator, in.timeoutAggregator)
}()
signalerCtx := unittest.NewMockSignalerContext(t, ctx)
in.voteAggregator.Start(signalerCtx)
in.timeoutAggregator.Start(signalerCtx)
<-lifecycle.AllReady(in.voteAggregator, in.timeoutAggregator)
// start the event handler
err := in.handler.Start(ctx)
if err != nil {
return fmt.Errorf("could not start event handler: %w", err)
}
// run until an error or stop condition is reached
for {
// check on stop conditions
if in.stop(in) {
return errStopCondition
}
// we handle timeouts with priority
select {
case <-in.handler.TimeoutChannel():
err := in.handler.OnLocalTimeout()
if err != nil {
panic(fmt.Errorf("could not process timeout: %w", err))
}
default:
}
// check on stop conditions
if in.stop(in) {
return errStopCondition
}
// otherwise, process first received event
select {
case <-in.handler.TimeoutChannel():
err := in.handler.OnLocalTimeout()
if err != nil {
return fmt.Errorf("could not process timeout: %w", err)
}
case msg := <-in.queue:
switch m := msg.(type) {
case *models.SignedProposal[*helper.TestState, *helper.TestVote]:
// add state to aggregator
in.voteAggregator.AddState(m)
// then pass to event handler
err := in.handler.OnReceiveProposal(m)
if err != nil {
return fmt.Errorf("could not process proposal: %w", err)
}
case *helper.TestVote:
in.voteAggregator.AddVote(&m)
case *models.TimeoutState[*helper.TestVote]:
in.timeoutAggregator.AddTimeout(m)
case models.QuorumCertificate:
err := in.handler.OnReceiveQuorumCertificate(m)
if err != nil {
return fmt.Errorf("could not process received QC: %w", err)
}
case models.TimeoutCertificate:
err := in.handler.OnReceiveTimeoutCertificate(m)
if err != nil {
return fmt.Errorf("could not process received TC: %w", err)
}
case *consensus.PartialTimeoutCertificateCreated:
err := in.handler.OnPartialTimeoutCertificateCreated(m)
if err != nil {
return fmt.Errorf("could not process partial TC: %w", err)
}
default:
fmt.Printf("unhandled queue event: %s\n", reflect.ValueOf(msg).Type().String())
}
}
}
}
func (in *Instance) ProcessState(proposal *models.SignedProposal[*helper.TestState, *helper.TestVote]) {
in.updatingStates.Lock()
defer in.updatingStates.Unlock()
_, parentExists := in.headers[proposal.State.ParentQuorumCertificate.Identity()]
if parentExists {
next := proposal
for next != nil {
in.headers[next.State.Identifier] = next.State
in.queue <- next
// keep processing the pending states
next = in.pendings[next.State.ParentQuorumCertificate.Identity()]
}
} else {
// cache it in pendings by ParentID
in.pendings[proposal.State.ParentQuorumCertificate.Identity()] = proposal
}
}
func (in *Instance) OnTimeoutCertificateConstructedFromTimeouts(tc models.TimeoutCertificate) {
in.queue <- tc
}
func (in *Instance) OnPartialTimeoutCertificateCreated(rank uint64, newestQC models.QuorumCertificate, previousRankTimeoutCert models.TimeoutCertificate) {
in.queue <- &consensus.PartialTimeoutCertificateCreated{
Rank: rank,
NewestQuorumCertificate: newestQC,
PriorRankTimeoutCertificate: previousRankTimeoutCert,
}
}
func (in *Instance) OnNewQuorumCertificateDiscovered(qc models.QuorumCertificate) {
in.queue <- qc
}
func (in *Instance) OnNewTimeoutCertificateDiscovered(tc models.TimeoutCertificate) {
in.queue <- tc
}
func (in *Instance) OnTimeoutProcessed(*models.TimeoutState[*helper.TestVote]) {
}
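Several mocks above (`CreateQuorumCertificate`, `Aggregate`, `Pack`) record which signers contributed via the same `bitmask[i/8] |= 1 << (i % 8)` pattern. A hedged, standalone sketch of that encoding (the two-byte width matches the mocks; everything else is illustrative):

```go
package main

import "fmt"

// signerBitmask sets bit i for each of the first signerCount signers,
// using the same bit layout as the mocked aggregators above.
func signerBitmask(signerCount int) []byte {
	bitmask := []byte{0, 0} // two bytes cover up to 16 signers
	for i := 0; i < signerCount; i++ {
		bitmask[i/8] |= 1 << (i % 8)
	}
	return bitmask
}

func main() {
	// 5 signers set the low 5 bits of the first byte: 0b00011111 = 31.
	fmt.Println(signerBitmask(5)) // [31 0]
	// 9 signers fill the first byte and spill into bit 0 of the second.
	fmt.Println(signerBitmask(9)) // [255 1]
}
```

The index-to-bit mapping only identifies signers by their position in the participant list, which is why the mocks can skip real signature aggregation entirely.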


@ -0,0 +1,153 @@
package integration
import (
"errors"
"fmt"
"sync"
"testing"
"time"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"source.quilibrium.com/quilibrium/monorepo/consensus/helper"
)
// a pacemaker timeout to wait for proposals. Usually 10 ms is enough,
// but for slow environments like CI, a longer one is needed.
const safeTimeout = 2 * time.Second
// number of failed rounds before first timeout increase
const happyPathMaxRoundFailures = 6
func TestSingleInstance(t *testing.T) {
fmt.Println("starting single instance test")
// set up a single instance to run
finalRank := uint64(10)
in := NewInstance(t,
WithStopCondition(RankFinalized(finalRank)),
)
// run the event handler until we reach a stop condition
err := in.Run(t)
require.ErrorIs(t, err, errStopCondition, "should run until stop condition")
// check if forks and pacemaker are in expected rank state
assert.Equal(t, finalRank, in.forks.FinalizedRank(), "finalized rank should match the stop condition's target rank")
fmt.Println("ending single instance test")
}
func TestThreeInstances(t *testing.T) {
fmt.Println("starting three instance test")
// test parameters
num := 3
finalRank := uint64(100)
// generate three hotstuff participants
participants := helper.WithWeightedIdentityList(num)
root := DefaultRoot()
// set up three instances that are exactly the same
// since we don't drop any messages we should have enough data to advance in happy path
// for that reason we will drop all TO related communication.
instances := make([]*Instance, 0, num)
for n := 0; n < num; n++ {
in := NewInstance(t,
WithRoot(root),
WithParticipants(participants),
WithLocalID(participants[n].Identity()),
WithStopCondition(RankFinalized(finalRank)),
WithIncomingTimeoutStates(DropAllTimeoutStates),
)
instances = append(instances, in)
}
// connect the communicators of the instances together
Connect(t, instances)
// start the instances and wait for them to finish
var wg sync.WaitGroup
for _, in := range instances {
wg.Add(1)
go func(in *Instance) {
err := in.Run(t)
require.True(t, errors.Is(err, errStopCondition), "should run until stop condition")
wg.Done()
}(in)
}
wg.Wait()
// check that all instances have the same finalized state
in1 := instances[0]
in2 := instances[1]
in3 := instances[2]
// verify progress has been made
assert.GreaterOrEqual(t, in1.forks.FinalizedState().Rank, finalRank, "the first instance's finalized rank should reach the target rank")
// verify same progresses have been made
assert.Equal(t, in1.forks.FinalizedState(), in2.forks.FinalizedState(), "second instance should have same finalized state as first instance")
assert.Equal(t, in1.forks.FinalizedState(), in3.forks.FinalizedState(), "third instance should have same finalized state as first instance")
assert.Equal(t, FinalizedRanks(in1), FinalizedRanks(in2))
assert.Equal(t, FinalizedRanks(in1), FinalizedRanks(in3))
fmt.Println("ending three instance test")
}
func TestSevenInstances(t *testing.T) {
fmt.Println("starting seven instance test")
// test parameters
numPass := 5
numFail := 2
finalRank := uint64(30)
// generate the seven hotstuff participants
participants := helper.WithWeightedIdentityList(numPass + numFail)
instances := make([]*Instance, 0, numPass+numFail)
root := DefaultRoot()
// set up five instances that work fully
for n := 0; n < numPass; n++ {
in := NewInstance(t,
WithRoot(root),
WithParticipants(participants),
WithLocalID(participants[n].Identity()),
WithStopCondition(RankFinalized(finalRank)),
)
instances = append(instances, in)
}
// set up two instances which can't vote
for n := numPass; n < numPass+numFail; n++ {
in := NewInstance(t,
WithRoot(root),
WithParticipants(participants),
WithLocalID(participants[n].Identity()),
WithStopCondition(RankFinalized(finalRank)),
WithOutgoingVotes(DropAllVotes),
)
instances = append(instances, in)
}
// connect the communicators of the instances together
Connect(t, instances)
// start all seven instances and wait for them to wrap up
var wg sync.WaitGroup
for _, in := range instances {
wg.Add(1)
go func(in *Instance) {
err := in.Run(t)
require.True(t, errors.Is(err, errStopCondition), "should run until stop condition")
wg.Done()
}(in)
}
wg.Wait()
// check that all instances have the same finalized state
ref := instances[0]
assert.Less(t, finalRank-uint64(2*numPass+numFail), ref.forks.FinalizedState().Rank, "expected instance 0 to make enough progress, but it didn't")
finalizedRanks := FinalizedRanks(ref)
for i := 1; i < numPass; i++ {
assert.Equal(t, ref.forks.FinalizedState(), instances[i].forks.FinalizedState(), "instance %d should have same finalized state as first instance", i)
assert.Equal(t, finalizedRanks, FinalizedRanks(instances[i]), "instance %d should have same finalized ranks as first instance", i)
}
fmt.Println("ending seven instance test")
}


@ -0,0 +1,422 @@
package integration
import (
"encoding/hex"
"errors"
"fmt"
"sync"
"testing"
"time"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"source.quilibrium.com/quilibrium/monorepo/consensus"
"source.quilibrium.com/quilibrium/monorepo/consensus/helper"
"source.quilibrium.com/quilibrium/monorepo/consensus/models"
"source.quilibrium.com/quilibrium/monorepo/consensus/pacemaker/timeout"
"source.quilibrium.com/quilibrium/monorepo/lifecycle/unittest"
)
// pacemaker timeout
// 10 ms is enough on a fast machine, but slower environments need a larger value
const pmTimeout = 100 * time.Millisecond
// maxTimeoutRebroadcast specifies how often the PaceMaker rebroadcasts
// its timeout state when there is no progress. We keep the value
// small to keep latency low
const maxTimeoutRebroadcast = 1 * time.Second
// If 2 nodes are down in a 7-node cluster, the remaining 5 nodes can
// still make progress and reach consensus
func Test2TimeoutOutof7Instances(t *testing.T) {
healthyReplicas := 5
notVotingReplicas := 2
finalRank := uint64(30)
// generate the seven hotstuff participants
participants := helper.WithWeightedIdentityList(healthyReplicas + notVotingReplicas)
instances := make([]*Instance, 0, healthyReplicas+notVotingReplicas)
root := DefaultRoot()
timeouts, err := timeout.NewConfig(pmTimeout, pmTimeout, 1.5, happyPathMaxRoundFailures, maxTimeoutRebroadcast)
require.NoError(t, err)
// set up five instances that work fully
for n := 0; n < healthyReplicas; n++ {
in := NewInstance(t,
WithRoot(root),
WithParticipants(participants),
WithTimeouts(timeouts),
WithBufferLogger(),
WithLocalID(participants[n].Identity()),
WithLoggerParams(consensus.StringParam("status", "healthy")),
WithStopCondition(RankFinalized(finalRank)),
)
instances = append(instances, in)
}
// set up two instances which can't vote, nor propose
for n := healthyReplicas; n < healthyReplicas+notVotingReplicas; n++ {
in := NewInstance(t,
WithRoot(root),
WithParticipants(participants),
WithTimeouts(timeouts),
WithBufferLogger(),
WithLocalID(participants[n].Identity()),
WithLoggerParams(consensus.StringParam("status", "unhealthy")),
WithStopCondition(RankFinalized(finalRank)),
WithOutgoingVotes(DropAllVotes),
WithOutgoingProposals(DropAllProposals),
)
instances = append(instances, in)
}
// connect the communicators of the instances together
Connect(t, instances)
// start all seven instances and wait for them to wrap up
var wg sync.WaitGroup
for _, in := range instances {
wg.Add(1)
go func(in *Instance) {
err := in.Run(t)
require.ErrorIs(t, err, errStopCondition)
wg.Done()
}(in)
}
unittest.AssertReturnsBefore(t, wg.Wait, 20*time.Second, "expect to finish before timeout")
for i, in := range instances {
fmt.Println("=============================================================================")
fmt.Println("INSTANCE", i, "-", hex.EncodeToString([]byte(in.localID)))
fmt.Println("=============================================================================")
in.logger.(*helper.BufferLog).Flush()
}
// check that all instances have the same finalized state
ref := instances[0]
assert.Equal(t, finalRank, ref.forks.FinalizedState().Rank, "expected instance 0 to make enough progress, but it didn't")
finalizedRanks := FinalizedRanks(ref)
for i := 1; i < healthyReplicas; i++ {
assert.Equal(t, ref.forks.FinalizedState(), instances[i].forks.FinalizedState(), "instance %d should have same finalized state as first instance", i)
assert.Equal(t, finalizedRanks, FinalizedRanks(instances[i]), "instance %d should have same finalized ranks as first instance", i)
}
}
// 2 nodes in a 4-node cluster are configured so they can only send timeout
// messages (no voting or proposing). The other 2 unconstrained nodes should be
// able to make progress through the recovery path by creating TCs for every
// round, but no state will be finalized, because finalization requires a
// direct 1-chain plus a QC.
func Test2TimeoutOutof4Instances(t *testing.T) {
healthyReplicas := 2
replicasDroppingHappyPathMsgs := 2
finalRank := uint64(30)
// generate the 4 hotstuff participants
participants := helper.WithWeightedIdentityList(healthyReplicas + replicasDroppingHappyPathMsgs)
instances := make([]*Instance, 0, healthyReplicas+replicasDroppingHappyPathMsgs)
root := DefaultRoot()
timeouts, err := timeout.NewConfig(10*time.Millisecond, 50*time.Millisecond, 1.5, happyPathMaxRoundFailures, maxTimeoutRebroadcast)
require.NoError(t, err)
// set up two instances that work fully
for n := 0; n < healthyReplicas; n++ {
in := NewInstance(t,
WithRoot(root),
WithParticipants(participants),
WithLocalID(participants[n].Identity()),
WithTimeouts(timeouts),
WithLoggerParams(consensus.StringParam("status", "healthy")),
WithStopCondition(RankReached(finalRank)),
)
instances = append(instances, in)
}
// set up instances which can't vote, nor propose
for n := healthyReplicas; n < healthyReplicas+replicasDroppingHappyPathMsgs; n++ {
in := NewInstance(t,
WithRoot(root),
WithParticipants(participants),
WithLocalID(participants[n].Identity()),
WithTimeouts(timeouts),
WithLoggerParams(consensus.StringParam("status", "unhealthy")),
WithStopCondition(RankReached(finalRank)),
WithOutgoingVotes(DropAllVotes),
WithIncomingVotes(DropAllVotes),
WithOutgoingProposals(DropAllProposals),
)
instances = append(instances, in)
}
// connect the communicators of the instances together
Connect(t, instances)
// start the instances and wait for them to finish
var wg sync.WaitGroup
for _, in := range instances {
wg.Add(1)
go func(in *Instance) {
err := in.Run(t)
require.True(t, errors.Is(err, errStopCondition), "should run until stop condition")
wg.Done()
}(in)
}
unittest.AssertReturnsBefore(t, wg.Wait, 10*time.Second, "expect to finish before timeout")
// check that all instances have the same finalized state
ref := instances[0]
finalizedRanks := FinalizedRanks(ref)
assert.Equal(t, []uint64{0}, finalizedRanks, "no rank was finalized, because finalization requires a direct chain plus a QC, which never happens in this case")
assert.Equal(t, finalRank, ref.pacemaker.CurrentRank(), "expected instance 0 to make enough progress, but it didn't")
for i := 1; i < healthyReplicas; i++ {
assert.Equal(t, ref.forks.FinalizedState(), instances[i].forks.FinalizedState(), "instance %d should have same finalized state as first instance", i)
assert.Equal(t, finalizedRanks, FinalizedRanks(instances[i]), "instance %d should have same finalized rank as first instance", i)
assert.Equal(t, finalRank, instances[i].pacemaker.CurrentRank(), "instance %d should have same active rank as first instance", i)
}
}
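The committee mock returns a quorum threshold of `len(participants)*2000/3`, i.e. two thirds of total weight under the assumption (consistent with the mocks, though not stated explicitly here) that every participant weighs 1000. Back-of-the-envelope arithmetic for this 4-node test shows why TCs form every round while no QC, and hence no finalization, can occur:

```go
package main

import "fmt"

func main() {
	const perNodeWeight = 1000 // assumed uniform weight, matching the mocked threshold len*2000/3
	nodes := uint64(4)
	threshold := nodes * 2 * perNodeWeight / 3 // 2666

	voteWeight := uint64(2 * perNodeWeight)    // only the 2 healthy replicas vote
	timeoutWeight := uint64(4 * perNodeWeight) // all 4 replicas emit timeouts

	fmt.Println(voteWeight >= threshold)    // false: no QC, so nothing finalizes
	fmt.Println(timeoutWeight >= threshold) // true: TCs form and ranks advance
}
```

The same arithmetic explains the 7-node tests above: the threshold there is 4666, which 5 voting replicas (weight 5000) clear, so the happy path keeps working.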
// If 1 node is down in a 5-node cluster, the remaining 4 nodes can
// still make progress and reach consensus
func Test1TimeoutOutof5Instances(t *testing.T) {
healthyReplicas := 4
downReplicas := 1
finalRank := uint64(30)
// generate the five hotstuff participants
participants := helper.WithWeightedIdentityList(healthyReplicas + downReplicas)
instances := make([]*Instance, 0, healthyReplicas+downReplicas)
root := DefaultRoot()
timeouts, err := timeout.NewConfig(pmTimeout, pmTimeout, 1.5, happyPathMaxRoundFailures, maxTimeoutRebroadcast)
require.NoError(t, err)
// set up instances that work fully
for n := 0; n < healthyReplicas; n++ {
in := NewInstance(t,
WithRoot(root),
WithParticipants(participants),
WithLocalID(participants[n].Identity()),
WithTimeouts(timeouts),
WithLoggerParams(consensus.StringParam("status", "healthy")),
WithStopCondition(RankFinalized(finalRank)),
)
instances = append(instances, in)
}
// set up one instance which can't vote, nor propose
for n := healthyReplicas; n < healthyReplicas+downReplicas; n++ {
in := NewInstance(t,
WithRoot(root),
WithParticipants(participants),
WithLocalID(participants[n].Identity()),
WithTimeouts(timeouts),
WithLoggerParams(consensus.StringParam("status", "unhealthy")),
WithStopCondition(RankReached(finalRank)),
WithOutgoingVotes(DropAllVotes),
WithOutgoingProposals(DropAllProposals),
)
instances = append(instances, in)
}
// connect the communicators of the instances together
Connect(t, instances)
// start all five instances and wait for them to wrap up
var wg sync.WaitGroup
for _, in := range instances {
wg.Add(1)
go func(in *Instance) {
err := in.Run(t)
require.ErrorIs(t, err, errStopCondition)
wg.Done()
}(in)
}
success := unittest.AssertReturnsBefore(t, wg.Wait, 10*time.Second, "expect to finish before timeout")
if !success {
t.Logf("dumping state of system:")
for i, inst := range instances {
t.Logf(
"instance %d: %d %d %d",
i,
inst.pacemaker.CurrentRank(),
inst.pacemaker.LatestQuorumCertificate().GetRank(),
inst.forks.FinalizedState().Rank,
)
}
}
// check that all instances have the same finalized state
ref := instances[0]
finalizedRanks := FinalizedRanks(ref)
assert.Equal(t, finalRank, ref.forks.FinalizedState().Rank, "expected instance 0 to make enough progress, but it didn't")
for i := 1; i < healthyReplicas; i++ {
assert.Equal(t, ref.forks.FinalizedState(), instances[i].forks.FinalizedState(), "instance %d should have same finalized state as first instance", i)
assert.Equal(t, finalizedRanks, FinalizedRanks(instances[i]), "instance %d should have same finalized ranks as first instance", i)
}
}
// TestStateDelayIsHigherThanTimeout tests a protocol edge case, where
// - The state arrives in time for replicas to vote.
// - The next primary does not respond in time with a follow-up proposal,
// so nodes start sending TimeoutStates.
// - However, eventually, the next primary successfully constructs a QC and a new
// state before a TC leads to the round timing out.
//
// This test verifies that nodes still make progress on the happy path (QC constructed),
// despite already having initiated the timeout.
// Example scenarios, how this timing edge case could manifest:
// - state delay is very close to (or larger than) the round duration
// - delayed message transmission (specifically votes) within network
// - overwhelmed / slowed-down primary
// - byzantine primary
//
// Implementation:
// - We have 4 nodes in total where the TimeoutStates from two of them are always
// discarded. Therefore, no TC can be constructed.
// - To force nodes to initiate the timeout (i.e. send TimeoutStates), we set
// the `stateRateDelay` to _twice_ the PaceMaker Timeout. Furthermore, we configure
// the PaceMaker to only increase timeout duration after 6 successive round failures.
func TestStateDelayIsHigherThanTimeout(t *testing.T) {
healthyReplicas := 2
replicasNotGeneratingTimeouts := 2
finalRank := uint64(20)
// generate the 4 hotstuff participants
participants := helper.WithWeightedIdentityList(healthyReplicas + replicasNotGeneratingTimeouts)
instances := make([]*Instance, 0, healthyReplicas+replicasNotGeneratingTimeouts)
root := DefaultRoot()
timeouts, err := timeout.NewConfig(pmTimeout, pmTimeout, 1.5, happyPathMaxRoundFailures, maxTimeoutRebroadcast)
require.NoError(t, err)
// set up 2 instances that fully work (incl. sending TimeoutStates)
for n := 0; n < healthyReplicas; n++ {
in := NewInstance(t,
WithRoot(root),
WithParticipants(participants),
WithLocalID(participants[n].Identity()),
WithTimeouts(timeouts),
WithStopCondition(RankFinalized(finalRank)),
)
instances = append(instances, in)
}
// set up two instances which don't generate and receive timeout states
for n := healthyReplicas; n < healthyReplicas+replicasNotGeneratingTimeouts; n++ {
in := NewInstance(t,
WithRoot(root),
WithParticipants(participants),
WithLocalID(participants[n].Identity()),
WithTimeouts(timeouts),
WithStopCondition(RankFinalized(finalRank)),
WithIncomingTimeoutStates(DropAllTimeoutStates),
WithOutgoingTimeoutStates(DropAllTimeoutStates),
)
instances = append(instances, in)
}
// connect the communicators of the instances together
Connect(t, instances)
// start all 4 instances and wait for them to wrap up
var wg sync.WaitGroup
for _, in := range instances {
wg.Add(1)
go func(in *Instance) {
err := in.Run(t)
require.ErrorIs(t, err, errStopCondition)
wg.Done()
}(in)
}
unittest.AssertReturnsBefore(t, wg.Wait, 10*time.Second, "expect to finish before timeout")
// check that all instances have the same finalized state
ref := instances[0]
	assert.Equal(t, finalRank, ref.forks.FinalizedState().Rank, "expected instance 0 to make enough progress, but it didn't")
finalizedRanks := FinalizedRanks(ref)
	// in this test we rely on a QC being produced in each rank, so make sure
	// the finalized ranks form a contiguous sequence with no gaps
for i := 1; i < len(finalizedRanks); i++ {
// finalized ranks are sorted in descending order
if finalizedRanks[i-1] != finalizedRanks[i]+1 {
t.Fatalf("finalized ranks series has gap, this is not expected: %v", finalizedRanks)
return
}
}
for i := 1; i < healthyReplicas; i++ {
		assert.Equal(t, ref.forks.FinalizedState(), instances[i].forks.FinalizedState(), "instance %d should have the same finalized state as the first instance", i)
		assert.Equal(t, finalizedRanks, FinalizedRanks(instances[i]), "instance %d should have the same finalized ranks as the first instance", i)
}
}
// TestAsyncClusterStartup tests a realistic scenario where nodes are started asynchronously:
// - Replicas are started in sequential order
// - Each replica skips voting for the first state (emulating message omission).
// - Each replica skips the first TimeoutState (emulating message omission).
// - At this point protocol loses liveness unless a timeout rebroadcast happens from super-majority of replicas.
//
// This test verifies that nodes still make progress, despite first TO messages being lost.
// Implementation:
// - We have 4 replicas in total, each of them skipping its vote for the first rank to force a timeout
// - TimeoutStates are dropped for the whole committee until each replica has generated its first TO.
// - After each replica has generated a timeout, subsequent timeout rebroadcasts are allowed through so progress can be made.
func TestAsyncClusterStartup(t *testing.T) {
replicas := 4
finalRank := uint64(20)
// generate the four hotstuff participants
participants := helper.WithWeightedIdentityList(replicas)
instances := make([]*Instance, 0, replicas)
root := DefaultRoot()
timeouts, err := timeout.NewConfig(pmTimeout, pmTimeout, 1.5, 6, maxTimeoutRebroadcast)
require.NoError(t, err)
// set up instances that work fully
var lock sync.Mutex
	timeoutStateGenerated := make(map[models.Identity]struct{})
for n := 0; n < replicas; n++ {
in := NewInstance(t,
WithRoot(root),
WithParticipants(participants),
WithLocalID(participants[n].Identity()),
WithTimeouts(timeouts),
WithStopCondition(RankFinalized(finalRank)),
WithOutgoingVotes(func(vote *helper.TestVote) bool {
return vote.Rank == 1
}),
WithOutgoingTimeoutStates(func(object *models.TimeoutState[*helper.TestVote]) bool {
lock.Lock()
defer lock.Unlock()
timeoutStateGenerated[(*object.Vote).ID] = struct{}{}
// start allowing timeouts when every node has generated one
			// once nodes rebroadcast, the timeouts will go through
return len(timeoutStateGenerated) != replicas
}),
)
instances = append(instances, in)
}
// connect the communicators of the instances together
Connect(t, instances)
// start each node only after previous one has started
var wg sync.WaitGroup
for _, in := range instances {
wg.Add(1)
go func(in *Instance) {
err := in.Run(t)
require.ErrorIs(t, err, errStopCondition)
wg.Done()
}(in)
}
unittest.AssertReturnsBefore(t, wg.Wait, 20*time.Second, "expect to finish before timeout")
// check that all instances have the same finalized state
ref := instances[0]
	assert.Equal(t, finalRank, ref.forks.FinalizedState().Rank, "expected instance 0 to make enough progress, but it didn't")
finalizedRanks := FinalizedRanks(ref)
for i := 1; i < replicas; i++ {
		assert.Equal(t, ref.forks.FinalizedState(), instances[i].forks.FinalizedState(), "instance %d should have the same finalized state as the first instance", i)
		assert.Equal(t, finalizedRanks, FinalizedRanks(instances[i]), "instance %d should have the same finalized ranks as the first instance", i)
}
}

package integration
import (
"errors"
"source.quilibrium.com/quilibrium/monorepo/consensus"
"source.quilibrium.com/quilibrium/monorepo/consensus/helper"
"source.quilibrium.com/quilibrium/monorepo/consensus/models"
"source.quilibrium.com/quilibrium/monorepo/consensus/pacemaker/timeout"
)
var errStopCondition = errors.New("stop condition reached")
type Option func(*Config)
type Config struct {
Logger consensus.TraceLogger
Root *models.State[*helper.TestState]
Participants []models.WeightedIdentity
LocalID models.Identity
Timeouts timeout.Config
IncomingVotes VoteFilter
OutgoingVotes VoteFilter
IncomingTimeoutStates TimeoutStateFilter
OutgoingTimeoutStates TimeoutStateFilter
IncomingProposals ProposalFilter
OutgoingProposals ProposalFilter
StopCondition Condition
}
func WithRoot(root *models.State[*helper.TestState]) Option {
return func(cfg *Config) {
cfg.Root = root
}
}
func WithParticipants(participants []models.WeightedIdentity) Option {
return func(cfg *Config) {
cfg.Participants = participants
}
}
func WithLocalID(localID models.Identity) Option {
return func(cfg *Config) {
cfg.LocalID = localID
cfg.Logger = cfg.Logger.With(consensus.IdentityParam("self", localID))
}
}
func WithTimeouts(timeouts timeout.Config) Option {
return func(cfg *Config) {
cfg.Timeouts = timeouts
}
}
func WithBufferLogger() Option {
return func(cfg *Config) {
cfg.Logger = helper.BufferLogger()
}
}
func WithLoggerParams(params ...consensus.LogParam) Option {
return func(cfg *Config) {
cfg.Logger = cfg.Logger.With(params...)
}
}
func WithIncomingVotes(filter VoteFilter) Option {
	return func(cfg *Config) {
		cfg.IncomingVotes = filter
	}
}
func WithOutgoingVotes(filter VoteFilter) Option {
	return func(cfg *Config) {
		cfg.OutgoingVotes = filter
	}
}
func WithIncomingProposals(filter ProposalFilter) Option {
	return func(cfg *Config) {
		cfg.IncomingProposals = filter
	}
}
func WithOutgoingProposals(filter ProposalFilter) Option {
	return func(cfg *Config) {
		cfg.OutgoingProposals = filter
	}
}
func WithIncomingTimeoutStates(filter TimeoutStateFilter) Option {
	return func(cfg *Config) {
		cfg.IncomingTimeoutStates = filter
	}
}
func WithOutgoingTimeoutStates(filter TimeoutStateFilter) Option {
	return func(cfg *Config) {
		cfg.OutgoingTimeoutStates = filter
	}
}
func WithStopCondition(stop Condition) Option {
return func(cfg *Config) {
cfg.StopCondition = stop
}
}
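The file above implements the functional-options pattern: each `Option` is a closure that mutates a `Config`, and a constructor folds the options over a default config. A minimal self-contained sketch of how such options are typically consumed (the field names, defaults, and `newConfig` here are hypothetical stand-ins; the real `NewInstance` is defined elsewhere):

```go
package main

import "fmt"

// Config and Option mirror the pattern above, with simplified fields.
type Config struct {
	LocalID   string
	TimeoutMs int
}

type Option func(*Config)

func WithLocalID(id string) Option { return func(c *Config) { c.LocalID = id } }
func WithTimeoutMs(ms int) Option  { return func(c *Config) { c.TimeoutMs = ms } }

// newConfig applies defaults first, then lets each option override them.
func newConfig(opts ...Option) Config {
	cfg := Config{TimeoutMs: 1000}
	for _, opt := range opts {
		opt(&cfg)
	}
	return cfg
}

func main() {
	cfg := newConfig(WithLocalID("node-0"), WithTimeoutMs(250))
	fmt.Println(cfg.LocalID, cfg.TimeoutMs) // node-0 250
}
```

This shape keeps call sites readable and lets the integration tests toggle per-instance behavior (filters, stop conditions, logger params) without a combinatorial explosion of constructors.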

// Code generated by mockery. DO NOT EDIT.
package mocks
import (
time "time"
"github.com/stretchr/testify/mock"
"source.quilibrium.com/quilibrium/monorepo/consensus"
"source.quilibrium.com/quilibrium/monorepo/consensus/helper"
"source.quilibrium.com/quilibrium/monorepo/consensus/models"
)
// CommunicatorConsumer is an autogenerated mock type for the CommunicatorConsumer type
type CommunicatorConsumer[StateT models.Unique, VoteT models.Unique] struct {
mock.Mock
}
// OnOwnProposal provides a mock function with given fields: proposal, targetPublicationTime
func (_m *CommunicatorConsumer[StateT, VoteT]) OnOwnProposal(proposal *models.SignedProposal[StateT, VoteT], targetPublicationTime time.Time) {
_m.Called(proposal, targetPublicationTime)
}
// OnOwnTimeout provides a mock function with given fields: timeout
func (_m *CommunicatorConsumer[StateT, VoteT]) OnOwnTimeout(timeout *models.TimeoutState[VoteT]) {
_m.Called(timeout)
}
// OnOwnVote provides a mock function with given fields: vote, recipientID
func (_m *CommunicatorConsumer[StateT, VoteT]) OnOwnVote(vote *VoteT, recipientID models.Identity) {
_m.Called(vote, recipientID)
}
// NewCommunicatorConsumer creates a new instance of CommunicatorConsumer. It also registers a testing interface on the mock and a cleanup function to assert the mocks expectations.
// The first argument is typically a *testing.T value.
func NewCommunicatorConsumer[StateT models.Unique, VoteT models.Unique](t interface {
mock.TestingT
Cleanup(func())
}) *CommunicatorConsumer[StateT, VoteT] {
mock := &CommunicatorConsumer[StateT, VoteT]{}
mock.Mock.Test(t)
t.Cleanup(func() { mock.AssertExpectations(t) })
return mock
}
var _ consensus.CommunicatorConsumer[*helper.TestState, *helper.TestVote] = (*CommunicatorConsumer[*helper.TestState, *helper.TestVote])(nil)
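The generated mock above delegates every method to testify's `mock.Mock`, which records the call and checks it against registered expectations; `t.Cleanup(mock.AssertExpectations)` then verifies that all expectations were met. The record-then-assert core can be sketched without testify as follows (a hand-rolled analogue for illustration, not the generated API):

```go
package main

import "fmt"

// recorder is a hand-rolled stand-in for testify's mock.Mock: it records each
// call signature so expectations can be asserted afterwards, mirroring what
// the generated CommunicatorConsumer mock does via _m.Called(...).
type recorder struct {
	calls []string
}

func (r *recorder) called(sig string) { r.calls = append(r.calls, sig) }

func (r *recorder) assertCalled(sig string) bool {
	for _, c := range r.calls {
		if c == sig {
			return true
		}
	}
	return false
}

// fakeCommunicator plays the role of the mock: every method just records itself.
type fakeCommunicator struct{ recorder }

func (f *fakeCommunicator) OnOwnVote(vote string, recipient string) {
	f.called(fmt.Sprintf("OnOwnVote(%s,%s)", vote, recipient))
}

func main() {
	m := &fakeCommunicator{}
	m.OnOwnVote("v1", "node-3") // the code under test makes its outbound call
	fmt.Println(m.assertCalled("OnOwnVote(v1,node-3)")) // true
}
```

The real mocks add argument matchers and configurable returns on top of this idea, but the assertion model is the same: unexpected or missing calls fail the test at cleanup time.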

// Code generated by mockery. DO NOT EDIT.
package mocks
import (
mock "github.com/stretchr/testify/mock"
models "source.quilibrium.com/quilibrium/monorepo/consensus/models"
)
// ConsensusStore is an autogenerated mock type for the ConsensusStore type
type ConsensusStore[VoteT models.Unique] struct {
mock.Mock
}
// GetConsensusState provides a mock function with given fields: filter
func (_m *ConsensusStore[VoteT]) GetConsensusState(filter []byte) (*models.ConsensusState[VoteT], error) {
ret := _m.Called(filter)
if len(ret) == 0 {
panic("no return value specified for GetConsensusState")
}
var r0 *models.ConsensusState[VoteT]
var r1 error
if rf, ok := ret.Get(0).(func(filter []byte) (*models.ConsensusState[VoteT], error)); ok {
return rf(filter)
}
if rf, ok := ret.Get(0).(func(filter []byte) *models.ConsensusState[VoteT]); ok {
r0 = rf(filter)
} else {
if ret.Get(0) != nil {
r0 = ret.Get(0).(*models.ConsensusState[VoteT])
}
}
if rf, ok := ret.Get(1).(func(filter []byte) error); ok {
r1 = rf(filter)
} else {
r1 = ret.Error(1)
}
return r0, r1
}
// GetLivenessState provides a mock function with given fields: filter
func (_m *ConsensusStore[VoteT]) GetLivenessState(filter []byte) (*models.LivenessState, error) {
ret := _m.Called(filter)
if len(ret) == 0 {
panic("no return value specified for GetLivenessState")
}
var r0 *models.LivenessState
var r1 error
if rf, ok := ret.Get(0).(func(filter []byte) (*models.LivenessState, error)); ok {
return rf(filter)
}
if rf, ok := ret.Get(0).(func(filter []byte) *models.LivenessState); ok {
r0 = rf(filter)
} else {
if ret.Get(0) != nil {
r0 = ret.Get(0).(*models.LivenessState)
}
}
if rf, ok := ret.Get(1).(func(filter []byte) error); ok {
r1 = rf(filter)
} else {
r1 = ret.Error(1)
}
return r0, r1
}
// PutConsensusState provides a mock function with given fields: state
func (_m *ConsensusStore[VoteT]) PutConsensusState(state *models.ConsensusState[VoteT]) error {
ret := _m.Called(state)
if len(ret) == 0 {
panic("no return value specified for PutConsensusState")
}
var r0 error
if rf, ok := ret.Get(0).(func(*models.ConsensusState[VoteT]) error); ok {
r0 = rf(state)
} else {
r0 = ret.Error(0)
}
return r0
}
// PutLivenessState provides a mock function with given fields: state
func (_m *ConsensusStore[VoteT]) PutLivenessState(state *models.LivenessState) error {
ret := _m.Called(state)
if len(ret) == 0 {
panic("no return value specified for PutLivenessState")
}
var r0 error
if rf, ok := ret.Get(0).(func(*models.LivenessState) error); ok {
r0 = rf(state)
} else {
r0 = ret.Error(0)
}
return r0
}
// NewConsensusStore creates a new instance of ConsensusStore. It also registers a testing interface on the mock and a cleanup function to assert the mocks expectations.
// The first argument is typically a *testing.T value.
func NewConsensusStore[VoteT models.Unique](t interface {
mock.TestingT
Cleanup(func())
}) *ConsensusStore[VoteT] {
mock := &ConsensusStore[VoteT]{}
mock.Mock.Test(t)
t.Cleanup(func() { mock.AssertExpectations(t) })
return mock
}
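The getters above use a double-dispatch convention: the stubbed return slot may hold either a static value or a function of the call's arguments, distinguished by a type assertion, so tests can compute return values per call. A reduced sketch of that convention (simplified types for illustration, not the mockery API):

```go
package main

import "fmt"

// stub holds one configured return slot, as mockery's ret.Get(0) does.
type stub struct {
	returns []interface{}
}

// get mirrors the generated GetConsensusState: if the slot holds a function,
// compute the result from the argument; otherwise return the static value.
func (s *stub) get(filter []byte) (string, error) {
	if rf, ok := s.returns[0].(func([]byte) string); ok {
		return rf(filter), nil
	}
	return s.returns[0].(string), nil
}

func main() {
	static := &stub{returns: []interface{}{"fixed"}}
	dynamic := &stub{returns: []interface{}{func(f []byte) string {
		return "len=" + fmt.Sprint(len(f))
	}}}
	v1, _ := static.get([]byte{1, 2})
	v2, _ := dynamic.get([]byte{1, 2})
	fmt.Println(v1, v2) // fixed len=2
}
```

The function-valued form is what testify's `Return(func(...) ...)` stubbing exercises; the static form covers the common `Return(value)` case.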

consensus/mocks/consumer.go
// Code generated by mockery. DO NOT EDIT.
package mocks
import (
mock "github.com/stretchr/testify/mock"
consensus "source.quilibrium.com/quilibrium/monorepo/consensus"
models "source.quilibrium.com/quilibrium/monorepo/consensus/models"
time "time"
)
// Consumer is an autogenerated mock type for the Consumer type
type Consumer[StateT models.Unique, VoteT models.Unique] struct {
mock.Mock
}
// OnCurrentRankDetails provides a mock function with given fields: currentRank, finalizedRank, currentLeader
func (_m *Consumer[StateT, VoteT]) OnCurrentRankDetails(currentRank uint64, finalizedRank uint64, currentLeader models.Identity) {
_m.Called(currentRank, finalizedRank, currentLeader)
}
// OnDoubleProposeDetected provides a mock function with given fields: _a0, _a1
func (_m *Consumer[StateT, VoteT]) OnDoubleProposeDetected(_a0 *models.State[StateT], _a1 *models.State[StateT]) {
_m.Called(_a0, _a1)
}
// OnEventProcessed provides a mock function with no fields
func (_m *Consumer[StateT, VoteT]) OnEventProcessed() {
_m.Called()
}
// OnFinalizedState provides a mock function with given fields: _a0
func (_m *Consumer[StateT, VoteT]) OnFinalizedState(_a0 *models.State[StateT]) {
_m.Called(_a0)
}
// OnInvalidStateDetected provides a mock function with given fields: err
func (_m *Consumer[StateT, VoteT]) OnInvalidStateDetected(err *models.InvalidProposalError[StateT, VoteT]) {
_m.Called(err)
}
// OnLocalTimeout provides a mock function with given fields: currentRank
func (_m *Consumer[StateT, VoteT]) OnLocalTimeout(currentRank uint64) {
_m.Called(currentRank)
}
// OnOwnProposal provides a mock function with given fields: proposal, targetPublicationTime
func (_m *Consumer[StateT, VoteT]) OnOwnProposal(proposal *models.SignedProposal[StateT, VoteT], targetPublicationTime time.Time) {
_m.Called(proposal, targetPublicationTime)
}
// OnOwnTimeout provides a mock function with given fields: timeout
func (_m *Consumer[StateT, VoteT]) OnOwnTimeout(timeout *models.TimeoutState[VoteT]) {
_m.Called(timeout)
}
// OnOwnVote provides a mock function with given fields: vote, recipientID
func (_m *Consumer[StateT, VoteT]) OnOwnVote(vote *VoteT, recipientID models.Identity) {
_m.Called(vote, recipientID)
}
// OnPartialTimeoutCertificate provides a mock function with given fields: currentRank, partialTimeoutCertificate
func (_m *Consumer[StateT, VoteT]) OnPartialTimeoutCertificate(currentRank uint64, partialTimeoutCertificate *consensus.PartialTimeoutCertificateCreated) {
_m.Called(currentRank, partialTimeoutCertificate)
}
// OnQuorumCertificateTriggeredRankChange provides a mock function with given fields: oldRank, newRank, qc
func (_m *Consumer[StateT, VoteT]) OnQuorumCertificateTriggeredRankChange(oldRank uint64, newRank uint64, qc models.QuorumCertificate) {
_m.Called(oldRank, newRank, qc)
}
// OnRankChange provides a mock function with given fields: oldRank, newRank
func (_m *Consumer[StateT, VoteT]) OnRankChange(oldRank uint64, newRank uint64) {
_m.Called(oldRank, newRank)
}
// OnReceiveProposal provides a mock function with given fields: currentRank, proposal
func (_m *Consumer[StateT, VoteT]) OnReceiveProposal(currentRank uint64, proposal *models.SignedProposal[StateT, VoteT]) {
_m.Called(currentRank, proposal)
}
// OnReceiveQuorumCertificate provides a mock function with given fields: currentRank, qc
func (_m *Consumer[StateT, VoteT]) OnReceiveQuorumCertificate(currentRank uint64, qc models.QuorumCertificate) {
_m.Called(currentRank, qc)
}
// OnReceiveTimeoutCertificate provides a mock function with given fields: currentRank, tc
func (_m *Consumer[StateT, VoteT]) OnReceiveTimeoutCertificate(currentRank uint64, tc models.TimeoutCertificate) {
_m.Called(currentRank, tc)
}
// OnStart provides a mock function with given fields: currentRank
func (_m *Consumer[StateT, VoteT]) OnStart(currentRank uint64) {
_m.Called(currentRank)
}
// OnStartingTimeout provides a mock function with given fields: startTime, endTime
func (_m *Consumer[StateT, VoteT]) OnStartingTimeout(startTime time.Time, endTime time.Time) {
_m.Called(startTime, endTime)
}
// OnStateIncorporated provides a mock function with given fields: _a0
func (_m *Consumer[StateT, VoteT]) OnStateIncorporated(_a0 *models.State[StateT]) {
_m.Called(_a0)
}
// OnTimeoutCertificateTriggeredRankChange provides a mock function with given fields: oldRank, newRank, tc
func (_m *Consumer[StateT, VoteT]) OnTimeoutCertificateTriggeredRankChange(oldRank uint64, newRank uint64, tc models.TimeoutCertificate) {
_m.Called(oldRank, newRank, tc)
}
// NewConsumer creates a new instance of Consumer. It also registers a testing interface on the mock and a cleanup function to assert the mocks expectations.
// The first argument is typically a *testing.T value.
func NewConsumer[StateT models.Unique, VoteT models.Unique](t interface {
mock.TestingT
Cleanup(func())
}) *Consumer[StateT, VoteT] {
mock := &Consumer[StateT, VoteT]{}
mock.Mock.Test(t)
t.Cleanup(func() { mock.AssertExpectations(t) })
return mock
}

Some files were not shown because too many files have changed in this diff.