Update docs.

FiveMovesAhead 2024-05-17 21:31:03 +08:00
parent 39e40f4171
commit f321a9aecd
5 changed files with 24 additions and 31 deletions

View File

@@ -27,7 +27,7 @@ $$num\textunderscore{ }clauses = floor(num\textunderscore{ }variables \cdot \fra
Where $floor$ is a function that rounds a floating point number down to the nearest integer.
-Consider an example `Challenge` instance with `num_variables=4` and `clauses_to_variables_percent=75`:
+Consider an example instance with `num_variables=4` and `clauses_to_variables_percent=75`:
```
clauses = [
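    # Illustrative note: with num_variables=4 and clauses_to_variables_percent=75,
    # the formula above gives num_clauses = floor(4 * 75 / 100) = 3,
    # so this instance has 3 clauses.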

View File

@@ -17,7 +17,7 @@ The following is an example of the Capacitated Vehicle Routing problem with conf
The demand of each customer is selected independently and uniformly at random from the range [25, 50]. The maximum capacity of each vehicle is set to 100.
-Consider an example `Challenge` instance with `num_nodes=5` and `better_than_baseline=0.8` Let the baseline value be 175:
+Consider an example instance with `num_nodes=5`, `better_than_baseline=0.8`, and `baseline=175`:
```
demands = [0, 25, 30, 40, 50] # a N array where index (i) is the demand at node i
@@ -29,7 +29,7 @@ distance_matrix = [ # a NxN symmetric matrix where index (i,j) is distance from
[40, 35, 30, 10, 0]
]
max_capacity = 100 # total demand for each route must not exceed this number
-max_total_distance = baseline*better_than_baseline = 140 # routes must have total distance under this number to be a solution
+max_total_distance = baseline*better_than_baseline = 140 # routes must have a total distance of this number or less to be a solution
```
The depot is the first node (node 0) with demand 0. The vehicle capacity is set to 100. In this example, routes must have a total distance of 140 or less to be a solution.
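As a minimal sketch of how such a check could look (assuming the distance is summed over all routes, and with `is_solution` and `routes` as illustrative names rather than anything from the TIG codebase):
```
def is_solution(routes, demands, distance_matrix, max_capacity, max_total_distance):
    # routes: list of routes, each listing the customer nodes visited between
    # leaving and returning to the depot (node 0)
    total_distance = 0
    for route in routes:
        # the demand served on a route must not exceed the vehicle capacity
        if sum(demands[node] for node in route) > max_capacity:
            return False
        stops = [0] + route + [0]  # every route starts and ends at the depot
        total_distance += sum(distance_matrix[a][b] for a, b in zip(stops, stops[1:]))
    # the combined route distance must not exceed baseline * better_than_baseline
    return total_distance <= max_total_distance
```
A full verifier would also confirm that every customer appears in exactly one route.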

View File

@@ -32,7 +32,7 @@ Each challenge also stipulates the method for verifying whether a "solution" ind
Notes:
-- The minimum difficulty of each challenge ensures a minimum of 10^15 unique instances. This number increases further as difficulty increases.
+- The minimum difficulty of each challenge ensures a minimum of $10^{15}$ unique instances. This number increases further as difficulty increases.
- Some instances may lack a solution, while others may possess multiple solutions.
- Algorithms are not guaranteed to find a solution.

View File

@@ -43,7 +43,7 @@ A Benchmarker must select their settings, comprising 5 fields, before benchmarki
TIG makes it intractable for Benchmarkers to re-use solutions:
-1. Challenge instances are deterministically pseudo-randomly generated, with at least 10^15 unique instances even at minimum difficulty.
+1. Challenge instances are deterministically pseudo-randomly generated, with at least $10^{15}$ unique instances even at minimum difficulty.
2. Instance seeds are computed by hashing benchmark settings and XOR-ing with a nonce, ensuring randomness.
During benchmarking, Benchmarkers iterate over nonces for seed and instance generation.
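A minimal sketch of that seed derivation, assuming SHA-256 over a serialised settings blob and a 64-bit seed (the function name and serialisation are illustrative, not TIG's actual API):
```
import hashlib

def instance_seed(settings_bytes, nonce):
    # hash the serialised benchmark settings, then XOR with the nonce
    settings_hash = int.from_bytes(hashlib.sha256(settings_bytes).digest()[:8], "big")
    return settings_hash ^ nonce

# a Benchmarker iterates over nonces, generating one instance per seed
seeds = [instance_seed(b"<serialised settings>", nonce) for nonce in range(1000)]
```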
@@ -94,7 +94,7 @@ A benchmark, a lightweight batch of valid solutions found using identical settin
Upon submission, a benchmark enters the mempool for inclusion in the next block. When the benchmark is confirmed into a block, up to three unique nonces are sampled from its metadata, and the corresponding solution data must be submitted by the Benchmarker.
-TIG refers to this sampling as probabilistic verification, and ensures its unpredictability by using both the new block id and benchmark id in seeding the pseudo-random number generator. Probabilistic verification not only drastically reduces the amount of solution data that gets submitted to TIG, but also renders it irrational to fraudulently “pad” a benchmark with fake solutions:
+TIG refers to this sampling as probabilistic verification, and ensures its unpredictability by using both the new block id and benchmark id in seeding the pseudo-random number generator. Probabilistic verification not only drastically reduces the amount of solution data that gets submitted to TIG, but also renders it irrational to fraudulently "pad" a benchmark with fake solutions:
If a Benchmarker computes N solutions and pads the benchmark with M fake solutions for a total of N + M solutions, then the chance of getting away with this is $\left(\frac{N}{N+M}\right)^3$. The expected payoff for honesty (N solutions always accepted) is always greater than the expected payoff for fraudulence (N+M solutions sometimes accepted):
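As an illustration (numbers chosen purely for exposition, not from the whitepaper), take $N = 10$ genuine solutions padded with $M = 5$ fake ones:
$$payoff_{fraud} = (N+M)\left(\frac{N}{N+M}\right)^3 = 15 \cdot \left(\frac{10}{15}\right)^3 \approx 4.4 < 10 = payoff_{honesty}$$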

View File

@@ -1,45 +1,38 @@
# 5. Optimisable Proof-of-Work
-Optimisable proof-of-work (OPoW) distinctively requires multiple proof-of-works to be featured, “binding” them in such a way that optimisations to the proof-of-work algorithms do not cause instability/centralisation. This binding is embodied in the calculation of influence for Benchmarkers. The adoption of an algorithm is then calculated using each Benchmarkers influence and the fraction of qualifiers they computed using that algorithm.
+Optimisable proof-of-work (OPoW) can uniquely integrate multiple proof-of-works, “binding” them in such a way that optimisations to the proof-of-work algorithms do not cause instability/centralisation. This binding is embodied in the calculation of influence for Benchmarkers. The adoption of an algorithm is then calculated using each Benchmarker's influence and the fraction of qualifiers they computed using that algorithm.
-## 5.1. Influence
+## 5.1. Rewards for Benchmarkers
OPoW introduces a novel metric, imbalance, aimed at quantifying the degree to which a Benchmarker spreads their computational work between challenges unevenly. This is only possible when there are multiple proof-of-works.
-The metric is defined as $imbalance = \frac{CV(\bf{\%qualifiers})^2}{N-1}$ where CV is coefficient of variation, %qualifiers is the fraction of qualifiers found by a Benchmarker across challenges, and N is the number of active challenges. This metric ranges from 0 to 1, where lower values signify less centralisation.
-Penalising imbalance is achieved through $imbalance\textunderscore{ }penalty = 1 - exp(-k \cdot imbalance)$, where k is a coefficient (currently set to 1.5). The modifier ranges from 1 to 0, where 0 signifies no penalty.
-When block rewards are distributed pro-rata amongst Benchmarkers based on influence, where influence has imbalance penalty applied, the result is that Benchmarkers are incentivised to minimise their imbalance as to maximise their earnings:
-$$weight = mean(\bf{\%qualifiers}) \cdot (1 - imbalance\textunderscore{ }penalty)$$
-$$influence = \frac{\bf{weights}}{sum(\bf{weights})}$$
-Where:
-* $\bf{\%qualifiers}$ is particular to a Benchmarker, where elements correspond to the fraction of qualifiers found by a Benchmarker for a challenge
-* $\bf{weights}$ is a set, where elements correspond to the weight for a particular Benchmarker
+The metric is defined as:
+$$imbalance = \frac{C_V(\%qualifiers)^2}{N-1}$$
+where $C_V$ is the coefficient of variation, %qualifiers is the fraction of qualifiers found by a Benchmarker across challenges, and N is the number of active challenges. This metric ranges from 0 to 1, where lower values signify less centralisation.
+Penalising imbalance is achieved through:
+$$imbalance\textunderscore{ }penalty = 1 - exp(-k \cdot imbalance)$$
+where k is a coefficient (currently set to 1.5). The imbalance penalty ranges from 0 to 1, where 0 signifies no penalty.
+When block rewards are distributed pro-rata amongst Benchmarkers after applying their imbalance penalty, the result is that Benchmarkers are incentivised to minimise their imbalance so as to maximise their reward:
+$$benchmarker\textunderscore{ }reward \propto mean(\%qualifiers) \cdot (1 - imbalance\textunderscore{ }penalty)$$
+where $\%qualifiers$ is the fraction of qualifiers found by a Benchmarker for a particular challenge.
Notes:
- A Benchmarker focusing solely on a single challenge will exhibit a maximum imbalance and therefore maximum penalty.
- Conversely, a Benchmarker with an equal fraction of qualifiers across all challenges will demonstrate a minimum imbalance value of 0.
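A short sketch of the calculation above (assuming at least two active challenges and a coefficient of variation based on the population standard deviation; the names are illustrative, not TIG's implementation):
```
import math
import statistics

def benchmarker_reward_weight(pct_qualifiers, k=1.5):
    # pct_qualifiers: the Benchmarker's fraction of qualifiers per active challenge
    n = len(pct_qualifiers)  # assumes n >= 2 active challenges
    mean = statistics.mean(pct_qualifiers)
    if mean == 0:
        return 0.0
    cv = statistics.pstdev(pct_qualifiers) / mean      # coefficient of variation
    imbalance = cv ** 2 / (n - 1)                      # ranges from 0 to 1
    imbalance_penalty = 1 - math.exp(-k * imbalance)   # 0 means no penalty
    return mean * (1 - imbalance_penalty)              # rewards are pro-rata to this

# spreading work evenly avoids any penalty; focusing on one challenge maximises it
assert benchmarker_reward_weight([0.2, 0.2, 0.2]) > benchmarker_reward_weight([0.6, 0.0, 0.0])
```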
-## 5.2. Adoption
+## 5.2. Rewards for Innovators
-Any active solution can be assumed to already have undergone verification of the algorithm used. This allows the straightforward use of Benchmarkers' influence for calculating an algorithm's weight:
-$$weight = sum(\bf{influences} \cdot \bf{algorithm\textunderscore{ }\%qualifiers})$$
-Where:
-* $\bf{influences}$ is a set, where elements correspond to the influence for a particular Benchmarker
-* $\bf{algorithm\textunderscore{ }\%qualifiers}$ is a set, where elements correspond to the fraction of qualifiers found by a Benchmarker using a particular algorithm
-Then, for each challenge, adoption is calculated:
-$$adoption = \frac{\bf{weights}}{sum(\bf{weights})}$$
-Where:
-* $\bf{\%weights}$ is a set, where elements correspond to the weight for a particular algorithm
-By integrating influence into the adoption calculation, TIG effectively guards against potential manipulation by Benchmarkers.
+In order to guard against potential manipulation of algorithm adoption by Benchmarkers, Innovator rewards are linked to Benchmarker rewards (where imbalance is heavily penalised):
+$$innovator\textunderscore{ }reward \propto \sum_{benchmarkers} influence \cdot algorithm\textunderscore{ }\%qualifiers$$
+where $algorithm\textunderscore{ }\%qualifiers$ is the fraction of qualifiers found by a Benchmarker using a particular algorithm (the algorithm submitted by the Innovator).
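A small sketch of that weighting with illustrative values (the data layout and numbers are assumptions for exposition only):
```
# each entry: a Benchmarker's influence and the fraction of their qualifiers
# computed with the Innovator's algorithm (illustrative values only)
benchmarkers = [
    {"influence": 0.5, "algorithm_pct_qualifiers": 0.8},
    {"influence": 0.3, "algorithm_pct_qualifiers": 0.0},
    {"influence": 0.2, "algorithm_pct_qualifiers": 1.0},
]

# the Innovator's reward is proportional to this influence-weighted sum
algorithm_weight = sum(b["influence"] * b["algorithm_pct_qualifiers"] for b in benchmarkers)
# 0.5 * 0.8 + 0.3 * 0.0 + 0.2 * 1.0 = 0.6
```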