Merge branch 'main' of https://github.com/tig-foundation/tig-monorepo

This commit is contained in:
François Patron 2025-02-02 17:06:05 +01:00
commit c4d7ada8b4
68 changed files with 6149 additions and 716 deletions

View File

@ -38,7 +38,7 @@ jobs:
echo "CHALLENGE=$CHALLENGE" >> $GITHUB_ENV
echo "ALGORITHM=$ALGORITHM" >> $GITHUB_ENV
echo "WASM_PATH=$ALGORITHM" >> $GITHUB_ENV
- uses: dtolnay/rust-toolchain@stable
- uses: dtolnay/rust-toolchain@1.82.0
with:
targets: wasm32-wasi
# - name: Install CUDA

_config.yml Normal file
View File

@ -0,0 +1,2 @@
exclude:
- tig-benchmarker/

View File

@ -1,50 +1,54 @@
# Knapsack Problem
The quadratic knapsack problem is one of the most popular variants of the single knapsack problem with applications in many optimization problems. The aim is to maximise the value of individual items placed in the knapsack while satisfying a weight constraint. However, pairs of items also have interaction values which may be negative or positive that are added to the total value within the knapsack.
The quadratic knapsack problem is one of the most popular variants of the single knapsack problem, with applications in many optimization contexts. The aim is to maximize the value of individual items placed in the knapsack while satisfying a weight constraint. However, pairs of items also have positive interaction values, contributing to the total value within the knapsack.
# Example
## Challenge Overview
For our challenge, we use a version of the quadratic knapsack problem with configurable difficulty, where the following two parameters can be adjusted in order to vary the difficulty of the challenge:
- Parameter 1: $num\textunderscore{ }items$ is the number of items from which you need to select a subset to put in the knapsack.
- Parameter 2: $better\textunderscore{ }than\textunderscore{ }baseline \geq 1$ is the factor by which a solution must be better than the baseline value [link TIG challenges for explanation of baseline value].
- Parameter 2: $better\textunderscore{ }than\textunderscore{ }baseline \geq 1$ is the factor by which a solution must be better than the baseline value.
The larger $num\textunderscore{ }items$ is, the greater the number of possible $S_{knapsack}$, making the challenge more difficult. Likewise, the higher $better\textunderscore{ }than\textunderscore{ }baseline$ is, the less likely a given $S_{knapsack}$ is to be a solution, making the challenge more difficult.
The weight $w_i$ of each of the $num\textunderscore{ }items$ is an integer, chosen independently, uniformly at random, and such that each of the item weights $1 <= w_i <= 50$, for $i=1,2,...,num\textunderscore{ }items$. The individual values of the items $v_i$ are selected by random from the range $50 <= v_i <= 100$, and the interaction values of pairs of items $V_{ij}$ are selected by random from the range $-50 <= V_{ij} <= 50$.
The weight $w_i$ of each of the $num\textunderscore{ }items$ is an integer chosen independently and uniformly at random such that $1 <= w_i <= 50$, for $i=1,2,...,num\textunderscore{ }items$. The item values are nonzero with a density of 25%, meaning each has a 25% probability of being nonzero. The nonzero individual values of the items, $v_i$, and the nonzero interaction values of pairs of items, $V_{ij}$, are selected at random from the range $[1,100]$.
The total value of a knapsack is determined by summing up the individual values of items in the knapsack, as well as the interaction values of every pair of items $(i,j)$ where $i > j$ in the knapsack:
The total value of a knapsack is determined by summing up the individual values of items in the knapsack, as well as the interaction values of every pair of items \((i,j)\), where \( i > j \), in the knapsack:
$$V_{knapsack} = \sum_{i \in knapsack}{v_i} + \sum_{(i,j)\in knapsack}{V_{ij}}$$
$$
V_{knapsack} = \sum_{i \in knapsack}{v_i} + \sum_{(i,j)\in knapsack}{V_{ij}}
$$
We impose a weight constraint $W(S_{knapsack}) <= 0.5 \cdot W(S_{all})$, where the knapsack can hold at most half the total weight of all items.
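To make the instance description concrete, here is a minimal sketch in Rust (using the `rand` crate) of how such an instance could be sampled. This is illustrative only, not the actual TIG generator; the `Instance` struct, `generate_instance` function, and the 25% density assumed for interaction values are all assumptions.

```rust
// Illustrative sketch only (not the actual TIG generator).
use rand::Rng;

struct Instance {
    weights: Vec<u32>,                 // w_i, uniform in 1..=50
    values: Vec<u32>,                  // v_i, 25% nonzero, nonzero values in 1..=100
    interaction_values: Vec<Vec<i32>>, // V_ij, symmetric, zero diagonal (25% density assumed)
    max_weight: u32,                   // half the total weight of all items
}

fn generate_instance(num_items: usize, rng: &mut impl Rng) -> Instance {
    let weights: Vec<u32> = (0..num_items).map(|_| rng.gen_range(1..=50)).collect();
    let values: Vec<u32> = (0..num_items)
        .map(|_| if rng.gen_bool(0.25) { rng.gen_range(1..=100) } else { 0 })
        .collect();
    let mut interaction_values = vec![vec![0i32; num_items]; num_items];
    for i in 0..num_items {
        for j in (i + 1)..num_items {
            if rng.gen_bool(0.25) {
                let v: i32 = rng.gen_range(1..=100);
                interaction_values[i][j] = v;
                interaction_values[j][i] = v; // stored symmetrically
            }
        }
    }
    let max_weight = weights.iter().sum::<u32>() / 2;
    Instance { weights, values, interaction_values, max_weight }
}
```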
Consider an example of a challenge instance with `num_items=4` and `better_than_baseline = 1.10`. Let the baseline value be 150:
# Example
Consider an example of a challenge instance with `num_items=4` and `better_than_baseline = 1.50`. Let the baseline value be 46:
```
weights = [26, 20, 39, 13]
individual_values = [63, 87, 52, 97]
interaction_values = [ 0, 23, -18, -37
23, 0, 42, -28
-18, 42, 0, 32
-37, -28, 32, 0]
max_weight = 60
min_value = baseline*better_than_baseline = 165
weights = [39, 29, 15, 43]
individual_values = [0, 14, 0, 75]
interaction_values = [ 0, 0, 0, 0
0, 0, 32, 0
0, 32, 0, 0
0, 0, 0, 0]
max_weight = 63
min_value = baseline*better_than_baseline = 69
```
The objective is to find a set of items where the total weight is at most 60 but has a total value of at least 165.
The objective is to find a set of items whose total weight is at most 63 and whose total value is at least 69.
Now consider the following selection:
```
selected_items = [0, 1, 3]
selected_items = [2, 3]
```
When evaluating this selection, we can confirm that the total weight is less than 60, and the total value is more than 165, thereby this selection of items is a solution:
When evaluating this selection, we can confirm that the total weight is at most 63 and the total value is at least 69, so this selection of items is a solution:
* Total weight = 26 + 20 + 13 = 59
* Total value = 63 + 52 + 97 + 23 - 37 - 28 = 170
* Total weight = 15 + 43 = 58
* Total value = 0 + 75 + 0 = 75
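
The evaluation above can be reproduced with a short sketch; the `evaluate` function and variable names are illustrative, not the TIG API, and the data is copied from the example instance:

```rust
// Sketch: compute total weight and value of a selection for the example instance.
fn evaluate(
    selected: &[usize],
    weights: &[u32],
    values: &[u32],
    interaction_values: &[Vec<i32>],
) -> (u32, i32) {
    let total_weight: u32 = selected.iter().map(|&i| weights[i]).sum();
    let mut total_value: i32 = selected.iter().map(|&i| values[i] as i32).sum();
    for (a, &i) in selected.iter().enumerate() {
        for &j in &selected[a + 1..] {
            total_value += interaction_values[i][j]; // each unordered pair counted once
        }
    }
    (total_weight, total_value)
}

fn main() {
    let weights = vec![39, 29, 15, 43];
    let values = vec![0, 14, 0, 75];
    let interaction_values = vec![
        vec![0, 0, 0, 0],
        vec![0, 0, 32, 0],
        vec![0, 32, 0, 0],
        vec![0, 0, 0, 0],
    ];
    let (w, v) = evaluate(&[2, 3], &weights, &values, &interaction_values);
    assert!(w <= 63 && v >= 69); // weight 58, value 75: a valid solution
}
```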
# Our Challenge
In TIG, the baseline value is determined by a greedy algorithm that simply iterates through items sorted by potential value to weight ratio, adding them if knapsack is still below the weight constraint.
In TIG, the baseline value is determined by a two-stage approach. First, items are selected based on their value-to-weight ratio, including interaction values, until the capacity is reached. Then, a tabu-based local search refines the solution by swapping items to improve value while avoiding reversals, with early termination for unpromising swaps.
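The exact baseline code lives in the TIG repository; purely as an illustration of the first (greedy) stage described above, a sketch might look like the following, where `greedy_baseline` and all other names are assumptions:

```rust
// Illustrative sketch of the greedy first stage (not the actual TIG baseline):
// repeatedly add the remaining item with the best value-to-weight ratio,
// counting interaction values with already selected items, until no item fits.
fn greedy_baseline(
    weights: &[u32],
    values: &[u32],
    interaction_values: &[Vec<i32>],
    max_weight: u32,
) -> (Vec<usize>, i32) {
    let n = weights.len();
    let mut selected: Vec<usize> = Vec::new();
    let mut remaining: Vec<usize> = (0..n).collect();
    let mut total_weight = 0u32;
    let mut total_value = 0i32;
    loop {
        // Recompute each remaining item's ratio against the current selection.
        let best = remaining
            .iter()
            .enumerate()
            .filter(|&(_, &i)| total_weight + weights[i] <= max_weight)
            .map(|(pos, &i)| {
                let gain = values[i] as i32
                    + selected.iter().map(|&j| interaction_values[i][j]).sum::<i32>();
                (pos, i, gain as f64 / weights[i] as f64)
            })
            .max_by(|a, b| a.2.partial_cmp(&b.2).unwrap());
        match best {
            Some((pos, i, _)) => {
                total_value += values[i] as i32
                    + selected.iter().map(|&j| interaction_values[i][j]).sum::<i32>();
                total_weight += weights[i];
                selected.push(i);
                remaining.swap_remove(pos);
            }
            None => break, // no remaining item fits under the weight constraint
        }
    }
    (selected, total_value)
}
```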

View File

@ -76,6 +76,12 @@ if ! is_positive_integer "$num_workers"; then
echo "Error: Number of workers must be a positive integer."
exit 1
fi
read -p "Enter max fuel (default is 10000000000): " max_fuel
max_fuel=${max_fuel:-10000000000}
if ! is_positive_integer "$max_fuel"; then
echo "Error: Max fuel must be a positive integer."
exit 1
fi
read -p "Enable debug mode? (leave blank to disable) " enable_debug
if [[ -n $enable_debug ]]; then
debug_mode=true
@ -117,7 +123,7 @@ while [ $remaining_nonces -gt 0 ]; do
start_time=$(date +%s%3N)
stdout=$(mktemp)
stderr=$(mktemp)
./target/release/tig-worker compute_batch "$SETTINGS" "random_string" $current_nonce $nonces_to_compute $power_of_2_nonces $REPO_DIR/tig-algorithms/wasm/$CHALLENGE/$ALGORITHM.wasm --workers $nonces_to_compute >"$stdout" 2>"$stderr"
./target/release/tig-worker compute_batch "$SETTINGS" "random_string" $current_nonce $nonces_to_compute $power_of_2_nonces $REPO_DIR/tig-algorithms/wasm/$CHALLENGE/$ALGORITHM.wasm --workers $nonces_to_compute --fuel $max_fuel >"$stdout" 2>"$stderr"
exit_code=$?
output_stdout=$(cat "$stdout")
output_stderr=$(cat "$stderr")

View File

@ -108,7 +108,8 @@ pub use classic_quadkp as c003_a051;
// c003_a053
// c003_a054
pub mod quadkp_improved;
pub use quadkp_improved as c003_a054;
// c003_a055

View File

@ -0,0 +1,188 @@
/*!
Copyright 2024 Rootz
Licensed under the TIG Benchmarker Outbound Game License v1.0 (the "License"); you
may not use this file except in compliance with the License. You may obtain a copy
of the License at
https://github.com/tig-foundation/tig-monorepo/tree/main/docs/licenses
Unless required by applicable law or agreed to in writing, software distributed
under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
CONDITIONS OF ANY KIND, either express or implied. See the License for the specific
language governing permissions and limitations under the License.
*/
// TIG's UI uses the pattern `tig_challenges::<challenge_name>` to automatically detect your algorithm's challenge
use anyhow::Result;
use rand::{SeedableRng, Rng, rngs::StdRng};
use tig_challenges::knapsack::{Challenge, Solution};
pub fn solve_challenge(challenge: &Challenge) -> Result<Option<Solution>> {
let vertex_count = challenge.weights.len();
let mut item_scores: Vec<(usize, f32)> = (0..vertex_count)
.map(|index| {
let interaction_sum: i32 = challenge.interaction_values[index].iter().sum();
let secondary_score = challenge.values[index] as f32 / challenge.weights[index] as f32;
let combined_score = (challenge.values[index] as f32 * 0.75 + interaction_sum as f32 * 0.15 + secondary_score * 0.1)
/ challenge.weights[index] as f32;
(index, combined_score)
})
.collect();
item_scores.sort_unstable_by(|a, b| b.1.partial_cmp(&a.1).unwrap());
let mut selected_items = Vec::with_capacity(vertex_count);
let mut unselected_items = Vec::with_capacity(vertex_count);
let mut current_weight = 0;
let mut current_value = 0;
for &(index, _) in &item_scores {
if current_weight + challenge.weights[index] <= challenge.max_weight {
current_weight += challenge.weights[index];
current_value += challenge.values[index] as i32;
for &selected in &selected_items {
current_value += challenge.interaction_values[index][selected];
}
selected_items.push(index);
} else {
unselected_items.push(index);
}
}
let mut mutation_rates = vec![0; vertex_count];
for index in 0..vertex_count {
mutation_rates[index] = challenge.values[index] as i32;
for &selected in &selected_items {
mutation_rates[index] += challenge.interaction_values[index][selected];
}
}
let max_generations = 150;
let mut cooling_schedule = vec![0; vertex_count];
let mut rng = StdRng::seed_from_u64(challenge.seed[0] as u64);
for generation in 0..max_generations {
let mut best_gain = 0;
let mut best_swap = None;
for (u_index, &mutant) in unselected_items.iter().enumerate() {
if cooling_schedule[mutant] > 0 {
continue;
}
let mutant_fitness = mutation_rates[mutant];
let extra_weight = challenge.weights[mutant] as i32 - (challenge.max_weight as i32 - current_weight as i32);
if mutant_fitness < 0 {
continue;
}
for (c_index, &selected) in selected_items.iter().enumerate() {
if cooling_schedule[selected] > 0 {
continue;
}
if extra_weight > 0 && (challenge.weights[selected] as i32) < extra_weight {
continue;
}
let interaction_penalty = (challenge.interaction_values[mutant][selected] as f32 * 0.3) as i32;
let fitness_gain = mutant_fitness - mutation_rates[selected] - interaction_penalty;
if fitness_gain > best_gain {
best_gain = fitness_gain;
best_swap = Some((u_index, c_index));
}
}
}
if let Some((u_index, c_index)) = best_swap {
let added_item = unselected_items[u_index];
let removed_item = selected_items[c_index];
selected_items.swap_remove(c_index);
unselected_items.swap_remove(u_index);
selected_items.push(added_item);
unselected_items.push(removed_item);
current_value += best_gain;
current_weight = current_weight + challenge.weights[added_item] - challenge.weights[removed_item];
if current_weight > challenge.max_weight {
continue;
}
for index in 0..vertex_count {
mutation_rates[index] += challenge.interaction_values[index][added_item]
- challenge.interaction_values[index][removed_item];
}
cooling_schedule[added_item] = 3;
cooling_schedule[removed_item] = 3;
}
if current_value as u32 >= challenge.min_value {
return Ok(Some(Solution { items: selected_items }));
}
for cooling_rate in cooling_schedule.iter_mut() {
*cooling_rate = if *cooling_rate > 0 { *cooling_rate - 1 } else { 0 };
}
if current_value as u32 > (challenge.min_value * 9 / 10) {
let high_potential_items: Vec<usize> = unselected_items
.iter()
.filter(|&&i| challenge.values[i] as i32 > (challenge.min_value as i32 / 4))
.copied()
.collect();
for &item in high_potential_items.iter().take(2) {
if current_weight + challenge.weights[item] <= challenge.max_weight {
selected_items.push(item);
unselected_items.retain(|&x| x != item);
current_weight += challenge.weights[item];
current_value += challenge.values[item] as i32;
for &selected in &selected_items {
if selected != item {
current_value += challenge.interaction_values[item][selected];
}
}
if current_value as u32 >= challenge.min_value {
return Ok(Some(Solution { items: selected_items }));
}
}
}
}
}
if current_value as u32 >= challenge.min_value && current_weight <= challenge.max_weight {
Ok(Some(Solution { items: selected_items }))
} else {
Ok(None)
}
}
#[cfg(feature = "cuda")]
mod gpu_optimisation {
use super::*;
use cudarc::driver::*;
use std::{collections::HashMap, sync::Arc};
use tig_challenges::CudaKernel;
pub const KERNEL: Option<CudaKernel> = None;
pub fn cuda_solve_challenge(
challenge: &Challenge,
dev: &Arc<CudaDevice>,
mut funcs: HashMap<&'static str, CudaFunction>,
) -> anyhow::Result<Option<Solution>> {
solve_challenge(challenge)
}
}
#[cfg(feature = "cuda")]
pub use gpu_optimisation::{cuda_solve_challenge, KERNEL};

View File

@ -0,0 +1,188 @@
/*!
Copyright 2024 Rootz
Licensed under the TIG Commercial License v1.0 (the "License"); you
may not use this file except in compliance with the License. You may obtain a copy
of the License at
https://github.com/tig-foundation/tig-monorepo/tree/main/docs/licenses
Unless required by applicable law or agreed to in writing, software distributed
under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
CONDITIONS OF ANY KIND, either express or implied. See the License for the specific
language governing permissions and limitations under the License.
*/
// TIG's UI uses the pattern `tig_challenges::<challenge_name>` to automatically detect your algorithm's challenge
use anyhow::Result;
use rand::{SeedableRng, Rng, rngs::StdRng};
use tig_challenges::knapsack::{Challenge, Solution};
pub fn solve_challenge(challenge: &Challenge) -> Result<Option<Solution>> {
let vertex_count = challenge.weights.len();
let mut item_scores: Vec<(usize, f32)> = (0..vertex_count)
.map(|index| {
let interaction_sum: i32 = challenge.interaction_values[index].iter().sum();
let secondary_score = challenge.values[index] as f32 / challenge.weights[index] as f32;
let combined_score = (challenge.values[index] as f32 * 0.75 + interaction_sum as f32 * 0.15 + secondary_score * 0.1)
/ challenge.weights[index] as f32;
(index, combined_score)
})
.collect();
item_scores.sort_unstable_by(|a, b| b.1.partial_cmp(&a.1).unwrap());
let mut selected_items = Vec::with_capacity(vertex_count);
let mut unselected_items = Vec::with_capacity(vertex_count);
let mut current_weight = 0;
let mut current_value = 0;
for &(index, _) in &item_scores {
if current_weight + challenge.weights[index] <= challenge.max_weight {
current_weight += challenge.weights[index];
current_value += challenge.values[index] as i32;
for &selected in &selected_items {
current_value += challenge.interaction_values[index][selected];
}
selected_items.push(index);
} else {
unselected_items.push(index);
}
}
let mut mutation_rates = vec![0; vertex_count];
for index in 0..vertex_count {
mutation_rates[index] = challenge.values[index] as i32;
for &selected in &selected_items {
mutation_rates[index] += challenge.interaction_values[index][selected];
}
}
let max_generations = 150;
let mut cooling_schedule = vec![0; vertex_count];
let mut rng = StdRng::seed_from_u64(challenge.seed[0] as u64);
for generation in 0..max_generations {
let mut best_gain = 0;
let mut best_swap = None;
for (u_index, &mutant) in unselected_items.iter().enumerate() {
if cooling_schedule[mutant] > 0 {
continue;
}
let mutant_fitness = mutation_rates[mutant];
let extra_weight = challenge.weights[mutant] as i32 - (challenge.max_weight as i32 - current_weight as i32);
if mutant_fitness < 0 {
continue;
}
for (c_index, &selected) in selected_items.iter().enumerate() {
if cooling_schedule[selected] > 0 {
continue;
}
if extra_weight > 0 && (challenge.weights[selected] as i32) < extra_weight {
continue;
}
let interaction_penalty = (challenge.interaction_values[mutant][selected] as f32 * 0.3) as i32;
let fitness_gain = mutant_fitness - mutation_rates[selected] - interaction_penalty;
if fitness_gain > best_gain {
best_gain = fitness_gain;
best_swap = Some((u_index, c_index));
}
}
}
if let Some((u_index, c_index)) = best_swap {
let added_item = unselected_items[u_index];
let removed_item = selected_items[c_index];
selected_items.swap_remove(c_index);
unselected_items.swap_remove(u_index);
selected_items.push(added_item);
unselected_items.push(removed_item);
current_value += best_gain;
current_weight = current_weight + challenge.weights[added_item] - challenge.weights[removed_item];
if current_weight > challenge.max_weight {
continue;
}
for index in 0..vertex_count {
mutation_rates[index] += challenge.interaction_values[index][added_item]
- challenge.interaction_values[index][removed_item];
}
cooling_schedule[added_item] = 3;
cooling_schedule[removed_item] = 3;
}
if current_value as u32 >= challenge.min_value {
return Ok(Some(Solution { items: selected_items }));
}
for cooling_rate in cooling_schedule.iter_mut() {
*cooling_rate = if *cooling_rate > 0 { *cooling_rate - 1 } else { 0 };
}
if current_value as u32 > (challenge.min_value * 9 / 10) {
let high_potential_items: Vec<usize> = unselected_items
.iter()
.filter(|&&i| challenge.values[i] as i32 > (challenge.min_value as i32 / 4))
.copied()
.collect();
for &item in high_potential_items.iter().take(2) {
if current_weight + challenge.weights[item] <= challenge.max_weight {
selected_items.push(item);
unselected_items.retain(|&x| x != item);
current_weight += challenge.weights[item];
current_value += challenge.values[item] as i32;
for &selected in &selected_items {
if selected != item {
current_value += challenge.interaction_values[item][selected];
}
}
if current_value as u32 >= challenge.min_value {
return Ok(Some(Solution { items: selected_items }));
}
}
}
}
}
if current_value as u32 >= challenge.min_value && current_weight <= challenge.max_weight {
Ok(Some(Solution { items: selected_items }))
} else {
Ok(None)
}
}
#[cfg(feature = "cuda")]
mod gpu_optimisation {
use super::*;
use cudarc::driver::*;
use std::{collections::HashMap, sync::Arc};
use tig_challenges::CudaKernel;
pub const KERNEL: Option<CudaKernel> = None;
pub fn cuda_solve_challenge(
challenge: &Challenge,
dev: &Arc<CudaDevice>,
mut funcs: HashMap<&'static str, CudaFunction>,
) -> anyhow::Result<Option<Solution>> {
solve_challenge(challenge)
}
}
#[cfg(feature = "cuda")]
pub use gpu_optimisation::{cuda_solve_challenge, KERNEL};

View File

@ -0,0 +1,188 @@
/*!
Copyright 2024 Rootz
Licensed under the TIG Inbound Game License v1.0 or (at your option) any later
version (the "License"); you may not use this file except in compliance with the
License. You may obtain a copy of the License at
https://github.com/tig-foundation/tig-monorepo/tree/main/docs/licenses
Unless required by applicable law or agreed to in writing, software distributed
under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
CONDITIONS OF ANY KIND, either express or implied. See the License for the specific
language governing permissions and limitations under the License.
*/
// TIG's UI uses the pattern `tig_challenges::<challenge_name>` to automatically detect your algorithm's challenge
use anyhow::Result;
use rand::{SeedableRng, Rng, rngs::StdRng};
use tig_challenges::knapsack::{Challenge, Solution};
pub fn solve_challenge(challenge: &Challenge) -> Result<Option<Solution>> {
let vertex_count = challenge.weights.len();
let mut item_scores: Vec<(usize, f32)> = (0..vertex_count)
.map(|index| {
let interaction_sum: i32 = challenge.interaction_values[index].iter().sum();
let secondary_score = challenge.values[index] as f32 / challenge.weights[index] as f32;
let combined_score = (challenge.values[index] as f32 * 0.75 + interaction_sum as f32 * 0.15 + secondary_score * 0.1)
/ challenge.weights[index] as f32;
(index, combined_score)
})
.collect();
item_scores.sort_unstable_by(|a, b| b.1.partial_cmp(&a.1).unwrap());
let mut selected_items = Vec::with_capacity(vertex_count);
let mut unselected_items = Vec::with_capacity(vertex_count);
let mut current_weight = 0;
let mut current_value = 0;
for &(index, _) in &item_scores {
if current_weight + challenge.weights[index] <= challenge.max_weight {
current_weight += challenge.weights[index];
current_value += challenge.values[index] as i32;
for &selected in &selected_items {
current_value += challenge.interaction_values[index][selected];
}
selected_items.push(index);
} else {
unselected_items.push(index);
}
}
let mut mutation_rates = vec![0; vertex_count];
for index in 0..vertex_count {
mutation_rates[index] = challenge.values[index] as i32;
for &selected in &selected_items {
mutation_rates[index] += challenge.interaction_values[index][selected];
}
}
let max_generations = 150;
let mut cooling_schedule = vec![0; vertex_count];
let mut rng = StdRng::seed_from_u64(challenge.seed[0] as u64);
for generation in 0..max_generations {
let mut best_gain = 0;
let mut best_swap = None;
for (u_index, &mutant) in unselected_items.iter().enumerate() {
if cooling_schedule[mutant] > 0 {
continue;
}
let mutant_fitness = mutation_rates[mutant];
let extra_weight = challenge.weights[mutant] as i32 - (challenge.max_weight as i32 - current_weight as i32);
if mutant_fitness < 0 {
continue;
}
for (c_index, &selected) in selected_items.iter().enumerate() {
if cooling_schedule[selected] > 0 {
continue;
}
if extra_weight > 0 && (challenge.weights[selected] as i32) < extra_weight {
continue;
}
let interaction_penalty = (challenge.interaction_values[mutant][selected] as f32 * 0.3) as i32;
let fitness_gain = mutant_fitness - mutation_rates[selected] - interaction_penalty;
if fitness_gain > best_gain {
best_gain = fitness_gain;
best_swap = Some((u_index, c_index));
}
}
}
if let Some((u_index, c_index)) = best_swap {
let added_item = unselected_items[u_index];
let removed_item = selected_items[c_index];
selected_items.swap_remove(c_index);
unselected_items.swap_remove(u_index);
selected_items.push(added_item);
unselected_items.push(removed_item);
current_value += best_gain;
current_weight = current_weight + challenge.weights[added_item] - challenge.weights[removed_item];
if current_weight > challenge.max_weight {
continue;
}
for index in 0..vertex_count {
mutation_rates[index] += challenge.interaction_values[index][added_item]
- challenge.interaction_values[index][removed_item];
}
cooling_schedule[added_item] = 3;
cooling_schedule[removed_item] = 3;
}
if current_value as u32 >= challenge.min_value {
return Ok(Some(Solution { items: selected_items }));
}
for cooling_rate in cooling_schedule.iter_mut() {
*cooling_rate = if *cooling_rate > 0 { *cooling_rate - 1 } else { 0 };
}
if current_value as u32 > (challenge.min_value * 9 / 10) {
let high_potential_items: Vec<usize> = unselected_items
.iter()
.filter(|&&i| challenge.values[i] as i32 > (challenge.min_value as i32 / 4))
.copied()
.collect();
for &item in high_potential_items.iter().take(2) {
if current_weight + challenge.weights[item] <= challenge.max_weight {
selected_items.push(item);
unselected_items.retain(|&x| x != item);
current_weight += challenge.weights[item];
current_value += challenge.values[item] as i32;
for &selected in &selected_items {
if selected != item {
current_value += challenge.interaction_values[item][selected];
}
}
if current_value as u32 >= challenge.min_value {
return Ok(Some(Solution { items: selected_items }));
}
}
}
}
}
if current_value as u32 >= challenge.min_value && current_weight <= challenge.max_weight {
Ok(Some(Solution { items: selected_items }))
} else {
Ok(None)
}
}
#[cfg(feature = "cuda")]
mod gpu_optimisation {
use super::*;
use cudarc::driver::*;
use std::{collections::HashMap, sync::Arc};
use tig_challenges::CudaKernel;
pub const KERNEL: Option<CudaKernel> = None;
pub fn cuda_solve_challenge(
challenge: &Challenge,
dev: &Arc<CudaDevice>,
mut funcs: HashMap<&'static str, CudaFunction>,
) -> anyhow::Result<Option<Solution>> {
solve_challenge(challenge)
}
}
#[cfg(feature = "cuda")]
pub use gpu_optimisation::{cuda_solve_challenge, KERNEL};

View File

@ -0,0 +1,188 @@
/*!
Copyright 2024 Rootz
Licensed under the TIG Innovator Outbound Game License v1.0 (the "License"); you
may not use this file except in compliance with the License. You may obtain a copy
of the License at
https://github.com/tig-foundation/tig-monorepo/tree/main/docs/licenses
Unless required by applicable law or agreed to in writing, software distributed
under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
CONDITIONS OF ANY KIND, either express or implied. See the License for the specific
language governing permissions and limitations under the License.
*/
// TIG's UI uses the pattern `tig_challenges::<challenge_name>` to automatically detect your algorithm's challenge
use anyhow::Result;
use rand::{SeedableRng, Rng, rngs::StdRng};
use tig_challenges::knapsack::{Challenge, Solution};
pub fn solve_challenge(challenge: &Challenge) -> Result<Option<Solution>> {
let vertex_count = challenge.weights.len();
let mut item_scores: Vec<(usize, f32)> = (0..vertex_count)
.map(|index| {
let interaction_sum: i32 = challenge.interaction_values[index].iter().sum();
let secondary_score = challenge.values[index] as f32 / challenge.weights[index] as f32;
let combined_score = (challenge.values[index] as f32 * 0.75 + interaction_sum as f32 * 0.15 + secondary_score * 0.1)
/ challenge.weights[index] as f32;
(index, combined_score)
})
.collect();
item_scores.sort_unstable_by(|a, b| b.1.partial_cmp(&a.1).unwrap());
let mut selected_items = Vec::with_capacity(vertex_count);
let mut unselected_items = Vec::with_capacity(vertex_count);
let mut current_weight = 0;
let mut current_value = 0;
for &(index, _) in &item_scores {
if current_weight + challenge.weights[index] <= challenge.max_weight {
current_weight += challenge.weights[index];
current_value += challenge.values[index] as i32;
for &selected in &selected_items {
current_value += challenge.interaction_values[index][selected];
}
selected_items.push(index);
} else {
unselected_items.push(index);
}
}
let mut mutation_rates = vec![0; vertex_count];
for index in 0..vertex_count {
mutation_rates[index] = challenge.values[index] as i32;
for &selected in &selected_items {
mutation_rates[index] += challenge.interaction_values[index][selected];
}
}
let max_generations = 150;
let mut cooling_schedule = vec![0; vertex_count];
let mut rng = StdRng::seed_from_u64(challenge.seed[0] as u64);
for generation in 0..max_generations {
let mut best_gain = 0;
let mut best_swap = None;
for (u_index, &mutant) in unselected_items.iter().enumerate() {
if cooling_schedule[mutant] > 0 {
continue;
}
let mutant_fitness = mutation_rates[mutant];
let extra_weight = challenge.weights[mutant] as i32 - (challenge.max_weight as i32 - current_weight as i32);
if mutant_fitness < 0 {
continue;
}
for (c_index, &selected) in selected_items.iter().enumerate() {
if cooling_schedule[selected] > 0 {
continue;
}
if extra_weight > 0 && (challenge.weights[selected] as i32) < extra_weight {
continue;
}
let interaction_penalty = (challenge.interaction_values[mutant][selected] as f32 * 0.3) as i32;
let fitness_gain = mutant_fitness - mutation_rates[selected] - interaction_penalty;
if fitness_gain > best_gain {
best_gain = fitness_gain;
best_swap = Some((u_index, c_index));
}
}
}
if let Some((u_index, c_index)) = best_swap {
let added_item = unselected_items[u_index];
let removed_item = selected_items[c_index];
selected_items.swap_remove(c_index);
unselected_items.swap_remove(u_index);
selected_items.push(added_item);
unselected_items.push(removed_item);
current_value += best_gain;
current_weight = current_weight + challenge.weights[added_item] - challenge.weights[removed_item];
if current_weight > challenge.max_weight {
continue;
}
for index in 0..vertex_count {
mutation_rates[index] += challenge.interaction_values[index][added_item]
- challenge.interaction_values[index][removed_item];
}
cooling_schedule[added_item] = 3;
cooling_schedule[removed_item] = 3;
}
if current_value as u32 >= challenge.min_value {
return Ok(Some(Solution { items: selected_items }));
}
for cooling_rate in cooling_schedule.iter_mut() {
*cooling_rate = if *cooling_rate > 0 { *cooling_rate - 1 } else { 0 };
}
if current_value as u32 > (challenge.min_value * 9 / 10) {
let high_potential_items: Vec<usize> = unselected_items
.iter()
.filter(|&&i| challenge.values[i] as i32 > (challenge.min_value as i32 / 4))
.copied()
.collect();
for &item in high_potential_items.iter().take(2) {
if current_weight + challenge.weights[item] <= challenge.max_weight {
selected_items.push(item);
unselected_items.retain(|&x| x != item);
current_weight += challenge.weights[item];
current_value += challenge.values[item] as i32;
for &selected in &selected_items {
if selected != item {
current_value += challenge.interaction_values[item][selected];
}
}
if current_value as u32 >= challenge.min_value {
return Ok(Some(Solution { items: selected_items }));
}
}
}
}
}
if current_value as u32 >= challenge.min_value && current_weight <= challenge.max_weight {
Ok(Some(Solution { items: selected_items }))
} else {
Ok(None)
}
}
#[cfg(feature = "cuda")]
mod gpu_optimisation {
use super::*;
use cudarc::driver::*;
use std::{collections::HashMap, sync::Arc};
use tig_challenges::CudaKernel;
pub const KERNEL: Option<CudaKernel> = None;
pub fn cuda_solve_challenge(
challenge: &Challenge,
dev: &Arc<CudaDevice>,
mut funcs: HashMap<&'static str, CudaFunction>,
) -> anyhow::Result<Option<Solution>> {
solve_challenge(challenge)
}
}
#[cfg(feature = "cuda")]
pub use gpu_optimisation::{cuda_solve_challenge, KERNEL};

View File

@ -0,0 +1,4 @@
mod benchmarker_outbound;
pub use benchmarker_outbound::solve_challenge;
#[cfg(feature = "cuda")]
pub use benchmarker_outbound::{cuda_solve_challenge, KERNEL};

View File

@ -0,0 +1,188 @@
/*!
Copyright 2024 Rootz
Licensed under the TIG Open Data License v1.0 or (at your option) any later version
(the "License"); you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https://github.com/tig-foundation/tig-monorepo/tree/main/docs/licenses
Unless required by applicable law or agreed to in writing, software distributed
under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
CONDITIONS OF ANY KIND, either express or implied. See the License for the specific
language governing permissions and limitations under the License.
*/
// TIG's UI uses the pattern `tig_challenges::<challenge_name>` to automatically detect your algorithm's challenge
use anyhow::Result;
use rand::{SeedableRng, Rng, rngs::StdRng};
use tig_challenges::knapsack::{Challenge, Solution};
pub fn solve_challenge(challenge: &Challenge) -> Result<Option<Solution>> {
let vertex_count = challenge.weights.len();
let mut item_scores: Vec<(usize, f32)> = (0..vertex_count)
.map(|index| {
let interaction_sum: i32 = challenge.interaction_values[index].iter().sum();
let secondary_score = challenge.values[index] as f32 / challenge.weights[index] as f32;
let combined_score = (challenge.values[index] as f32 * 0.75 + interaction_sum as f32 * 0.15 + secondary_score * 0.1)
/ challenge.weights[index] as f32;
(index, combined_score)
})
.collect();
item_scores.sort_unstable_by(|a, b| b.1.partial_cmp(&a.1).unwrap());
let mut selected_items = Vec::with_capacity(vertex_count);
let mut unselected_items = Vec::with_capacity(vertex_count);
let mut current_weight = 0;
let mut current_value = 0;
for &(index, _) in &item_scores {
if current_weight + challenge.weights[index] <= challenge.max_weight {
current_weight += challenge.weights[index];
current_value += challenge.values[index] as i32;
for &selected in &selected_items {
current_value += challenge.interaction_values[index][selected];
}
selected_items.push(index);
} else {
unselected_items.push(index);
}
}
let mut mutation_rates = vec![0; vertex_count];
for index in 0..vertex_count {
mutation_rates[index] = challenge.values[index] as i32;
for &selected in &selected_items {
mutation_rates[index] += challenge.interaction_values[index][selected];
}
}
let max_generations = 150;
let mut cooling_schedule = vec![0; vertex_count];
let mut rng = StdRng::seed_from_u64(challenge.seed[0] as u64);
for generation in 0..max_generations {
let mut best_gain = 0;
let mut best_swap = None;
for (u_index, &mutant) in unselected_items.iter().enumerate() {
if cooling_schedule[mutant] > 0 {
continue;
}
let mutant_fitness = mutation_rates[mutant];
let extra_weight = challenge.weights[mutant] as i32 - (challenge.max_weight as i32 - current_weight as i32);
if mutant_fitness < 0 {
continue;
}
for (c_index, &selected) in selected_items.iter().enumerate() {
if cooling_schedule[selected] > 0 {
continue;
}
if extra_weight > 0 && (challenge.weights[selected] as i32) < extra_weight {
continue;
}
let interaction_penalty = (challenge.interaction_values[mutant][selected] as f32 * 0.3) as i32;
let fitness_gain = mutant_fitness - mutation_rates[selected] - interaction_penalty;
if fitness_gain > best_gain {
best_gain = fitness_gain;
best_swap = Some((u_index, c_index));
}
}
}
if let Some((u_index, c_index)) = best_swap {
let added_item = unselected_items[u_index];
let removed_item = selected_items[c_index];
selected_items.swap_remove(c_index);
unselected_items.swap_remove(u_index);
selected_items.push(added_item);
unselected_items.push(removed_item);
current_value += best_gain;
current_weight = current_weight + challenge.weights[added_item] - challenge.weights[removed_item];
if current_weight > challenge.max_weight {
continue;
}
for index in 0..vertex_count {
mutation_rates[index] += challenge.interaction_values[index][added_item]
- challenge.interaction_values[index][removed_item];
}
cooling_schedule[added_item] = 3;
cooling_schedule[removed_item] = 3;
}
if current_value as u32 >= challenge.min_value {
return Ok(Some(Solution { items: selected_items }));
}
for cooling_rate in cooling_schedule.iter_mut() {
*cooling_rate = if *cooling_rate > 0 { *cooling_rate - 1 } else { 0 };
}
if current_value as u32 > (challenge.min_value * 9 / 10) {
let high_potential_items: Vec<usize> = unselected_items
.iter()
.filter(|&&i| challenge.values[i] as i32 > (challenge.min_value as i32 / 4))
.copied()
.collect();
for &item in high_potential_items.iter().take(2) {
if current_weight + challenge.weights[item] <= challenge.max_weight {
selected_items.push(item);
unselected_items.retain(|&x| x != item);
current_weight += challenge.weights[item];
current_value += challenge.values[item] as i32;
for &selected in &selected_items {
if selected != item {
current_value += challenge.interaction_values[item][selected];
}
}
if current_value as u32 >= challenge.min_value {
return Ok(Some(Solution { items: selected_items }));
}
}
}
}
}
if current_value as u32 >= challenge.min_value && current_weight <= challenge.max_weight {
Ok(Some(Solution { items: selected_items }))
} else {
Ok(None)
}
}
#[cfg(feature = "cuda")]
mod gpu_optimisation {
use super::*;
use cudarc::driver::*;
use std::{collections::HashMap, sync::Arc};
use tig_challenges::CudaKernel;
pub const KERNEL: Option<CudaKernel> = None;
pub fn cuda_solve_challenge(
challenge: &Challenge,
dev: &Arc<CudaDevice>,
mut funcs: HashMap<&'static str, CudaFunction>,
) -> anyhow::Result<Option<Solution>> {
solve_challenge(challenge)
}
}
#[cfg(feature = "cuda")]
pub use gpu_optimisation::{cuda_solve_challenge, KERNEL};

View File

@ -1,11 +1,7 @@
/*!
Copyright [year copyright work created] [name of copyright owner]
Copyright [yyyy] [name of copyright owner]
Identity of Submitter [name of person or entity that submits the Work to TIG]
UAI [UAI (if applicable)]
Licensed under the TIG Inbound Game License v2.0 or (at your option) any later
Licensed under the TIG Inbound Game License v1.0 or (at your option) any later
version (the "License"); you may not use this file except in compliance with the
License. You may obtain a copy of the License at
@ -17,24 +13,6 @@ CONDITIONS OF ANY KIND, either express or implied. See the License for the speci
language governing permissions and limitations under the License.
*/
// REMOVE BELOW SECTION IF UNUSED
/*
REFERENCES AND ACKNOWLEDGMENTS
This implementation is based on or inspired by existing work. Citations and
acknowledgments below:
1. Academic Papers:
- [Author(s), "Paper Title", DOI (if available)]
2. Code References:
- [Author(s), URL]
3. Other:
- [Author(s), Details]
*/
// TIG's UI uses the pattern `tig_challenges::<challenge_name>` to automatically detect your algorithm's challenge
use anyhow::{anyhow, Result};
use tig_challenges::knapsack::{Challenge, Solution};

View File

@ -89,7 +89,8 @@ pub use sat_global as c001_a034;
pub mod sat_global_opt;
pub use sat_global_opt as c001_a041;
// c001_a042
pub mod sat_adaptive;
pub use sat_adaptive as c001_a042;
// c001_a043

View File

@ -0,0 +1,315 @@
/*!
Copyright 2024 syebastian
Licensed under the TIG Benchmarker Outbound Game License v1.0 (the "License"); you
may not use this file except in compliance with the License. You may obtain a copy
of the License at
https://github.com/tig-foundation/tig-monorepo/tree/main/docs/licenses
Unless required by applicable law or agreed to in writing, software distributed
under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
CONDITIONS OF ANY KIND, either express or implied. See the License for the specific
language governing permissions and limitations under the License.
*/
use rand::{rngs::{SmallRng, StdRng}, Rng, SeedableRng};
use std::collections::HashMap;
use tig_challenges::satisfiability::*;
pub fn solve_challenge(challenge: &Challenge) -> anyhow::Result<Option<Solution>> {
let mut rng = SmallRng::seed_from_u64(u64::from_le_bytes(challenge.seed[..8].try_into().unwrap()) as u64);
let mut p_single = vec![false; challenge.difficulty.num_variables];
let mut n_single = vec![false; challenge.difficulty.num_variables];
let mut clauses_ = challenge.clauses.clone();
let mut clauses: Vec<Vec<i32>> = Vec::with_capacity(clauses_.len());
let mut rounds = 0;
let mut dead = false;
while !(dead) {
let mut done = true;
for c in &clauses_ {
let mut c_: Vec<i32> = Vec::with_capacity(c.len()); // Preallocate with capacity
let mut skip = false;
for (i, l) in c.iter().enumerate() {
if (p_single[(l.abs() - 1) as usize] && *l > 0)
|| (n_single[(l.abs() - 1) as usize] && *l < 0)
|| c[(i + 1)..].contains(&-l)
{
skip = true;
break;
} else if p_single[(l.abs() - 1) as usize]
|| n_single[(l.abs() - 1) as usize]
|| c[(i + 1)..].contains(&l)
{
done = false;
continue;
} else {
c_.push(*l);
}
}
if skip {
done = false;
continue;
};
match c_[..] {
[l] => {
done = false;
if l > 0 {
if n_single[(l.abs() - 1) as usize] {
dead = true;
break;
} else {
p_single[(l.abs() - 1) as usize] = true;
}
} else {
if p_single[(l.abs() - 1) as usize] {
dead = true;
break;
} else {
n_single[(l.abs() - 1) as usize] = true;
}
}
}
[] => {
dead = true;
break;
}
_ => {
clauses.push(c_);
}
}
}
if done {
break;
} else {
clauses_ = clauses;
clauses = Vec::with_capacity(clauses_.len());
}
}
if dead {
return Ok(None);
}
let num_variables = challenge.difficulty.num_variables;
let num_clauses = clauses.len();
let mut p_clauses: Vec<Vec<usize>> = vec![Vec::new(); num_variables];
let mut n_clauses: Vec<Vec<usize>> = vec![Vec::new(); num_variables];
// Preallocate capacity for p_clauses and n_clauses
for c in &clauses {
for &l in c {
let var = (l.abs() - 1) as usize;
if l > 0 {
if p_clauses[var].capacity() == 0 {
p_clauses[var] = Vec::with_capacity(clauses.len() / num_variables + 1);
}
} else {
if n_clauses[var].capacity() == 0 {
n_clauses[var] = Vec::with_capacity(clauses.len() / num_variables + 1);
}
}
}
}
for (i, &ref c) in clauses.iter().enumerate() {
for &l in c {
let var = (l.abs() - 1) as usize;
if l > 0 {
p_clauses[var].push(i);
} else {
n_clauses[var].push(i);
}
}
}
let mut variables = vec![false; num_variables];
for v in 0..num_variables {
let num_p = p_clauses[v].len();
let num_n = n_clauses[v].len();
let nad = 1.28;
let mut vad = nad + 1.0;
if num_n > 0 {
vad = num_p as f32 / num_n as f32;
}
if vad <= nad {
variables[v] = false;
} else {
let prob = num_p as f64 / (num_p + num_n).max(1) as f64;
variables[v] = rng.gen_bool(prob)
}
}
let mut num_good_so_far: Vec<u8> = vec![0; num_clauses];
for (i, &ref c) in clauses.iter().enumerate() {
for &l in c {
let var = (l.abs() - 1) as usize;
if l > 0 && variables[var] {
num_good_so_far[i] += 1
} else if l < 0 && !variables[var] {
num_good_so_far[i] += 1
}
}
}
let mut residual_ = Vec::with_capacity(num_clauses);
let mut residual_indices = vec![None; num_clauses];
for (i, &num_good) in num_good_so_far.iter().enumerate() {
if num_good == 0 {
residual_.push(i);
residual_indices[i] = Some(residual_.len() - 1);
}
}
let base_prob = 0.52;
let mut current_prob = base_prob;
let check_interval = 50;
let mut last_check_residual = residual_.len();
let clauses_ratio = challenge.difficulty.clauses_to_variables_percent as f64;
let num_vars = challenge.difficulty.num_variables as f64;
let max_fuel = 2000000000.0;
let base_fuel = (2000.0 + 40.0 * clauses_ratio) * num_vars;
let flip_fuel = 350.0 + 0.9 * clauses_ratio;
let max_num_rounds = ((max_fuel - base_fuel) / flip_fuel) as usize;
loop {
if !residual_.is_empty() {
let rand_val = rng.gen::<usize>();
let i = residual_[rand_val % residual_.len()];
let mut min_sad = clauses.len();
let mut v_min_sad = usize::MAX;
let c = &mut clauses[i];
if c.len() > 1 {
let random_index = rand_val % c.len();
c.swap(0, random_index);
}
for &l in c.iter() {
let abs_l = l.abs() as usize - 1;
let clauses_to_check = if variables[abs_l] { &p_clauses[abs_l] } else { &n_clauses[abs_l] };
let mut sad = 0;
for &c in clauses_to_check {
if num_good_so_far[c] == 1 {
sad += 1;
}
}
if sad < min_sad {
min_sad = sad;
v_min_sad = abs_l;
}
}
if rounds % check_interval == 0 {
let progress = last_check_residual as i64 - residual_.len() as i64;
let progress_ratio = progress as f64 / last_check_residual as f64;
let progress_threshold = 0.2 + 0.1 * f64::min(1.0, (clauses_ratio - 410.0) / 15.0);
if progress <= 0 {
let prob_adjustment = 0.025 * (-progress as f64 / last_check_residual as f64).min(1.0);
current_prob = (current_prob + prob_adjustment).min(0.9);
} else if progress_ratio > progress_threshold {
current_prob = base_prob;
} else {
current_prob = current_prob * 0.8 + base_prob * 0.2;
}
last_check_residual = residual_.len();
}
let v = if min_sad == 0 {
v_min_sad
} else if rng.gen_bool(current_prob) {
c[0].abs() as usize - 1
} else {
v_min_sad
};
if variables[v] {
for &c in &n_clauses[v] {
num_good_so_far[c] += 1;
if num_good_so_far[c] == 1 {
let i = residual_indices[c].take().unwrap();
let last = residual_.pop().unwrap();
if i < residual_.len() {
residual_[i] = last;
residual_indices[last] = Some(i);
}
}
}
for &c in &p_clauses[v] {
if num_good_so_far[c] == 1 {
residual_.push(c);
residual_indices[c] = Some(residual_.len() - 1);
}
num_good_so_far[c] -= 1;
}
} else {
for &c in &n_clauses[v] {
if num_good_so_far[c] == 1 {
residual_.push(c);
residual_indices[c] = Some(residual_.len() - 1);
}
num_good_so_far[c] -= 1;
}
for &c in &p_clauses[v] {
num_good_so_far[c] += 1;
if num_good_so_far[c] == 1 {
let i = residual_indices[c].take().unwrap();
let last = residual_.pop().unwrap();
if i < residual_.len() {
residual_[i] = last;
residual_indices[last] = Some(i);
}
}
}
}
variables[v] = !variables[v];
} else {
break;
}
rounds += 1;
if rounds >= max_num_rounds {
return Ok(None);
}
}
return Ok(Some(Solution { variables }));
}
#[cfg(feature = "cuda")]
mod gpu_optimisation {
use super::*;
use cudarc::driver::*;
use std::{collections::HashMap, sync::Arc};
use tig_challenges::CudaKernel;
// set KERNEL to None if algorithm only has a CPU implementation
pub const KERNEL: Option<CudaKernel> = None;
// Important! your GPU and CPU version of the algorithm should return the same result
pub fn cuda_solve_challenge(
challenge: &Challenge,
dev: &Arc<CudaDevice>,
mut funcs: HashMap<&'static str, CudaFunction>,
) -> anyhow::Result<Option<Solution>> {
solve_challenge(challenge)
}
}
#[cfg(feature = "cuda")]
pub use gpu_optimisation::{cuda_solve_challenge, KERNEL};

View File

@ -0,0 +1,315 @@
/*!
Copyright 2024 syebastian
Licensed under the TIG Commercial License v1.0 (the "License"); you
may not use this file except in compliance with the License. You may obtain a copy
of the License at
https://github.com/tig-foundation/tig-monorepo/tree/main/docs/licenses
Unless required by applicable law or agreed to in writing, software distributed
under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
CONDITIONS OF ANY KIND, either express or implied. See the License for the specific
language governing permissions and limitations under the License.
*/
use rand::{rngs::{SmallRng, StdRng}, Rng, SeedableRng};
use std::collections::HashMap;
use tig_challenges::satisfiability::*;
pub fn solve_challenge(challenge: &Challenge) -> anyhow::Result<Option<Solution>> {
let mut rng = SmallRng::seed_from_u64(u64::from_le_bytes(challenge.seed[..8].try_into().unwrap()) as u64);
let mut p_single = vec![false; challenge.difficulty.num_variables];
let mut n_single = vec![false; challenge.difficulty.num_variables];
let mut clauses_ = challenge.clauses.clone();
let mut clauses: Vec<Vec<i32>> = Vec::with_capacity(clauses_.len());
let mut rounds = 0;
let mut dead = false;
while !(dead) {
let mut done = true;
for c in &clauses_ {
let mut c_: Vec<i32> = Vec::with_capacity(c.len()); // Preallocate with capacity
let mut skip = false;
for (i, l) in c.iter().enumerate() {
if (p_single[(l.abs() - 1) as usize] && *l > 0)
|| (n_single[(l.abs() - 1) as usize] && *l < 0)
|| c[(i + 1)..].contains(&-l)
{
skip = true;
break;
} else if p_single[(l.abs() - 1) as usize]
|| n_single[(l.abs() - 1) as usize]
|| c[(i + 1)..].contains(&l)
{
done = false;
continue;
} else {
c_.push(*l);
}
}
if skip {
done = false;
continue;
};
match c_[..] {
[l] => {
done = false;
if l > 0 {
if n_single[(l.abs() - 1) as usize] {
dead = true;
break;
} else {
p_single[(l.abs() - 1) as usize] = true;
}
} else {
if p_single[(l.abs() - 1) as usize] {
dead = true;
break;
} else {
n_single[(l.abs() - 1) as usize] = true;
}
}
}
[] => {
dead = true;
break;
}
_ => {
clauses.push(c_);
}
}
}
if done {
break;
} else {
clauses_ = clauses;
clauses = Vec::with_capacity(clauses_.len());
}
}
if dead {
return Ok(None);
}
let num_variables = challenge.difficulty.num_variables;
let num_clauses = clauses.len();
let mut p_clauses: Vec<Vec<usize>> = vec![Vec::new(); num_variables];
let mut n_clauses: Vec<Vec<usize>> = vec![Vec::new(); num_variables];
// Preallocate capacity for p_clauses and n_clauses
for c in &clauses {
for &l in c {
let var = (l.abs() - 1) as usize;
if l > 0 {
if p_clauses[var].capacity() == 0 {
p_clauses[var] = Vec::with_capacity(clauses.len() / num_variables + 1);
}
} else {
if n_clauses[var].capacity() == 0 {
n_clauses[var] = Vec::with_capacity(clauses.len() / num_variables + 1);
}
}
}
}
for (i, &ref c) in clauses.iter().enumerate() {
for &l in c {
let var = (l.abs() - 1) as usize;
if l > 0 {
p_clauses[var].push(i);
} else {
n_clauses[var].push(i);
}
}
}
let mut variables = vec![false; num_variables];
for v in 0..num_variables {
let num_p = p_clauses[v].len();
let num_n = n_clauses[v].len();
let nad = 1.28;
let mut vad = nad + 1.0;
if num_n > 0 {
vad = num_p as f32 / num_n as f32;
}
if vad <= nad {
variables[v] = false;
} else {
let prob = num_p as f64 / (num_p + num_n).max(1) as f64;
variables[v] = rng.gen_bool(prob)
}
}
let mut num_good_so_far: Vec<u8> = vec![0; num_clauses];
for (i, &ref c) in clauses.iter().enumerate() {
for &l in c {
let var = (l.abs() - 1) as usize;
if l > 0 && variables[var] {
num_good_so_far[i] += 1
} else if l < 0 && !variables[var] {
num_good_so_far[i] += 1
}
}
}
let mut residual_ = Vec::with_capacity(num_clauses);
let mut residual_indices = vec![None; num_clauses];
for (i, &num_good) in num_good_so_far.iter().enumerate() {
if num_good == 0 {
residual_.push(i);
residual_indices[i] = Some(residual_.len() - 1);
}
}
let base_prob = 0.52;
let mut current_prob = base_prob;
let check_interval = 50;
let mut last_check_residual = residual_.len();
let clauses_ratio = challenge.difficulty.clauses_to_variables_percent as f64;
let num_vars = challenge.difficulty.num_variables as f64;
let max_fuel = 2000000000.0;
let base_fuel = (2000.0 + 40.0 * clauses_ratio) * num_vars;
let flip_fuel = 350.0 + 0.9 * clauses_ratio;
let max_num_rounds = ((max_fuel - base_fuel) / flip_fuel) as usize;
loop {
if !residual_.is_empty() {
let rand_val = rng.gen::<usize>();
let i = residual_[rand_val % residual_.len()];
let mut min_sad = clauses.len();
let mut v_min_sad = usize::MAX;
let c = &mut clauses[i];
if c.len() > 1 {
let random_index = rand_val % c.len();
c.swap(0, random_index);
}
for &l in c.iter() {
let abs_l = l.abs() as usize - 1;
let clauses_to_check = if variables[abs_l] { &p_clauses[abs_l] } else { &n_clauses[abs_l] };
let mut sad = 0;
for &c in clauses_to_check {
if num_good_so_far[c] == 1 {
sad += 1;
}
}
if sad < min_sad {
min_sad = sad;
v_min_sad = abs_l;
}
}
if rounds % check_interval == 0 {
let progress = last_check_residual as i64 - residual_.len() as i64;
let progress_ratio = progress as f64 / last_check_residual as f64;
let progress_threshold = 0.2 + 0.1 * f64::min(1.0, (clauses_ratio - 410.0) / 15.0);
if progress <= 0 {
let prob_adjustment = 0.025 * (-progress as f64 / last_check_residual as f64).min(1.0);
current_prob = (current_prob + prob_adjustment).min(0.9);
} else if progress_ratio > progress_threshold {
current_prob = base_prob;
} else {
current_prob = current_prob * 0.8 + base_prob * 0.2;
}
last_check_residual = residual_.len();
}
let v = if min_sad == 0 {
v_min_sad
} else if rng.gen_bool(current_prob) {
c[0].abs() as usize - 1
} else {
v_min_sad
};
if variables[v] {
for &c in &n_clauses[v] {
num_good_so_far[c] += 1;
if num_good_so_far[c] == 1 {
let i = residual_indices[c].take().unwrap();
let last = residual_.pop().unwrap();
if i < residual_.len() {
residual_[i] = last;
residual_indices[last] = Some(i);
}
}
}
for &c in &p_clauses[v] {
if num_good_so_far[c] == 1 {
residual_.push(c);
residual_indices[c] = Some(residual_.len() - 1);
}
num_good_so_far[c] -= 1;
}
} else {
for &c in &n_clauses[v] {
if num_good_so_far[c] == 1 {
residual_.push(c);
residual_indices[c] = Some(residual_.len() - 1);
}
num_good_so_far[c] -= 1;
}
for &c in &p_clauses[v] {
num_good_so_far[c] += 1;
if num_good_so_far[c] == 1 {
let i = residual_indices[c].take().unwrap();
let last = residual_.pop().unwrap();
if i < residual_.len() {
residual_[i] = last;
residual_indices[last] = Some(i);
}
}
}
}
variables[v] = !variables[v];
} else {
break;
}
rounds += 1;
if rounds >= max_num_rounds {
return Ok(None);
}
}
return Ok(Some(Solution { variables }));
}
#[cfg(feature = "cuda")]
mod gpu_optimisation {
use super::*;
use cudarc::driver::*;
use std::{collections::HashMap, sync::Arc};
use tig_challenges::CudaKernel;
// set KERNEL to None if algorithm only has a CPU implementation
pub const KERNEL: Option<CudaKernel> = None;
// Important! your GPU and CPU version of the algorithm should return the same result
pub fn cuda_solve_challenge(
challenge: &Challenge,
dev: &Arc<CudaDevice>,
mut funcs: HashMap<&'static str, CudaFunction>,
) -> anyhow::Result<Option<Solution>> {
solve_challenge(challenge)
}
}
#[cfg(feature = "cuda")]
pub use gpu_optimisation::{cuda_solve_challenge, KERNEL};

View File

@ -0,0 +1,315 @@
/*!
Copyright 2024 syebastian
Licensed under the TIG Inbound Game License v1.0 or (at your option) any later
version (the "License"); you may not use this file except in compliance with the
License. You may obtain a copy of the License at
https://github.com/tig-foundation/tig-monorepo/tree/main/docs/licenses
Unless required by applicable law or agreed to in writing, software distributed
under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
CONDITIONS OF ANY KIND, either express or implied. See the License for the specific
language governing permissions and limitations under the License.
*/
use rand::{rngs::{SmallRng, StdRng}, Rng, SeedableRng};
use std::collections::HashMap;
use tig_challenges::satisfiability::*;
pub fn solve_challenge(challenge: &Challenge) -> anyhow::Result<Option<Solution>> {
let mut rng = SmallRng::seed_from_u64(u64::from_le_bytes(challenge.seed[..8].try_into().unwrap()) as u64);
let mut p_single = vec![false; challenge.difficulty.num_variables];
let mut n_single = vec![false; challenge.difficulty.num_variables];
let mut clauses_ = challenge.clauses.clone();
let mut clauses: Vec<Vec<i32>> = Vec::with_capacity(clauses_.len());
let mut rounds = 0;
let mut dead = false;
while !(dead) {
let mut done = true;
for c in &clauses_ {
let mut c_: Vec<i32> = Vec::with_capacity(c.len()); // Preallocate with capacity
let mut skip = false;
for (i, l) in c.iter().enumerate() {
if (p_single[(l.abs() - 1) as usize] && *l > 0)
|| (n_single[(l.abs() - 1) as usize] && *l < 0)
|| c[(i + 1)..].contains(&-l)
{
skip = true;
break;
} else if p_single[(l.abs() - 1) as usize]
|| n_single[(l.abs() - 1) as usize]
|| c[(i + 1)..].contains(&l)
{
done = false;
continue;
} else {
c_.push(*l);
}
}
if skip {
done = false;
continue;
};
match c_[..] {
[l] => {
done = false;
if l > 0 {
if n_single[(l.abs() - 1) as usize] {
dead = true;
break;
} else {
p_single[(l.abs() - 1) as usize] = true;
}
} else {
if p_single[(l.abs() - 1) as usize] {
dead = true;
break;
} else {
n_single[(l.abs() - 1) as usize] = true;
}
}
}
[] => {
dead = true;
break;
}
_ => {
clauses.push(c_);
}
}
}
if done {
break;
} else {
clauses_ = clauses;
clauses = Vec::with_capacity(clauses_.len());
}
}
if dead {
return Ok(None);
}
let num_variables = challenge.difficulty.num_variables;
let num_clauses = clauses.len();
let mut p_clauses: Vec<Vec<usize>> = vec![Vec::new(); num_variables];
let mut n_clauses: Vec<Vec<usize>> = vec![Vec::new(); num_variables];
// Preallocate capacity for p_clauses and n_clauses
for c in &clauses {
for &l in c {
let var = (l.abs() - 1) as usize;
if l > 0 {
if p_clauses[var].capacity() == 0 {
p_clauses[var] = Vec::with_capacity(clauses.len() / num_variables + 1);
}
} else {
if n_clauses[var].capacity() == 0 {
n_clauses[var] = Vec::with_capacity(clauses.len() / num_variables + 1);
}
}
}
}
for (i, &ref c) in clauses.iter().enumerate() {
for &l in c {
let var = (l.abs() - 1) as usize;
if l > 0 {
p_clauses[var].push(i);
} else {
n_clauses[var].push(i);
}
}
}
let mut variables = vec![false; num_variables];
for v in 0..num_variables {
let num_p = p_clauses[v].len();
let num_n = n_clauses[v].len();
let nad = 1.28;
let mut vad = nad + 1.0;
if num_n > 0 {
vad = num_p as f32 / num_n as f32;
}
if vad <= nad {
variables[v] = false;
} else {
let prob = num_p as f64 / (num_p + num_n).max(1) as f64;
variables[v] = rng.gen_bool(prob)
}
}
let mut num_good_so_far: Vec<u8> = vec![0; num_clauses];
for (i, &ref c) in clauses.iter().enumerate() {
for &l in c {
let var = (l.abs() - 1) as usize;
if l > 0 && variables[var] {
num_good_so_far[i] += 1
} else if l < 0 && !variables[var] {
num_good_so_far[i] += 1
}
}
}
let mut residual_ = Vec::with_capacity(num_clauses);
let mut residual_indices = vec![None; num_clauses];
for (i, &num_good) in num_good_so_far.iter().enumerate() {
if num_good == 0 {
residual_.push(i);
residual_indices[i] = Some(residual_.len() - 1);
}
}
let base_prob = 0.52;
let mut current_prob = base_prob;
let check_interval = 50;
let mut last_check_residual = residual_.len();
let clauses_ratio = challenge.difficulty.clauses_to_variables_percent as f64;
let num_vars = challenge.difficulty.num_variables as f64;
let max_fuel = 2000000000.0;
let base_fuel = (2000.0 + 40.0 * clauses_ratio) * num_vars;
let flip_fuel = 350.0 + 0.9 * clauses_ratio;
let max_num_rounds = ((max_fuel - base_fuel) / flip_fuel) as usize;
loop {
if !residual_.is_empty() {
let rand_val = rng.gen::<usize>();
let i = residual_[rand_val % residual_.len()];
let mut min_sad = clauses.len();
let mut v_min_sad = usize::MAX;
let c = &mut clauses[i];
if c.len() > 1 {
let random_index = rand_val % c.len();
c.swap(0, random_index);
}
for &l in c.iter() {
let abs_l = l.abs() as usize - 1;
let clauses_to_check = if variables[abs_l] { &p_clauses[abs_l] } else { &n_clauses[abs_l] };
let mut sad = 0;
for &c in clauses_to_check {
if num_good_so_far[c] == 1 {
sad += 1;
}
}
if sad < min_sad {
min_sad = sad;
v_min_sad = abs_l;
}
}
if rounds % check_interval == 0 {
let progress = last_check_residual as i64 - residual_.len() as i64;
let progress_ratio = progress as f64 / last_check_residual as f64;
let progress_threshold = 0.2 + 0.1 * f64::min(1.0, (clauses_ratio - 410.0) / 15.0);
if progress <= 0 {
let prob_adjustment = 0.025 * (-progress as f64 / last_check_residual as f64).min(1.0);
current_prob = (current_prob + prob_adjustment).min(0.9);
} else if progress_ratio > progress_threshold {
current_prob = base_prob;
} else {
current_prob = current_prob * 0.8 + base_prob * 0.2;
}
last_check_residual = residual_.len();
}
let v = if min_sad == 0 {
v_min_sad
} else if rng.gen_bool(current_prob) {
c[0].abs() as usize - 1
} else {
v_min_sad
};
if variables[v] {
for &c in &n_clauses[v] {
num_good_so_far[c] += 1;
if num_good_so_far[c] == 1 {
let i = residual_indices[c].take().unwrap();
let last = residual_.pop().unwrap();
if i < residual_.len() {
residual_[i] = last;
residual_indices[last] = Some(i);
}
}
}
for &c in &p_clauses[v] {
if num_good_so_far[c] == 1 {
residual_.push(c);
residual_indices[c] = Some(residual_.len() - 1);
}
num_good_so_far[c] -= 1;
}
} else {
for &c in &n_clauses[v] {
if num_good_so_far[c] == 1 {
residual_.push(c);
residual_indices[c] = Some(residual_.len() - 1);
}
num_good_so_far[c] -= 1;
}
for &c in &p_clauses[v] {
num_good_so_far[c] += 1;
if num_good_so_far[c] == 1 {
let i = residual_indices[c].take().unwrap();
let last = residual_.pop().unwrap();
if i < residual_.len() {
residual_[i] = last;
residual_indices[last] = Some(i);
}
}
}
}
variables[v] = !variables[v];
} else {
break;
}
rounds += 1;
if rounds >= max_num_rounds {
return Ok(None);
}
}
return Ok(Some(Solution { variables }));
}
#[cfg(feature = "cuda")]
mod gpu_optimisation {
use super::*;
use cudarc::driver::*;
use std::{collections::HashMap, sync::Arc};
use tig_challenges::CudaKernel;
// Set KERNEL to None if the algorithm only has a CPU implementation
pub const KERNEL: Option<CudaKernel> = None;
// Important! Your GPU and CPU versions of the algorithm should return the same result
pub fn cuda_solve_challenge(
challenge: &Challenge,
dev: &Arc<CudaDevice>,
mut funcs: HashMap<&'static str, CudaFunction>,
) -> anyhow::Result<Option<Solution>> {
solve_challenge(challenge)
}
}
#[cfg(feature = "cuda")]
pub use gpu_optimisation::{cuda_solve_challenge, KERNEL};

View File

@@ -0,0 +1,315 @@
/*!
Copyright 2024 syebastian
Licensed under the TIG Innovator Outbound Game License v1.0 (the "License"); you
may not use this file except in compliance with the License. You may obtain a copy
of the License at
https://github.com/tig-foundation/tig-monorepo/tree/main/docs/licenses
Unless required by applicable law or agreed to in writing, software distributed
under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
CONDITIONS OF ANY KIND, either express or implied. See the License for the specific
language governing permissions and limitations under the License.
*/
use rand::{rngs::SmallRng, Rng, SeedableRng};
use tig_challenges::satisfiability::*;
pub fn solve_challenge(challenge: &Challenge) -> anyhow::Result<Option<Solution>> {
let mut rng = SmallRng::seed_from_u64(u64::from_le_bytes(challenge.seed[..8].try_into().unwrap()) as u64);
let mut p_single = vec![false; challenge.difficulty.num_variables];
let mut n_single = vec![false; challenge.difficulty.num_variables];
let mut clauses_ = challenge.clauses.clone();
let mut clauses: Vec<Vec<i32>> = Vec::with_capacity(clauses_.len());
let mut rounds = 0;
let mut dead = false;
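// Preprocessing: repeatedly simplify the formula. Unit clauses force assignments
// (recorded in p_single / n_single), satisfied and tautological clauses are dropped,
// duplicate literals are removed, and an empty clause or a conflicting unit marks
// the instance as unsatisfiable under this reduction (dead).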
while !dead {
let mut done = true;
for c in &clauses_ {
let mut c_: Vec<i32> = Vec::with_capacity(c.len()); // Preallocate with capacity
let mut skip = false;
for (i, l) in c.iter().enumerate() {
if (p_single[(l.abs() - 1) as usize] && *l > 0)
|| (n_single[(l.abs() - 1) as usize] && *l < 0)
|| c[(i + 1)..].contains(&-l)
{
skip = true;
break;
} else if p_single[(l.abs() - 1) as usize]
|| n_single[(l.abs() - 1) as usize]
|| c[(i + 1)..].contains(&l)
{
done = false;
continue;
} else {
c_.push(*l);
}
}
if skip {
done = false;
continue;
};
match c_[..] {
[l] => {
done = false;
if l > 0 {
if n_single[(l.abs() - 1) as usize] {
dead = true;
break;
} else {
p_single[(l.abs() - 1) as usize] = true;
}
} else {
if p_single[(l.abs() - 1) as usize] {
dead = true;
break;
} else {
n_single[(l.abs() - 1) as usize] = true;
}
}
}
[] => {
dead = true;
break;
}
_ => {
clauses.push(c_);
}
}
}
if done {
break;
} else {
clauses_ = clauses;
clauses = Vec::with_capacity(clauses_.len());
}
}
if dead {
return Ok(None);
}
let num_variables = challenge.difficulty.num_variables;
let num_clauses = clauses.len();
let mut p_clauses: Vec<Vec<usize>> = vec![Vec::new(); num_variables];
let mut n_clauses: Vec<Vec<usize>> = vec![Vec::new(); num_variables];
// Preallocate capacity for p_clauses and n_clauses
for c in &clauses {
for &l in c {
let var = (l.abs() - 1) as usize;
if l > 0 {
if p_clauses[var].capacity() == 0 {
p_clauses[var] = Vec::with_capacity(clauses.len() / num_variables + 1);
}
} else {
if n_clauses[var].capacity() == 0 {
n_clauses[var] = Vec::with_capacity(clauses.len() / num_variables + 1);
}
}
}
}
for (i, c) in clauses.iter().enumerate() {
for &l in c {
let var = (l.abs() - 1) as usize;
if l > 0 {
p_clauses[var].push(i);
} else {
n_clauses[var].push(i);
}
}
}
let mut variables = vec![false; num_variables];
for v in 0..num_variables {
let num_p = p_clauses[v].len();
let num_n = n_clauses[v].len();
let nad = 1.28;
let mut vad = nad + 1.0;
if num_n > 0 {
vad = num_p as f32 / num_n as f32;
}
if vad <= nad {
variables[v] = false;
} else {
let prob = num_p as f64 / (num_p + num_n).max(1) as f64;
variables[v] = rng.gen_bool(prob)
}
}
let mut num_good_so_far: Vec<u8> = vec![0; num_clauses];
for (i, c) in clauses.iter().enumerate() {
for &l in c {
let var = (l.abs() - 1) as usize;
if l > 0 && variables[var] {
num_good_so_far[i] += 1
} else if l < 0 && !variables[var] {
num_good_so_far[i] += 1
}
}
}
let mut residual_ = Vec::with_capacity(num_clauses);
let mut residual_indices = vec![None; num_clauses];
for (i, &num_good) in num_good_so_far.iter().enumerate() {
if num_good == 0 {
residual_.push(i);
residual_indices[i] = Some(residual_.len() - 1);
}
}
let base_prob = 0.52;
let mut current_prob = base_prob;
let check_interval = 50;
let mut last_check_residual = residual_.len();
let clauses_ratio = challenge.difficulty.clauses_to_variables_percent as f64;
let num_vars = challenge.difficulty.num_variables as f64;
let max_fuel = 2000000000.0;
let base_fuel = (2000.0 + 40.0 * clauses_ratio) * num_vars;
let flip_fuel = 350.0 + 0.9 * clauses_ratio;
let max_num_rounds = ((max_fuel - base_fuel) / flip_fuel) as usize;
loop {
if !residual_.is_empty() {
let rand_val = rng.gen::<usize>();
let i = residual_[rand_val % residual_.len()];
let mut min_sad = clauses.len();
let mut v_min_sad = usize::MAX;
let c = &mut clauses[i];
if c.len() > 1 {
let random_index = rand_val % c.len();
c.swap(0, random_index);
}
for &l in c.iter() {
let abs_l = l.abs() as usize - 1;
let clauses_to_check = if variables[abs_l] { &p_clauses[abs_l] } else { &n_clauses[abs_l] };
let mut sad = 0;
for &c in clauses_to_check {
if num_good_so_far[c] == 1 {
sad += 1;
}
}
if sad < min_sad {
min_sad = sad;
v_min_sad = abs_l;
}
}
if rounds % check_interval == 0 {
let progress = last_check_residual as i64 - residual_.len() as i64;
let progress_ratio = progress as f64 / last_check_residual as f64;
let progress_threshold = 0.2 + 0.1 * f64::min(1.0, (clauses_ratio - 410.0) / 15.0);
if progress <= 0 {
let prob_adjustment = 0.025 * (-progress as f64 / last_check_residual as f64).min(1.0);
current_prob = (current_prob + prob_adjustment).min(0.9);
} else if progress_ratio > progress_threshold {
current_prob = base_prob;
} else {
current_prob = current_prob * 0.8 + base_prob * 0.2;
}
last_check_residual = residual_.len();
}
let v = if min_sad == 0 {
v_min_sad
} else if rng.gen_bool(current_prob) {
c[0].abs() as usize - 1
} else {
v_min_sad
};
if variables[v] {
for &c in &n_clauses[v] {
num_good_so_far[c] += 1;
if num_good_so_far[c] == 1 {
let i = residual_indices[c].take().unwrap();
let last = residual_.pop().unwrap();
if i < residual_.len() {
residual_[i] = last;
residual_indices[last] = Some(i);
}
}
}
for &c in &p_clauses[v] {
if num_good_so_far[c] == 1 {
residual_.push(c);
residual_indices[c] = Some(residual_.len() - 1);
}
num_good_so_far[c] -= 1;
}
} else {
for &c in &n_clauses[v] {
if num_good_so_far[c] == 1 {
residual_.push(c);
residual_indices[c] = Some(residual_.len() - 1);
}
num_good_so_far[c] -= 1;
}
for &c in &p_clauses[v] {
num_good_so_far[c] += 1;
if num_good_so_far[c] == 1 {
let i = residual_indices[c].take().unwrap();
let last = residual_.pop().unwrap();
if i < residual_.len() {
residual_[i] = last;
residual_indices[last] = Some(i);
}
}
}
}
variables[v] = !variables[v];
} else {
break;
}
rounds += 1;
if rounds >= max_num_rounds {
return Ok(None);
}
}
return Ok(Some(Solution { variables }));
}
#[cfg(feature = "cuda")]
mod gpu_optimisation {
use super::*;
use cudarc::driver::*;
use std::{collections::HashMap, sync::Arc};
use tig_challenges::CudaKernel;
// Set KERNEL to None if the algorithm only has a CPU implementation
pub const KERNEL: Option<CudaKernel> = None;
// Important! Your GPU and CPU versions of the algorithm should return the same result
pub fn cuda_solve_challenge(
challenge: &Challenge,
dev: &Arc<CudaDevice>,
mut funcs: HashMap<&'static str, CudaFunction>,
) -> anyhow::Result<Option<Solution>> {
solve_challenge(challenge)
}
}
#[cfg(feature = "cuda")]
pub use gpu_optimisation::{cuda_solve_challenge, KERNEL};

View File

@@ -0,0 +1,4 @@
mod benchmarker_outbound;
pub use benchmarker_outbound::solve_challenge;
#[cfg(feature = "cuda")]
pub use benchmarker_outbound::{cuda_solve_challenge, KERNEL};

View File

@@ -0,0 +1,315 @@
/*!
Copyright 2024 syebastian
Licensed under the TIG Open Data License v1.0 or (at your option) any later version
(the "License"); you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https://github.com/tig-foundation/tig-monorepo/tree/main/docs/licenses
Unless required by applicable law or agreed to in writing, software distributed
under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
CONDITIONS OF ANY KIND, either express or implied. See the License for the specific
language governing permissions and limitations under the License.
*/
use rand::{rngs::SmallRng, Rng, SeedableRng};
use tig_challenges::satisfiability::*;
pub fn solve_challenge(challenge: &Challenge) -> anyhow::Result<Option<Solution>> {
let mut rng = SmallRng::seed_from_u64(u64::from_le_bytes(challenge.seed[..8].try_into().unwrap()) as u64);
let mut p_single = vec![false; challenge.difficulty.num_variables];
let mut n_single = vec![false; challenge.difficulty.num_variables];
let mut clauses_ = challenge.clauses.clone();
let mut clauses: Vec<Vec<i32>> = Vec::with_capacity(clauses_.len());
let mut rounds = 0;
let mut dead = false;
while !dead {
let mut done = true;
for c in &clauses_ {
let mut c_: Vec<i32> = Vec::with_capacity(c.len()); // Preallocate with capacity
let mut skip = false;
for (i, l) in c.iter().enumerate() {
if (p_single[(l.abs() - 1) as usize] && *l > 0)
|| (n_single[(l.abs() - 1) as usize] && *l < 0)
|| c[(i + 1)..].contains(&-l)
{
skip = true;
break;
} else if p_single[(l.abs() - 1) as usize]
|| n_single[(l.abs() - 1) as usize]
|| c[(i + 1)..].contains(&l)
{
done = false;
continue;
} else {
c_.push(*l);
}
}
if skip {
done = false;
continue;
};
match c_[..] {
[l] => {
done = false;
if l > 0 {
if n_single[(l.abs() - 1) as usize] {
dead = true;
break;
} else {
p_single[(l.abs() - 1) as usize] = true;
}
} else {
if p_single[(l.abs() - 1) as usize] {
dead = true;
break;
} else {
n_single[(l.abs() - 1) as usize] = true;
}
}
}
[] => {
dead = true;
break;
}
_ => {
clauses.push(c_);
}
}
}
if done {
break;
} else {
clauses_ = clauses;
clauses = Vec::with_capacity(clauses_.len());
}
}
if dead {
return Ok(None);
}
let num_variables = challenge.difficulty.num_variables;
let num_clauses = clauses.len();
let mut p_clauses: Vec<Vec<usize>> = vec![Vec::new(); num_variables];
let mut n_clauses: Vec<Vec<usize>> = vec![Vec::new(); num_variables];
// Preallocate capacity for p_clauses and n_clauses
for c in &clauses {
for &l in c {
let var = (l.abs() - 1) as usize;
if l > 0 {
if p_clauses[var].capacity() == 0 {
p_clauses[var] = Vec::with_capacity(clauses.len() / num_variables + 1);
}
} else {
if n_clauses[var].capacity() == 0 {
n_clauses[var] = Vec::with_capacity(clauses.len() / num_variables + 1);
}
}
}
}
for (i, c) in clauses.iter().enumerate() {
for &l in c {
let var = (l.abs() - 1) as usize;
if l > 0 {
p_clauses[var].push(i);
} else {
n_clauses[var].push(i);
}
}
}
let mut variables = vec![false; num_variables];
for v in 0..num_variables {
let num_p = p_clauses[v].len();
let num_n = n_clauses[v].len();
let nad = 1.28;
let mut vad = nad + 1.0;
if num_n > 0 {
vad = num_p as f32 / num_n as f32;
}
if vad <= nad {
variables[v] = false;
} else {
let prob = num_p as f64 / (num_p + num_n).max(1) as f64;
variables[v] = rng.gen_bool(prob)
}
}
let mut num_good_so_far: Vec<u8> = vec![0; num_clauses];
for (i, c) in clauses.iter().enumerate() {
for &l in c {
let var = (l.abs() - 1) as usize;
if l > 0 && variables[var] {
num_good_so_far[i] += 1
} else if l < 0 && !variables[var] {
num_good_so_far[i] += 1
}
}
}
let mut residual_ = Vec::with_capacity(num_clauses);
let mut residual_indices = vec![None; num_clauses];
for (i, &num_good) in num_good_so_far.iter().enumerate() {
if num_good == 0 {
residual_.push(i);
residual_indices[i] = Some(residual_.len() - 1);
}
}
let base_prob = 0.52;
let mut current_prob = base_prob;
let check_interval = 50;
let mut last_check_residual = residual_.len();
let clauses_ratio = challenge.difficulty.clauses_to_variables_percent as f64;
let num_vars = challenge.difficulty.num_variables as f64;
let max_fuel = 2000000000.0;
let base_fuel = (2000.0 + 40.0 * clauses_ratio) * num_vars;
let flip_fuel = 350.0 + 0.9 * clauses_ratio;
let max_num_rounds = ((max_fuel - base_fuel) / flip_fuel) as usize;
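// WalkSAT-style local search over the simplified clauses: repeatedly repair a random
// unsatisfied clause by flipping either its least-breaking variable or a random
// literal (with adaptive probability current_prob), up to max_num_rounds flips.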
loop {
if !residual_.is_empty() {
let rand_val = rng.gen::<usize>();
let i = residual_[rand_val % residual_.len()];
let mut min_sad = clauses.len();
let mut v_min_sad = usize::MAX;
let c = &mut clauses[i];
if c.len() > 1 {
let random_index = rand_val % c.len();
c.swap(0, random_index);
}
for &l in c.iter() {
let abs_l = l.abs() as usize - 1;
let clauses_to_check = if variables[abs_l] { &p_clauses[abs_l] } else { &n_clauses[abs_l] };
let mut sad = 0;
for &c in clauses_to_check {
if num_good_so_far[c] == 1 {
sad += 1;
}
}
if sad < min_sad {
min_sad = sad;
v_min_sad = abs_l;
}
}
if rounds % check_interval == 0 {
let progress = last_check_residual as i64 - residual_.len() as i64;
let progress_ratio = progress as f64 / last_check_residual as f64;
let progress_threshold = 0.2 + 0.1 * f64::min(1.0, (clauses_ratio - 410.0) / 15.0);
if progress <= 0 {
let prob_adjustment = 0.025 * (-progress as f64 / last_check_residual as f64).min(1.0);
current_prob = (current_prob + prob_adjustment).min(0.9);
} else if progress_ratio > progress_threshold {
current_prob = base_prob;
} else {
current_prob = current_prob * 0.8 + base_prob * 0.2;
}
last_check_residual = residual_.len();
}
let v = if min_sad == 0 {
v_min_sad
} else if rng.gen_bool(current_prob) {
c[0].abs() as usize - 1
} else {
v_min_sad
};
if variables[v] {
for &c in &n_clauses[v] {
num_good_so_far[c] += 1;
if num_good_so_far[c] == 1 {
let i = residual_indices[c].take().unwrap();
let last = residual_.pop().unwrap();
if i < residual_.len() {
residual_[i] = last;
residual_indices[last] = Some(i);
}
}
}
for &c in &p_clauses[v] {
if num_good_so_far[c] == 1 {
residual_.push(c);
residual_indices[c] = Some(residual_.len() - 1);
}
num_good_so_far[c] -= 1;
}
} else {
for &c in &n_clauses[v] {
if num_good_so_far[c] == 1 {
residual_.push(c);
residual_indices[c] = Some(residual_.len() - 1);
}
num_good_so_far[c] -= 1;
}
for &c in &p_clauses[v] {
num_good_so_far[c] += 1;
if num_good_so_far[c] == 1 {
let i = residual_indices[c].take().unwrap();
let last = residual_.pop().unwrap();
if i < residual_.len() {
residual_[i] = last;
residual_indices[last] = Some(i);
}
}
}
}
variables[v] = !variables[v];
} else {
break;
}
rounds += 1;
if rounds >= max_num_rounds {
return Ok(None);
}
}
return Ok(Some(Solution { variables }));
}
#[cfg(feature = "cuda")]
mod gpu_optimisation {
use super::*;
use cudarc::driver::*;
use std::{collections::HashMap, sync::Arc};
use tig_challenges::CudaKernel;
// Set KERNEL to None if the algorithm only has a CPU implementation
pub const KERNEL: Option<CudaKernel> = None;
// Important! Your GPU and CPU versions of the algorithm should return the same result
pub fn cuda_solve_challenge(
challenge: &Challenge,
dev: &Arc<CudaDevice>,
mut funcs: HashMap<&'static str, CudaFunction>,
) -> anyhow::Result<Option<Solution>> {
solve_challenge(challenge)
}
}
#[cfg(feature = "cuda")]
pub use gpu_optimisation::{cuda_solve_challenge, KERNEL};

View File

@@ -1,11 +1,7 @@
/*!
Copyright [year copyright work created] [name of copyright owner]
Copyright [yyyy] [name of copyright owner]
Identity of Submitter [name of person or entity that submits the Work to TIG]
UAI [UAI (if applicable)]
Licensed under the TIG Inbound Game License v2.0 or (at your option) any later
Licensed under the TIG Inbound Game License v1.0 or (at your option) any later
version (the "License"); you may not use this file except in compliance with the
License. You may obtain a copy of the License at
@@ -17,24 +13,6 @@ CONDITIONS OF ANY KIND, either express or implied. See the License for the speci
language governing permissions and limitations under the License.
*/
// REMOVE BELOW SECTION IF UNUSED
/*
REFERENCES AND ACKNOWLEDGMENTS
This implementation is based on or inspired by existing work. Citations and
acknowledgments below:
1. Academic Papers:
- [Author(s), "Paper Title", DOI (if available)]
2. Code References:
- [Author(s), URL]
3. Other:
- [Author(s), Details]
*/
// TIG's UI uses the pattern `tig_challenges::<challenge_name>` to automatically detect your algorithm's challenge
use anyhow::{anyhow, Result};
use tig_challenges::satisfiability::{Challenge, Solution};

View File

@@ -1,11 +1,7 @@
/*!
Copyright [year copyright work created] [name of copyright owner]
Copyright [yyyy] [name of copyright owner]
Identity of Submitter [name of person or entity that submits the Work to TIG]
UAI [UAI (if applicable)]
Licensed under the TIG Inbound Game License v2.0 or (at your option) any later
Licensed under the TIG Inbound Game License v1.0 or (at your option) any later
version (the "License"); you may not use this file except in compliance with the
License. You may obtain a copy of the License at
@@ -17,24 +13,6 @@ CONDITIONS OF ANY KIND, either express or implied. See the License for the speci
language governing permissions and limitations under the License.
*/
// REMOVE BELOW SECTION IF UNUSED
/*
REFERENCES AND ACKNOWLEDGMENTS
This implementation is based on or inspired by existing work. Citations and
acknowledgments below:
1. Academic Papers:
- [Author(s), "Paper Title", DOI (if available)]
2. Code References:
- [Author(s), URL]
3. Other:
- [Author(s), Details]
*/
// TIG's UI uses the pattern `tig_challenges::<challenge_name>` to automatically detect your algorithm's challenge
use anyhow::{anyhow, Result};
use tig_challenges::vector_search::{Challenge, Solution};

View File

@@ -0,0 +1,240 @@
/*!
Copyright 2024 CodeAlchemist
Licensed under the TIG Benchmarker Outbound Game License v1.0 (the "License"); you
may not use this file except in compliance with the License. You may obtain a copy
of the License at
https://github.com/tig-foundation/tig-monorepo/tree/main/docs/licenses
Unless required by applicable law or agreed to in writing, software distributed
under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
CONDITIONS OF ANY KIND, either express or implied. See the License for the specific
language governing permissions and limitations under the License.
*/
use rand::{rngs::StdRng, Rng, SeedableRng};
use tig_challenges::vehicle_routing::{Challenge, Solution};
pub fn solve_challenge(challenge: &Challenge) -> anyhow::Result<Option<Solution>> {
let max_dist: f32 = challenge.distance_matrix[0].iter().sum::<i32>() as f32;
let p = challenge.max_total_distance as f32 / max_dist;
if p < 0.57 {
return Ok(None)
}
let mut best_solution: Option<Solution> = None;
let mut best_cost = std::i32::MAX;
const INITIAL_TEMPERATURE: f32 = 2.0;
const COOLING_RATE: f32 = 0.995;
const ITERATIONS_PER_TEMPERATURE: usize = 2;
let num_nodes = challenge.difficulty.num_nodes;
let mut current_params = vec![1.0; num_nodes];
let mut savings_list = create_initial_savings_list(challenge);
recompute_and_sort_savings(&mut savings_list, &current_params, challenge);
let mut current_solution = create_solution(challenge, &current_params, &savings_list);
let mut current_cost = calculate_solution_cost(&current_solution, &challenge.distance_matrix);
if current_cost <= challenge.max_total_distance {
return Ok(Some(current_solution));
}
if (current_cost as f32 * 0.96) > challenge.max_total_distance as f32 {
return Ok(None);
}
let mut temperature = INITIAL_TEMPERATURE;
let mut rng = StdRng::seed_from_u64(u64::from_le_bytes(challenge.seed[..8].try_into().unwrap()));
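// Simulated annealing over the per-node weights in current_params: each candidate
// perturbs the weights, rebuilds a Clarke-Wright-style solution from the re-scored
// savings list, polishes it with 2-opt, and is accepted by the Metropolis criterion.
// The search returns early as soon as a route plan meets max_total_distance.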
while temperature > 1.0 {
for _ in 0..ITERATIONS_PER_TEMPERATURE {
let neighbor_params = generate_neighbor(&current_params, &mut rng);
recompute_and_sort_savings(&mut savings_list, &neighbor_params, challenge);
let mut neighbor_solution = create_solution(challenge, &neighbor_params, &savings_list);
apply_local_search_until_no_improvement(&mut neighbor_solution, &challenge.distance_matrix);
let neighbor_cost = calculate_solution_cost(&neighbor_solution, &challenge.distance_matrix);
let delta = neighbor_cost as f32 - current_cost as f32;
if delta < 0.0 || rng.gen::<f32>() < (-delta / temperature).exp() {
current_params = neighbor_params;
current_cost = neighbor_cost;
current_solution = neighbor_solution;
if current_cost < best_cost {
best_cost = current_cost;
best_solution = Some(Solution {
routes: current_solution.routes.clone(),
});
}
}
if best_cost <= challenge.max_total_distance {
return Ok(best_solution);
}
}
temperature *= COOLING_RATE;
}
Ok(best_solution)
}
#[inline]
fn create_initial_savings_list(challenge: &Challenge) -> Vec<(f32, u8, u8)> {
let num_nodes = challenge.difficulty.num_nodes;
let capacity = ((num_nodes - 1) * (num_nodes - 2)) / 2;
let mut savings = Vec::with_capacity(capacity);
for i in 1..num_nodes {
for j in (i + 1)..num_nodes {
savings.push((0.0, i as u8, j as u8));
}
}
savings
}
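// Parameterised savings: for a candidate merge of nodes i and j the score is
// params[i]*d(0,i) + params[j]*d(j,0) - params[i]*params[j]*d(i,j), a weighted
// variant of the classic Clarke-Wright saving, sorted in descending order.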
#[inline]
fn recompute_and_sort_savings(savings_list: &mut [(f32, u8, u8)], params: &[f32], challenge: &Challenge) {
let distance_matrix = &challenge.distance_matrix;
for (score, i, j) in savings_list.iter_mut() {
let i = *i as usize;
let j = *j as usize;
*score = params[i] * distance_matrix[0][i] as f32 +
params[j] * distance_matrix[j][0] as f32 -
params[i] * params[j] * distance_matrix[i][j] as f32;
}
savings_list.sort_unstable_by(|a, b| b.0.partial_cmp(&a.0).unwrap());
}
#[inline]
fn generate_neighbor<R: Rng + ?Sized>(current: &[f32], rng: &mut R) -> Vec<f32> {
current.iter().map(|&param| {
let delta = rng.gen_range(-0.1..=0.1);
(param + delta).clamp(0.0, 2.0)
}).collect()
}
#[inline]
fn apply_local_search_until_no_improvement(solution: &mut Solution, distance_matrix: &Vec<Vec<i32>>) {
let mut improved = true;
while improved {
improved = false;
for route in &mut solution.routes {
if two_opt(route, distance_matrix) {
improved = true;
}
}
}
}
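// 2-opt move: reversing the segment route[i..=j] replaces edges (i-1,i) and (j,j+1)
// with (i-1,j) and (i,j+1); the reversal is applied whenever it shortens the route.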
#[inline]
fn two_opt(route: &mut Vec<usize>, distance_matrix: &Vec<Vec<i32>>) -> bool {
let n = route.len();
let mut improved = false;
for i in 1..n - 2 {
for j in i + 1..n - 1 {
let current_distance = distance_matrix[route[i - 1]][route[i]]
+ distance_matrix[route[j]][route[j + 1]];
let new_distance = distance_matrix[route[i - 1]][route[j]]
+ distance_matrix[route[i]][route[j + 1]];
if new_distance < current_distance {
route[i..=j].reverse();
improved = true;
}
}
}
improved
}
#[inline]
fn calculate_solution_cost(solution: &Solution, distance_matrix: &Vec<Vec<i32>>) -> i32 {
solution.routes.iter().map(|route| {
route.windows(2).map(|w| distance_matrix[w[0]][w[1]]).sum::<i32>()
}).sum()
}
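// Clarke-Wright style construction: start with one route per customer, then walk the
// sorted savings list and merge routes end-to-end whenever the combined demand fits
// max_capacity. Routes are stored under both of their endpoints so eligible merges
// are found by index lookup; route_demands tracks each route's total demand at its
// endpoints.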
#[inline]
fn create_solution(challenge: &Challenge, params: &[f32], savings_list: &[(f32, u8, u8)]) -> Solution {
let distance_matrix = &challenge.distance_matrix;
let max_capacity = challenge.max_capacity;
let num_nodes = challenge.difficulty.num_nodes;
let demands = &challenge.demands;
let mut routes = vec![None; num_nodes];
for i in 1..num_nodes {
routes[i] = Some(vec![i]);
}
let mut route_demands = demands.clone();
for &(_, i, j) in savings_list {
let (i, j) = (i as usize, j as usize);
if let (Some(left_route), Some(right_route)) = (routes[i].as_ref(), routes[j].as_ref()) {
let (left_start, left_end) = (*left_route.first().unwrap(), *left_route.last().unwrap());
let (right_start, right_end) = (*right_route.first().unwrap(), *right_route.last().unwrap());
if left_start == right_start || route_demands[left_start] + route_demands[right_start] > max_capacity {
continue;
}
let mut new_route = routes[i].take().unwrap();
let mut right_route = routes[j].take().unwrap();
if left_start == i { new_route.reverse(); }
if right_end == j { right_route.reverse(); }
new_route.extend(right_route);
let combined_demand = route_demands[left_start] + route_demands[right_start];
let new_start = new_route[0];
let new_end = *new_route.last().unwrap();
route_demands[new_start] = combined_demand;
route_demands[new_end] = combined_demand;
routes[new_start] = Some(new_route.clone());
routes[new_end] = Some(new_route);
}
}
Solution {
routes: routes
.into_iter()
.enumerate()
.filter_map(|(i, route)| route.filter(|r| r[0] == i))
.map(|mut route| {
route.insert(0, 0);
route.push(0);
route
})
.collect(),
}
}
#[cfg(feature = "cuda")]
mod gpu_optimisation {
use super::*;
use cudarc::driver::*;
use std::{collections::HashMap, sync::Arc};
use tig_challenges::CudaKernel;
// Set KERNEL to None if the algorithm only has a CPU implementation
pub const KERNEL: Option<CudaKernel> = None;
// Important! Your GPU and CPU versions of the algorithm should return the same result
pub fn cuda_solve_challenge(
challenge: &Challenge,
dev: &Arc<CudaDevice>,
mut funcs: HashMap<&'static str, CudaFunction>,
) -> anyhow::Result<Option<Solution>> {
solve_challenge(challenge)
}
}
#[cfg(feature = "cuda")]
pub use gpu_optimisation::{cuda_solve_challenge, KERNEL};

View File

@@ -0,0 +1,240 @@
/*!
Copyright 2024 CodeAlchemist
Licensed under the TIG Commercial License v1.0 (the "License"); you
may not use this file except in compliance with the License. You may obtain a copy
of the License at
https://github.com/tig-foundation/tig-monorepo/tree/main/docs/licenses
Unless required by applicable law or agreed to in writing, software distributed
under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
CONDITIONS OF ANY KIND, either express or implied. See the License for the specific
language governing permissions and limitations under the License.
*/
use rand::{rngs::StdRng, Rng, SeedableRng};
use tig_challenges::vehicle_routing::{Challenge, Solution};
pub fn solve_challenge(challenge: &Challenge) -> anyhow::Result<Option<Solution>> {
let max_dist: f32 = challenge.distance_matrix[0].iter().sum::<i32>() as f32;
let p = challenge.max_total_distance as f32 / max_dist;
if p < 0.57 {
return Ok(None)
}
let mut best_solution: Option<Solution> = None;
let mut best_cost = std::i32::MAX;
const INITIAL_TEMPERATURE: f32 = 2.0;
const COOLING_RATE: f32 = 0.995;
const ITERATIONS_PER_TEMPERATURE: usize = 2;
let num_nodes = challenge.difficulty.num_nodes;
let mut current_params = vec![1.0; num_nodes];
let mut savings_list = create_initial_savings_list(challenge);
recompute_and_sort_savings(&mut savings_list, &current_params, challenge);
let mut current_solution = create_solution(challenge, &current_params, &savings_list);
let mut current_cost = calculate_solution_cost(&current_solution, &challenge.distance_matrix);
if current_cost <= challenge.max_total_distance {
return Ok(Some(current_solution));
}
if (current_cost as f32 * 0.96) > challenge.max_total_distance as f32 {
return Ok(None);
}
let mut temperature = INITIAL_TEMPERATURE;
let mut rng = StdRng::seed_from_u64(u64::from_le_bytes(challenge.seed[..8].try_into().unwrap()));
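// Simulated annealing over the per-node savings weights, with 2-opt local search
// applied to each candidate solution; worse candidates are accepted with probability
// exp(-delta / temperature), and the search returns early once a feasible plan is found.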
while temperature > 1.0 {
for _ in 0..ITERATIONS_PER_TEMPERATURE {
let neighbor_params = generate_neighbor(&current_params, &mut rng);
recompute_and_sort_savings(&mut savings_list, &neighbor_params, challenge);
let mut neighbor_solution = create_solution(challenge, &neighbor_params, &savings_list);
apply_local_search_until_no_improvement(&mut neighbor_solution, &challenge.distance_matrix);
let neighbor_cost = calculate_solution_cost(&neighbor_solution, &challenge.distance_matrix);
let delta = neighbor_cost as f32 - current_cost as f32;
if delta < 0.0 || rng.gen::<f32>() < (-delta / temperature).exp() {
current_params = neighbor_params;
current_cost = neighbor_cost;
current_solution = neighbor_solution;
if current_cost < best_cost {
best_cost = current_cost;
best_solution = Some(Solution {
routes: current_solution.routes.clone(),
});
}
}
if best_cost <= challenge.max_total_distance {
return Ok(best_solution);
}
}
temperature *= COOLING_RATE;
}
Ok(best_solution)
}
#[inline]
fn create_initial_savings_list(challenge: &Challenge) -> Vec<(f32, u8, u8)> {
let num_nodes = challenge.difficulty.num_nodes;
let capacity = ((num_nodes - 1) * (num_nodes - 2)) / 2;
let mut savings = Vec::with_capacity(capacity);
for i in 1..num_nodes {
for j in (i + 1)..num_nodes {
savings.push((0.0, i as u8, j as u8));
}
}
savings
}
#[inline]
fn recompute_and_sort_savings(savings_list: &mut [(f32, u8, u8)], params: &[f32], challenge: &Challenge) {
let distance_matrix = &challenge.distance_matrix;
for (score, i, j) in savings_list.iter_mut() {
let i = *i as usize;
let j = *j as usize;
*score = params[i] * distance_matrix[0][i] as f32 +
params[j] * distance_matrix[j][0] as f32 -
params[i] * params[j] * distance_matrix[i][j] as f32;
}
savings_list.sort_unstable_by(|a, b| b.0.partial_cmp(&a.0).unwrap());
}
#[inline]
fn generate_neighbor<R: Rng + ?Sized>(current: &[f32], rng: &mut R) -> Vec<f32> {
current.iter().map(|&param| {
let delta = rng.gen_range(-0.1..=0.1);
(param + delta).clamp(0.0, 2.0)
}).collect()
}
#[inline]
fn apply_local_search_until_no_improvement(solution: &mut Solution, distance_matrix: &Vec<Vec<i32>>) {
let mut improved = true;
while improved {
improved = false;
for route in &mut solution.routes {
if two_opt(route, distance_matrix) {
improved = true;
}
}
}
}
#[inline]
fn two_opt(route: &mut Vec<usize>, distance_matrix: &Vec<Vec<i32>>) -> bool {
let n = route.len();
let mut improved = false;
for i in 1..n - 2 {
for j in i + 1..n - 1 {
let current_distance = distance_matrix[route[i - 1]][route[i]]
+ distance_matrix[route[j]][route[j + 1]];
let new_distance = distance_matrix[route[i - 1]][route[j]]
+ distance_matrix[route[i]][route[j + 1]];
if new_distance < current_distance {
route[i..=j].reverse();
improved = true;
}
}
}
improved
}
#[inline]
fn calculate_solution_cost(solution: &Solution, distance_matrix: &Vec<Vec<i32>>) -> i32 {
solution.routes.iter().map(|route| {
route.windows(2).map(|w| distance_matrix[w[0]][w[1]]).sum::<i32>()
}).sum()
}
#[inline]
fn create_solution(challenge: &Challenge, params: &[f32], savings_list: &[(f32, u8, u8)]) -> Solution {
let distance_matrix = &challenge.distance_matrix;
let max_capacity = challenge.max_capacity;
let num_nodes = challenge.difficulty.num_nodes;
let demands = &challenge.demands;
let mut routes = vec![None; num_nodes];
for i in 1..num_nodes {
routes[i] = Some(vec![i]);
}
let mut route_demands = demands.clone();
for &(_, i, j) in savings_list {
let (i, j) = (i as usize, j as usize);
if let (Some(left_route), Some(right_route)) = (routes[i].as_ref(), routes[j].as_ref()) {
let (left_start, left_end) = (*left_route.first().unwrap(), *left_route.last().unwrap());
let (right_start, right_end) = (*right_route.first().unwrap(), *right_route.last().unwrap());
if left_start == right_start || route_demands[left_start] + route_demands[right_start] > max_capacity {
continue;
}
let mut new_route = routes[i].take().unwrap();
let mut right_route = routes[j].take().unwrap();
if left_start == i { new_route.reverse(); }
if right_end == j { right_route.reverse(); }
new_route.extend(right_route);
let combined_demand = route_demands[left_start] + route_demands[right_start];
let new_start = new_route[0];
let new_end = *new_route.last().unwrap();
route_demands[new_start] = combined_demand;
route_demands[new_end] = combined_demand;
routes[new_start] = Some(new_route.clone());
routes[new_end] = Some(new_route);
}
}
Solution {
routes: routes
.into_iter()
.enumerate()
.filter_map(|(i, route)| route.filter(|r| r[0] == i))
.map(|mut route| {
route.insert(0, 0);
route.push(0);
route
})
.collect(),
}
}
#[cfg(feature = "cuda")]
mod gpu_optimisation {
use super::*;
use cudarc::driver::*;
use std::{collections::HashMap, sync::Arc};
use tig_challenges::CudaKernel;
// Set KERNEL to None if the algorithm only has a CPU implementation
pub const KERNEL: Option<CudaKernel> = None;
// Important! Your GPU and CPU versions of the algorithm should return the same result
pub fn cuda_solve_challenge(
challenge: &Challenge,
dev: &Arc<CudaDevice>,
mut funcs: HashMap<&'static str, CudaFunction>,
) -> anyhow::Result<Option<Solution>> {
solve_challenge(challenge)
}
}
#[cfg(feature = "cuda")]
pub use gpu_optimisation::{cuda_solve_challenge, KERNEL};

View File

@@ -0,0 +1,240 @@
/*!
Copyright 2024 CodeAlchemist
Licensed under the TIG Inbound Game License v1.0 or (at your option) any later
version (the "License"); you may not use this file except in compliance with the
License. You may obtain a copy of the License at
https://github.com/tig-foundation/tig-monorepo/tree/main/docs/licenses
Unless required by applicable law or agreed to in writing, software distributed
under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
CONDITIONS OF ANY KIND, either express or implied. See the License for the specific
language governing permissions and limitations under the License.
*/
use rand::{rngs::StdRng, Rng, SeedableRng};
use tig_challenges::vehicle_routing::{Challenge, Solution};
pub fn solve_challenge(challenge: &Challenge) -> anyhow::Result<Option<Solution>> {
let max_dist: f32 = challenge.distance_matrix[0].iter().sum::<i32>() as f32;
let p = challenge.max_total_distance as f32 / max_dist;
if p < 0.57 {
return Ok(None)
}
let mut best_solution: Option<Solution> = None;
let mut best_cost = std::i32::MAX;
const INITIAL_TEMPERATURE: f32 = 2.0;
const COOLING_RATE: f32 = 0.995;
const ITERATIONS_PER_TEMPERATURE: usize = 2;
let num_nodes = challenge.difficulty.num_nodes;
let mut current_params = vec![1.0; num_nodes];
let mut savings_list = create_initial_savings_list(challenge);
recompute_and_sort_savings(&mut savings_list, &current_params, challenge);
let mut current_solution = create_solution(challenge, &current_params, &savings_list);
let mut current_cost = calculate_solution_cost(&current_solution, &challenge.distance_matrix);
if current_cost <= challenge.max_total_distance {
return Ok(Some(current_solution));
}
if (current_cost as f32 * 0.96) > challenge.max_total_distance as f32 {
return Ok(None);
}
let mut temperature = INITIAL_TEMPERATURE;
let mut rng = StdRng::seed_from_u64(u64::from_le_bytes(challenge.seed[..8].try_into().unwrap()));
while temperature > 1.0 {
for _ in 0..ITERATIONS_PER_TEMPERATURE {
let neighbor_params = generate_neighbor(&current_params, &mut rng);
recompute_and_sort_savings(&mut savings_list, &neighbor_params, challenge);
let mut neighbor_solution = create_solution(challenge, &neighbor_params, &savings_list);
apply_local_search_until_no_improvement(&mut neighbor_solution, &challenge.distance_matrix);
let neighbor_cost = calculate_solution_cost(&neighbor_solution, &challenge.distance_matrix);
let delta = neighbor_cost as f32 - current_cost as f32;
if delta < 0.0 || rng.gen::<f32>() < (-delta / temperature).exp() {
current_params = neighbor_params;
current_cost = neighbor_cost;
current_solution = neighbor_solution;
if current_cost < best_cost {
best_cost = current_cost;
best_solution = Some(Solution {
routes: current_solution.routes.clone(),
});
}
}
if best_cost <= challenge.max_total_distance {
return Ok(best_solution);
}
}
temperature *= COOLING_RATE;
}
Ok(best_solution)
}
#[inline]
fn create_initial_savings_list(challenge: &Challenge) -> Vec<(f32, u8, u8)> {
let num_nodes = challenge.difficulty.num_nodes;
let capacity = ((num_nodes - 1) * (num_nodes - 2)) / 2;
let mut savings = Vec::with_capacity(capacity);
for i in 1..num_nodes {
for j in (i + 1)..num_nodes {
savings.push((0.0, i as u8, j as u8));
}
}
savings
}
#[inline]
fn recompute_and_sort_savings(savings_list: &mut [(f32, u8, u8)], params: &[f32], challenge: &Challenge) {
let distance_matrix = &challenge.distance_matrix;
for (score, i, j) in savings_list.iter_mut() {
let i = *i as usize;
let j = *j as usize;
*score = params[i] * distance_matrix[0][i] as f32 +
params[j] * distance_matrix[j][0] as f32 -
params[i] * params[j] * distance_matrix[i][j] as f32;
}
savings_list.sort_unstable_by(|a, b| b.0.partial_cmp(&a.0).unwrap());
}
#[inline]
fn generate_neighbor<R: Rng + ?Sized>(current: &[f32], rng: &mut R) -> Vec<f32> {
current.iter().map(|&param| {
let delta = rng.gen_range(-0.1..=0.1);
(param + delta).clamp(0.0, 2.0)
}).collect()
}
#[inline]
fn apply_local_search_until_no_improvement(solution: &mut Solution, distance_matrix: &Vec<Vec<i32>>) {
let mut improved = true;
while improved {
improved = false;
for route in &mut solution.routes {
if two_opt(route, distance_matrix) {
improved = true;
}
}
}
}
#[inline]
fn two_opt(route: &mut Vec<usize>, distance_matrix: &Vec<Vec<i32>>) -> bool {
let n = route.len();
let mut improved = false;
for i in 1..n - 2 {
for j in i + 1..n - 1 {
let current_distance = distance_matrix[route[i - 1]][route[i]]
+ distance_matrix[route[j]][route[j + 1]];
let new_distance = distance_matrix[route[i - 1]][route[j]]
+ distance_matrix[route[i]][route[j + 1]];
if new_distance < current_distance {
route[i..=j].reverse();
improved = true;
}
}
}
improved
}
#[inline]
fn calculate_solution_cost(solution: &Solution, distance_matrix: &Vec<Vec<i32>>) -> i32 {
solution.routes.iter().map(|route| {
route.windows(2).map(|w| distance_matrix[w[0]][w[1]]).sum::<i32>()
}).sum()
}
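// Savings-based route construction: merge single-customer routes end-to-end in
// descending savings order, subject to the vehicle capacity constraint.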
#[inline]
fn create_solution(challenge: &Challenge, params: &[f32], savings_list: &[(f32, u8, u8)]) -> Solution {
let distance_matrix = &challenge.distance_matrix;
let max_capacity = challenge.max_capacity;
let num_nodes = challenge.difficulty.num_nodes;
let demands = &challenge.demands;
let mut routes = vec![None; num_nodes];
for i in 1..num_nodes {
routes[i] = Some(vec![i]);
}
let mut route_demands = demands.clone();
for &(_, i, j) in savings_list {
let (i, j) = (i as usize, j as usize);
if let (Some(left_route), Some(right_route)) = (routes[i].as_ref(), routes[j].as_ref()) {
let (left_start, left_end) = (*left_route.first().unwrap(), *left_route.last().unwrap());
let (right_start, right_end) = (*right_route.first().unwrap(), *right_route.last().unwrap());
if left_start == right_start || route_demands[left_start] + route_demands[right_start] > max_capacity {
continue;
}
let mut new_route = routes[i].take().unwrap();
let mut right_route = routes[j].take().unwrap();
if left_start == i { new_route.reverse(); }
if right_end == j { right_route.reverse(); }
new_route.extend(right_route);
let combined_demand = route_demands[left_start] + route_demands[right_start];
let new_start = new_route[0];
let new_end = *new_route.last().unwrap();
route_demands[new_start] = combined_demand;
route_demands[new_end] = combined_demand;
routes[new_start] = Some(new_route.clone());
routes[new_end] = Some(new_route);
}
}
Solution {
routes: routes
.into_iter()
.enumerate()
.filter_map(|(i, route)| route.filter(|r| r[0] == i))
.map(|mut route| {
route.insert(0, 0);
route.push(0);
route
})
.collect(),
}
}
#[cfg(feature = "cuda")]
mod gpu_optimisation {
use super::*;
use cudarc::driver::*;
use std::{collections::HashMap, sync::Arc};
use tig_challenges::CudaKernel;
// Set KERNEL to None if the algorithm only has a CPU implementation
pub const KERNEL: Option<CudaKernel> = None;
// Important! Your GPU and CPU versions of the algorithm should return the same result
pub fn cuda_solve_challenge(
challenge: &Challenge,
dev: &Arc<CudaDevice>,
mut funcs: HashMap<&'static str, CudaFunction>,
) -> anyhow::Result<Option<Solution>> {
solve_challenge(challenge)
}
}
#[cfg(feature = "cuda")]
pub use gpu_optimisation::{cuda_solve_challenge, KERNEL};

View File

@@ -0,0 +1,240 @@
/*!
Copyright 2024 CodeAlchemist
Licensed under the TIG Innovator Outbound Game License v1.0 (the "License"); you
may not use this file except in compliance with the License. You may obtain a copy
of the License at
https://github.com/tig-foundation/tig-monorepo/tree/main/docs/licenses
Unless required by applicable law or agreed to in writing, software distributed
under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
CONDITIONS OF ANY KIND, either express or implied. See the License for the specific
language governing permissions and limitations under the License.
*/
use rand::{rngs::StdRng, Rng, SeedableRng};
use tig_challenges::vehicle_routing::{Challenge, Solution};
pub fn solve_challenge(challenge: &Challenge) -> anyhow::Result<Option<Solution>> {
let max_dist: f32 = challenge.distance_matrix[0].iter().sum::<i32>() as f32;
let p = challenge.max_total_distance as f32 / max_dist;
if p < 0.57 {
return Ok(None)
}
let mut best_solution: Option<Solution> = None;
let mut best_cost = std::i32::MAX;
const INITIAL_TEMPERATURE: f32 = 2.0;
const COOLING_RATE: f32 = 0.995;
const ITERATIONS_PER_TEMPERATURE: usize = 2;
let num_nodes = challenge.difficulty.num_nodes;
let mut current_params = vec![1.0; num_nodes];
let mut savings_list = create_initial_savings_list(challenge);
recompute_and_sort_savings(&mut savings_list, &current_params, challenge);
let mut current_solution = create_solution(challenge, &current_params, &savings_list);
let mut current_cost = calculate_solution_cost(&current_solution, &challenge.distance_matrix);
if current_cost <= challenge.max_total_distance {
return Ok(Some(current_solution));
}
if (current_cost as f32 * 0.96) > challenge.max_total_distance as f32 {
return Ok(None);
}
let mut temperature = INITIAL_TEMPERATURE;
let mut rng = StdRng::seed_from_u64(u64::from_le_bytes(challenge.seed[..8].try_into().unwrap()));
while temperature > 1.0 {
for _ in 0..ITERATIONS_PER_TEMPERATURE {
let neighbor_params = generate_neighbor(&current_params, &mut rng);
recompute_and_sort_savings(&mut savings_list, &neighbor_params, challenge);
let mut neighbor_solution = create_solution(challenge, &neighbor_params, &savings_list);
apply_local_search_until_no_improvement(&mut neighbor_solution, &challenge.distance_matrix);
let neighbor_cost = calculate_solution_cost(&neighbor_solution, &challenge.distance_matrix);
let delta = neighbor_cost as f32 - current_cost as f32;
if delta < 0.0 || rng.gen::<f32>() < (-delta / temperature).exp() {
current_params = neighbor_params;
current_cost = neighbor_cost;
current_solution = neighbor_solution;
if current_cost < best_cost {
best_cost = current_cost;
best_solution = Some(Solution {
routes: current_solution.routes.clone(),
});
}
}
if best_cost <= challenge.max_total_distance {
return Ok(best_solution);
}
}
temperature *= COOLING_RATE;
}
Ok(best_solution)
}
#[inline]
fn create_initial_savings_list(challenge: &Challenge) -> Vec<(f32, u8, u8)> {
let num_nodes = challenge.difficulty.num_nodes;
let capacity = ((num_nodes - 1) * (num_nodes - 2)) / 2;
let mut savings = Vec::with_capacity(capacity);
for i in 1..num_nodes {
for j in (i + 1)..num_nodes {
savings.push((0.0, i as u8, j as u8));
}
}
savings
}
#[inline]
fn recompute_and_sort_savings(savings_list: &mut [(f32, u8, u8)], params: &[f32], challenge: &Challenge) {
let distance_matrix = &challenge.distance_matrix;
for (score, i, j) in savings_list.iter_mut() {
let i = *i as usize;
let j = *j as usize;
*score = params[i] * distance_matrix[0][i] as f32 +
params[j] * distance_matrix[j][0] as f32 -
params[i] * params[j] * distance_matrix[i][j] as f32;
}
savings_list.sort_unstable_by(|a, b| b.0.partial_cmp(&a.0).unwrap());
}
#[inline]
fn generate_neighbor<R: Rng + ?Sized>(current: &[f32], rng: &mut R) -> Vec<f32> {
current.iter().map(|&param| {
let delta = rng.gen_range(-0.1..=0.1);
(param + delta).clamp(0.0, 2.0)
}).collect()
}
#[inline]
fn apply_local_search_until_no_improvement(solution: &mut Solution, distance_matrix: &Vec<Vec<i32>>) {
let mut improved = true;
while improved {
improved = false;
for route in &mut solution.routes {
if two_opt(route, distance_matrix) {
improved = true;
}
}
}
}
#[inline]
fn two_opt(route: &mut Vec<usize>, distance_matrix: &Vec<Vec<i32>>) -> bool {
let n = route.len();
let mut improved = false;
for i in 1..n - 2 {
for j in i + 1..n - 1 {
let current_distance = distance_matrix[route[i - 1]][route[i]]
+ distance_matrix[route[j]][route[j + 1]];
let new_distance = distance_matrix[route[i - 1]][route[j]]
+ distance_matrix[route[i]][route[j + 1]];
if new_distance < current_distance {
route[i..=j].reverse();
improved = true;
}
}
}
improved
}
#[inline]
fn calculate_solution_cost(solution: &Solution, distance_matrix: &Vec<Vec<i32>>) -> i32 {
solution.routes.iter().map(|route| {
route.windows(2).map(|w| distance_matrix[w[0]][w[1]]).sum::<i32>()
}).sum()
}
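// Build routes by merging endpoint routes along the sorted savings list; each kept
// route is finally wrapped with the depot (node 0) at both ends.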
#[inline]
fn create_solution(challenge: &Challenge, params: &[f32], savings_list: &[(f32, u8, u8)]) -> Solution {
let distance_matrix = &challenge.distance_matrix;
let max_capacity = challenge.max_capacity;
let num_nodes = challenge.difficulty.num_nodes;
let demands = &challenge.demands;
let mut routes = vec![None; num_nodes];
for i in 1..num_nodes {
routes[i] = Some(vec![i]);
}
let mut route_demands = demands.clone();
for &(_, i, j) in savings_list {
let (i, j) = (i as usize, j as usize);
if let (Some(left_route), Some(right_route)) = (routes[i].as_ref(), routes[j].as_ref()) {
let (left_start, left_end) = (*left_route.first().unwrap(), *left_route.last().unwrap());
let (right_start, right_end) = (*right_route.first().unwrap(), *right_route.last().unwrap());
if left_start == right_start || route_demands[left_start] + route_demands[right_start] > max_capacity {
continue;
}
let mut new_route = routes[i].take().unwrap();
let mut right_route = routes[j].take().unwrap();
if left_start == i { new_route.reverse(); }
if right_end == j { right_route.reverse(); }
new_route.extend(right_route);
let combined_demand = route_demands[left_start] + route_demands[right_start];
let new_start = new_route[0];
let new_end = *new_route.last().unwrap();
route_demands[new_start] = combined_demand;
route_demands[new_end] = combined_demand;
routes[new_start] = Some(new_route.clone());
routes[new_end] = Some(new_route);
}
}
Solution {
routes: routes
.into_iter()
.enumerate()
.filter_map(|(i, route)| route.filter(|r| r[0] == i))
.map(|mut route| {
route.insert(0, 0);
route.push(0);
route
})
.collect(),
}
}
#[cfg(feature = "cuda")]
mod gpu_optimisation {
use super::*;
use cudarc::driver::*;
use std::{collections::HashMap, sync::Arc};
use tig_challenges::CudaKernel;
// Set KERNEL to None if the algorithm only has a CPU implementation
pub const KERNEL: Option<CudaKernel> = None;
// Important! Your GPU and CPU versions of the algorithm should return the same result
pub fn cuda_solve_challenge(
challenge: &Challenge,
dev: &Arc<CudaDevice>,
mut funcs: HashMap<&'static str, CudaFunction>,
) -> anyhow::Result<Option<Solution>> {
solve_challenge(challenge)
}
}
#[cfg(feature = "cuda")]
pub use gpu_optimisation::{cuda_solve_challenge, KERNEL};

View File

@@ -0,0 +1,4 @@
mod benchmarker_outbound;
pub use benchmarker_outbound::solve_challenge;
#[cfg(feature = "cuda")]
pub use benchmarker_outbound::{cuda_solve_challenge, KERNEL};

View File

@@ -0,0 +1,240 @@
/*!
Copyright 2024 CodeAlchemist
Licensed under the TIG Open Data License v1.0 or (at your option) any later version
(the "License"); you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https://github.com/tig-foundation/tig-monorepo/tree/main/docs/licenses
Unless required by applicable law or agreed to in writing, software distributed
under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
CONDITIONS OF ANY KIND, either express or implied. See the License for the specific
language governing permissions and limitations under the License.
*/
use rand::{rngs::StdRng, Rng, SeedableRng};
use tig_challenges::vehicle_routing::{Challenge, Solution};
pub fn solve_challenge(challenge: &Challenge) -> anyhow::Result<Option<Solution>> {
let max_dist: f32 = challenge.distance_matrix[0].iter().sum::<i32>() as f32;
let p = challenge.max_total_distance as f32 / max_dist;
if p < 0.57 {
return Ok(None)
}
let mut best_solution: Option<Solution> = None;
let mut best_cost = std::i32::MAX;
const INITIAL_TEMPERATURE: f32 = 2.0;
const COOLING_RATE: f32 = 0.995;
const ITERATIONS_PER_TEMPERATURE: usize = 2;
let num_nodes = challenge.difficulty.num_nodes;
let mut current_params = vec![1.0; num_nodes];
let mut savings_list = create_initial_savings_list(challenge);
recompute_and_sort_savings(&mut savings_list, &current_params, challenge);
let mut current_solution = create_solution(challenge, &current_params, &savings_list);
let mut current_cost = calculate_solution_cost(&current_solution, &challenge.distance_matrix);
if current_cost <= challenge.max_total_distance {
return Ok(Some(current_solution));
}
if (current_cost as f32 * 0.96) > challenge.max_total_distance as f32 {
return Ok(None);
}
let mut temperature = INITIAL_TEMPERATURE;
let mut rng = StdRng::seed_from_u64(u64::from_le_bytes(challenge.seed[..8].try_into().unwrap()));
while temperature > 1.0 {
for _ in 0..ITERATIONS_PER_TEMPERATURE {
let neighbor_params = generate_neighbor(&current_params, &mut rng);
recompute_and_sort_savings(&mut savings_list, &neighbor_params, challenge);
let mut neighbor_solution = create_solution(challenge, &neighbor_params, &savings_list);
apply_local_search_until_no_improvement(&mut neighbor_solution, &challenge.distance_matrix);
let neighbor_cost = calculate_solution_cost(&neighbor_solution, &challenge.distance_matrix);
let delta = neighbor_cost as f32 - current_cost as f32;
if delta < 0.0 || rng.gen::<f32>() < (-delta / temperature).exp() {
current_params = neighbor_params;
current_cost = neighbor_cost;
current_solution = neighbor_solution;
if current_cost < best_cost {
best_cost = current_cost;
best_solution = Some(Solution {
routes: current_solution.routes.clone(),
});
}
}
if best_cost <= challenge.max_total_distance {
return Ok(best_solution);
}
}
temperature *= COOLING_RATE;
}
Ok(best_solution)
}
#[inline]
fn create_initial_savings_list(challenge: &Challenge) -> Vec<(f32, u8, u8)> {
let num_nodes = challenge.difficulty.num_nodes;
let capacity = ((num_nodes - 1) * (num_nodes - 2)) / 2;
let mut savings = Vec::with_capacity(capacity);
for i in 1..num_nodes {
for j in (i + 1)..num_nodes {
savings.push((0.0, i as u8, j as u8));
}
}
savings
}
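// Savings scores are recomputed for the current parameter vector and sorted in
// descending order before each route construction pass.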
#[inline]
fn recompute_and_sort_savings(savings_list: &mut [(f32, u8, u8)], params: &[f32], challenge: &Challenge) {
let distance_matrix = &challenge.distance_matrix;
for (score, i, j) in savings_list.iter_mut() {
let i = *i as usize;
let j = *j as usize;
*score = params[i] * distance_matrix[0][i] as f32 +
params[j] * distance_matrix[j][0] as f32 -
params[i] * params[j] * distance_matrix[i][j] as f32;
}
savings_list.sort_unstable_by(|a, b| b.0.partial_cmp(&a.0).unwrap());
}
#[inline]
fn generate_neighbor<R: Rng + ?Sized>(current: &[f32], rng: &mut R) -> Vec<f32> {
current.iter().map(|&param| {
let delta = rng.gen_range(-0.1..=0.1);
(param + delta).clamp(0.0, 2.0)
}).collect()
}
#[inline]
fn apply_local_search_until_no_improvement(solution: &mut Solution, distance_matrix: &Vec<Vec<i32>>) {
let mut improved = true;
while improved {
improved = false;
for route in &mut solution.routes {
if two_opt(route, distance_matrix) {
improved = true;
}
}
}
}
#[inline]
fn two_opt(route: &mut Vec<usize>, distance_matrix: &Vec<Vec<i32>>) -> bool {
let n = route.len();
let mut improved = false;
for i in 1..n - 2 {
for j in i + 1..n - 1 {
let current_distance = distance_matrix[route[i - 1]][route[i]]
+ distance_matrix[route[j]][route[j + 1]];
let new_distance = distance_matrix[route[i - 1]][route[j]]
+ distance_matrix[route[i]][route[j + 1]];
if new_distance < current_distance {
route[i..=j].reverse();
improved = true;
}
}
}
improved
}
#[inline]
fn calculate_solution_cost(solution: &Solution, distance_matrix: &Vec<Vec<i32>>) -> i32 {
solution.routes.iter().map(|route| {
route.windows(2).map(|w| distance_matrix[w[0]][w[1]]).sum::<i32>()
}).sum()
}
#[inline]
fn create_solution(challenge: &Challenge, params: &[f32], savings_list: &[(f32, u8, u8)]) -> Solution {
let distance_matrix = &challenge.distance_matrix;
let max_capacity = challenge.max_capacity;
let num_nodes = challenge.difficulty.num_nodes;
let demands = &challenge.demands;
let mut routes = vec![None; num_nodes];
for i in 1..num_nodes {
routes[i] = Some(vec![i]);
}
let mut route_demands = demands.clone();
for &(_, i, j) in savings_list {
let (i, j) = (i as usize, j as usize);
if let (Some(left_route), Some(right_route)) = (routes[i].as_ref(), routes[j].as_ref()) {
let (left_start, left_end) = (*left_route.first().unwrap(), *left_route.last().unwrap());
let (right_start, right_end) = (*right_route.first().unwrap(), *right_route.last().unwrap());
if left_start == right_start || route_demands[left_start] + route_demands[right_start] > max_capacity {
continue;
}
let mut new_route = routes[i].take().unwrap();
let mut right_route = routes[j].take().unwrap();
if left_start == i { new_route.reverse(); }
if right_end == j { right_route.reverse(); }
new_route.extend(right_route);
let combined_demand = route_demands[left_start] + route_demands[right_start];
let new_start = new_route[0];
let new_end = *new_route.last().unwrap();
route_demands[new_start] = combined_demand;
route_demands[new_end] = combined_demand;
routes[new_start] = Some(new_route.clone());
routes[new_end] = Some(new_route);
}
}
Solution {
routes: routes
.into_iter()
.enumerate()
.filter_map(|(i, route)| route.filter(|r| r[0] == i))
.map(|mut route| {
route.insert(0, 0);
route.push(0);
route
})
.collect(),
}
}
#[cfg(feature = "cuda")]
mod gpu_optimisation {
use super::*;
use cudarc::driver::*;
use std::{collections::HashMap, sync::Arc};
use tig_challenges::CudaKernel;
// Set KERNEL to None if the algorithm only has a CPU implementation
pub const KERNEL: Option<CudaKernel> = None;
// Important! Your GPU and CPU versions of the algorithm should return the same result
pub fn cuda_solve_challenge(
challenge: &Challenge,
dev: &Arc<CudaDevice>,
mut funcs: HashMap<&'static str, CudaFunction>,
) -> anyhow::Result<Option<Solution>> {
solve_challenge(challenge)
}
}
#[cfg(feature = "cuda")]
pub use gpu_optimisation::{cuda_solve_challenge, KERNEL};

View File

@@ -0,0 +1,328 @@
/*!
Copyright 2024 syebastian
Licensed under the TIG Benchmarker Outbound Game License v1.0 (the "License"); you
may not use this file except in compliance with the License. You may obtain a copy
of the License at
https://github.com/tig-foundation/tig-monorepo/tree/main/docs/licenses
Unless required by applicable law or agreed to in writing, software distributed
under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
CONDITIONS OF ANY KIND, either express or implied. See the License for the specific
language governing permissions and limitations under the License.
*/
use rand::{rngs::StdRng, Rng, SeedableRng};
use tig_challenges::vehicle_routing::{Challenge, Solution};
pub fn solve_challenge(challenge: &Challenge) -> anyhow::Result<Option<Solution>> {
let mut best_solution: Option<Solution> = None;
let mut best_cost = std::i32::MAX;
const INITIAL_TEMPERATURE: f32 = 2.0;
const COOLING_RATE: f32 = 0.995;
const ITERATIONS_PER_TEMPERATURE: usize = 2;
let num_nodes = challenge.difficulty.num_nodes;
let mut current_params = vec![1.0; num_nodes];
let mut savings_list = create_initial_savings_list(challenge);
recompute_and_sort_savings(&mut savings_list, &current_params, challenge);
let mut current_solution = create_solution(challenge, &current_params, &savings_list);
let mut current_cost = calculate_solution_cost(&current_solution, &challenge.distance_matrix);
if current_cost <= challenge.max_total_distance {
return Ok(Some(current_solution));
}
if (current_cost as f32 * 0.96) > challenge.max_total_distance as f32 {
return Ok(None);
}
let mut temperature = INITIAL_TEMPERATURE;
let mut rng = StdRng::seed_from_u64(u64::from_le_bytes(challenge.seed[..8].try_into().unwrap()));
while temperature > 1.0 {
for _ in 0..ITERATIONS_PER_TEMPERATURE {
let neighbor_params = generate_neighbor(&current_params, &mut rng);
recompute_and_sort_savings(&mut savings_list, &neighbor_params, challenge);
let mut neighbor_solution = create_solution(challenge, &neighbor_params, &savings_list);
apply_local_search_until_no_improvement(&mut neighbor_solution, &challenge.distance_matrix);
let neighbor_cost = calculate_solution_cost(&neighbor_solution, &challenge.distance_matrix);
let delta = neighbor_cost as f32 - current_cost as f32;
if delta < 0.0 || rng.gen::<f32>() < (-delta / temperature).exp() {
current_params = neighbor_params;
current_cost = neighbor_cost;
current_solution = neighbor_solution;
if current_cost < best_cost {
best_cost = current_cost;
best_solution = Some(Solution {
routes: current_solution.routes.clone(),
});
}
}
if best_cost <= challenge.max_total_distance {
return Ok(best_solution);
}
}
temperature *= COOLING_RATE;
}
if let Some(best_sol) = &best_solution {
let mut solution = Solution {
routes: best_sol.routes.clone()
};
if try_inter_route_swap(&mut solution, &challenge.distance_matrix, &challenge.demands, challenge.max_capacity) {
let new_cost = calculate_solution_cost(&solution, &challenge.distance_matrix);
if new_cost < best_cost {
best_solution = Some(solution);
}
}
}
Ok(best_solution)
}
#[inline]
fn create_initial_savings_list(challenge: &Challenge) -> Vec<(f32, u8, u8)> {
let num_nodes = challenge.difficulty.num_nodes;
let capacity = ((num_nodes - 1) * (num_nodes - 2)) / 2;
let mut savings = Vec::with_capacity(capacity);
let max_distance = challenge.distance_matrix.iter().flat_map(|row| row.iter()).cloned().max().unwrap_or(0);
let threshold = max_distance / 2;
for i in 1..num_nodes {
for j in (i + 1)..num_nodes {
if challenge.distance_matrix[i][j] <= threshold {
savings.push((0.0, i as u8, j as u8));
}
}
}
savings
}
#[inline]
fn recompute_and_sort_savings(savings_list: &mut [(f32, u8, u8)], params: &[f32], challenge: &Challenge) {
let distance_matrix = &challenge.distance_matrix;
for (score, i, j) in savings_list.iter_mut() {
let i = *i as usize;
let j = *j as usize;
*score = params[i] * distance_matrix[0][i] as f32 +
params[j] * distance_matrix[j][0] as f32 -
params[i] * params[j] * distance_matrix[i][j] as f32;
}
savings_list.sort_unstable_by(|a, b| b.0.partial_cmp(&a.0).unwrap());
}
#[inline]
fn generate_neighbor<R: Rng + ?Sized>(current: &[f32], rng: &mut R) -> Vec<f32> {
current.iter().map(|&param| {
let delta = rng.gen_range(-0.1..=0.1);
(param + delta).clamp(0.0, 2.0)
}).collect()
}
#[inline]
fn apply_local_search_until_no_improvement(solution: &mut Solution, distance_matrix: &Vec<Vec<i32>>) {
let mut improved = true;
while improved {
improved = false;
for route in &mut solution.routes {
if two_opt(route, distance_matrix) {
improved = true;
}
}
}
}
#[inline]
fn two_opt(route: &mut Vec<usize>, distance_matrix: &Vec<Vec<i32>>) -> bool {
let n = route.len();
let mut improved = false;
for i in 1..n - 2 {
for j in i + 1..n - 1 {
let current_distance = distance_matrix[route[i - 1]][route[i]]
+ distance_matrix[route[j]][route[j + 1]];
let new_distance = distance_matrix[route[i - 1]][route[j]]
+ distance_matrix[route[i]][route[j + 1]];
if new_distance < current_distance {
route[i..=j].reverse();
improved = true;
}
}
}
improved
}
#[inline]
fn calculate_solution_cost(solution: &Solution, distance_matrix: &Vec<Vec<i32>>) -> i32 {
solution.routes.iter().map(|route| {
route.windows(2).map(|w| distance_matrix[w[0]][w[1]]).sum::<i32>()
}).sum()
}
#[inline]
fn create_solution(challenge: &Challenge, params: &[f32], savings_list: &[(f32, u8, u8)]) -> Solution {
let distance_matrix = &challenge.distance_matrix;
let max_capacity = challenge.max_capacity;
let num_nodes = challenge.difficulty.num_nodes;
let demands = &challenge.demands;
let mut routes = vec![None; num_nodes];
for i in 1..num_nodes {
routes[i] = Some(vec![i]);
}
let mut route_demands = demands.clone();
for &(_, i, j) in savings_list {
let (i, j) = (i as usize, j as usize);
if let (Some(left_route), Some(right_route)) = (routes[i].as_ref(), routes[j].as_ref()) {
let (left_start, _left_end) = (*left_route.first().unwrap(), *left_route.last().unwrap());
let (right_start, right_end) = (*right_route.first().unwrap(), *right_route.last().unwrap());
if left_start == right_start || route_demands[left_start] + route_demands[right_start] > max_capacity {
continue;
}
let mut new_route = routes[i].take().unwrap();
let mut right_route = routes[j].take().unwrap();
if left_start == i { new_route.reverse(); }
if right_end == j { right_route.reverse(); }
new_route.extend(right_route);
let combined_demand = route_demands[left_start] + route_demands[right_start];
let new_start = new_route[0];
let new_end = *new_route.last().unwrap();
route_demands[new_start] = combined_demand;
route_demands[new_end] = combined_demand;
routes[new_start] = Some(new_route.clone());
routes[new_end] = Some(new_route);
}
}
Solution {
routes: routes
.into_iter()
.enumerate()
.filter_map(|(i, route)| route.filter(|r| r[0] == i))
.map(|mut route| {
route.insert(0, 0);
route.push(0);
route
})
.collect(),
}
}
#[inline]
fn try_inter_route_swap(
solution: &mut Solution,
distance_matrix: &Vec<Vec<i32>>,
demands: &Vec<i32>,
max_capacity: i32
) -> bool {
let mut improved = false;
let num_routes = solution.routes.len();
for i in 0..num_routes {
for j in i + 1..num_routes {
if let Some(better_routes) = find_best_swap(
&solution.routes[i],
&solution.routes[j],
distance_matrix,
demands,
max_capacity
) {
solution.routes[i] = better_routes.0;
solution.routes[j] = better_routes.1;
improved = true;
}
}
}
improved
}
#[inline]
fn find_best_swap(
route1: &Vec<usize>,
route2: &Vec<usize>,
distance_matrix: &Vec<Vec<i32>>,
demands: &Vec<i32>,
max_capacity: i32
) -> Option<(Vec<usize>, Vec<usize>)> {
let mut best_improvement = 0;
let mut best_swap = None;
for i in 1..route1.len() - 1 {
for j in 1..route2.len() - 1 {
let route1_demand: i32 = route1.iter().map(|&n| demands[n]).sum();
let route2_demand: i32 = route2.iter().map(|&n| demands[n]).sum();
let demand_delta = demands[route2[j]] - demands[route1[i]];
if route1_demand + demand_delta > max_capacity ||
route2_demand - demand_delta > max_capacity {
continue;
}
let old_cost = distance_matrix[route1[i-1]][route1[i]] +
distance_matrix[route1[i]][route1[i+1]] +
distance_matrix[route2[j-1]][route2[j]] +
distance_matrix[route2[j]][route2[j+1]];
let new_cost = distance_matrix[route1[i-1]][route2[j]] +
distance_matrix[route2[j]][route1[i+1]] +
distance_matrix[route2[j-1]][route1[i]] +
distance_matrix[route1[i]][route2[j+1]];
let improvement = old_cost - new_cost;
if improvement > best_improvement {
best_improvement = improvement;
let mut new_route1 = route1.clone();
let mut new_route2 = route2.clone();
new_route1[i] = route2[j];
new_route2[j] = route1[i];
best_swap = Some((new_route1, new_route2));
}
}
}
best_swap
}
#[cfg(feature = "cuda")]
mod gpu_optimisation {
use super::*;
use cudarc::driver::*;
use std::{collections::HashMap, sync::Arc};
use tig_challenges::CudaKernel;
// set KERNEL to None if algorithm only has a CPU implementation
pub const KERNEL: Option<CudaKernel> = None;
// Important! your GPU and CPU version of the algorithm should return the same result
pub fn cuda_solve_challenge(
challenge: &Challenge,
dev: &Arc<CudaDevice>,
mut funcs: HashMap<&'static str, CudaFunction>,
) -> anyhow::Result<Option<Solution>> {
solve_challenge(challenge)
}
}
#[cfg(feature = "cuda")]
pub use gpu_optimisation::{cuda_solve_challenge, KERNEL};

View File

@ -0,0 +1,328 @@
/*!
Copyright 2024 syebastian
Licensed under the TIG Commercial License v1.0 (the "License"); you
may not use this file except in compliance with the License. You may obtain a copy
of the License at
https://github.com/tig-foundation/tig-monorepo/tree/main/docs/licenses
Unless required by applicable law or agreed to in writing, software distributed
under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
CONDITIONS OF ANY KIND, either express or implied. See the License for the specific
language governing permissions and limitations under the License.
*/
use rand::{rngs::StdRng, Rng, SeedableRng};
use tig_challenges::vehicle_routing::{Challenge, Solution};
pub fn solve_challenge(challenge: &Challenge) -> anyhow::Result<Option<Solution>> {
let mut best_solution: Option<Solution> = None;
let mut best_cost = std::i32::MAX;
const INITIAL_TEMPERATURE: f32 = 2.0;
const COOLING_RATE: f32 = 0.995;
const ITERATIONS_PER_TEMPERATURE: usize = 2;
let num_nodes = challenge.difficulty.num_nodes;
let mut current_params = vec![1.0; num_nodes];
let mut savings_list = create_initial_savings_list(challenge);
recompute_and_sort_savings(&mut savings_list, &current_params, challenge);
let mut current_solution = create_solution(challenge, &current_params, &savings_list);
let mut current_cost = calculate_solution_cost(&current_solution, &challenge.distance_matrix);
if current_cost <= challenge.max_total_distance {
return Ok(Some(current_solution));
}
if (current_cost as f32 * 0.96) > challenge.max_total_distance as f32 {
return Ok(None);
}
let mut temperature = INITIAL_TEMPERATURE;
let mut rng = StdRng::seed_from_u64(u64::from_le_bytes(challenge.seed[..8].try_into().unwrap()));
while temperature > 1.0 {
for _ in 0..ITERATIONS_PER_TEMPERATURE {
let neighbor_params = generate_neighbor(&current_params, &mut rng);
recompute_and_sort_savings(&mut savings_list, &neighbor_params, challenge);
let mut neighbor_solution = create_solution(challenge, &neighbor_params, &savings_list);
apply_local_search_until_no_improvement(&mut neighbor_solution, &challenge.distance_matrix);
let neighbor_cost = calculate_solution_cost(&neighbor_solution, &challenge.distance_matrix);
let delta = neighbor_cost as f32 - current_cost as f32;
if delta < 0.0 || rng.gen::<f32>() < (-delta / temperature).exp() {
current_params = neighbor_params;
current_cost = neighbor_cost;
current_solution = neighbor_solution;
if current_cost < best_cost {
best_cost = current_cost;
best_solution = Some(Solution {
routes: current_solution.routes.clone(),
});
}
}
if best_cost <= challenge.max_total_distance {
return Ok(best_solution);
}
}
temperature *= COOLING_RATE;
}
if let Some(best_sol) = &best_solution {
let mut solution = Solution {
routes: best_sol.routes.clone()
};
if try_inter_route_swap(&mut solution, &challenge.distance_matrix, &challenge.demands, challenge.max_capacity) {
let new_cost = calculate_solution_cost(&solution, &challenge.distance_matrix);
if new_cost < best_cost {
best_solution = Some(solution);
}
}
}
Ok(best_solution)
}
#[inline]
fn create_initial_savings_list(challenge: &Challenge) -> Vec<(f32, u8, u8)> {
let num_nodes = challenge.difficulty.num_nodes;
let capacity = ((num_nodes - 1) * (num_nodes - 2)) / 2;
let mut savings = Vec::with_capacity(capacity);
let max_distance = challenge.distance_matrix.iter().flat_map(|row| row.iter()).cloned().max().unwrap_or(0);
let threshold = max_distance / 2;
for i in 1..num_nodes {
for j in (i + 1)..num_nodes {
if challenge.distance_matrix[i][j] <= threshold {
savings.push((0.0, i as u8, j as u8));
}
}
}
savings
}
#[inline]
fn recompute_and_sort_savings(savings_list: &mut [(f32, u8, u8)], params: &[f32], challenge: &Challenge) {
let distance_matrix = &challenge.distance_matrix;
for (score, i, j) in savings_list.iter_mut() {
let i = *i as usize;
let j = *j as usize;
*score = params[i] * distance_matrix[0][i] as f32 +
params[j] * distance_matrix[j][0] as f32 -
params[i] * params[j] * distance_matrix[i][j] as f32;
}
savings_list.sort_unstable_by(|a, b| b.0.partial_cmp(&a.0).unwrap());
}
#[inline]
fn generate_neighbor<R: Rng + ?Sized>(current: &[f32], rng: &mut R) -> Vec<f32> {
current.iter().map(|&param| {
let delta = rng.gen_range(-0.1..=0.1);
(param + delta).clamp(0.0, 2.0)
}).collect()
}
#[inline]
fn apply_local_search_until_no_improvement(solution: &mut Solution, distance_matrix: &Vec<Vec<i32>>) {
let mut improved = true;
while improved {
improved = false;
for route in &mut solution.routes {
if two_opt(route, distance_matrix) {
improved = true;
}
}
}
}
#[inline]
fn two_opt(route: &mut Vec<usize>, distance_matrix: &Vec<Vec<i32>>) -> bool {
let n = route.len();
let mut improved = false;
for i in 1..n - 2 {
for j in i + 1..n - 1 {
let current_distance = distance_matrix[route[i - 1]][route[i]]
+ distance_matrix[route[j]][route[j + 1]];
let new_distance = distance_matrix[route[i - 1]][route[j]]
+ distance_matrix[route[i]][route[j + 1]];
if new_distance < current_distance {
route[i..=j].reverse();
improved = true;
}
}
}
improved
}
#[inline]
fn calculate_solution_cost(solution: &Solution, distance_matrix: &Vec<Vec<i32>>) -> i32 {
solution.routes.iter().map(|route| {
route.windows(2).map(|w| distance_matrix[w[0]][w[1]]).sum::<i32>()
}).sum()
}
#[inline]
fn create_solution(challenge: &Challenge, params: &[f32], savings_list: &[(f32, u8, u8)]) -> Solution {
let distance_matrix = &challenge.distance_matrix;
let max_capacity = challenge.max_capacity;
let num_nodes = challenge.difficulty.num_nodes;
let demands = &challenge.demands;
let mut routes = vec![None; num_nodes];
for i in 1..num_nodes {
routes[i] = Some(vec![i]);
}
let mut route_demands = demands.clone();
for &(_, i, j) in savings_list {
let (i, j) = (i as usize, j as usize);
if let (Some(left_route), Some(right_route)) = (routes[i].as_ref(), routes[j].as_ref()) {
let (left_start, _left_end) = (*left_route.first().unwrap(), *left_route.last().unwrap());
let (right_start, right_end) = (*right_route.first().unwrap(), *right_route.last().unwrap());
if left_start == right_start || route_demands[left_start] + route_demands[right_start] > max_capacity {
continue;
}
let mut new_route = routes[i].take().unwrap();
let mut right_route = routes[j].take().unwrap();
if left_start == i { new_route.reverse(); }
if right_end == j { right_route.reverse(); }
new_route.extend(right_route);
let combined_demand = route_demands[left_start] + route_demands[right_start];
let new_start = new_route[0];
let new_end = *new_route.last().unwrap();
route_demands[new_start] = combined_demand;
route_demands[new_end] = combined_demand;
routes[new_start] = Some(new_route.clone());
routes[new_end] = Some(new_route);
}
}
Solution {
routes: routes
.into_iter()
.enumerate()
.filter_map(|(i, route)| route.filter(|r| r[0] == i))
.map(|mut route| {
route.insert(0, 0);
route.push(0);
route
})
.collect(),
}
}
#[inline]
fn try_inter_route_swap(
solution: &mut Solution,
distance_matrix: &Vec<Vec<i32>>,
demands: &Vec<i32>,
max_capacity: i32
) -> bool {
let mut improved = false;
let num_routes = solution.routes.len();
for i in 0..num_routes {
for j in i + 1..num_routes {
if let Some(better_routes) = find_best_swap(
&solution.routes[i],
&solution.routes[j],
distance_matrix,
demands,
max_capacity
) {
solution.routes[i] = better_routes.0;
solution.routes[j] = better_routes.1;
improved = true;
}
}
}
improved
}
#[inline]
fn find_best_swap(
route1: &Vec<usize>,
route2: &Vec<usize>,
distance_matrix: &Vec<Vec<i32>>,
demands: &Vec<i32>,
max_capacity: i32
) -> Option<(Vec<usize>, Vec<usize>)> {
let mut best_improvement = 0;
let mut best_swap = None;
for i in 1..route1.len() - 1 {
for j in 1..route2.len() - 1 {
let route1_demand: i32 = route1.iter().map(|&n| demands[n]).sum();
let route2_demand: i32 = route2.iter().map(|&n| demands[n]).sum();
let demand_delta = demands[route2[j]] - demands[route1[i]];
if route1_demand + demand_delta > max_capacity ||
route2_demand - demand_delta > max_capacity {
continue;
}
let old_cost = distance_matrix[route1[i-1]][route1[i]] +
distance_matrix[route1[i]][route1[i+1]] +
distance_matrix[route2[j-1]][route2[j]] +
distance_matrix[route2[j]][route2[j+1]];
let new_cost = distance_matrix[route1[i-1]][route2[j]] +
distance_matrix[route2[j]][route1[i+1]] +
distance_matrix[route2[j-1]][route1[i]] +
distance_matrix[route1[i]][route2[j+1]];
let improvement = old_cost - new_cost;
if improvement > best_improvement {
best_improvement = improvement;
let mut new_route1 = route1.clone();
let mut new_route2 = route2.clone();
new_route1[i] = route2[j];
new_route2[j] = route1[i];
best_swap = Some((new_route1, new_route2));
}
}
}
best_swap
}
#[cfg(feature = "cuda")]
mod gpu_optimisation {
use super::*;
use cudarc::driver::*;
use std::{collections::HashMap, sync::Arc};
use tig_challenges::CudaKernel;
// set KERNEL to None if algorithm only has a CPU implementation
pub const KERNEL: Option<CudaKernel> = None;
// Important! your GPU and CPU version of the algorithm should return the same result
pub fn cuda_solve_challenge(
challenge: &Challenge,
dev: &Arc<CudaDevice>,
mut funcs: HashMap<&'static str, CudaFunction>,
) -> anyhow::Result<Option<Solution>> {
solve_challenge(challenge)
}
}
#[cfg(feature = "cuda")]
pub use gpu_optimisation::{cuda_solve_challenge, KERNEL};

View File

@ -0,0 +1,328 @@
/*!
Copyright 2024 syebastian
Licensed under the TIG Inbound Game License v1.0 or (at your option) any later
version (the "License"); you may not use this file except in compliance with the
License. You may obtain a copy of the License at
https://github.com/tig-foundation/tig-monorepo/tree/main/docs/licenses
Unless required by applicable law or agreed to in writing, software distributed
under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
CONDITIONS OF ANY KIND, either express or implied. See the License for the specific
language governing permissions and limitations under the License.
*/
use rand::{rngs::StdRng, Rng, SeedableRng};
use tig_challenges::vehicle_routing::{Challenge, Solution};
pub fn solve_challenge(challenge: &Challenge) -> anyhow::Result<Option<Solution>> {
let mut best_solution: Option<Solution> = None;
let mut best_cost = std::i32::MAX;
const INITIAL_TEMPERATURE: f32 = 2.0;
const COOLING_RATE: f32 = 0.995;
const ITERATIONS_PER_TEMPERATURE: usize = 2;
let num_nodes = challenge.difficulty.num_nodes;
let mut current_params = vec![1.0; num_nodes];
let mut savings_list = create_initial_savings_list(challenge);
recompute_and_sort_savings(&mut savings_list, &current_params, challenge);
let mut current_solution = create_solution(challenge, &current_params, &savings_list);
let mut current_cost = calculate_solution_cost(&current_solution, &challenge.distance_matrix);
if current_cost <= challenge.max_total_distance {
return Ok(Some(current_solution));
}
if (current_cost as f32 * 0.96) > challenge.max_total_distance as f32 {
return Ok(None);
}
let mut temperature = INITIAL_TEMPERATURE;
let mut rng = StdRng::seed_from_u64(u64::from_le_bytes(challenge.seed[..8].try_into().unwrap()));
while temperature > 1.0 {
for _ in 0..ITERATIONS_PER_TEMPERATURE {
let neighbor_params = generate_neighbor(&current_params, &mut rng);
recompute_and_sort_savings(&mut savings_list, &neighbor_params, challenge);
let mut neighbor_solution = create_solution(challenge, &neighbor_params, &savings_list);
apply_local_search_until_no_improvement(&mut neighbor_solution, &challenge.distance_matrix);
let neighbor_cost = calculate_solution_cost(&neighbor_solution, &challenge.distance_matrix);
let delta = neighbor_cost as f32 - current_cost as f32;
if delta < 0.0 || rng.gen::<f32>() < (-delta / temperature).exp() {
current_params = neighbor_params;
current_cost = neighbor_cost;
current_solution = neighbor_solution;
if current_cost < best_cost {
best_cost = current_cost;
best_solution = Some(Solution {
routes: current_solution.routes.clone(),
});
}
}
if best_cost <= challenge.max_total_distance {
return Ok(best_solution);
}
}
temperature *= COOLING_RATE;
}
if let Some(best_sol) = &best_solution {
let mut solution = Solution {
routes: best_sol.routes.clone()
};
if try_inter_route_swap(&mut solution, &challenge.distance_matrix, &challenge.demands, challenge.max_capacity) {
let new_cost = calculate_solution_cost(&solution, &challenge.distance_matrix);
if new_cost < best_cost {
best_solution = Some(solution);
}
}
}
Ok(best_solution)
}
#[inline]
fn create_initial_savings_list(challenge: &Challenge) -> Vec<(f32, u8, u8)> {
let num_nodes = challenge.difficulty.num_nodes;
let capacity = ((num_nodes - 1) * (num_nodes - 2)) / 2;
let mut savings = Vec::with_capacity(capacity);
let max_distance = challenge.distance_matrix.iter().flat_map(|row| row.iter()).cloned().max().unwrap_or(0);
let threshold = max_distance / 2;
for i in 1..num_nodes {
for j in (i + 1)..num_nodes {
if challenge.distance_matrix[i][j] <= threshold {
savings.push((0.0, i as u8, j as u8));
}
}
}
savings
}
#[inline]
fn recompute_and_sort_savings(savings_list: &mut [(f32, u8, u8)], params: &[f32], challenge: &Challenge) {
let distance_matrix = &challenge.distance_matrix;
for (score, i, j) in savings_list.iter_mut() {
let i = *i as usize;
let j = *j as usize;
*score = params[i] * distance_matrix[0][i] as f32 +
params[j] * distance_matrix[j][0] as f32 -
params[i] * params[j] * distance_matrix[i][j] as f32;
}
savings_list.sort_unstable_by(|a, b| b.0.partial_cmp(&a.0).unwrap());
}
#[inline]
fn generate_neighbor<R: Rng + ?Sized>(current: &[f32], rng: &mut R) -> Vec<f32> {
current.iter().map(|&param| {
let delta = rng.gen_range(-0.1..=0.1);
(param + delta).clamp(0.0, 2.0)
}).collect()
}
#[inline]
fn apply_local_search_until_no_improvement(solution: &mut Solution, distance_matrix: &Vec<Vec<i32>>) {
let mut improved = true;
while improved {
improved = false;
for route in &mut solution.routes {
if two_opt(route, distance_matrix) {
improved = true;
}
}
}
}
#[inline]
fn two_opt(route: &mut Vec<usize>, distance_matrix: &Vec<Vec<i32>>) -> bool {
let n = route.len();
let mut improved = false;
for i in 1..n - 2 {
for j in i + 1..n - 1 {
let current_distance = distance_matrix[route[i - 1]][route[i]]
+ distance_matrix[route[j]][route[j + 1]];
let new_distance = distance_matrix[route[i - 1]][route[j]]
+ distance_matrix[route[i]][route[j + 1]];
if new_distance < current_distance {
route[i..=j].reverse();
improved = true;
}
}
}
improved
}
#[inline]
fn calculate_solution_cost(solution: &Solution, distance_matrix: &Vec<Vec<i32>>) -> i32 {
solution.routes.iter().map(|route| {
route.windows(2).map(|w| distance_matrix[w[0]][w[1]]).sum::<i32>()
}).sum()
}
#[inline]
fn create_solution(challenge: &Challenge, params: &[f32], savings_list: &[(f32, u8, u8)]) -> Solution {
let distance_matrix = &challenge.distance_matrix;
let max_capacity = challenge.max_capacity;
let num_nodes = challenge.difficulty.num_nodes;
let demands = &challenge.demands;
let mut routes = vec![None; num_nodes];
for i in 1..num_nodes {
routes[i] = Some(vec![i]);
}
let mut route_demands = demands.clone();
for &(_, i, j) in savings_list {
let (i, j) = (i as usize, j as usize);
if let (Some(left_route), Some(right_route)) = (routes[i].as_ref(), routes[j].as_ref()) {
let (left_start, _left_end) = (*left_route.first().unwrap(), *left_route.last().unwrap());
let (right_start, right_end) = (*right_route.first().unwrap(), *right_route.last().unwrap());
if left_start == right_start || route_demands[left_start] + route_demands[right_start] > max_capacity {
continue;
}
let mut new_route = routes[i].take().unwrap();
let mut right_route = routes[j].take().unwrap();
if left_start == i { new_route.reverse(); }
if right_end == j { right_route.reverse(); }
new_route.extend(right_route);
let combined_demand = route_demands[left_start] + route_demands[right_start];
let new_start = new_route[0];
let new_end = *new_route.last().unwrap();
route_demands[new_start] = combined_demand;
route_demands[new_end] = combined_demand;
routes[new_start] = Some(new_route.clone());
routes[new_end] = Some(new_route);
}
}
Solution {
routes: routes
.into_iter()
.enumerate()
.filter_map(|(i, route)| route.filter(|r| r[0] == i))
.map(|mut route| {
route.insert(0, 0);
route.push(0);
route
})
.collect(),
}
}
#[inline]
fn try_inter_route_swap(
solution: &mut Solution,
distance_matrix: &Vec<Vec<i32>>,
demands: &Vec<i32>,
max_capacity: i32
) -> bool {
let mut improved = false;
let num_routes = solution.routes.len();
for i in 0..num_routes {
for j in i + 1..num_routes {
if let Some(better_routes) = find_best_swap(
&solution.routes[i],
&solution.routes[j],
distance_matrix,
demands,
max_capacity
) {
solution.routes[i] = better_routes.0;
solution.routes[j] = better_routes.1;
improved = true;
}
}
}
improved
}
#[inline]
fn find_best_swap(
route1: &Vec<usize>,
route2: &Vec<usize>,
distance_matrix: &Vec<Vec<i32>>,
demands: &Vec<i32>,
max_capacity: i32
) -> Option<(Vec<usize>, Vec<usize>)> {
let mut best_improvement = 0;
let mut best_swap = None;
for i in 1..route1.len() - 1 {
for j in 1..route2.len() - 1 {
let route1_demand: i32 = route1.iter().map(|&n| demands[n]).sum();
let route2_demand: i32 = route2.iter().map(|&n| demands[n]).sum();
let demand_delta = demands[route2[j]] - demands[route1[i]];
if route1_demand + demand_delta > max_capacity ||
route2_demand - demand_delta > max_capacity {
continue;
}
let old_cost = distance_matrix[route1[i-1]][route1[i]] +
distance_matrix[route1[i]][route1[i+1]] +
distance_matrix[route2[j-1]][route2[j]] +
distance_matrix[route2[j]][route2[j+1]];
let new_cost = distance_matrix[route1[i-1]][route2[j]] +
distance_matrix[route2[j]][route1[i+1]] +
distance_matrix[route2[j-1]][route1[i]] +
distance_matrix[route1[i]][route2[j+1]];
let improvement = old_cost - new_cost;
if improvement > best_improvement {
best_improvement = improvement;
let mut new_route1 = route1.clone();
let mut new_route2 = route2.clone();
new_route1[i] = route2[j];
new_route2[j] = route1[i];
best_swap = Some((new_route1, new_route2));
}
}
}
best_swap
}
#[cfg(feature = "cuda")]
mod gpu_optimisation {
use super::*;
use cudarc::driver::*;
use std::{collections::HashMap, sync::Arc};
use tig_challenges::CudaKernel;
// set KERNEL to None if algorithm only has a CPU implementation
pub const KERNEL: Option<CudaKernel> = None;
// Important! your GPU and CPU version of the algorithm should return the same result
pub fn cuda_solve_challenge(
challenge: &Challenge,
dev: &Arc<CudaDevice>,
mut funcs: HashMap<&'static str, CudaFunction>,
) -> anyhow::Result<Option<Solution>> {
solve_challenge(challenge)
}
}
#[cfg(feature = "cuda")]
pub use gpu_optimisation::{cuda_solve_challenge, KERNEL};

View File

@ -0,0 +1,328 @@
/*!
Copyright 2024 syebastian
Licensed under the TIG Innovator Outbound Game License v1.0 (the "License"); you
may not use this file except in compliance with the License. You may obtain a copy
of the License at
https://github.com/tig-foundation/tig-monorepo/tree/main/docs/licenses
Unless required by applicable law or agreed to in writing, software distributed
under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
CONDITIONS OF ANY KIND, either express or implied. See the License for the specific
language governing permissions and limitations under the License.
*/
use rand::{rngs::StdRng, Rng, SeedableRng};
use tig_challenges::vehicle_routing::{Challenge, Solution};
pub fn solve_challenge(challenge: &Challenge) -> anyhow::Result<Option<Solution>> {
let mut best_solution: Option<Solution> = None;
let mut best_cost = std::i32::MAX;
const INITIAL_TEMPERATURE: f32 = 2.0;
const COOLING_RATE: f32 = 0.995;
const ITERATIONS_PER_TEMPERATURE: usize = 2;
let num_nodes = challenge.difficulty.num_nodes;
let mut current_params = vec![1.0; num_nodes];
let mut savings_list = create_initial_savings_list(challenge);
recompute_and_sort_savings(&mut savings_list, &current_params, challenge);
let mut current_solution = create_solution(challenge, &current_params, &savings_list);
let mut current_cost = calculate_solution_cost(&current_solution, &challenge.distance_matrix);
if current_cost <= challenge.max_total_distance {
return Ok(Some(current_solution));
}
if (current_cost as f32 * 0.96) > challenge.max_total_distance as f32 {
return Ok(None);
}
let mut temperature = INITIAL_TEMPERATURE;
let mut rng = StdRng::seed_from_u64(u64::from_le_bytes(challenge.seed[..8].try_into().unwrap()));
while temperature > 1.0 {
for _ in 0..ITERATIONS_PER_TEMPERATURE {
let neighbor_params = generate_neighbor(&current_params, &mut rng);
recompute_and_sort_savings(&mut savings_list, &neighbor_params, challenge);
let mut neighbor_solution = create_solution(challenge, &neighbor_params, &savings_list);
apply_local_search_until_no_improvement(&mut neighbor_solution, &challenge.distance_matrix);
let neighbor_cost = calculate_solution_cost(&neighbor_solution, &challenge.distance_matrix);
let delta = neighbor_cost as f32 - current_cost as f32;
if delta < 0.0 || rng.gen::<f32>() < (-delta / temperature).exp() {
current_params = neighbor_params;
current_cost = neighbor_cost;
current_solution = neighbor_solution;
if current_cost < best_cost {
best_cost = current_cost;
best_solution = Some(Solution {
routes: current_solution.routes.clone(),
});
}
}
if best_cost <= challenge.max_total_distance {
return Ok(best_solution);
}
}
temperature *= COOLING_RATE;
}
if let Some(best_sol) = &best_solution {
let mut solution = Solution {
routes: best_sol.routes.clone()
};
if try_inter_route_swap(&mut solution, &challenge.distance_matrix, &challenge.demands, challenge.max_capacity) {
let new_cost = calculate_solution_cost(&solution, &challenge.distance_matrix);
if new_cost < best_cost {
best_solution = Some(solution);
}
}
}
Ok(best_solution)
}
#[inline]
fn create_initial_savings_list(challenge: &Challenge) -> Vec<(f32, u8, u8)> {
let num_nodes = challenge.difficulty.num_nodes;
let capacity = ((num_nodes - 1) * (num_nodes - 2)) / 2;
let mut savings = Vec::with_capacity(capacity);
let max_distance = challenge.distance_matrix.iter().flat_map(|row| row.iter()).cloned().max().unwrap_or(0);
let threshold = max_distance / 2;
for i in 1..num_nodes {
for j in (i + 1)..num_nodes {
if challenge.distance_matrix[i][j] <= threshold {
savings.push((0.0, i as u8, j as u8));
}
}
}
savings
}
#[inline]
fn recompute_and_sort_savings(savings_list: &mut [(f32, u8, u8)], params: &[f32], challenge: &Challenge) {
let distance_matrix = &challenge.distance_matrix;
for (score, i, j) in savings_list.iter_mut() {
let i = *i as usize;
let j = *j as usize;
*score = params[i] * distance_matrix[0][i] as f32 +
params[j] * distance_matrix[j][0] as f32 -
params[i] * params[j] * distance_matrix[i][j] as f32;
}
savings_list.sort_unstable_by(|a, b| b.0.partial_cmp(&a.0).unwrap());
}
#[inline]
fn generate_neighbor<R: Rng + ?Sized>(current: &[f32], rng: &mut R) -> Vec<f32> {
current.iter().map(|&param| {
let delta = rng.gen_range(-0.1..=0.1);
(param + delta).clamp(0.0, 2.0)
}).collect()
}
#[inline]
fn apply_local_search_until_no_improvement(solution: &mut Solution, distance_matrix: &Vec<Vec<i32>>) {
let mut improved = true;
while improved {
improved = false;
for route in &mut solution.routes {
if two_opt(route, distance_matrix) {
improved = true;
}
}
}
}
#[inline]
fn two_opt(route: &mut Vec<usize>, distance_matrix: &Vec<Vec<i32>>) -> bool {
let n = route.len();
let mut improved = false;
for i in 1..n - 2 {
for j in i + 1..n - 1 {
let current_distance = distance_matrix[route[i - 1]][route[i]]
+ distance_matrix[route[j]][route[j + 1]];
let new_distance = distance_matrix[route[i - 1]][route[j]]
+ distance_matrix[route[i]][route[j + 1]];
if new_distance < current_distance {
route[i..=j].reverse();
improved = true;
}
}
}
improved
}
#[inline]
fn calculate_solution_cost(solution: &Solution, distance_matrix: &Vec<Vec<i32>>) -> i32 {
solution.routes.iter().map(|route| {
route.windows(2).map(|w| distance_matrix[w[0]][w[1]]).sum::<i32>()
}).sum()
}
#[inline]
fn create_solution(challenge: &Challenge, params: &[f32], savings_list: &[(f32, u8, u8)]) -> Solution {
let distance_matrix = &challenge.distance_matrix;
let max_capacity = challenge.max_capacity;
let num_nodes = challenge.difficulty.num_nodes;
let demands = &challenge.demands;
let mut routes = vec![None; num_nodes];
for i in 1..num_nodes {
routes[i] = Some(vec![i]);
}
let mut route_demands = demands.clone();
for &(_, i, j) in savings_list {
let (i, j) = (i as usize, j as usize);
if let (Some(left_route), Some(right_route)) = (routes[i].as_ref(), routes[j].as_ref()) {
let (left_start, _left_end) = (*left_route.first().unwrap(), *left_route.last().unwrap());
let (right_start, right_end) = (*right_route.first().unwrap(), *right_route.last().unwrap());
if left_start == right_start || route_demands[left_start] + route_demands[right_start] > max_capacity {
continue;
}
let mut new_route = routes[i].take().unwrap();
let mut right_route = routes[j].take().unwrap();
if left_start == i { new_route.reverse(); }
if right_end == j { right_route.reverse(); }
new_route.extend(right_route);
let combined_demand = route_demands[left_start] + route_demands[right_start];
let new_start = new_route[0];
let new_end = *new_route.last().unwrap();
route_demands[new_start] = combined_demand;
route_demands[new_end] = combined_demand;
routes[new_start] = Some(new_route.clone());
routes[new_end] = Some(new_route);
}
}
Solution {
routes: routes
.into_iter()
.enumerate()
.filter_map(|(i, route)| route.filter(|r| r[0] == i))
.map(|mut route| {
route.insert(0, 0);
route.push(0);
route
})
.collect(),
}
}
#[inline]
fn try_inter_route_swap(
solution: &mut Solution,
distance_matrix: &Vec<Vec<i32>>,
demands: &Vec<i32>,
max_capacity: i32
) -> bool {
let mut improved = false;
let num_routes = solution.routes.len();
for i in 0..num_routes {
for j in i + 1..num_routes {
if let Some(better_routes) = find_best_swap(
&solution.routes[i],
&solution.routes[j],
distance_matrix,
demands,
max_capacity
) {
solution.routes[i] = better_routes.0;
solution.routes[j] = better_routes.1;
improved = true;
}
}
}
improved
}
#[inline]
fn find_best_swap(
route1: &Vec<usize>,
route2: &Vec<usize>,
distance_matrix: &Vec<Vec<i32>>,
demands: &Vec<i32>,
max_capacity: i32
) -> Option<(Vec<usize>, Vec<usize>)> {
let mut best_improvement = 0;
let mut best_swap = None;
for i in 1..route1.len() - 1 {
for j in 1..route2.len() - 1 {
let route1_demand: i32 = route1.iter().map(|&n| demands[n]).sum();
let route2_demand: i32 = route2.iter().map(|&n| demands[n]).sum();
let demand_delta = demands[route2[j]] - demands[route1[i]];
if route1_demand + demand_delta > max_capacity ||
route2_demand - demand_delta > max_capacity {
continue;
}
let old_cost = distance_matrix[route1[i-1]][route1[i]] +
distance_matrix[route1[i]][route1[i+1]] +
distance_matrix[route2[j-1]][route2[j]] +
distance_matrix[route2[j]][route2[j+1]];
let new_cost = distance_matrix[route1[i-1]][route2[j]] +
distance_matrix[route2[j]][route1[i+1]] +
distance_matrix[route2[j-1]][route1[i]] +
distance_matrix[route1[i]][route2[j+1]];
let improvement = old_cost - new_cost;
if improvement > best_improvement {
best_improvement = improvement;
let mut new_route1 = route1.clone();
let mut new_route2 = route2.clone();
new_route1[i] = route2[j];
new_route2[j] = route1[i];
best_swap = Some((new_route1, new_route2));
}
}
}
best_swap
}
#[cfg(feature = "cuda")]
mod gpu_optimisation {
use super::*;
use cudarc::driver::*;
use std::{collections::HashMap, sync::Arc};
use tig_challenges::CudaKernel;
// set KERNEL to None if algorithm only has a CPU implementation
pub const KERNEL: Option<CudaKernel> = None;
// Important! your GPU and CPU version of the algorithm should return the same result
pub fn cuda_solve_challenge(
challenge: &Challenge,
dev: &Arc<CudaDevice>,
mut funcs: HashMap<&'static str, CudaFunction>,
) -> anyhow::Result<Option<Solution>> {
solve_challenge(challenge)
}
}
#[cfg(feature = "cuda")]
pub use gpu_optimisation::{cuda_solve_challenge, KERNEL};

View File

@ -0,0 +1,4 @@
mod benchmarker_outbound;
pub use benchmarker_outbound::solve_challenge;
#[cfg(feature = "cuda")]
pub use benchmarker_outbound::{cuda_solve_challenge, KERNEL};

View File

@ -0,0 +1,328 @@
/*!
Copyright 2024 syebastian
Licensed under the TIG Open Data License v1.0 or (at your option) any later version
(the "License"); you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https://github.com/tig-foundation/tig-monorepo/tree/main/docs/licenses
Unless required by applicable law or agreed to in writing, software distributed
under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR
CONDITIONS OF ANY KIND, either express or implied. See the License for the specific
language governing permissions and limitations under the License.
*/
use rand::{rngs::StdRng, Rng, SeedableRng};
use tig_challenges::vehicle_routing::{Challenge, Solution};
pub fn solve_challenge(challenge: &Challenge) -> anyhow::Result<Option<Solution>> {
let mut best_solution: Option<Solution> = None;
let mut best_cost = std::i32::MAX;
const INITIAL_TEMPERATURE: f32 = 2.0;
const COOLING_RATE: f32 = 0.995;
const ITERATIONS_PER_TEMPERATURE: usize = 2;
let num_nodes = challenge.difficulty.num_nodes;
let mut current_params = vec![1.0; num_nodes];
let mut savings_list = create_initial_savings_list(challenge);
recompute_and_sort_savings(&mut savings_list, &current_params, challenge);
let mut current_solution = create_solution(challenge, &current_params, &savings_list);
let mut current_cost = calculate_solution_cost(&current_solution, &challenge.distance_matrix);
if current_cost <= challenge.max_total_distance {
return Ok(Some(current_solution));
}
if (current_cost as f32 * 0.96) > challenge.max_total_distance as f32 {
return Ok(None);
}
let mut temperature = INITIAL_TEMPERATURE;
let mut rng = StdRng::seed_from_u64(u64::from_le_bytes(challenge.seed[..8].try_into().unwrap()));
while temperature > 1.0 {
for _ in 0..ITERATIONS_PER_TEMPERATURE {
let neighbor_params = generate_neighbor(&current_params, &mut rng);
recompute_and_sort_savings(&mut savings_list, &neighbor_params, challenge);
let mut neighbor_solution = create_solution(challenge, &neighbor_params, &savings_list);
apply_local_search_until_no_improvement(&mut neighbor_solution, &challenge.distance_matrix);
let neighbor_cost = calculate_solution_cost(&neighbor_solution, &challenge.distance_matrix);
let delta = neighbor_cost as f32 - current_cost as f32;
if delta < 0.0 || rng.gen::<f32>() < (-delta / temperature).exp() {
current_params = neighbor_params;
current_cost = neighbor_cost;
current_solution = neighbor_solution;
if current_cost < best_cost {
best_cost = current_cost;
best_solution = Some(Solution {
routes: current_solution.routes.clone(),
});
}
}
if best_cost <= challenge.max_total_distance {
return Ok(best_solution);
}
}
temperature *= COOLING_RATE;
}
if let Some(best_sol) = &best_solution {
let mut solution = Solution {
routes: best_sol.routes.clone()
};
if try_inter_route_swap(&mut solution, &challenge.distance_matrix, &challenge.demands, challenge.max_capacity) {
let new_cost = calculate_solution_cost(&solution, &challenge.distance_matrix);
if new_cost < best_cost {
best_solution = Some(solution);
}
}
}
Ok(best_solution)
}
#[inline]
fn create_initial_savings_list(challenge: &Challenge) -> Vec<(f32, u8, u8)> {
let num_nodes = challenge.difficulty.num_nodes;
let capacity = ((num_nodes - 1) * (num_nodes - 2)) / 2;
let mut savings = Vec::with_capacity(capacity);
let max_distance = challenge.distance_matrix.iter().flat_map(|row| row.iter()).cloned().max().unwrap_or(0);
let threshold = max_distance / 2;
for i in 1..num_nodes {
for j in (i + 1)..num_nodes {
if challenge.distance_matrix[i][j] <= threshold {
savings.push((0.0, i as u8, j as u8));
}
}
}
savings
}
#[inline]
fn recompute_and_sort_savings(savings_list: &mut [(f32, u8, u8)], params: &[f32], challenge: &Challenge) {
let distance_matrix = &challenge.distance_matrix;
for (score, i, j) in savings_list.iter_mut() {
let i = *i as usize;
let j = *j as usize;
*score = params[i] * distance_matrix[0][i] as f32 +
params[j] * distance_matrix[j][0] as f32 -
params[i] * params[j] * distance_matrix[i][j] as f32;
}
savings_list.sort_unstable_by(|a, b| b.0.partial_cmp(&a.0).unwrap());
}
#[inline]
fn generate_neighbor<R: Rng + ?Sized>(current: &[f32], rng: &mut R) -> Vec<f32> {
current.iter().map(|&param| {
let delta = rng.gen_range(-0.1..=0.1);
(param + delta).clamp(0.0, 2.0)
}).collect()
}
#[inline]
fn apply_local_search_until_no_improvement(solution: &mut Solution, distance_matrix: &Vec<Vec<i32>>) {
let mut improved = true;
while improved {
improved = false;
for route in &mut solution.routes {
if two_opt(route, distance_matrix) {
improved = true;
}
}
}
}
#[inline]
fn two_opt(route: &mut Vec<usize>, distance_matrix: &Vec<Vec<i32>>) -> bool {
let n = route.len();
let mut improved = false;
for i in 1..n - 2 {
for j in i + 1..n - 1 {
let current_distance = distance_matrix[route[i - 1]][route[i]]
+ distance_matrix[route[j]][route[j + 1]];
let new_distance = distance_matrix[route[i - 1]][route[j]]
+ distance_matrix[route[i]][route[j + 1]];
if new_distance < current_distance {
route[i..=j].reverse();
improved = true;
}
}
}
improved
}
#[inline]
fn calculate_solution_cost(solution: &Solution, distance_matrix: &Vec<Vec<i32>>) -> i32 {
solution.routes.iter().map(|route| {
route.windows(2).map(|w| distance_matrix[w[0]][w[1]]).sum::<i32>()
}).sum()
}
#[inline]
fn create_solution(challenge: &Challenge, params: &[f32], savings_list: &[(f32, u8, u8)]) -> Solution {
let distance_matrix = &challenge.distance_matrix;
let max_capacity = challenge.max_capacity;
let num_nodes = challenge.difficulty.num_nodes;
let demands = &challenge.demands;
let mut routes = vec![None; num_nodes];
for i in 1..num_nodes {
routes[i] = Some(vec![i]);
}
let mut route_demands = demands.clone();
for &(_, i, j) in savings_list {
let (i, j) = (i as usize, j as usize);
if let (Some(left_route), Some(right_route)) = (routes[i].as_ref(), routes[j].as_ref()) {
let (left_start, _left_end) = (*left_route.first().unwrap(), *left_route.last().unwrap());
let (right_start, right_end) = (*right_route.first().unwrap(), *right_route.last().unwrap());
if left_start == right_start || route_demands[left_start] + route_demands[right_start] > max_capacity {
continue;
}
let mut new_route = routes[i].take().unwrap();
let mut right_route = routes[j].take().unwrap();
if left_start == i { new_route.reverse(); }
if right_end == j { right_route.reverse(); }
new_route.extend(right_route);
let combined_demand = route_demands[left_start] + route_demands[right_start];
let new_start = new_route[0];
let new_end = *new_route.last().unwrap();
route_demands[new_start] = combined_demand;
route_demands[new_end] = combined_demand;
routes[new_start] = Some(new_route.clone());
routes[new_end] = Some(new_route);
}
}
Solution {
routes: routes
.into_iter()
.enumerate()
.filter_map(|(i, route)| route.filter(|r| r[0] == i))
.map(|mut route| {
route.insert(0, 0);
route.push(0);
route
})
.collect(),
}
}
#[inline]
fn try_inter_route_swap(
solution: &mut Solution,
distance_matrix: &Vec<Vec<i32>>,
demands: &Vec<i32>,
max_capacity: i32
) -> bool {
let mut improved = false;
let num_routes = solution.routes.len();
for i in 0..num_routes {
for j in i + 1..num_routes {
if let Some(better_routes) = find_best_swap(
&solution.routes[i],
&solution.routes[j],
distance_matrix,
demands,
max_capacity
) {
solution.routes[i] = better_routes.0;
solution.routes[j] = better_routes.1;
improved = true;
}
}
}
improved
}
#[inline]
fn find_best_swap(
route1: &Vec<usize>,
route2: &Vec<usize>,
distance_matrix: &Vec<Vec<i32>>,
demands: &Vec<i32>,
max_capacity: i32
) -> Option<(Vec<usize>, Vec<usize>)> {
let mut best_improvement = 0;
let mut best_swap = None;
for i in 1..route1.len() - 1 {
for j in 1..route2.len() - 1 {
let route1_demand: i32 = route1.iter().map(|&n| demands[n]).sum();
let route2_demand: i32 = route2.iter().map(|&n| demands[n]).sum();
let demand_delta = demands[route2[j]] - demands[route1[i]];
if route1_demand + demand_delta > max_capacity ||
route2_demand - demand_delta > max_capacity {
continue;
}
let old_cost = distance_matrix[route1[i-1]][route1[i]] +
distance_matrix[route1[i]][route1[i+1]] +
distance_matrix[route2[j-1]][route2[j]] +
distance_matrix[route2[j]][route2[j+1]];
let new_cost = distance_matrix[route1[i-1]][route2[j]] +
distance_matrix[route2[j]][route1[i+1]] +
distance_matrix[route2[j-1]][route1[i]] +
distance_matrix[route1[i]][route2[j+1]];
let improvement = old_cost - new_cost;
if improvement > best_improvement {
best_improvement = improvement;
let mut new_route1 = route1.clone();
let mut new_route2 = route2.clone();
new_route1[i] = route2[j];
new_route2[j] = route1[i];
best_swap = Some((new_route1, new_route2));
}
}
}
best_swap
}
#[cfg(feature = "cuda")]
mod gpu_optimisation {
use super::*;
use cudarc::driver::*;
use std::{collections::HashMap, sync::Arc};
use tig_challenges::CudaKernel;
// set KERNEL to None if algorithm only has a CPU implementation
pub const KERNEL: Option<CudaKernel> = None;
// Important! your GPU and CPU version of the algorithm should return the same result
pub fn cuda_solve_challenge(
challenge: &Challenge,
dev: &Arc<CudaDevice>,
mut funcs: HashMap<&'static str, CudaFunction>,
) -> anyhow::Result<Option<Solution>> {
solve_challenge(challenge)
}
}
#[cfg(feature = "cuda")]
pub use gpu_optimisation::{cuda_solve_challenge, KERNEL};

View File

@ -106,9 +106,11 @@ pub use advanced_routing as c002_a049;
// c002_a052
// c002_a053
pub mod enhanced_routing;
pub use enhanced_routing as c002_a053;
// c002_a054
pub mod advanced_heuristics;
pub use advanced_heuristics as c002_a054;
// c002_a055

View File

@ -1,11 +1,7 @@
/*!
Copyright [year copyright work created] [name of copyright owner]
Copyright [yyyy] [name of copyright owner]
Identity of Submitter [name of person or entity that submits the Work to TIG]
UAI [UAI (if applicable)]
Licensed under the TIG Inbound Game License v2.0 or (at your option) any later
Licensed under the TIG Inbound Game License v1.0 or (at your option) any later
version (the "License"); you may not use this file except in compliance with the
License. You may obtain a copy of the License at
@ -17,24 +13,6 @@ CONDITIONS OF ANY KIND, either express or implied. See the License for the speci
language governing permissions and limitations under the License.
*/
// REMOVE BELOW SECTION IF UNUSED
/*
REFERENCES AND ACKNOWLEDGMENTS
This implementation is based on or inspired by existing work. Citations and
acknowledgments below:
1. Academic Papers:
- [Author(s), "Paper Title", DOI (if available)]
2. Code References:
- [Author(s), URL]
3. Other:
- [Author(s), Details]
*/
// TIG's UI uses the pattern `tig_challenges::<challenge_name>` to automatically detect your algorithm's challenge
use anyhow::{anyhow, Result};
use tig_challenges::vehicle_routing::{Challenge, Solution};

Binary file not shown.

Binary file not shown.

tig-benchmarker/.env Normal file (7 lines)
View File

@ -0,0 +1,7 @@
POSTGRES_USER=postgres
POSTGRES_PASSWORD=mysecretpassword
POSTGRES_DB=postgres
UI_PORT=80
DB_PORT=5432
MASTER_PORT=5115
VERBOSE=

View File

@ -7,17 +7,21 @@ Benchmarker for TIG. Expected setup is a single master and multiple slaves on di
Simply run:
```
POSTGRES_USER=postgres \
POSTGRES_PASSWORD=mysecretpassword \
POSTGRES_DB=postgres \
UI_PORT=80 \
DB_PORT=5432 \
MASTER_PORT=5115 \
# set VERBOSE=1 for debug master logs
VERBOSE= \
docker-compose up --build
```
This uses the `.env` file:
```
POSTGRES_USER=postgres
POSTGRES_PASSWORD=mysecretpassword
POSTGRES_DB=postgres
UI_PORT=80
DB_PORT=5432
MASTER_PORT=5115
VERBOSE=
```
See last section on how to find your player_id & api_key.
**Notes:**

tig-benchmarker/calc_apy.py Normal file (112 lines)
View File

@ -0,0 +1,112 @@
import numpy as np
import json
import requests
from common.structs import *
from common.calcs import *
API_URL = "https://mainnet-api.tig.foundation"
deposit = input("Enter deposit in TIG (leave blank to fetch deposit from your player_id): ")
if deposit != "":
lock_period = input("Enter number of weeks to lock (longer lock will have higher APY): ")
deposit = float(deposit)
lock_period = float(lock_period)
player_id = None
else:
player_id = input("Enter player_id: ").lower()
print("Fetching data...")
block = Block.from_dict(requests.get(f"{API_URL}/get-block?include_data").json()["block"])
opow_data = {
x["player_id"]: OPoW.from_dict(x)
for x in requests.get(f"{API_URL}/get-opow?block_id={block.id}").json()["opow"]
}
factors = {
benchmarker: {
**{
f: opow_data[benchmarker].block_data.num_qualifiers_by_challenge.get(f, 0)
for f in block.data.active_ids["challenge"]
},
"weighted_deposit": opow_data[benchmarker].block_data.delegated_weighted_deposit.to_float()
}
for benchmarker in opow_data
}
if player_id is None:
blocks_till_round_ends = block.config["rounds"]["blocks_per_round"] - (block.details.height % block.config["rounds"]["blocks_per_round"])
seconds_till_round_ends = blocks_till_round_ends * block.config["rounds"]["seconds_between_blocks"]
weighted_deposit = calc_weighted_deposit(deposit, seconds_till_round_ends, lock_period * 604800)
else:
player_data = Player.from_dict(requests.get(f"{API_URL}/get-player-data?player_id={player_id}&block_id={block.id}").json()["player"])
deposit = sum(x.to_float() for x in player_data.block_data.deposit_by_locked_period)
weighted_deposit = player_data.block_data.weighted_deposit.to_float()
for delegatee, fraction in player_data.block_data.delegatees.items():
factors[delegatee]["weighted_deposit"] -= fraction * weighted_deposit
total_factors = {
f: sum(factors[benchmarker][f] for benchmarker in opow_data)
for f in list(block.data.active_ids["challenge"]) + ["weighted_deposit"]
}
reward_shares = {
benchmarker: opow_data[benchmarker].block_data.reward_share
for benchmarker in opow_data
}
print("Optimising delegation by splitting into 100 chunks...")
chunk = weighted_deposit / 100
delegate = {}
for i in range(100):
print(f"Chunk {i + 1}: simulating delegation...")
total_factors["weighted_deposit"] += chunk
if len(delegate) == 10:
potential_delegatees = list(delegate)
else:
potential_delegatees = [benchmarker for benchmarker in opow_data if opow_data[benchmarker].block_data.self_deposit.to_float() >= 10000]
highest_apy_benchmarker = max(
potential_delegatees,
key=lambda delegatee: (
calc_influence({
benchmarker: {
f: (factors[benchmarker][f] + chunk * (benchmarker == delegatee and f == "weighted_deposit")) / total_factors[f]
for f in total_factors
}
for benchmarker in opow_data
}, block.config["opow"])[delegatee] *
reward_shares[delegatee] * chunk / (factors[delegatee]["weighted_deposit"] + chunk)
)
)
print(f"Chunk {i + 1}: best delegatee is {highest_apy_benchmarker}")
if highest_apy_benchmarker not in delegate:
delegate[highest_apy_benchmarker] = 0
delegate[highest_apy_benchmarker] += 1
factors[highest_apy_benchmarker]["weighted_deposit"] += chunk
influences = calc_influence({
benchmarker: {
f: factors[benchmarker][f] / total_factors[f]
for f in total_factors
}
for benchmarker in opow_data
}, block.config["opow"])
print("")
print("Optimised delegation split:")
reward_pool = block.config["rewards"]["distribution"]["opow"] * next(x["block_reward"] for x in reversed(block.config["rewards"]["schedule"]) if x["round_start"] <= block.details.round)
deposit_chunk = deposit / 100
total_reward = 0
for delegatee, num_chunks in delegate.items():
share_fraction = influences[delegatee] * reward_shares[delegatee] * (num_chunks * chunk) / factors[delegatee]["weighted_deposit"]
reward = share_fraction * reward_pool
total_reward += reward
apy = reward * block.config["rounds"]["blocks_per_round"] * 52 / (num_chunks * deposit_chunk)
print(f"{delegatee}: %delegated = {num_chunks}%, apy = {apy * 100:.2f}%")
print(f"average_apy = {total_reward * 10080 * 52 / deposit * 100:.2f}% on your deposit of {deposit} TIG")
print("")
print("To set this delegation split, run the following command:")
req = {"delegatees": {k: v / 100 for k, v in delegate.items()}}
print("API_KEY=<YOUR API KEY HERE>")
print(f"curl -H \"X-Api-Key: $API_KEY\" -X POST -d '{json.dumps(req)}' {API_URL}/set-delegatees")

View File

@ -1,68 +1,10 @@
import numpy as np
import json
import requests
from typing import List, Dict
from common.structs import *
from common.calcs import *
from copy import deepcopy
def calc_influence(fractions, opow_config) -> Dict[str, float]:
benchmarkers = list(fractions)
factors = list(next(iter(fractions.values())))
num_challenges = len(factors) - 1
avg_qualifier_fractions = {
benchmarker: sum(
fractions[benchmarker][f]
for f in factors
if f != "weighted_deposit"
) / num_challenges
for benchmarker in benchmarkers
}
deposit_fraction_cap = {
benchmarker: avg_qualifier_fractions[benchmarker] * opow_config["max_deposit_to_qualifier_ratio"]
for benchmarker in benchmarkers
}
capped_fractions = {
benchmarker: {
**fractions[benchmarker],
"weighted_deposit": min(
fractions[benchmarker]["weighted_deposit"],
deposit_fraction_cap[benchmarker]
)
}
for benchmarker in benchmarkers
}
avg_fraction = {
benchmarker: np.mean(list(capped_fractions[benchmarker].values()))
for benchmarker in benchmarkers
}
var_fraction = {
benchmarker: np.var(list(capped_fractions[benchmarker].values()))
for benchmarker in benchmarkers
}
imbalance = {
benchmarker: (var_fraction[benchmarker] / np.square(avg_fraction[benchmarker]) / num_challenges) if avg_fraction[benchmarker] > 0 else 0
for benchmarker in benchmarkers
}
imbalance_penalty = {
benchmarker: 1.0 - np.exp(-opow_config["imbalance_multiplier"] * imbalance[benchmarker])
for benchmarker in benchmarkers
}
weighted_avg_fraction = {
benchmarker: ((avg_qualifier_fractions[benchmarker] * num_challenges) + capped_fractions[benchmarker]["weighted_deposit"] * opow_config["deposit_multiplier"]) / (num_challenges + opow_config["deposit_multiplier"])
for benchmarker in benchmarkers
}
unormalised_influence = {
benchmarker: weighted_avg_fraction[benchmarker] * (1.0 - imbalance_penalty[benchmarker])
for benchmarker in benchmarkers
}
total = sum(unormalised_influence.values())
influence = {
benchmarker: unormalised_influence[benchmarker] / total
for benchmarker in benchmarkers
}
return influence
API_URL = "https://mainnet-api.tig.foundation"
player_id = input("Enter player_id: ").lower()
@ -140,12 +82,12 @@ print(f"reward = {reward['only_self_deposit']:.4f} TIG per block")
print("")
print("Scenario 2 (current self + delegated deposit)")
print(f"%weighted_deposit = {fractions[player_id]['weighted_deposit'] * 100:.2f}%")
print(f"reward = {reward['current']:.4f} TIG per block ({reward['current'] / reward['only_self_deposit'] * 100 - 100:.2f}% difference*)")
print(f"reward = {reward['current']:.4f} TIG per block (recommended max reward_share = {100 - reward['only_self_deposit'] / reward['current'] * 100:.2f}%*)")
print("")
print(f"Scenario 3 (self + delegated deposit at parity)")
print(f"%weighted_deposit = average %qualifiers = {average_fraction_of_qualifiers * 100:.2f}%")
print(f"reward = {reward['at_parity']:.4f} TIG per block ({reward['at_parity'] / reward['only_self_deposit'] * 100 - 100:.2f}% difference*)")
print(f"reward = {reward['at_parity']:.4f} TIG per block (recommended max reward_share = {100 - reward['only_self_deposit'] / reward['at_parity'] * 100:.2f}%*)")
print("")
print("*These are percentage differences in reward compared with relying only on self-deposit (Scenario 1).")
print("*Recommend not setting reward_share above the max. You will not benefit from delegation (earn the same as Scenario 1 with zero delegation).")
print("")
print("Note: the imbalance penalty is such that your reward increases at a high rate when moving up to parity")

View File

@ -0,0 +1,102 @@
import numpy as np
from typing import List, Dict
def calc_influence(fractions: Dict[str, Dict[str, float]], opow_config: dict) -> Dict[str, float]:
"""
Calculate the influence of each benchmarker based on their fractions and the OPoW configuration.
Args:
fractions: A dictionary of dictionaries, mapping benchmarker_ids to their fraction of each factor (challenges & weighted_deposit).
opow_config: A dictionary containing configuration parameters for the calculation.
Returns:
Dict[str, float]: A dictionary mapping each benchmarker_id to their calculated influence.
"""
benchmarkers = list(fractions)
factors = list(next(iter(fractions.values())))
num_challenges = len(factors) - 1
avg_qualifier_fractions = {
benchmarker: sum(
fractions[benchmarker][f]
for f in factors
if f != "weighted_deposit"
) / num_challenges
for benchmarker in benchmarkers
}
deposit_fraction_cap = {
benchmarker: avg_qualifier_fractions[benchmarker] * opow_config["max_deposit_to_qualifier_ratio"]
for benchmarker in benchmarkers
}
capped_fractions = {
benchmarker: {
**fractions[benchmarker],
"weighted_deposit": min(
fractions[benchmarker]["weighted_deposit"],
deposit_fraction_cap[benchmarker]
)
}
for benchmarker in benchmarkers
}
avg_fraction = {
benchmarker: np.mean(list(capped_fractions[benchmarker].values()))
for benchmarker in benchmarkers
}
var_fraction = {
benchmarker: np.var(list(capped_fractions[benchmarker].values()))
for benchmarker in benchmarkers
}
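# imbalance is the squared coefficient of variation (variance / mean^2) of each benchmarker's capped fractions, scaled by 1/num_challenges; zero if the mean fraction is zero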
imbalance = {
benchmarker: (var_fraction[benchmarker] / np.square(avg_fraction[benchmarker]) / num_challenges) if avg_fraction[benchmarker] > 0 else 0
for benchmarker in benchmarkers
}
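# imbalance_penalty grows from 0 towards 1 as imbalance increases; imbalance_multiplier controls how quickly the penalty saturates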
imbalance_penalty = {
benchmarker: 1.0 - np.exp(-opow_config["imbalance_multiplier"] * imbalance[benchmarker])
for benchmarker in benchmarkers
}
weighted_avg_fraction = {
benchmarker: ((avg_qualifier_fractions[benchmarker] * num_challenges) + capped_fractions[benchmarker]["weighted_deposit"] * opow_config["deposit_multiplier"]) / (num_challenges + opow_config["deposit_multiplier"])
for benchmarker in benchmarkers
}
unormalised_influence = {
benchmarker: weighted_avg_fraction[benchmarker] * (1.0 - imbalance_penalty[benchmarker])
for benchmarker in benchmarkers
}
total = sum(unormalised_influence.values())
influence = {
benchmarker: unormalised_influence[benchmarker] / total
for benchmarker in benchmarkers
}
return influence
def calc_weighted_deposit(deposit: float, seconds_till_round_end: int, lock_seconds: int) -> float:
"""
Calculate weighted deposit
Args:
deposit: Amount to deposit
seconds_till_round_end: Seconds remaining in current round
lock_seconds: Total lock duration in seconds
Returns:
Weighted deposit
"""
weighted_deposit = 0
if lock_seconds <= 0:
return weighted_deposit
# Calculate first chunk (partial week)
weighted_deposit += deposit * min(seconds_till_round_end, lock_seconds) // lock_seconds
remaining_seconds = lock_seconds - min(seconds_till_round_end, lock_seconds)
weight = 2
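# Each remaining full week (604800s) of the lock is weighted progressively higher, starting at 2 and capped at 26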
while remaining_seconds > 0:
chunk_seconds = min(remaining_seconds, 604800)
chunk = deposit * chunk_seconds // lock_seconds
weighted_deposit += chunk * weight
remaining_seconds -= chunk_seconds
weight = min(weight + 1, 26)
return weighted_deposit
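# --- Usage sketch (illustrative only) ---
# The benchmarker ids, challenge ids, fractions and config values below are
# made-up examples, not protocol data; they only show how the two helpers
# defined above can be called.
if __name__ == "__main__":
    example_fractions = {
        "benchmarker_a": {"c001": 0.6, "c002": 0.5, "weighted_deposit": 0.7},
        "benchmarker_b": {"c001": 0.4, "c002": 0.5, "weighted_deposit": 0.3},
    }
    example_opow_config = {
        "max_deposit_to_qualifier_ratio": 2.0,
        "imbalance_multiplier": 3.0,
        "deposit_multiplier": 1.0,
    }
    print(calc_influence(example_fractions, example_opow_config))
    # 1000 TIG locked for 4 weeks, with 1 week (604800s) left in the current round
    print(calc_weighted_deposit(1000, 604800, 4 * 604800))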

View File

@ -201,6 +201,7 @@ class OPoWBlockData(FromDict):
num_qualifiers_by_challenge: Dict[str, int]
cutoff: int
delegated_weighted_deposit: PreciseNumber
self_deposit: PreciseNumber
delegators: Set[str]
reward_share: float
imbalance: PreciseNumber
@ -221,13 +222,13 @@ class PlayerDetails(FromDict):
class PlayerState(FromDict):
total_fees_paid: PreciseNumber
available_fee_balance: PreciseNumber
delegatee: Optional[dict]
delegatees: Optional[dict]
votes: dict
reward_share: Optional[dict]
@dataclass
class PlayerBlockData(FromDict):
delegatee: Optional[str]
delegatees: Dict[str, float]
reward_by_type: Dict[str, PreciseNumber]
deposit_by_locked_period: List[PreciseNumber]
weighted_deposit: PreciseNumber

View File

@ -5,7 +5,7 @@ import threading
import uvicorn
import json
from datetime import datetime
from master.sql import db_conn
from master.sql import get_db_conn
from fastapi import FastAPI, Query, Request, HTTPException, Depends, Header
from fastapi.responses import JSONResponse
from fastapi.middleware.cors import CORSMiddleware
@ -21,7 +21,7 @@ class ClientManager:
logger.info("ClientManager initialized and connected to the database.")
# Fetch initial config from database
result = db_conn.fetch_one(
result = get_db_conn().fetch_one(
"""
SELECT config FROM config
LIMIT 1
@ -54,7 +54,7 @@ class ClientManager:
@self.app.get('/stop/{benchmark_id}')
async def stop_benchmark(benchmark_id: str):
try:
db_conn.execute(
get_db_conn().execute(
"""
UPDATE job
SET stopped = true
@ -76,7 +76,7 @@ class ClientManager:
new_config["api_url"] = new_config["api_url"].rstrip('/')
# Update config in database
db_conn.execute(
get_db_conn().execute(
"""
DELETE FROM config;
INSERT INTO config (config)
@ -95,7 +95,7 @@ class ClientManager:
@self.app.get("/get-jobs")
async def get_jobs():
result = db_conn.fetch_all(
result = get_db_conn().fetch_all(
"""
WITH recent_jobs AS (
SELECT benchmark_id
@ -199,6 +199,48 @@ class ClientManager:
headers = {"Accept-Encoding": "gzip"}
)
@self.app.get("/get-batch-data/{batch_id}")
async def get_batch_data(batch_id: str):
benchmark_id, batch_idx = batch_id.split("_")
result = get_db_conn().fetch_one(
f"""
SELECT
JSONB_BUILD_OBJECT(
'id', A.benchmark_id || '_' || A.batch_idx,
'benchmark_id', A.benchmark_id,
'start_nonce', A.batch_idx * B.batch_size,
'num_nonces', LEAST(B.batch_size, B.num_nonces - A.batch_idx * B.batch_size),
'settings', B.settings,
'sampled_nonces', D.sampled_nonces,
'runtime_config', B.runtime_config,
'download_url', B.download_url,
'rand_hash', B.rand_hash,
'batch_size', B.batch_size,
'batch_idx', A.batch_idx
) AS batch,
C.merkle_root,
C.solution_nonces,
C.merkle_proofs
FROM root_batch A
INNER JOIN job B
ON A.benchmark_id = '{benchmark_id}'
AND A.batch_idx = {batch_idx}
AND A.benchmark_id = B.benchmark_id
INNER JOIN batch_data C
ON A.benchmark_id = C.benchmark_id
AND A.batch_idx = C.batch_idx
LEFT JOIN proofs_batch D
ON A.benchmark_id = D.benchmark_id
AND A.batch_idx = D.batch_idx
"""
)
return JSONResponse(
content=dict(result),
status_code=200,
headers = {"Accept-Encoding": "gzip"}
)
def start(self):
def run():
self.app = FastAPI()

View File

@ -117,6 +117,7 @@ class DifficultySampler:
self.challenges = {}
def on_new_block(self, challenges: Dict[str, Challenge], **kwargs):
config = CONFIG["difficulty_sampler_config"]
for c in challenges.values():
if c.block_data is None:
continue
@ -126,7 +127,10 @@ class DifficultySampler:
else:
upper_frontier, lower_frontier = c.block_data.base_frontier, c.block_data.scaled_frontier
self.valid_difficulties[c.details.name] = calc_valid_difficulties(list(upper_frontier), list(lower_frontier))
self.frontiers[c.details.name] = calc_all_frontiers(self.valid_difficulties[c.details.name])
if config["difficulty_ranges"] is None:
self.frontiers[c.details.name] = []
else:
self.frontiers[c.details.name] = calc_all_frontiers(self.valid_difficulties[c.details.name])
self.challenges = [c.details.name for c in challenges.values()]
@ -153,12 +157,16 @@ class DifficultySampler:
logger.debug(f"No valid difficulties found for {c_name} - skipping selected difficulties")
if not found_valid:
frontiers = self.frontiers[c_name]
difficulty_range = config["difficulty_ranges"][c_name]
idx1 = math.floor(difficulty_range[0] * (len(frontiers) - 1))
idx2 = math.ceil(difficulty_range[1] * (len(frontiers) - 1))
difficulties = [p for frontier in frontiers[idx1:idx2 + 1] for p in frontier]
difficulty = random.choice(difficulties)
if len(self.frontiers[c_name]) == 0 or config["difficulty_ranges"] is None:
valid_difficulties = self.valid_difficulties[c_name]
difficulty = random.choice(valid_difficulties)
else:
frontiers = self.frontiers[c_name]
difficulty_range = config["difficulty_ranges"][c_name]
idx1 = math.floor(difficulty_range[0] * (len(frontiers) - 1))
idx2 = math.ceil(difficulty_range[1] * (len(frontiers) - 1))
difficulties = [p for frontier in frontiers[idx1:idx2 + 1] for p in frontier]
difficulty = random.choice(difficulties)
samples[c_name] = difficulty
logger.debug(f"Sampled difficulty {difficulty} for challenge {c_name}")

View File

@ -5,7 +5,7 @@ from common.merkle_tree import MerkleHash, MerkleBranch, MerkleTree
from common.structs import *
from common.utils import *
from typing import Dict, List, Optional, Set
from master.sql import db_conn
from master.sql import get_db_conn
from master.client_manager import CONFIG
import math
@ -39,7 +39,7 @@ class JobManager:
for benchmark_id, x in precommits.items():
if (
benchmark_id in proofs or
db_conn.fetch_one( # check if job is already created
get_db_conn().fetch_one( # check if job is already created
"""
SELECT 1
FROM job
@ -122,14 +122,14 @@ class JobManager:
)
]
db_conn.execute_many(*atomic_inserts)
get_db_conn().execute_many(*atomic_inserts)
# update jobs from confirmed benchmarks
for benchmark_id, x in benchmarks.items():
if (
benchmark_id in proofs or
(result := db_conn.fetch_one(
(result := get_db_conn().fetch_one(
"""
SELECT num_batches, batch_size
FROM job
@ -171,11 +171,11 @@ class JobManager:
)
]
db_conn.execute_many(*atomic_update)
get_db_conn().execute_many(*atomic_update)
# update jobs from confirmed proofs
if len(proofs) > 0:
db_conn.execute(
get_db_conn().execute(
"""
UPDATE job
SET end_time = (EXTRACT(EPOCH FROM NOW()) * 1000)::BIGINT
@ -186,7 +186,7 @@ class JobManager:
)
# stop any expired jobs
db_conn.execute(
get_db_conn().execute(
"""
UPDATE job
SET stopped = true
@ -202,7 +202,7 @@ class JobManager:
now = int(time.time() * 1000)
# Find jobs where all root_batchs are ready
rows = db_conn.fetch_all(
rows = get_db_conn().fetch_all(
"""
WITH ready AS (
SELECT A.benchmark_id
@ -241,7 +241,7 @@ class JobManager:
merkle_root = tree.calc_merkle_root()
# Update the database with calculated merkle root
db_conn.execute_many(*[
get_db_conn().execute_many(*[
(
"""
UPDATE job_data
@ -266,7 +266,7 @@ class JobManager:
])
# Find jobs where all proofs_batchs are ready
rows = db_conn.fetch_all(
rows = get_db_conn().fetch_all(
"""
WITH ready AS (
SELECT A.benchmark_id
@ -306,7 +306,7 @@ class JobManager:
for x in y
]
batch_merkle_roots = db_conn.fetch_one(
batch_merkle_roots = get_db_conn().fetch_one(
"""
SELECT JSONB_AGG(merkle_root ORDER BY batch_idx) as batch_merkle_roots
FROM batch_data
@ -341,7 +341,7 @@ class JobManager:
)
# Update database with calculated merkle proofs
db_conn.execute_many(*[
get_db_conn().execute_many(*[
(
"""
UPDATE job_data

View File

@ -6,7 +6,7 @@ from master.submissions_manager import SubmitPrecommitRequest
from common.structs import *
from common.utils import FromDict
from typing import Dict, List, Optional, Set
from master.sql import db_conn
from master.sql import get_db_conn
from master.client_manager import CONFIG
logger = logging.getLogger(os.path.splitext(os.path.basename(__file__))[0])
@ -93,7 +93,7 @@ class PrecommitManager:
logger.info(f"global qualifier difficulty stats for {challenges[c_id].details.name}: (#nonces: {x['nonces']}, #solutions: {x['solutions']}, avg_nonces_per_solution: {avg_nonces_per_solution})")
def run(self, difficulty_samples: Dict[str, List[int]]) -> SubmitPrecommitRequest:
num_pending_jobs = db_conn.fetch_one(
num_pending_jobs = get_db_conn().fetch_one(
"""
SELECT COUNT(*)
FROM job

View File

@ -13,7 +13,7 @@ import uvicorn
from common.structs import *
from common.utils import *
from typing import Dict, List, Optional, Set
from master.sql import db_conn
from master.sql import get_db_conn
from master.client_manager import CONFIG
@ -27,7 +27,7 @@ class SlaveManager:
def run(self):
with self.lock:
self.batches = db_conn.fetch_all(
self.batches = get_db_conn().fetch_all(
"""
SELECT * FROM (
SELECT
@ -145,7 +145,7 @@ class SlaveManager:
if len(concurrent) == 0:
logger.debug(f"no batches available for {slave_name}")
if len(updates) > 0:
db_conn.execute_many(*updates)
get_db_conn().execute_many(*updates)
return JSONResponse(content=jsonable_encoder(concurrent))
@app.post('/submit-batch-root/{batch_id}')
@ -182,7 +182,7 @@ class SlaveManager:
# Update roots table with merkle root and solution nonces
benchmark_id, batch_idx = batch_id.split("_")
batch_idx = int(batch_idx)
db_conn.execute_many(*[
get_db_conn().execute_many(*[
(
"""
UPDATE root_batch
@ -247,7 +247,7 @@ class SlaveManager:
# Update proofs table with merkle proofs
benchmark_id, batch_idx = batch_id.split("_")
batch_idx = int(batch_idx)
db_conn.execute_many(*[
get_db_conn().execute_many(*[
(
"""
UPDATE proofs_batch
@ -276,8 +276,8 @@ class SlaveManager:
return {"status": "OK"}
config = CONFIG["slave_manager_config"]
thread = Thread(target=lambda: uvicorn.run(app, host="0.0.0.0", port=config["port"]))
thread = Thread(target=lambda: uvicorn.run(app, host="0.0.0.0", port=5115))
thread.daemon = True
thread.start()
logger.info(f"webserver started on 0.0.0.0:{config['port']}")
logger.info(f"webserver started on 0.0.0.0:5115")

View File

@ -17,10 +17,15 @@ class PostgresDB:
}
self._conn = None
@property
def closed(self) -> bool:
return self._conn is None or self._conn.closed
def connect(self) -> None:
"""Establish connection to PostgreSQL database"""
try:
self._conn = psycopg2.connect(**self.conn_params)
self._conn.autocommit = False
logger.info(f"Connected to PostgreSQL database at {self.conn_params['host']}:{self.conn_params['port']}")
except Exception as e:
logger.error(f"Error connecting to PostgreSQL: {str(e)}")
@ -40,7 +45,6 @@ class PostgresDB:
try:
with self._conn.cursor() as cur:
cur.execute("BEGIN")
for query in args:
cur.execute(*query)
self._conn.commit()
@ -100,11 +104,15 @@ class PostgresDB:
db_conn = None
if db_conn is None:
db_conn = PostgresDB(
host=os.environ["POSTGRES_HOST"],
port=5432,
dbname=os.environ["POSTGRES_DB"],
user=os.environ["POSTGRES_USER"],
password=os.environ["POSTGRES_PASSWORD"]
)
def get_db_conn():
global db_conn
if db_conn is None or db_conn.closed:
db_conn = PostgresDB(
host=os.environ["POSTGRES_HOST"],
port=5432,
dbname=os.environ["POSTGRES_DB"],
user=os.environ["POSTGRES_USER"],
password=os.environ["POSTGRES_PASSWORD"]
)
return db_conn

View File

@ -7,7 +7,7 @@ import os
from common.structs import *
from common.utils import *
from typing import Union, Set, List, Dict
from master.sql import db_conn
from master.sql import get_db_conn
from master.client_manager import CONFIG
logger = logging.getLogger(os.path.splitext(os.path.basename(__file__))[0])
@ -72,7 +72,7 @@ class SubmissionsManager:
**kwargs
):
if len(benchmarks) > 0:
db_conn.execute(
get_db_conn().execute(
"""
UPDATE job
SET benchmark_submitted = true
@ -82,7 +82,7 @@ class SubmissionsManager:
)
if len(proofs) > 0:
db_conn.execute(
get_db_conn().execute(
"""
UPDATE job
SET proof_submitted = true
@ -100,7 +100,7 @@ class SubmissionsManager:
else:
self._post_thread("precommit", submit_precommit_req)
benchmark_to_submit = db_conn.fetch_one(
benchmark_to_submit = get_db_conn().fetch_one(
"""
WITH updated AS (
UPDATE job
@ -144,7 +144,7 @@ class SubmissionsManager:
else:
logger.debug("no benchmark to submit")
proof_to_submit = db_conn.fetch_one(
proof_to_submit = get_db_conn().fetch_one(
"""
WITH updated AS (
UPDATE job

View File

@ -18,7 +18,7 @@ http {
server_name localhost;
# Master Service - specific endpoints
location ~ ^/(get-config|stop|update-config|get-jobs|get-latest-data) {
location ~ ^/(get-config|stop|update-config|get-batch-data|get-jobs|get-latest-data) {
proxy_pass http://master:3336;
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;

View File

@ -160,7 +160,6 @@ SELECT '
}
},
"slave_manager_config": {
"port": 5115,
"time_before_batch_retry": 60000,
"slaves": [
{

View File

@ -21,11 +21,11 @@ FINISHED_BATCH_IDS = {}
def now():
return int(time.time() * 1000)
def download_wasm(session, download_url, wasm_path):
def download_wasm(download_url, wasm_path):
if not os.path.exists(wasm_path):
start = now()
logger.info(f"downloading WASM from {download_url}")
resp = session.get(download_url)
resp = requests.get(download_url)
if resp.status_code != 200:
raise Exception(f"status {resp.status_code} when downloading WASM: {resp.text}")
with open(wasm_path, 'wb') as f:
@ -53,18 +53,30 @@ def run_tig_worker(tig_worker_path, batch, wasm_path, num_workers, output_path):
process = subprocess.Popen(
cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE
)
stdout, stderr = process.communicate()
if process.returncode != 0:
PROCESSING_BATCH_IDS.remove(batch["id"])
raise Exception(f"tig-worker failed: {stderr.decode()}")
result = json.loads(stdout.decode())
logger.info(f"computing batch took {now() - start}ms")
logger.debug(f"batch result: {result}")
with open(f"{output_path}/{batch['id']}/result.json", "w") as f:
json.dump(result, f)
PROCESSING_BATCH_IDS.remove(batch["id"])
READY_BATCH_IDS.add(batch["id"])
while True:
ret = process.poll()
if ret is not None:
if ret != 0:
PROCESSING_BATCH_IDS.remove(batch["id"])
raise Exception(f"tig-worker failed with return code {ret}")
stdout, stderr = process.communicate()
result = json.loads(stdout.decode())
logger.info(f"computing batch {batch['id']} took {now() - start}ms")
logger.debug(f"batch {batch['id']} result: {result}")
with open(f"{output_path}/{batch['id']}/result.json", "w") as f:
json.dump(result, f)
PROCESSING_BATCH_IDS.remove(batch["id"])
READY_BATCH_IDS.add(batch["id"])
break
elif batch["id"] not in PROCESSING_BATCH_IDS:
process.kill()
logger.info(f"batch {batch['id']} stopped")
break
time.sleep(0.1)
def purge_folders(output_path, ttl):
@ -81,11 +93,11 @@ def purge_folders(output_path, ttl):
for batch_id in purge_batch_ids:
if os.path.exists(f"{output_path}/{batch_id}"):
logger.info(f"purging batch {batch_id}")
shutil.rmtree(f"{output_path}/{batch_id}")
shutil.rmtree(f"{output_path}/{batch_id}", ignore_errors=True)
FINISHED_BATCH_IDS.pop(batch_id)
def send_results(session, master_ip, master_port, tig_worker_path, download_wasms_folder, num_workers, output_path):
def send_results(headers, master_ip, master_port, tig_worker_path, download_wasms_folder, num_workers, output_path):
try:
batch_id = READY_BATCH_IDS.pop()
except KeyError:
@ -118,7 +130,7 @@ def send_results(session, master_ip, master_port, tig_worker_path, download_wasm
submit_url = f"http://{master_ip}:{master_port}/submit-batch-root/{batch_id}"
logger.info(f"posting root to {submit_url}")
resp = session.post(submit_url, json=result)
resp = requests.post(submit_url, headers=headers, json=result)
if resp.status_code == 200:
FINISHED_BATCH_IDS[batch_id] = now()
logger.info(f"successfully posted root for batch {batch_id}")
@ -151,7 +163,7 @@ def send_results(session, master_ip, master_port, tig_worker_path, download_wasm
submit_url = f"http://{master_ip}:{master_port}/submit-batch-proofs/{batch_id}"
logger.info(f"posting proofs to {submit_url}")
resp = session.post(submit_url, json={"merkle_proofs": proofs_to_submit})
resp = requests.post(submit_url, headers=headers, json={"merkle_proofs": proofs_to_submit})
if resp.status_code == 200:
FINISHED_BATCH_IDS[batch_id] = now()
logger.info(f"successfully posted proofs for batch {batch_id}")
@ -164,7 +176,7 @@ def send_results(session, master_ip, master_port, tig_worker_path, download_wasm
time.sleep(2)
def process_batch(session, tig_worker_path, download_wasms_folder, num_workers, output_path):
def process_batch(tig_worker_path, download_wasms_folder, num_workers, output_path):
try:
batch_id = PENDING_BATCH_IDS.pop()
except KeyError:
@ -188,7 +200,7 @@ def process_batch(session, tig_worker_path, download_wasms_folder, num_workers,
batch = json.load(f)
wasm_path = os.path.join(download_wasms_folder, f"{batch['settings']['algorithm_id']}.wasm")
download_wasm(session, batch['download_url'], wasm_path)
download_wasm(batch['download_url'], wasm_path)
Thread(
target=run_tig_worker,
@ -196,10 +208,10 @@ def process_batch(session, tig_worker_path, download_wasms_folder, num_workers,
).start()
def poll_batches(session, master_ip, master_port, output_path):
def poll_batches(headers, master_ip, master_port, output_path):
get_batches_url = f"http://{master_ip}:{master_port}/get-batches"
logger.info(f"fetching batches from {get_batches_url}")
resp = session.get(get_batches_url)
resp = requests.get(get_batches_url, headers=headers)
if resp.status_code == 200:
batches = resp.json()
@ -214,6 +226,9 @@ def poll_batches(session, master_ip, master_port, output_path):
json.dump(batch, f)
PENDING_BATCH_IDS.clear()
PENDING_BATCH_IDS.update(root_batch_ids + proofs_batch_ids)
for batch_id in PROCESSING_BATCH_IDS - set(root_batch_ids + proofs_batch_ids):
logger.info(f"stopping batch {batch_id}")
PROCESSING_BATCH_IDS.remove(batch_id)
time.sleep(5)
else:
@ -247,19 +262,18 @@ def main(
raise FileNotFoundError(f"tig-worker not found at path: {tig_worker_path}")
os.makedirs(download_wasms_folder, exist_ok=True)
session = requests.Session()
session.headers.update({
headers = {
"User-Agent": slave_name
})
}
Thread(
target=wrap_thread,
args=(process_batch, session, tig_worker_path, download_wasms_folder, num_workers, output_path)
args=(process_batch, tig_worker_path, download_wasms_folder, num_workers, output_path)
).start()
Thread(
target=wrap_thread,
args=(send_results, session, master_ip, master_port, tig_worker_path, download_wasms_folder, num_workers, output_path)
args=(send_results, headers, master_ip, master_port, tig_worker_path, download_wasms_folder, num_workers, output_path)
).start()
Thread(
@ -267,7 +281,7 @@ def main(
args=(purge_folders, output_path, ttl)
).start()
wrap_thread(poll_batches, session, master_ip, master_port, output_path)
wrap_thread(poll_batches, headers, master_ip, master_port, output_path)
if __name__ == "__main__":

View File

@ -0,0 +1,134 @@
from common.structs import *
import requests
import json
import random
import os
import subprocess
print("THIS IS AN EXAMPLE SCRIPT TO VERIFY A BATCH")
TIG_WORKER_PATH = input("Enter path of tig-worker executable: ")
NUM_WORKERS = int(input("Enter number of workers: "))
if not os.path.exists(TIG_WORKER_PATH):
raise FileNotFoundError(f"tig-worker not found at path: {TIG_WORKER_PATH}")
MASTER_IP = input("Enter Master IP: ")
MASTER_PORT = input("Enter Master Port: ")
jobs = requests.get(f"http://{MASTER_IP}:{MASTER_PORT}/get-jobs").json()
for i, j in enumerate(jobs):
print(f"{i + 1}) benchmark_id: {j['benchmark_id']}, challenge: {j['challenge']}, algorithm: {j['algorithm']}, status: {j['status']}")
job_idx = int(input("Enter the index of the job you want to verify: ")) - 1
for i, b in enumerate(jobs[job_idx]["batches"]):
print(f"{i + 1}) slave: {b['slave']}, num_solutions: {b['num_solutions']}, status: {b['status']}")
batch_idx = int(input("Enter the index of the batch you want to verify: ")) - 1
benchmark_id = jobs[job_idx]["benchmark_id"]
url = f"http://{MASTER_IP}:{MASTER_PORT}/get-batch-data/{benchmark_id}_{batch_idx}"
print(f"Fetching batch data from {url}")
data = requests.get(url).json()
batch = data['batch']
merkle_root = data['merkle_root']
solution_nonces = data['solution_nonces']
merkle_proofs = data['merkle_proofs']
if merkle_proofs is not None:
merkle_proofs = {x['leaf']['nonce']: x for x in merkle_proofs}
if (
merkle_proofs is None or
len(solution_nonces := set(merkle_proofs) & set(solution_nonces)) == 0
):
print("No solution data to verify for this batch")
else:
for nonce in solution_nonces:
print(f"Verifying solution for nonce: {nonce}")
cmd = [
TIG_WORKER_PATH, "verify_solution",
json.dumps(batch['settings'], separators=(',', ':')),
batch["rand_hash"],
str(nonce),
json.dumps(merkle_proofs[nonce]['leaf']['solution'], separators=(',', ':')),
]
print(f"Running cmd: {' '.join(cmd)}")
ret = subprocess.run(
cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE
)
if ret.returncode == 0:
print(f"[SUCCESS]: {ret.stdout.decode()}")
else:
print(f"[ERROR]: {ret.stderr.decode()}")
if merkle_root is not None:
download_url = batch["download_url"]
print(f"Downloading WASM from {download_url}")
resp = requests.get(download_url)
if resp.status_code != 200:
raise Exception(f"status {resp.status_code} when downloading WASM: {resp.text}")
wasm_path = f'{batch["settings"]["algorithm_id"]}.wasm'
with open(wasm_path, 'wb') as f:
f.write(resp.content)
print(f"WASM Path: {wasm_path}")
print("")
if merkle_proofs is None:
print("No merkle proofs to verify for this batch")
else:
for nonce in merkle_proofs:
print(f"Verifying output data for nonce: {nonce}")
cmd = [
TIG_WORKER_PATH, "compute_solution",
json.dumps(batch['settings'], separators=(',', ':')),
batch["rand_hash"],
str(nonce),
wasm_path,
"--mem", str(batch["runtime_config"]["max_memory"]),
"--fuel", str(batch["runtime_config"]["max_fuel"]),
]
print(f"Running cmd: {' '.join(cmd)}")
ret = subprocess.run(
cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE
)
out = json.loads(ret.stdout.decode())
expected = json.dumps(merkle_proofs[nonce]['leaf'], separators=(',', ':'), sort_keys=True)
actual = json.dumps(out, separators=(',', ':'), sort_keys=True)
if expected == actual:
print(f"[SUCCESS]: output data match")
else:
print(f"[ERROR]: output data mismatch")
print(f"Batch data: {expected}")
print(f"Recomputed: {actual}")
print(f"")
if merkle_root is None:
print("No merkle root to verify for this batch")
else:
print("Verifying merkle root")
cmd = [
TIG_WORKER_PATH, "compute_batch",
json.dumps(batch["settings"]),
batch["rand_hash"],
str(batch["start_nonce"]),
str(batch["num_nonces"]),
str(batch["batch_size"]),
wasm_path,
"--mem", str(batch["runtime_config"]["max_memory"]),
"--fuel", str(batch["runtime_config"]["max_fuel"]),
"--workers", str(NUM_WORKERS),
]
print(f"Running cmd: {' '.join(cmd)}")
ret = subprocess.run(
cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE
)
if ret.returncode == 0:
out = json.loads(ret.stdout.decode())
if out["merkle_root"] == merkle_root:
print(f"[SUCCESS]: merkle root match")
else:
print(f"[ERROR]: merkle root mismatch")
print(f"Batch data: {expected}")
print(f"Recomputed: {actual}")
else:
print(f"[ERROR]: {ret.stderr.decode()}")
print("")
print("FINISHED")

View File

@ -4,7 +4,7 @@ A folder that hosts submissions of algorithmic methods made by Innovators in TIG
Each submission is committed to its own branch with the naming pattern:
`<challenge_name>\method\<method_name>`
`<challenge_name>\breakthrough\<method_name>`
## Making a Submission
@ -13,21 +13,31 @@ Each submissions is committed to their own branch with the naming pattern:
* [Voting Guidelines for Token Holders](../docs/guides/voting.md)
2. Copy `template.md` from the relevant challenge's folder (e.g. [`knapsack/template.md`](./knapsack/template.md)), and fill in the details with evidence of your breakthrough
2. Email the following to `breakthroughs@tig.foundation` with subject "Breakthrough Submission (`<breakthrough_name>`)":
* **Evidence form**: copy & fill in [`evidence.md`](./evidence.md). Of particular importance is Section 1 which describes your breakthrough
3. Copy [invention_assignment.doc](../docs/agreements/invention_assignment.doc), fill in the details, and sign
* **Invention assignment**: copy [invention_assignment.doc](../docs/agreements/invention_assignment.doc) and replace the highlighted parts. Inventor and witness must sign.
4. Email your invention assignment to contact@tig.foundation with subject "Invention Assignment"
* **Address Signature**: use [etherscan](https://etherscan.io/verifiedSignatures#) to sign the message `I am signing this message to confirm my submission of breakthrough <breakthrough_name>`. Sign with the player_id that is making the submission. Send the verified etherscan link with the message and signature.
5. Submit your evidence via https://play.tig.foundation/innovator
* 250 TIG will be deducted from your Available Fee Balance to make a submission
* You can topup via the [Benchmarker page](https://play.tig.foundation/benchmarker)
* (Optional) **Code implementation**: attach code implementing your breakthrough. Do not submit this code to TIG separately. This will be done for you
**Notes**:
* The time of submission will be taken as the timestamp of the auto-reply to your email attaching the required documents.
* Iterations are permitted for errors highlighted by the Foundation. This will not change the timestamp of your submission
* 250 TIG will be deducted from your Available Fee Balance to make a breakthrough submission
* An additional 10 TIG will be deducted from your Available Fee Balance to make an algorithm submission (if one is attached)
* You can topup via the [Benchmarker page](https://play.tig.foundation/benchmarker)
## Method Submission Flow
1. New submissions get their branch pushed to a private version of this repository
2. A new submission made during round `X` will have its branch pushed to the public version of this repository at the start of round `X + 2`
3. From the start of round `X + 2` till the start of round `X + 4`, token holders can vote on whether they consider the method to be a breakthrough based off the submitted evidence
3. From the start of round `X + 3` till the start of round `X + 4`, token holders can vote on whether they consider the method to be a breakthrough based off the submitted evidence
4. At the start of round `X + 4`, if the submission has at least 50% yes votes, it becomes active
5. Every block, a method's adoption is the sum of the adoption of all algorithms attributed to that method (see the sketch below). Methods with at least 50% adoption earn rewards and a merge point
6. At the end of a round, the method in each challenge with the most merge points, provided it meets the minimum threshold of 5040 merge points, gets merged into the `main` branch
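
A minimal sketch of the adoption roll-up described in step 5, assuming illustrative algorithm names and adoption figures (these are not protocol values or APIs):

```python
# Hypothetical figures: each algorithm's adoption and the method it is attributed to.
algorithm_adoption = {"algo_a": 0.30, "algo_b": 0.25, "algo_c": 0.45}
algorithm_to_method = {"algo_a": "method_x", "algo_b": "method_x", "algo_c": "method_y"}

method_adoption = {}
for algo, adoption in algorithm_adoption.items():
    method = algorithm_to_method[algo]
    method_adoption[method] = method_adoption.get(method, 0.0) + adoption

# method_x: 0.55 (>= 50%, so it earns rewards and a merge point); method_y: 0.45
print(method_adoption)
```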

View File

@ -23,29 +23,35 @@ WHEN PROVIDING EVIDENCE, YOU MAY CITE LINKS TO EXTERNAL DATA SOURCES.
## UNIQUE ALGORITHM IDENTIFIER (UAI)
> UAI PLACEHOLDER - DO NOT EDIT
> UAI PLACEHOLDER - ASSIGNED BY TIG PROTOCOL
## DESCRIPTION OF ALGORITHMIC METHOD
## SECTION 1
IT IS IMPORTANT THAT THIS SECTION IS COMPLETED IN SUFFICIENT DETAIL TO FULLY DESCRIBE THE METHOD BECAUSE THIS WILL DEFINE THE METHOD THAT IS THE SUBJECT OF THE ASSIGNMENT THAT YOU EXECUTE.
### DESCRIPTION OF ALGORITHMIC METHOD
PLEASE IDENTIFY WHICH TIG CHALLENGE THE ALGORITHMIC METHOD ADDRESSES.
> vehicle_routing - DO NOT EDIT
> YOUR RESPONSE HERE (options are satisfiability, vehicle_routing, knapsack, or vector_search)
PLEASE DESCRIBE THE ALGORITHMIC METHOD AND THE PROBLEM THAT IT SOLVES.
> YOUR RESPONSE HERE
## ATTRIBUTION
## SECTION 2
PLEASE PROVIDE THE IDENTITY OF THE CREATOR OF THE ALGORITHMIC METHOD (this should be a natural person or legal entity. If an artificial intelligence has been used to assist in the creation of the algorithmic method then the creator is the operator of the artificial intelligence)
THE COPYRIGHT IN THE IMPLEMENTATION WILL BE THE SUBJECT OF THE ASSIGNMENT THAT YOU EXECUTE.
### IMPLEMENTATION OF ALGORITHMIC METHOD
TO THE EXTENT THAT YOU HAVE IMPLEMENTED THE ALGORITHMIC METHOD IN CODE YOU SHOULD IDENTIFY THE CODE AND SUBMIT IT WITH THIS
> YOUR RESPONSE HERE
PLEASE PROVIDE EVIDENCE OF AUTHORSHIP WHERE IT IS AVAILABLE.
## SECTION 3
> YOUR RESPONSE HERE
## NOVELTY AND INVENTIVENESS
### NOVELTY AND INVENTIVENESS
To support your claim that an algorithmic method is novel and inventive, you should provide evidence that demonstrates both its uniqueness (novelty) and its non-obviousness (inventiveness) in relation to the existing state of the art.
@ -114,18 +120,24 @@ To support your claim that an algorithmic method is novel and inventive, you sho
> YOUR RESPONSE HERE
## SECTION 4
### EVIDENCE TO SUPPORT PATENTABILITY
- **Development Records:** Please provide documentation of the invention process, including notes, sketches, and software versions, to establish a timeline of your innovation.
> YOUR RESPONSE HERE
## SECTION 5
### SUGGESTED APPLICATIONS
- Please provide suggestions for any real world applications of your abstract algorithmic method that occur to you.
> YOUR RESPONSE HERE
## SECTION 6
### ANY OTHER INFORMATION
- Please provide any other evidence or argument that you think might help support your request for eligibility for Breakthrough Rewards.

View File

@ -1,133 +0,0 @@
# EVIDENCE
THIS IS A TEMPLATE FOR EVIDENCE TO BE SUBMITTED IN SUPPORT OF A REQUEST FOR ELIGIBILITY FOR BREAKTHROUGH REWARDS
## OVERVIEW
- TIG TOKEN HOLDERS WILL VOTE ON WHETHER YOUR ALGORITHMIC METHOD IS ELIGIBLE FOR BREAKTHROUGH REWARDS
- TIG TOKEN HOLDERS ARE FREE TO VOTE AS THEY LIKE BUT HAVE BEEN ADVISED THAT IF THEY WANT TO MAXIMISE THE VALUE OF THE TOKENS THAT THEY HOLD, THEN THEY SHOULD BE SATISFYING THEMSELVES THAT ALGORITHMIC METHODS THAT THEY VOTE AS ELIGIBLE WILL BE BOTH NOVEL AND INVENTIVE
- THE REASON WHY NOVELTY AND INVENTIVENESS ARE IMPORTANT ATTRIBUTES IS THAT THEY ARE PREREQUISITES OF PATENTABILITY.
- **THE PURPOSE OF THIS DOCUMENT IS TO:**
- TO CAPTURE A DESCRIPTION OF THE ALGORITHMIC METHOD THAT YOU WANT TO BE CONSIDERED FOR ELIGIBILITY.
- TO IDENTIFY THE CREATOR OF THE ALGORITHMIC METHOD.
- TO PROMPT YOU TO PROVIDE THE BEST EVIDENCE TO SUPPORT THE CASE THAT THE ALGORITHMIC METHOD IS NOVEL AND INVENTIVE.
- TO PROMPT YOU TO PROVIDE SUGGESTIONS FOR REAL WORLD APPLICATIONS OF YOUR ALGORITHMIC METHOD WHERE YOU CAN.
WHEN PROVIDING EVIDENCE, YOU MAY CITE LINKS TO EXTERNAL DATA SOURCES.
## UNIQUE ALGORITHM IDENTIFIER (UAI)
> UAI PLACEHOLDER - DO NOT EDIT
## DESCRIPTION OF ALGORITHMIC METHOD
PLEASE IDENTIFY WHICH TIG CHALLENGE THE ALGORITHMIC METHOD ADDRESSES.
> knapsack - DO NOT EDIT
PLEASE DESCRIBE THE ALGORITHMIC METHOD AND THE PROBLEM THAT IT SOLVES.
> YOUR RESPONSE HERE
## ATTRIBUTION
PLEASE PROVIDE THE IDENTITY OF THE CREATOR OF THE ALGORITHMIC METHOD (this should be a natural person or legal entity. If an artificial intelligence has been used to assist in the creation of the algorithmic method then the creator is the operator of the artificial intelligence)
> YOUR RESPONSE HERE
PLEASE PROVIDE EVIDENCE OF AUTHORSHIP WHERE IT IS AVAILABLE.
> YOUR RESPONSE HERE
## NOVELTY AND INVENTIVENESS
To support your claim that an algorithmic method is novel and inventive, you should provide evidence that demonstrates both its uniqueness (novelty) and its non-obviousness (inventiveness) in relation to the existing state of the art.
### Establish the State of the Art
- **Prior Art Search:** Conduct a comprehensive review of existing methods, algorithms, patents, academic papers, and industry practices to identify prior art in the domain.
- Highlight documents and technologies most closely related to your method.
> YOUR RESPONSE HERE
- Show where these existing methods fall short or lack the features your algorithmic method provides.
> YOUR RESPONSE HERE
- **Technical Context:** Describe the common approaches and challenges in the field prior to your innovation.
> YOUR RESPONSE HERE
### Evidence of Novelty
- **Unique Features:** List the features, mechanisms, or aspects of your algorithmic method that are absent in prior art.
> YOUR RESPONSE HERE
- **New Problem Solved:** Describe how your algorithmic method provides a novel solution to an existing problem.
> YOUR RESPONSE HERE
- **Comparative Analysis:** Use a side-by-side comparison table to highlight the differences between your method and similar existing methods, clearly showing what's new.
> YOUR RESPONSE HERE
### Evidence of Inventiveness
- **Non-Obviousness:** Argue why a skilled person in the field would not have arrived at your method by simply combining existing ideas or extending known techniques.
- Demonstrate that the development involved an inventive step beyond straightforward application of existing principles.
- Unexpected Results: Highlight results or benefits that could not have been predicted based on prior art.
> YOUR RESPONSE HERE
- **Advantages:** Provide evidence of how your algorithm outperforms or offers significant advantages over existing methods, such as:
- Increased efficiency.
- Greater accuracy.
- Reduced computational complexity.
- Novel applications.
> YOUR RESPONSE HERE
### Supporting Data
- **Experimental Results:** Include performance benchmarks, simulations, or empirical data that substantiate your claims of novelty and inventiveness.
> YOUR RESPONSE HERE
- **Proof of Concept:** If possible, show a working prototype or implementation that validates the method.
> YOUR RESPONSE HERE
### Citations and Expert Opinions
- **Literature Gaps:** Affirm, to the best of your knowledge, the absence of similar solutions in published literature to reinforce your novelty claim.
> YOUR RESPONSE HERE
- **Endorsements:** Include reviews or opinions from industry experts, researchers, or peer-reviewed publications that evidence the novelty and impact of your algorithm.
> YOUR RESPONSE HERE
### EVIDENCE TO SUPPORT PATENTABILITY
- **Development Records:** Please provide documentation of the invention process, including notes, sketches, and software versions, to establish a timeline of your innovation.
> YOUR RESPONSE HERE
### SUGGESTED APPLICATIONS
- Please provide suggestions for any real world applications of your abstract algorithmic method that occur to you.
> YOUR RESPONSE HERE
### ANY OTHER INFORMATION
- Please provide any other evidence or argument that you think might help support your request for eligibility for Breakthrough Rewards.
> YOUR RESPONSE HERE

View File

@ -1,133 +0,0 @@
# EVIDENCE
THIS IS A TEMPLATE FOR EVIDENCE TO BE SUBMITTED IN SUPPORT OF A REQUEST FOR ELIGIBILITY FOR BREAKTHROUGH REWARDS
## OVERVIEW
- TIG TOKEN HOLDERS WILL VOTE ON WHETHER YOUR ALGORITHMIC METHOD IS ELIGIBLE FOR BREAKTHROUGH REWARDS
- TIG TOKEN HOLDERS ARE FREE TO VOTE AS THEY LIKE BUT HAVE BEEN ADVISED THAT IF THEY WANT TO MAXIMISE THE VALUE OF THE TOKENS THAT THEY HOLD, THEN THEY SHOULD BE SATISFYING THEMSELVES THAT ALGORITHMIC METHODS THAT THEY VOTE AS ELIGIBLE WILL BE BOTH NOVEL AND INVENTIVE
- THE REASON WHY NOVELTY AND INVENTIVENESS ARE IMPORTANT ATTRIBUTES IS THAT THEY ARE PREREQUISITES OF PATENTABILITY.
- **THE PURPOSE OF THIS DOCUMENT IS TO:**
- TO CAPTURE A DESCRIPTION OF THE ALGORITHMIC METHOD THAT YOU WANT TO BE CONSIDERED FOR ELIGIBILITY.
- TO IDENTIFY THE CREATOR OF THE ALGORITHMIC METHOD.
- TO PROMPT YOU TO PROVIDE THE BEST EVIDENCE TO SUPPORT THE CASE THAT THE ALGORITHMIC METHOD IS NOVEL AND INVENTIVE.
- TO PROMPT YOU TO PROVIDE SUGGESTIONS FOR REAL WORLD APPLICATIONS OF YOUR ALGORITHMIC METHOD WHERE YOU CAN.
WHEN PROVIDING EVIDENCE, YOU MAY CITE LINKS TO EXTERNAL DATA SOURCES.
## UNIQUE ALGORITHM IDENTIFIER (UAI)
> UAI PLACEHOLDER - DO NOT EDIT
## DESCRIPTION OF ALGORITHMIC METHOD
PLEASE IDENTIFY WHICH TIG CHALLENGE THE ALGORITHMIC METHOD ADDRESSES.
> satisfiability - DO NOT EDIT
PLEASE DESCRIBE THE ALGORITHMIC METHOD AND THE PROBLEM THAT IT SOLVES.
> YOUR RESPONSE HERE
## ATTRIBUTION
PLEASE PROVIDE THE IDENTITY OF THE CREATOR OF THE ALGORITHMIC METHOD (this should be a natural person or legal entity. If an artificial intelligence has been used to assist in the creation of the algorithmic method then the creator is the operator of the artificial intelligence)
> YOUR RESPONSE HERE
PLEASE PROVIDE EVIDENCE OF AUTHORSHIP WHERE IT IS AVAILABLE.
> YOUR RESPONSE HERE
## NOVELTY AND INVENTIVENESS
To support your claim that an algorithmic method is novel and inventive, you should provide evidence that demonstrates both its uniqueness (novelty) and its non-obviousness (inventiveness) in relation to the existing state of the art.
### Establish the State of the Art
- **Prior Art Search:** Conduct a comprehensive review of existing methods, algorithms, patents, academic papers, and industry practices to identify prior art in the domain.
- Highlight documents and technologies most closely related to your method.
> YOUR RESPONSE HERE
- Show where these existing methods fall short or lack the features your algorithmic method provides.
> YOUR RESPONSE HERE
- **Technical Context:** Describe the common approaches and challenges in the field prior to your innovation.
> YOUR RESPONSE HERE
### Evidence of Novelty
- **Unique Features:** List the features, mechanisms, or aspects of your algorithmic method that are absent in prior art.
> YOUR RESPONSE HERE
- **New Problem Solved:** Describe how your algorithmic method provides a novel solution to an existing problem.
> YOUR RESPONSE HERE
- **Comparative Analysis:** Use a side-by-side comparison table to highlight the differences between your method and similar existing methods, clearly showing what's new.
> YOUR RESPONSE HERE
### Evidence of Inventiveness
- **Non-Obviousness:** Argue why a skilled person in the field would not have arrived at your method by simply combining existing ideas or extending known techniques.
- Demonstrate that the development involved an inventive step beyond straightforward application of existing principles.
- Unexpected Results: Highlight results or benefits that could not have been predicted based on prior art.
> YOUR RESPONSE HERE
- **Advantages:** Provide evidence of how your algorithm outperforms or offers significant advantages over existing methods, such as:
- Increased efficiency.
- Greater accuracy.
- Reduced computational complexity.
- Novel applications.
> YOUR RESPONSE HERE
### Supporting Data
- **Experimental Results:** Include performance benchmarks, simulations, or empirical data that substantiate your claims of novelty and inventiveness.
> YOUR RESPONSE HERE
- **Proof of Concept:** If possible, show a working prototype or implementation that validates the method.
> YOUR RESPONSE HERE
### Citations and Expert Opinions
- **Literature Gaps:** Affirm, to the best of your knowledge, the absence of similar solutions in published literature to reinforce your novelty claim.
> YOUR RESPONSE HERE
- **Endorsements:** Include reviews or opinions from industry experts, researchers, or peer-reviewed publications that evidence the novelty and impact of your algorithm.
> YOUR RESPONSE HERE
### EVIDENCE TO SUPPORT PATENTABILITY
- **Development Records:** Please provide documentation of the invention process, including notes, sketches, and software versions, to establish a timeline of your innovation.
> YOUR RESPONSE HERE
### SUGGESTED APPLICATIONS
- Please provide suggestions for any real world applications of your abstract algorithmic method that occur to you.
> YOUR RESPONSE HERE
### ANY OTHER INFORMATION
- Please provide any other evidence or argument that you think might help support your request for eligibility for Breakthrough Rewards.
> YOUR RESPONSE HERE

View File

@ -1,133 +0,0 @@
# EVIDENCE
THIS IS A TEMPLATE FOR EVIDENCE TO BE SUBMITTED IN SUPPORT OF A REQUEST FOR ELIGIBILITY FOR BREAKTHROUGH REWARDS
## OVERVIEW
- TIG TOKEN HOLDERS WILL VOTE ON WHETHER YOUR ALGORITHMIC METHOD IS ELIGIBLE FOR BREAKTHROUGH REWARDS
- TIG TOKEN HOLDERS ARE FREE TO VOTE AS THEY LIKE BUT HAVE BEEN ADVISED THAT IF THEY WANT TO MAXIMISE THE VALUE OF THE TOKENS THAT THEY HOLD, THEN THEY SHOULD BE SATISFYING THEMSELVES THAT ALGORITHMIC METHODS THAT THEY VOTE AS ELIGIBLE WILL BE BOTH NOVEL AND INVENTIVE
- THE REASON WHY NOVELTY AND INVENTIVENESS ARE IMPORTANT ATTRIBUTES IS THAT THEY ARE PREREQUISITES OF PATENTABILITY.
- **THE PURPOSE OF THIS DOCUMENT IS TO:**
- TO CAPTURE A DESCRIPTION OF THE ALGORITHMIC METHOD THAT YOU WANT TO BE CONSIDERED FOR ELIGIBILITY.
- TO IDENTIFY THE CREATOR OF THE ALGORITHMIC METHOD.
- TO PROMPT YOU TO PROVIDE THE BEST EVIDENCE TO SUPPORT THE CASE THAT THE ALGORITHMIC METHOD IS NOVEL AND INVENTIVE.
- TO PROMPT YOU TO PROVIDE SUGGESTIONS FOR REAL WORLD APPLICATIONS OF YOUR ALGORITHMIC METHOD WHERE YOU CAN.
WHEN PROVIDING EVIDENCE, YOU MAY CITE LINKS TO EXTERNAL DATA SOURCES.
## UNIQUE ALGORITHM IDENTIFIER (UAI)
> UAI PLACEHOLDER - DO NOT EDIT
## DESCRIPTION OF ALGORITHMIC METHOD
PLEASE IDENTIFY WHICH TIG CHALLENGE THE ALGORITHMIC METHOD ADDRESSES.
> vector_search - DO NOT EDIT
PLEASE DESCRIBE THE ALGORITHMIC METHOD AND THE PROBLEM THAT IT SOLVES.
> YOUR RESPONSE HERE
## ATTRIBUTION
PLEASE PROVIDE THE IDENTITY OF THE CREATOR OF THE ALGORITHMIC METHOD (this should be a natural person or legal entity. If an artificial intelligence has been used to assist in the creation of the algorithmic method then the creator is the operator of the artificial intelligence)
> YOUR RESPONSE HERE
PLEASE PROVIDE EVIDENCE OF AUTHORSHIP WHERE IT IS AVAILABLE.
> YOUR RESPONSE HERE
## NOVELTY AND INVENTIVENESS
To support your claim that an algorithmic method is novel and inventive, you should provide evidence that demonstrates both its uniqueness (novelty) and its non-obviousness (inventiveness) in relation to the existing state of the art.
### Establish the State of the Art
- **Prior Art Search:** Conduct a comprehensive review of existing methods, algorithms, patents, academic papers, and industry practices to identify prior art in the domain.
- Highlight documents and technologies most closely related to your method.
> YOUR RESPONSE HERE
- Show where these existing methods fall short or lack the features your algorithmic method provides.
> YOUR RESPONSE HERE
- **Technical Context:** Describe the common approaches and challenges in the field prior to your innovation.
> YOUR RESPONSE HERE
### Evidence of Novelty
- **Unique Features:** List the features, mechanisms, or aspects of your algorithmic method that are absent in prior art.
> YOUR RESPONSE HERE
- **New Problem Solved:** Describe how your algorithmic method provides a novel solution to an existing problem.
> YOUR RESPONSE HERE
- **Comparative Analysis:** Use a side-by-side comparison table to highlight the differences between your method and similar existing methods, clearly showing what's new.
> YOUR RESPONSE HERE
### Evidence of Inventiveness
- **Non-Obviousness:** Argue why a skilled person in the field would not have arrived at your method by simply combining existing ideas or extending known techniques.
- Demonstrate that the development involved an inventive step beyond straightforward application of existing principles.
- Unexpected Results: Highlight results or benefits that could not have been predicted based on prior art.
> YOUR RESPONSE HERE
- **Advantages:** Provide evidence of how your algorithm outperforms or offers significant advantages over existing methods, such as:
- Increased efficiency.
- Greater accuracy.
- Reduced computational complexity.
- Novel applications.
> YOUR RESPONSE HERE
### Supporting Data
- **Experimental Results:** Include performance benchmarks, simulations, or empirical data that substantiate your claims of novelty and inventiveness.
> YOUR RESPONSE HERE
- **Proof of Concept:** If possible, show a working prototype or implementation that validates the method.
> YOUR RESPONSE HERE
### Citations and Expert Opinions
- **Literature Gaps:** Affirm, to the best of your knowledge, the absence of similar solutions in published literature to reinforce your novelty claim.
> YOUR RESPONSE HERE
- **Endorsements:** Include reviews or opinions from industry experts, researchers, or peer-reviewed publications that evidence the novelty and impact of your algorithm.
> YOUR RESPONSE HERE
### EVIDENCE TO SUPPORT PATENTABILITY
- **Development Records:** Please provide documentation of the invention process, including notes, sketches, and software versions, to establish a timeline of your innovation.
> YOUR RESPONSE HERE
### SUGGESTED APPLICATIONS
- Please provide suggestions for any real world applications of your abstract algorithmic method that occur to you.
> YOUR RESPONSE HERE
### ANY OTHER INFORMATION
- Please provide any other evidence or argument that you think might help support your request for eligibility for Breakthrough Rewards.
> YOUR RESPONSE HERE

View File

@ -78,21 +78,41 @@ impl crate::ChallengeTrait<Solution, Difficulty, 2> for Challenge {
fn generate_instance(seed: [u8; 32], difficulty: &Difficulty) -> Result<Challenge> {
let mut rng = SmallRng::from_seed(StdRng::from_seed(seed).gen());
// Set constant density for value generation
let density = 0.25;
// Generate weights w_i in the range [1, 50]
let weights: Vec<u32> = (0..difficulty.num_items)
.map(|_| rng.gen_range(1..=50))
.collect();
// Generate values v_i in the range [50, 100]
// Generate values v_i in the range [1, 100] with density probability, 0 otherwise
let values: Vec<u32> = (0..difficulty.num_items)
.map(|_| rng.gen_range(50..=100))
.map(|_| {
if rng.gen_bool(density) {
rng.gen_range(1..=100)
} else {
0
}
})
.collect();
// Generate interactive values V_ij in the range [1, 50], with V_ij == V_ji and V_ij where i==j is 0.
// Generate interaction values V_ij with the following properties:
// - V_ij == V_ji (symmetric matrix)
// - V_ii == 0 (diagonal is zero)
// - Values are in range [1, 100] with density probability, 0 otherwise
let mut interaction_values: Vec<Vec<i32>> =
vec![vec![0; difficulty.num_items]; difficulty.num_items];
for i in 0..difficulty.num_items {
for j in (i + 1)..difficulty.num_items {
let value = rng.gen_range(-50..=50);
let value = if rng.gen_bool(density) {
rng.gen_range(1..=100)
} else {
0
};
// Set both V_ij and V_ji due to symmetry
interaction_values[i][j] = value;
interaction_values[j][i] = value;
}
@ -102,30 +122,145 @@ impl crate::ChallengeTrait<Solution, Difficulty, 2> for Challenge {
// Precompute the ratio between the total value (value + sum of interactive values) and
// weight for each item. Pair the ratio with the item's weight and index
let mut value_weight_ratios: Vec<(usize, f32, u32)> = (0..difficulty.num_items)
let mut item_values: Vec<(usize, f32)> = (0..difficulty.num_items)
.map(|i| {
let total_value = values[i] as i32 + interaction_values[i].iter().sum::<i32>();
let weight = weights[i];
let ratio = total_value as f32 / weight as f32;
(i, ratio, weight)
let ratio = total_value as f32 / weights[i] as f32;
(i, ratio)
})
.collect();
// Sort the list of tuples by value-to-weight ratio in descending order
value_weight_ratios
.sort_by(|&(_, ratio_a, _), &(_, ratio_b, _)| ratio_b.partial_cmp(&ratio_a).unwrap());
// Sort the list of ratios in descending order
item_values.sort_by(|a, b| b.1.partial_cmp(&a.1).unwrap());
// Step 1: Initial solution obtained by greedily selecting items based on value-weight ratio
let mut selected_items = Vec::with_capacity(difficulty.num_items);
let mut unselected_items = Vec::with_capacity(difficulty.num_items);
let mut total_weight = 0;
let mut selected_indices = Vec::new();
for &(i, _, weight) in &value_weight_ratios {
if total_weight + weight <= max_weight {
selected_indices.push(i);
total_weight += weight;
let mut total_value = 0;
let mut is_selected = vec![false; difficulty.num_items];
for &(item, _) in &item_values {
if total_weight + weights[item] <= max_weight {
total_weight += weights[item];
total_value += values[item] as i32;
for &prev_item in &selected_items {
total_value += interaction_values[item][prev_item];
}
selected_items.push(item);
is_selected[item] = true;
} else {
unselected_items.push(item);
}
}
selected_indices.sort();
let mut min_value = calculate_total_value(&selected_indices, &values, &interaction_values);
// Step 2: Improvement of solution with Local Search and Tabu-List
// Precompute sum of interaction values with each selected item for all items
let mut interaction_sum_list = vec![0; difficulty.num_items];
for x in 0..difficulty.num_items {
interaction_sum_list[x] = values[x] as i32;
for &item in &selected_items {
interaction_sum_list[x] += interaction_values[x][item];
}
}
let mut min_selected_item_values = i32::MAX;
for x in 0..difficulty.num_items {
if is_selected[x] {
min_selected_item_values = min_selected_item_values.min(interaction_sum_list[x]);
}
}
// Optimized local search with tabu list
let max_iterations = 100;
let mut tabu_list = vec![0; difficulty.num_items];
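// A positive tabu counter marks an item as recently swapped; it cannot be moved again until its counter decays (tenure of 3 iterations, set when a swap is applied below).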
for _ in 0..max_iterations {
let mut best_improvement = 0;
let mut best_swap = None;
for i in 0..unselected_items.len() {
let new_item = unselected_items[i];
if tabu_list[new_item] > 0 {
continue;
}
let new_item_values_sum = interaction_sum_list[new_item];
if new_item_values_sum < best_improvement + min_selected_item_values {
continue;
}
// Compute minimal weight of remove_item required to put new_item
let min_weight =
weights[new_item] as i32 - (max_weight as i32 - total_weight as i32);
for j in 0..selected_items.len() {
let remove_item = selected_items[j];
if tabu_list[remove_item] > 0 {
continue;
}
// Don't check the weight if there is enough remaining capacity
if min_weight > 0 {
// Skip a remove_item if the remaining capacity after removal is insufficient to push a new_item
let removed_item_weight = weights[remove_item] as i32;
if removed_item_weight < min_weight {
continue;
}
}
let remove_item_values_sum = interaction_sum_list[remove_item];
let value_diff = new_item_values_sum
- remove_item_values_sum
- interaction_values[new_item][remove_item];
if value_diff > best_improvement {
best_improvement = value_diff;
best_swap = Some((i, j));
}
}
}
if let Some((unselected_index, selected_index)) = best_swap {
let new_item = unselected_items[unselected_index];
let remove_item = selected_items[selected_index];
selected_items.swap_remove(selected_index);
unselected_items.swap_remove(unselected_index);
selected_items.push(new_item);
unselected_items.push(remove_item);
is_selected[new_item] = true;
is_selected[remove_item] = false;
total_value += best_improvement;
total_weight = total_weight + weights[new_item] - weights[remove_item];
// Update sum of interaction values after swapping items
min_selected_item_values = i32::MAX;
for x in 0..difficulty.num_items {
interaction_sum_list[x] +=
interaction_values[x][new_item] - interaction_values[x][remove_item];
if is_selected[x] {
min_selected_item_values =
min_selected_item_values.min(interaction_sum_list[x]);
}
}
// Update tabu list
tabu_list[new_item] = 3;
tabu_list[remove_item] = 3;
} else {
break; // No improvement found, terminate local search
}
// Decrease tabu counters
for t in tabu_list.iter_mut() {
*t = if *t > 0 { *t - 1 } else { 0 };
}
}
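// The baseline is the value of the greedy + local-search solution; better_than_baseline scales it up in steps of 0.1% to set the required solution value.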
let mut min_value = calculate_total_value(&selected_items, &values, &interaction_values);
min_value = (min_value as f32 * (1.0 + difficulty.better_than_baseline as f32 / 1000.0))
.round() as u32;

View File

@ -148,6 +148,7 @@ pub(crate) async fn update(cache: &mut AddBlockCache) {
let active_algorithm_ids = &block_data.active_ids[&ActiveType::Algorithm];
let active_breakthrough_ids = &block_data.active_ids[&ActiveType::Breakthrough];
let active_challenge_ids = &block_data.active_ids[&ActiveType::Challenge];
let active_player_ids = &block_data.active_ids[&ActiveType::Player];
// update votes
for breakthrough_state in voting_breakthroughs_state.values_mut() {
@ -156,7 +157,8 @@ pub(crate) async fn update(cache: &mut AddBlockCache) {
(false, PreciseNumber::from(0)),
]);
}
for (player_id, player_state) in active_players_state.iter() {
for player_id in active_player_ids.iter() {
let player_state = &active_players_state[player_id];
let player_data = &active_players_block_data[player_id];
for (breakthrough_id, vote) in player_state.votes.iter() {
let yes = vote.value;
@ -214,10 +216,9 @@ pub(crate) async fn update(cache: &mut AddBlockCache) {
if let Some(breakthrough_id) = &active_algorithms_details[algorithm_id].breakthrough_id
{
active_breakthroughs_block_data
.get_mut(breakthrough_id)
.unwrap()
.adoption += adoption;
if let Some(block_data) = active_breakthroughs_block_data.get_mut(breakthrough_id) {
block_data.adoption += adoption;
}
}
}
}

View File

@ -212,8 +212,8 @@ pub async fn set_vote<T: Context>(
.get_breakthrough_state(&breakthrough_id)
.await
.ok_or_else(|| anyhow!("Invalid breakthrough '{}'", breakthrough_id))?;
if breakthrough_state.round_pushed > latest_block_details.round
&& latest_block_details.round >= breakthrough_state.round_votes_tallied
if latest_block_details.round < breakthrough_state.round_voting_starts
|| latest_block_details.round >= breakthrough_state.round_votes_tallied
{
return Err(anyhow!("Cannot vote on breakthrough '{}'", breakthrough_id));
}
@ -273,7 +273,7 @@ pub(crate) async fn update(cache: &mut AddBlockCache) {
for deposit in active_deposit_details.values() {
let total_time = PreciseNumber::from(deposit.end_timestamp - deposit.start_timestamp);
for i in 0..lock_period_cap {
if round_timestamps[i + 1] <= deposit.start_timestamp {
if i + 1 < lock_period_cap && round_timestamps[i + 1] <= deposit.start_timestamp {
continue;
}
if round_timestamps[i] >= deposit.end_timestamp {

View File

@ -23,6 +23,7 @@ serializable_struct_with_getters! {
BreakthroughsConfig {
bootstrap_address: String,
min_percent_yes_votes: f64,
vote_start_delay: u32,
vote_period: u32,
min_lock_period_to_vote: u32,
submission_fee: PreciseNumber,

View File

@ -272,6 +272,7 @@ serializable_struct_with_getters! {
block_confirmed: u32,
round_submitted: u32,
round_pushed: u32,
round_voting_starts: u32,
round_votes_tallied: u32,
votes_tally: HashMap<bool, PreciseNumber>,
round_active: Option<u32>,

View File

@ -28,7 +28,7 @@ fn cli() -> Command {
.arg(arg!(<WASM> "Path to a wasm file").value_parser(clap::value_parser!(PathBuf)))
.arg(
arg!(--fuel [FUEL] "Optional maximum fuel parameter for WASM VM")
.default_value("2000000000")
.default_value("10000000000")
.value_parser(clap::value_parser!(u64)),
)
.arg(