# Advance Rewards Guide for Token Holders
## SUMMARY
- **Vote on whether an algorithmic method is eligible to earn potential Advance Rewards.**
- **One token, one vote.**
- **It is assumed that votes will be exercised to maximise token value.**
- **Token holders** determine patentability through an assessment of novelty and inventiveness.
- **Benchmarkers** determine performance through adoption (the “TIG synthetic market”).
- **Relevant considerations for token holders:**
  - **Distinguishing the algorithmic method from its implementation**
  - **Assessing the algorithmic method for novelty**
  - **Assessing the algorithmic method for inventiveness**
  - **Assessing the impact of your voting decision on token value**
## What are Advance Rewards and why does TIG offer them?
- Advance Rewards are a notional class of TIG token reward that is reserved for rewarding ONLY innovation in algorithmic methods.
- The efficiency gains from improved algorithmic methods versus code optimizations in implementations can differ greatly in impact and scale.
**Performance**
- Generally, improvements in algorithmic methods tend to yield exponential or order-of-magnitude efficiency gains by changing the problem-solving approach, while code optimizations tend to provide incremental improvements by refining an existing algorithm's implementation [*For illustrative examples see Appendix 1*]. It is important, therefore, that TIG places emphasis on incentivising the submission of innovative algorithmic methods. TIG intends to do that by introducing a process for identifying algorithmic methods that may be eligible for higher rewards.
- To reflect the difference in impact and scale that improvements in algorithmic methods can have compared with code optimisations, TIG seeks to offer higher rewards for innovation in algorithmic methods. These higher rewards, so-called “**Advance Rewards**”, are made available through both: **(i)** a proportion of rewards in each Round being available to reward innovative algorithmic methods; and **(ii)** the design of the protocol enabling the rewards for innovative algorithmic methods to persist for multiple Rounds where they continue to provide the basis of algorithm implementations adopted by Benchmarkers.
**Intellectual Property**
- In addition to the significant performance gains they potentially offer, innovative algorithmic methods, whilst not patentable per se, can also, subject to certain criteria (including being applied to provide a technical effect), provide the basis for a patent application. If a patent is applied for and granted, it will contribute to the value of the intellectual property that underpins the value of TIG tokens.
## What is a Token Holder Vote?
- TIG believes that the determination of whether an algorithmic method is eligible for Advance Rewards should be done by a token weighted vote.
- Because the value of TIG tokens should correlate with TIG's acquisition of innovative algorithmic methods and associated valuable intellectual property, and because both are incentivised by the offer of Advance Rewards, TIG believes that the interests of TIG token holders are very well aligned with the objective of correctly determining when eligibility for potential Advance Rewards is appropriate.
- Only votes cast will be counted for the purposes of determining the result of the Token Holder Vote. Abstentions and uncast votes will not be counted.
- Delegation of token votes will not be enabled initially but may be introduced at some point in the future.
- If 50% or more of the votes cast vote affirmatively, then the subject algorithmic method will be eligible for potential Advance Rewards from the beginning of the Round in which it is made available for benchmarking. Whether an algorithmic method with the potential to earn Advance Rewards does earn those rewards will depend on adoption by Benchmarkers sufficient to reach the threshold for merger.
- Algorithmic methods which are the subject of the vote will be made public for review at the beginning of the Round commencing one Round after the Round in which the algorithmic method was submitted to TIG.
- Voting in each Token Holder Vote will be open for the Round commencing two Rounds after the Round in which the algorithmic method subject to the vote was submitted to TIG.
- The result of the vote will be final. There will be no appeals process.
## When does a Token Holder Vote occur?
- A Token Holder Vote will be called by TIG each time an innovator requests a determination of whether their submitted algorithmic method is eligible for potential Advance Rewards.
## Do you qualify to vote?
- If you own a TIG token and subject that token to a lock for a minimum determined period, then you are entitled to exercise one vote with respect to that token in the Token Holder Vote. Your influence on the outcome of the vote will therefore be weighted according to how many TIG tokens you hold **and** lock when voting.
## How do I vote?
- Please see the [Token Holder page](https://play.tig.foundation/token-holder) for details.
## What are you voting on ?
- **You are voting on whether you think the algorithmic method subject to review is eligible for potential Advance Rewards.**
## What should you consider when exercising your vote ?
- TIG believes that, amongst other things, the value of the TIG token will be best enhanced by the capture of innovative algorithmic methods which are patentable (and therefore novel and inventive) according to respective patent laws in different jurisdictions.
- **How you vote is entirely up to you.**
- When voting, assuming you wish to enhance the value of the TIG tokens that you hold, it is important that you are equipped to recognise the kind of innovation that Advance Rewards are intended to incentivise.
- Where an algorithmic method is submitted to the game embodied in a code implementation, you should first distinguish the algorithmic method from the way it is expressed in the code implementation. [***See Appendix 2***].
- Having identified the algorithmic method, you should then determine the following:
  - Is the algorithmic method **novel**? [***See Appendix 3***].
  - Is the algorithmic method **inventive** (i.e. is the method not obvious to someone skilled in the art)? [***See Appendix 3***].
- Algorithmic methods are the proper subject of a Token Holder Vote, whilst code implementations are not.
- TIG anticipates that the contributor of the incumbent algorithmic method will advocate **against** the novelty and inventiveness of the challenger algorithmic method where appropriate, and the challenger will advocate **for** its novelty and inventiveness. We will establish a Discord channel for each Token Holder Vote so that these arguments on each side can be put, examined, and challenged. This should provide a rich, case-specific body of knowledge to help with the assessment that you will be making.
- TIG accepts that not all TIG token holders will have the necessary skills or experience to make an informed decision on whether eligibility for potential Advance Rewards is appropriate in every specific case, but we hope that sufficient people with the requisite skill, knowledge and experience will engage, advocate and vote in such a way that eligibility is appropriately awarded. In the future we may introduce delegation of votes so that token holders can delegate analysis to experts that they trust to vote in a way that maximises token value.
- Essentially, when voting in a Token Holder Vote, we expect a rational token holder to assess the novelty and inventiveness of a contributed algorithmic method, and by so doing, its patentability. To maximise the likelihood that an innovative algorithmic method will be ratified by Token Holder Vote as eligible for potential Advance Rewards, contributors seeking Advance Rewards have been advised to disclose the following information so that it may be assessed by token holders:
- **Novelty:** Results of a prior art search to identify any existing disclosures that might affect the novelty of your algorithmic method.
- **Inventiveness:** Documentation supporting how the algorithmic method differs from existing solutions (i.e. is non-obvious) and any new technical effect it achieves.
- **Technical Effect:** Documentation supporting patentability, showing how the algorithmic method offers the potential to provide the basis for significant technical advancements. Document how the algorithmic method might have **real-world application**, e.g. in fields such as computer security, medical imaging, autonomous systems, or data compression.
## The implications of the outcome of the token holder vote for the TIG protocol
To further assist you in exercising your vote in an informed way, we have set out below some of the implications of affirming an algorithmic method as being eligible for potential Advance Rewards.
### Impact on Innovator Incentives
- A voter may be tempted to default to voting affirmatively on each Token Holder Vote with the intent of making an algorithmic method eligible for Advance Rewards to ensure that there is at least an option for TIG to capture intellectual property with respect to the algorithmic method. This might seem the safe option to ensure that TIG can capture innovation presented to it. But you should note that TIG has the right to apply for patents based on all algorithmic methods contributed to the game with an associated request for a Token Holder Vote regardless of the outcome of the vote.
You should endeavour to avoid affirming eligibility for potential Advance Rewards for algorithmic methods that are not novel and inventive, because this can have serious consequences by prejudicing and stifling innovation that does satisfy those criteria.
An algorithmic method that is incorrectly made eligible for potential Advance Rewards may, if sufficiently adopted by Benchmarkers, either: **(i)** deprive an innovator of their continuing Advance Rewards, thus disincentivising future innovation from that innovator and other observers; or **(ii)** deprive the TIG Treasury of Bootstrap Rewards (see below). Bootstrap Rewards should be used to fund/incentivise research towards an algorithmic breakthrough corresponding to challenges featured in TIG.
### Bootstrap Rewards
- It is possible, particularly in the early phases of TIG, that certain challenges will **not** be addressed in the Game by algorithmic methods that are eligible for Advance Rewards because the algorithmic methods lack novelty or inventiveness or because no relevant algorithmic method has reached the threshold to be merged. In this situation those TIG token rewards ordinarily available for innovators as Advance Rewards will be unallocated (we call these unallocated rewards, “**Bootstrap Rewards**”).
- Bootstrap Rewards will be allocated to the TIG Treasury. Bootstrap Rewards will be “ringfenced” and used very specifically by TIG (through distribution of bounty rewards) to incentivise the development of algorithmic methods addressing challenges for which no algorithmic method in the TIG Game is presently eligible for potential Advance Rewards or merged. This should help algorithmic methods in the Game reach the state of the art more quickly, which will accelerate the capture of meaningful value by TIG. This form of “centralised” allocation should only persist for as long as there are challenges in the game which are not solved by algorithmic methods that are earning Advance Rewards.
## Intellectual Property
- TIG will only have the rights to file patent applications based on algorithmic methods where the contributor has sought Advance Rewards by making a request for a Token Holder Vote. Innovators will only request Token Holder Votes if they have reasonable confidence that the process is fair and consistent, allowing them to assess their chances of success in the vote with reasonable certainty.
## Commercial Value
- Algorithmic methods have the greatest potential to create value for TIG, and any decisions should seek to ensure that innovators are sufficiently incentivised to submit them. Inappropriately voting against eligibility of an algorithmic method in a Token Holder Vote may cause valuable innovation to be lost to TIG.
# Appendix 1
**Comparison of Algorithmic Methods and Code Optimisations**
## 1. Order of Magnitude Differences (Big-O Complexity)
- **Algorithmic Improvement**: An improved algorithm can shift the complexity class of a solution, significantly impacting runtime, especially as input sizes grow. For instance, switching from a quadratic algorithm O(n^2) to a linear one O(n) can make a previously impractical solution feasible for large datasets.
- **Example**: Mergesort (O(n log n)) vs. Bubble Sort (O(n^2)) for sorting large lists. A dataset of 1,000,000 items might take Mergesort milliseconds, while Bubble Sort would take hours or even days.
- **Indicator:** A useful indicator of whether the underlying algorithmic method of an algorithm implementation has changed is to test for any change in the efficiency of the implementation (where efficiency means the number of solutions per nonce). If the number of solutions per nonce has changed, that is a sufficient condition to conclude that there has been a change in the underlying algorithmic method. Note, however, that a change to the underlying algorithm will not always result in a change in the number of solutions per nonce, so caution should be exercised not to reject an algorithmic method based on this indicator alone.
- **Code Optimization**: Within a given complexity class, optimizations yield speedups by improving constant factors or reducing overhead, often providing 10-50% performance gains rather than orders of magnitude.
- **Example**: Optimizing Mergesort by using in-place operations or efficient memory management can make the same O(n log n) algorithm faster but without changing its overall complexity.
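To make this contrast concrete, below is a minimal, self-contained Python sketch (Python matches the tooling referenced in Appendix 2) timing an O(n^2) sort against Python's built-in O(n log n) sort on the same data. The input size is arbitrary and absolute timings will vary by machine; only the gap between the two matters here.

```python
import random
import time

def bubble_sort(items):
    """O(n^2): repeatedly swap adjacent out-of-order pairs."""
    items = list(items)
    n = len(items)
    for i in range(n):
        for j in range(n - 1 - i):
            if items[j] > items[j + 1]:
                items[j], items[j + 1] = items[j + 1], items[j]
    return items

data = [random.random() for _ in range(3_000)]

start = time.perf_counter()
bubble_sort(data)
print(f"bubble sort, O(n^2):       {time.perf_counter() - start:.3f}s")

start = time.perf_counter()
sorted(data)  # Timsort: O(n log n)
print(f"built-in sort, O(n log n): {time.perf_counter() - start:.3f}s")
```

Growing the input makes the gap widen without bound, which is the signature of a complexity-class difference rather than a constant-factor optimization.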
## 2. Scalability for Large Datasets
- **Algorithmic Improvement**: Algorithmic advancements typically show exponential scalability. For instance, switching from a brute-force search (O(2^n)) to a dynamic programming solution (O(n^2)) for certain problems can enable the handling of very large datasets that were previously infeasible.
- **Example**: A naive brute-force solution for the traveling salesperson problem (TSP) takes O(n!), but an approximation algorithm can reduce it to polynomial time, making the problem solvable for hundreds of cities rather than just a few dozen.
- **Code Optimization**: While code optimizations improve runtime, they rarely enable handling fundamentally larger datasets, as they don't change the growth rate. They mostly improve efficiency for smaller to moderate-sized datasets.
- **Example**: Optimizing a brute-force TSP solution with efficient memoization and loop unrolling can improve performance by a constant factor but won't make it feasible for large datasets, as the factorial growth remains.
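The brute-force-versus-dynamic-programming contrast can be shown in miniature with the Fibonacci recurrence, used here purely as an illustration (it is not one of TIG's challenges). Memoization changes the method and the complexity class, not just the constants:

```python
from functools import lru_cache

def fib_naive(n):
    """Brute-force recursion: O(2^n) calls, most of them repeated work."""
    if n < 2:
        return n
    return fib_naive(n - 1) + fib_naive(n - 2)

@lru_cache(maxsize=None)
def fib_dp(n):
    """Same recurrence with memoization: each subproblem solved once, O(n)."""
    if n < 2:
        return n
    return fib_dp(n - 1) + fib_dp(n - 2)

# fib_naive(35) takes noticeable seconds; fib_dp(35) is effectively instant.
print(fib_naive(30), fib_dp(30))
```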
## 3. Resource Efficiency (Memory, Power, etc.)
- **Algorithmic Improvement**: Changes in algorithms can drastically reduce memory usage, which is critical in constrained environments. For instance, algorithms that use **space-efficient data structures** (e.g., Bloom filters for probabilistic set membership) provide functionality with far lower memory requirements than traditional methods.
- **Example**: Dijkstra's algorithm for shortest paths uses a priority queue, which reduces both time and memory usage compared to simpler graph traversal techniques.
- **Code Optimization**: Code optimizations help by refining memory usage through strategies like data locality, loop unrolling, and inlining functions. Gains tend to be moderate but can add up when processing high-frequency data or in embedded systems.
- **Example**: Optimizing Dijkstra's algorithm by reducing redundant allocations and using in-place updates can yield a noticeable speedup but won't reduce memory asymptotically.
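As an illustration of a fundamental reduction in resource requirements, below is a minimal Bloom filter sketch. The bit-array size `m` and hash count `k` are arbitrary illustrative choices, and SHA-256 stands in for the cheaper hash functions a production filter would use:

```python
import hashlib

class BloomFilter:
    """Probabilistic set membership in a fixed m-bit array:
    false positives are possible, false negatives are not."""

    def __init__(self, m=8192, k=4):
        self.m, self.k = m, k
        self.bits = bytearray(m // 8)

    def _positions(self, item):
        # Derive k bit positions from k independent hashes of the item.
        for i in range(self.k):
            digest = hashlib.sha256(f"{i}:{item}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.m

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def __contains__(self, item):
        return all(self.bits[pos // 8] & (1 << (pos % 8))
                   for pos in self._positions(item))

bf = BloomFilter()
bf.add("alice")
print("alice" in bf)  # True
print("bob" in bf)    # almost certainly False
```

The filter answers membership queries in 1 KiB here no matter how many items are added, at the cost of a tunable false-positive rate.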
## 4. Impact of Problem Complexity on Efficiency Gains
- **Algorithmic Improvement**: The more complex the problem, the larger the potential gain from algorithmic innovation. NP-hard or NP-complete problems benefit greatly from efficient approximation algorithms or heuristics that deliver "good enough" solutions in polynomial time instead of exponential time.
- **Example**: An approximation algorithm for the knapsack problem can yield near-optimal solutions in polynomial time, making it usable for large-scale decision-making tasks that brute-force methods could never handle.
- **Code Optimization**: For simpler problems, code optimizations are more effective because there is often less room for improvement with algorithm changes. Optimizations can be substantial in domains like graphics or signal processing where real-time performance is critical, and the underlying algorithms are already optimized.
- **Example**: In video processing, optimizations like SIMD (single instruction, multiple data) processing can achieve high-speed frame rendering, though the basic algorithms (e.g., filtering, transformation) may stay constant.
## Summary Comparison Table
|**Aspect**|**Algorithmic Improvement**|**Code Optimization**|
| :- | :- | :- |
|**Impact on Big-O Complexity**|Can reduce asymptotic complexity (e.g., O(n^2) → O(n log n))|Doesn't change Big-O; improves constant factors|
|**Scalability**|Enables handling much larger datasets or complex problems|Limited scalability improvements; helps with small to moderate datasets|
|**Typical Performance Gain**|Orders of magnitude (e.g., 10x to 1000x speedups)|Typically 10-50% improvement, up to 2-5x in extreme cases|
|**Applicability**|Effective for complex or large-scale problems|Effective for refining already chosen algorithms|
|**Resource Efficiency**|Reduces memory, power, or processing needs at a fundamental level|Reduces resource consumption at implementation level|
## Real-World Example: Sorting Large Datasets
- **Algorithmic Method**: Switching from a sorting algorithm like Bubble Sort (O(n^2)) to QuickSort (O(n log n)) can reduce runtime from hours to seconds for large datasets, demonstrating order-of-magnitude improvement.
- **Code Optimization**: Within QuickSort, optimizations like pivot selection improvements or reducing recursion depth provide further speed-ups but at a smaller scale (often 10-20% faster).
# Appendix 2
## Distinguishing between an Algorithmic Method and a Code Implementation
Algorithmic improvements offer substantial scalability and efficiency gains, especially for large or complex problems, by changing the approach to solving the problem. Code optimizations, on the other hand, fine-tune the implementation for moderate speed or resource improvements without fundamentally altering how the problem is addressed.
To distinguish between an algorithmic method and a code optimization of an implementation of an algorithmic method, it is helpful to focus on their purpose and level of abstraction:
## Algorithmic Method
- **Definition**: An algorithmic method is the fundamental approach or strategy for solving a problem, independent of specific code or language. It's the "recipe" for how the problem is tackled.
- **Characteristics**:
- **Conceptual Focus**: Concerned with the *steps* or *processes* required to solve the problem at a high level.
- **Algorithm Complexity**: Includes decisions about the algorithm's efficiency (e.g., using a divide-and-conquer algorithm rather than a brute-force approach).
- **Language-Agnostic**: The method itself is not bound to a particular programming language or code style.
- **Example**: Choosing between a binary search (O(log n)) and a linear search (O(n)) for searching in an array. This choice fundamentally changes the performance characteristics based on the algorithm.
## Code Optimization
- **Definition**: Code optimization involves making improvements to an existing implementation of an algorithm to enhance its efficiency, readability, or maintainability, without altering the core steps of the algorithm itself.
- **Characteristics**:
- **Implementation Focus**: Works on refining the code that executes the algorithm, often targeting lower-level details like memory usage, computational overhead, or specific language/library features.
- **Micro-Efficiency**: Often about making the code faster or more efficient within the constraints of the chosen algorithm, such as reducing loop overhead, or minimizing function calls.
- **Language-Specific**: Tied to how the algorithm is implemented in code, considering the features and limitations of the language and runtime environment.
- **Example**: In a binary search implementation, an optimization could involve using bitwise shifts instead of division to calculate the midpoint, or unrolling loops to reduce loop overhead.
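The sketch below puts the two levels side by side: choosing binary over linear search is an algorithmic-method decision, while the shift-based midpoint inside the binary search is a code optimization that leaves the method untouched:

```python
def binary_search(items, target):
    """Algorithmic method: O(log n) search of a sorted list."""
    lo, hi = 0, len(items) - 1
    while lo <= hi:
        # Code optimization: `(lo + hi) >> 1` instead of `(lo + hi) // 2`.
        # The method (halving the search range) is unchanged.
        mid = (lo + hi) >> 1
        if items[mid] == target:
            return mid
        if items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

def linear_search(items, target):
    """Different algorithmic method: O(n) scan."""
    for i, value in enumerate(items):
        if value == target:
            return i
    return -1

data = list(range(0, 100, 2))  # sorted even numbers 0..98
print(binary_search(data, 42), linear_search(data, 42))  # both print 21
```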
## Key Differences
|**Aspect**|**Algorithmic Method**|**Code Optimization**|
| :- | :- | :- |
|**Level of Abstraction**|High-level strategy|Low-level, specific to code and language|
|**Goal**|Selects a method based on theoretical efficiency|Refines the chosen algorithm's efficiency|
|**Change in Complexity**|Alters the Big-O complexity (e.g., O(n) to O(log n))|Doesn't usually change Big-O complexity|
|**Examples**|Choosing binary vs. linear search|Replacing division with shifts in midpoint calc|
You may consider automated ways of distinguishing between algorithmic methods and code optimisations, but this is challenging due to the abstract nature of algorithmic methods versus the concrete nature of code optimizations. However, automated analysis can help, especially when focused on examining patterns in code structure, performance metrics, and complexity analysis. Here are some techniques and tools that could contribute to an automated distinction:
## Static Code Analysis for Complexity Analysis
- **Purpose**: This approach can identify algorithmic-level changes by analyzing code complexity.
- **Method**: Use tools to automatically calculate the time complexity of code snippets (e.g., Big-O Notation) based on control structures (loops, recursive calls) and data structures. Significant differences in complexity before and after changes often indicate an algorithmic method change.
- **Tools**: **Big-O Calculator** (like SymPy for theoretical analysis), **Cyclomatic Complexity Tools** (like SonarQube or Radon for Python). These tools can flag significant structural changes that may represent shifts in the underlying algorithm.
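As a hedged illustration of this approach, the sketch below uses the third-party `radon` package (installable with `pip install radon`; the `cc_visit` API is taken from its documentation) to compare the cyclomatic complexity of two versions of a hypothetical search function. A jump in complexity alongside a control-flow change is a signal of a possible algorithmic shift, not proof of one:

```python
# pip install radon  (third-party; cc_visit API assumed from radon's docs)
from radon.complexity import cc_visit

linear_version = """
def find(items, target):
    for i in range(len(items)):
        if items[i] == target:
            return i
    return -1
"""

binary_version = """
def find(items, target):
    lo, hi = 0, len(items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if items[mid] == target:
            return mid
        if items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1
"""

# Report the cyclomatic complexity of each version's functions.
for label, source in (("linear", linear_version), ("binary", binary_version)):
    for block in cc_visit(source):
        print(f"{label}: {block.name} has complexity {block.complexity}")
```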
## Performance Profiling and Runtime Analysis
- **Purpose**: Automated profiling can reveal whether optimizations impact the entire algorithm or are local refinements.
- **Method**: Using profiling tools, track execution times and memory usage across different parts of the code. Look for specific functions or lines with changed CPU or memory usage patterns. Smaller, localized performance improvements are often code optimizations, while more significant performance shifts suggest algorithmic changes.
- **Tools**: **Profilers** (like Python's cProfile, Py-Spy, or Intel VTune) can help monitor function-level runtime changes and memory usage, providing insights on where the optimizations are applied.
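A minimal sketch using Python's standard-library `cProfile`; the `slow_unique` function and input size are hypothetical stand-ins for the code under review:

```python
import cProfile
import pstats
import random

def slow_unique(items):
    """Deduplicate with quadratic membership tests on a list."""
    seen = []
    for item in items:
        if item not in seen:  # O(n) scan per item
            seen.append(item)
    return seen

data = [random.randrange(1000) for _ in range(20_000)]

profiler = cProfile.Profile()
profiler.enable()
slow_unique(data)
profiler.disable()

# Per-function timings show where the cost concentrates; a hotspot that
# persists across versions suggests the method has not changed.
pstats.Stats(profiler).sort_stats("cumulative").print_stats(5)
```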
## Code Similarity and Structure Analysis
- **Purpose**: To identify structural changes in code that may imply a different algorithmic approach versus fine-tuning within the same structure.
- **Method**: Using abstract syntax trees (ASTs) or control flow graphs, compare the structure of two versions of code to detect major changes, such as different types of loops, recursion, or conditional branches. Major structural changes suggest an algorithmic shift, while minor adjustments usually indicate code optimization.
- **Tools**: **AST-based tools** (like Python's ast module, Clang AST Matcher) to analyze differences in tree structures across code versions.
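A minimal sketch using Python's standard-library `ast` module to compare the control-flow "skeletons" of two hypothetical versions of the same function; differing skeletons suggest a structural change worth closer review:

```python
import ast

def skeleton(source):
    """Sequence of control-flow node types, ignoring names and constants."""
    kinds = (ast.FunctionDef, ast.For, ast.While, ast.If, ast.Return)
    return [type(node).__name__
            for node in ast.walk(ast.parse(source))
            if isinstance(node, kinds)]

v1 = """
def first_positive(xs):
    for x in xs:
        if x > 0:
            return x
    return None
"""

v2 = """
def first_positive(xs):
    i = 0
    while i < len(xs):
        if xs[i] > 0:
            return xs[i]
        i += 1
    return None
"""

# A For node replaced by a While node shows up as a skeleton difference.
print(skeleton(v1))
print(skeleton(v2))
```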
## Automated Benchmark Testing
- **Purpose**: Detect performance shifts at different input sizes to distinguish changes that affect complexity (algorithmic) versus fixed efficiency improvements (code optimization).
- **Method**: Set up a range of input sizes and measure the execution time and memory usage. Plot these results to identify the growth rate. A new algorithm will often show a different trend in growth rate, while a code optimization will display similar trends with only faster execution.
- **Tools**: **Benchmarking libraries** (like Google Benchmark for C++, pytest-benchmark for Python) to automate tests and capture metrics across different input sizes.
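A minimal growth-rate benchmark using only the standard library; the input sizes are arbitrary, and in practice a benchmarking library would add warm-up runs and statistical rigour:

```python
import time

def time_sort(n):
    """Time the built-in sort on a reverse-sorted list of n items."""
    data = list(range(n, 0, -1))
    start = time.perf_counter()
    sorted(data)
    return time.perf_counter() - start

sizes = [50_000, 100_000, 200_000]
times = [time_sort(n) for n in sizes]

# When n doubles, a time ratio near 2x suggests O(n) or O(n log n),
# while a ratio near 4x suggests O(n^2).
for (n0, t0), (n1, t1) in zip(zip(sizes, times), zip(sizes[1:], times[1:])):
    print(f"n {n0} -> {n1}: time x{t1 / t0:.2f}")
```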
## Machine Learning for Pattern Recognition (Experimental)
- **Purpose**: Use machine learning to recognize patterns associated with algorithmic changes versus code optimizations based on historical examples.
- **Method**: Train models on labeled datasets of code differences, using features like complexity changes, structural edits, and performance metrics. Though still experimental, this approach could potentially identify trends that characterize algorithmic shifts versus optimizations.
- **Tools**: Machine learning libraries, such as **TensorFlow** or **scikit-learn**, can be trained to analyze code features if you have an adequately labeled dataset.
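Purely as an illustrative sketch (the features, labels, and data below are hypothetical placeholders, not a validated model), a scikit-learn classifier could be trained on per-change features such as those produced by the techniques above:

```python
# pip install scikit-learn  (third-party)
from sklearn.ensemble import RandomForestClassifier

# Hypothetical per-change features: [cyclomatic complexity delta,
# fraction of AST nodes changed, change in runtime growth ratio].
# Label 1 = algorithmic change, 0 = code optimization.
X = [
    [4.0, 0.60, 1.8],
    [0.0, 0.05, 0.0],
    [3.0, 0.45, 2.1],
    [1.0, 0.10, 0.1],
]
y = [1, 0, 1, 0]

model = RandomForestClassifier(n_estimators=50, random_state=0)
model.fit(X, y)

# Classify an unseen change; with real data this would be cross-validated.
print(model.predict([[2.5, 0.50, 1.5]]))
```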
## Putting It All Together
An effective automated process would likely combine several of these methods:
1. **Complexity Analysis and Structure Comparison** to flag potential algorithm changes.
1. **Profiling and Benchmarking** to confirm performance changes and their scope.
1. **Pattern Recognition Models** to predict the nature of changes if trained models are available.
By layering these methods, an automated system could more accurately differentiate between high-level algorithmic changes and lower-level code optimizations.
# Appendix 3
## Novelty and Inventiveness
An algorithmic method is generally considered innovative when it either introduces a novel way to solve a problem or extends applicability or efficiency beyond prior methods.
- **Novelty:** The algorithm must be new and not previously disclosed in any prior art (publications, patents, or publicly accessible methods).
- **Application**: If an algorithm introduces a novel approach to solving a problem or achieves results in a way that has not been previously documented, it may meet this criterion.
- **Example**: A new method for efficiently compressing data using an original algorithm could be considered novel if it achieves results that other compression algorithms do not.
**Prior art** refers to any evidence that an invention or idea was already known before a particular date, in this case the date of submission of the algorithmic method to TIG. This includes previous patents, publications, products, demonstrations, or any public disclosures that describe similar technology, processes, or concepts.
## Practical Definition of Prior Art
**Prior Art**: Any publicly accessible information that describes the same or a similar invention, idea, or concept, which could indicate that the invention in question is not entirely new. Prior art can be anything publicly available in any form, including patents, journal articles, books, websites, products, and even conference presentations.
In simpler terms, if the core idea behind a new invention or concept is already documented in any format that others can access or view, it qualifies as prior art.
## How Prior Art is Used to Assess Innovation
Prior art plays a critical role in determining whether an idea is truly innovative (or patentable) by helping assess **two main criteria**:
1. **Novelty**: An invention must be novel, meaning it must differ in some meaningful way from what is already known. If prior art describes the invention's main features, the invention is not considered novel.
1. **Non-Obviousness**: Even if the invention isn't exactly described in prior art, it must also be non-obvious, meaning it cannot be a trivial or obvious extension of existing ideas. The concept of "obviousness" is often subjective but generally means that someone skilled in the relevant field would not have easily thought of the invention. For example, if an idea for a new app closely resembles an existing app but adds minor features anyone skilled in app design might think of, it may be deemed obvious.
## Practical Process to Determine Innovation Using Prior Art
1. **Search for Similar Inventions**: Perform prior art searches across patent databases, scientific publications, trade literature, and other sources.
1. **Analyze Differences**: Compare the proposed invention's core claims against prior art. Look for any significant technical or functional distinctions.
1. **Assess Obviousness**: Determine if the invention would be an obvious adaptation of known solutions based on the prior art.
By systematically assessing whether an invention introduces new and non-obvious features beyond what prior art reveals, it's possible to determine whether the idea qualifies as innovative.
## Prior art serves as the baseline against which inventiveness is evaluated
Here's how they relate to each other in detail:
**1. Prior Art Establishes the Known Landscape**
Prior art encompasses all known information in a particular field, including prior patents, published research, public demonstrations, and commercially available products. This body of knowledge sets the groundwork for assessing a new invention, essentially defining what is already known or has been achieved.
**2. Inventiveness Requires a Significant Advancement Over Prior Art**
For an invention to be considered **inventive** (or non-obvious), it must not only be new but also involve an element of ingenuity beyond what someone skilled in the field might naturally derive from prior art. The invention needs to solve a problem or offer a benefit in a way that isn't apparent or trivial based on existing knowledge. In other words, if an invention closely resembles what is described in prior art, or could be easily derived from it by someone familiar with the field, it lacks inventiveness. However, if it introduces a novel approach or a surprising solution, it may meet the inventiveness criterion.
**3. Prior Art is Used to Determine Obviousness (Lack of Inventiveness)**
Prior art is used to assess whether an invention is **obvious**:
- **Direct Similarity**: If prior art describes an identical or nearly identical invention, the new invention is not novel or inventive.
- **Logical Extensions**: If the new invention is a predictable modification or extension of prior art, it may be deemed obvious. For example, changing the size, material, or shape of a known device without introducing a new function would likely be considered obvious.
- **Combination of Prior Art**: If the invention combines elements from different prior art sources in a straightforward way, it might also be considered obvious unless it leads to an unexpected or surprising result. For instance, combining two known mechanisms in a way that anyone in the field would naturally think of would likely be deemed an obvious combination of prior art.
**4. Inventiveness (Non-Obviousness) Often Requires a "Surprising" Element**
The more unexpected or surprising the solution provided by the invention, the stronger the case for inventiveness. If an invention introduces a solution that a person skilled in the art would not have thought to derive from the prior art, it is more likely to be deemed inventive.
## Summary of the Relationship
In short:
- **Prior art sets the reference point** against which a new invention's inventiveness is measured.
- **Inventiveness** (non-obviousness) reflects how the invention goes beyond the known prior art in a non-trivial, unexpected way.
Thus, prior art is essential in judging inventiveness because it helps differentiate between truly innovative advancements and mere adaptations of existing knowledge.
An algorithmic method can be reasonably considered innovative over an existing method when it demonstrates clear, meaningful advancements in solving a particular problem, typically by improving performance, applicability, or problem-solving capabilities in ways that existing methods cannot.
Here are the key criteria for determining when an algorithmic method is genuinely innovative:
## Improved Asymptotic Complexity
- **Criteria**: The new algorithm shows a better Big-O complexity (e.g., improving from O(n^2) to O(n log n) or O(n)), especially in worst-case or average-case scenarios.
- **Example**: The development of QuickSort (average O(n log n)) offered an innovative improvement over Bubble Sort (O(n^2)) for sorting, making it more practical for larger datasets. Similarly, Dijkstra's algorithm for shortest paths was later innovatively improved with A\* for specific scenarios.
## Significant Performance Gains on Practical Inputs
- **Criteria**: Even if the Big-O complexity is similar, the new method shows substantially faster performance or lower resource consumption on real-world datasets, often due to reduced constants, more efficient data handling, or better memory locality.
- **Example**: The Strassen algorithm for matrix multiplication has a better theoretical complexity than standard matrix multiplication, but even algorithms without better asymptotic complexity may be innovative if they perform more efficiently in real-world use cases.
## Extended Applicability to Broader Problem Classes
- **Criteria**: The new algorithm applies to a wider range of scenarios, input types, or constraints than previous methods, extending the range of problems that can be solved effectively.
- **Example**: The invention of dynamic programming extended the applicability of algorithms to optimization problems with overlapping subproblems, such as in the knapsack problem or sequence alignment, which weren't solvable efficiently with previous methods.
## Enhanced Robustness and Practicality
- **Criteria**: The new algorithm handles edge cases or special cases (e.g., large-scale data, sparse data, highly variable data distributions) more gracefully than its predecessors, which makes it more robust for general use.
- **Example**: K-Means++ is an innovation over the standard K-Means clustering algorithm because it initializes centroids in a way that avoids poor clustering outcomes, especially on skewed data, enhancing robustness.
## New Approach or Paradigm
- **Criteria**: The algorithm introduces a fundamentally new approach, technique, or paradigm that shifts how a problem is conceptualized or solved, opening up new avenues for solutions.
- **Example**: The development of neural networks and backpropagation introduced a paradigm shift in machine learning, allowing problems that were previously considered infeasible to be solved with learning-based approaches.
## Improved Scalability or Feasibility for Large Data Sets
- **Criteria**: The algorithm is designed to be more scalable, specifically addressing issues like memory constraints, parallelization, or distributed computing.
- **Example**: MapReduce, by Google, introduced an innovative algorithmic method for parallel and distributed processing, making it feasible to process massive datasets in a way that standard methods couldn't handle efficiently.
## Reduction of Resource Requirements (Memory, Power, etc.)
- **Criteria**: The new method substantially reduces resource requirements, such as memory usage, power consumption, or communication overhead, often by optimizing data structures or computational pathways.
- **Example**: Bloom filters are an innovative space-efficient probabilistic data structure that allows for quick set membership testing with minimal memory usage, which is valuable in constrained environments.
## Proof of Optimality or Approximation Guarantees
- **Criteria**: If an algorithm provides a proven optimal solution or a guaranteed approximation ratio for NP-hard or difficult problems, it represents a strong innovation.
- **Example**: Approximation algorithms for NP-hard problems, like the Traveling Salesman Problem (TSP), where algorithms can find solutions within a guaranteed percentage of the optimal, are considered innovative because they provide useful solutions where exact methods are impractical.