A learning augmented algorithm is an algorithm that can make use of a prediction to improve its performance.[1] Whereas regular algorithms take only the problem instance as input, learning augmented algorithms accept an extra parameter. This extra parameter is often a prediction of some property of the solution. The algorithm then uses this prediction to improve its running time or the quality of its output.
A learning augmented algorithm typically takes an input $(\mathcal{I}, \mathcal{A})$. Here $\mathcal{I}$ is a problem instance and $\mathcal{A}$ is the advice: a prediction about a certain property of the optimal solution. The type of the problem instance and the prediction depend on the algorithm. Learning augmented algorithms usually satisfy the following two properties:
- Consistency: the algorithm performs well (close to the best possible) when the prediction is accurate.
- Robustness: the algorithm's worst-case performance remains bounded even when the prediction is arbitrarily bad.
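As a minimal sketch of this interface (the name and signature below are illustrative, not from the source), a learning augmented algorithm can be thought of as a function that receives both the instance and the advice; the concrete types of both depend on the problem being solved.

```python
def learning_augmented_algorithm(instance, advice):
    """Solve `instance`, using `advice` -- a prediction about some property
    of the optimal solution -- to improve running time or output quality.
    The concrete types of `instance` and `advice` depend on the problem."""
    raise NotImplementedError  # placeholder: each problem supplies its own logic
```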
Learning augmented algorithms generally do not prescribe how the prediction should be done. For this purpose machine learning can be used.[citation needed]
The binary search algorithm is an algorithm for finding elements of a sorted list $x_1, \ldots, x_n$. It needs $O(\log n)$ steps to find an element with some known value $y$ in a list of length $n$. With a prediction $i$ for the position of $y$, the following learning augmented algorithm can be used.[1] First $y$ is compared with $x_i$. If $y = x_i$, the search is done. If $y > x_i$, the positions $i+1, i+2, i+4, \ldots$ are probed, doubling the offset each time, until an element at least as large as $y$ (or the end of the list) is reached; a classical binary search is then performed on the last probed interval. The case $y < x_i$ is handled symmetrically, probing towards the front of the list.
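A minimal sketch of this procedure in Python (the function name and interface are illustrative, not part of the source) is:

```python
import bisect

def predicted_binary_search(xs, y, i):
    """Search for y in the sorted list xs, starting from a predicted index i.
    Returns an index j with xs[j] == y, or None if y is not present."""
    n = len(xs)
    i = max(0, min(i, n - 1))            # clamp the prediction to a valid index
    if xs[i] == y:
        return i
    if xs[i] < y:
        # Probe positions i+1, i+2, i+4, ... until an element >= y is found.
        step, lo, hi = 1, i + 1, min(i + 1, n - 1)
        while xs[hi] < y and hi < n - 1:
            step *= 2
            lo, hi = hi + 1, min(i + step, n - 1)
    else:
        # Symmetric case: probe i-1, i-2, i-4, ... towards the front.
        step, hi, lo = 1, i - 1, max(i - 1, 0)
        while xs[lo] > y and lo > 0:
            step *= 2
            hi, lo = lo - 1, max(i - step, 0)
    # y, if present, now lies in xs[lo..hi]; finish with a binary search.
    j = bisect.bisect_left(xs, y, lo, hi + 1)
    return j if j <= hi and xs[j] == y else None
```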
The error is defined to be $\eta = |i - i^*|$, where $i^*$ is the real index of $y$. In the learning augmented algorithm, probing the positions $i+1, i+2, i+4, \ldots$ takes $\log_2(\eta)$ steps. Then a binary search is performed on a list of size at most $2\eta$, which takes $\log_2(\eta)$ steps. This makes the total running time of the algorithm $2\log_2(\eta)$. So, when the error is small, the algorithm is faster than a normal binary search. This shows that the algorithm is consistent. Even in the worst case, the error will be at most $n$. Then the algorithm takes at most $O(\log n)$ steps, so the algorithm is robust.
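A small usage example of the sketch above (with hypothetical values) illustrates this behaviour: a nearly correct prediction is resolved after only a few probes, while a badly wrong prediction still leads to the correct answer.

```python
xs = list(range(0, 2000, 2))                    # sorted list: 0, 2, 4, ..., 1998
print(predicted_binary_search(xs, 500, 248))    # prediction off by eta = 2 -> 250
print(predicted_binary_search(xs, 500, 999))    # prediction far off -> still 250
print(predicted_binary_search(xs, 501, 250))    # value absent -> None
```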
Learning augmented algorithms are known for: