Importance of caching algorithms
Feature description
This is an algorithms-based repository that focuses not only on time complexity but also on readability and understandability. That said, specific algorithms are benchmarked for speed comparisons, and sometimes adding a caching layer vastly improves that speed.
For example, the recursive fibonacci function found here
https://github.com/TheAlgorithms/Python/blob/master/maths/fibonacci.py#L63
averages about 6.0 ms when timed, on my machine at least. However, when you apply functools lru_cache, it speeds the algorithm up to roughly 0.0 ms, matching the other algorithms.
The example with caching:

```python
from functools import lru_cache


def fib_recursive(n: int) -> list[int]:
    @lru_cache(maxsize=None)
    def fib_recursive_term(i: int) -> int:
        if i < 0:
            raise Exception("n is negative")
        if i < 2:
            return i
        return fib_recursive_term(i - 1) + fib_recursive_term(i - 2)

    if n < 0:
        raise Exception("n is negative")
    return [fib_recursive_term(i) for i in range(n + 1)]
```
@cclauss Maybe this could take the form of adding another function with caching implemented, mentioning it in the README, or putting notes about caching in the docstrings. What is your opinion?
I love the notion for side-by-side implementations without cache and with cache. The first would be easier to read / understand and the second would be more performant. We would need timeit or similar benchmarks to prove the advantage of caching. This sounds like a great way to learn the what, why, and how of caching.
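A benchmark along those lines could be sketched with timeit as follows. This is only an illustrative draft, not the repository's code: the function names (fib_uncached, fib_cached) and the choice of n=25 and ten repetitions are assumptions for the example.

```python
# Illustrative benchmark: recursive Fibonacci with and without
# functools.lru_cache. Names and parameters are hypothetical.
from functools import lru_cache
from timeit import timeit


def fib_uncached(i: int) -> int:
    """Plain recursive Fibonacci: exponential time, easy to read."""
    if i < 2:
        return i
    return fib_uncached(i - 1) + fib_uncached(i - 2)


@lru_cache(maxsize=None)
def fib_cached(i: int) -> int:
    """Same recursion, memoized: each term is computed only once."""
    if i < 2:
        return i
    return fib_cached(i - 1) + fib_cached(i - 2)


if __name__ == "__main__":
    slow = timeit("fib_uncached(25)", globals=globals(), number=10)
    fib_cached.cache_clear()  # start from a cold cache for a fair comparison
    fast = timeit("fib_cached(25)", globals=globals(), number=10)
    print(f"uncached: {slow:.4f}s  cached: {fast:.4f}s")
```

Printing both timings side by side would let the README (or docstring) show the advantage of caching concretely rather than just asserting it.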
Awesome, I will make some pull requests implementing this.