Intro
An approach to problem solving that breaks a problem down into smaller instances of itself (sub-problems), stores their results, and builds on those results to reach the solution to the original problem. Hence the “dynamic” in DynamicProgramming.
It’s tricky to grasp for a junior developer like myself, and isn’t as intuitive as it may seem.
How it works
- Decompose the problem into smaller, equivalent sub-problems.
- Express the solution as a recurrence in terms of those sub-problems.
- Compute the value of each sub-problem bottom-up, recording the results in a table.
- Construct the optimal solution to the original problem from the results of steps 1-3 (see the sketch below).
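A minimal sketch of these four steps applied to the Fibonacci numbers (the first item under Examples below); the function name and code are my own illustration, not from the lecture.

```python
def fib_bottom_up(n):
    """Fibonacci via the four steps above (illustrative sketch).

    Sub-problems: F(0), F(1), ..., F(n).
    Recurrence:   F(i) = F(i-1) + F(i-2), with F(0) = 0 and F(1) = 1.
    """
    if n < 2:
        return n
    table = [0] * (n + 1)          # table of sub-problem values
    table[1] = 1                   # base cases F(0)=0, F(1)=1
    for i in range(2, n + 1):
        table[i] = table[i - 1] + table[i - 2]   # apply the recurrence
    return table[n]                # answer to the original problem


print(fib_bottom_up(10))  # 55
```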
Examples
- Fibonacci numbers
- Robot Coin Collecting
- Transitive Closure (Warshall’s algorithm); see the sketch after this list
- All Pairs Shortest Paths (Floyd’s)
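As one concrete example from the list, here is a sketch of Warshall’s algorithm for computing the transitive closure of a directed graph; the 0/1 adjacency-matrix representation and the names used are my own assumptions.

```python
def warshall(adj):
    """Transitive closure of a digraph given as a 0/1 adjacency matrix (sketch).

    After iteration k, r[i][j] is 1 if j is reachable from i using only
    vertices 0..k as intermediates.
    """
    n = len(adj)
    r = [row[:] for row in adj]            # start from the adjacency matrix
    for k in range(n):                     # allow vertex k as an intermediate
        for i in range(n):
            for j in range(n):
                r[i][j] = r[i][j] or (r[i][k] and r[k][j])
    return r


# 0 -> 1 -> 2, so 0 also reaches 2 in the closure.
print(warshall([[0, 1, 0],
                [0, 0, 1],
                [0, 0, 0]]))   # [[0, 1, 1], [0, 0, 1], [0, 0, 0]]
```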
Methods
Top-Down
Starts from the main problem, recursively solves the sub-problems, and stores the results. Similar to plain Recursion, but with the benefit of already-solved results, preventing the same sub-problems from being solved over and over. // Think of how the #FibonacciSequence is solved...
It can also be thought of as Recursion + Memoization.
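A sketch of the top-down approach on the #FibonacciSequence mentioned above; the memo dictionary and function name are my own, not from the lecture.

```python
def fib_top_down(n, memo=None):
    """Top-down Fibonacci: recurse from the main problem, caching results (sketch)."""
    if memo is None:
        memo = {}
    if n < 2:
        return n
    if n not in memo:                      # each sub-problem is solved only once
        memo[n] = fib_top_down(n - 1, memo) + fib_top_down(n - 2, memo)
    return memo[n]


print(fib_top_down(10))  # 55
```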
Memoization
The act of saving the result of a previous function call in a Dictionary or Map and reading it back when the exact same call is made again. There are different ways of doing this, but one common approach is to store the arguments of the function call as the key.
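A minimal sketch of that idea as a reusable wrapper, with the tuple of arguments as the dictionary key; Python’s functools.lru_cache provides the same behaviour, but this version makes the mechanism explicit.

```python
def memoize(fn):
    """Cache fn's results in a dict keyed by its positional arguments (sketch)."""
    cache = {}

    def wrapper(*args):
        if args not in cache:              # the argument tuple is the key
            cache[args] = fn(*args)
        return cache[args]

    return wrapper


@memoize
def fib(n):
    return n if n < 2 else fib(n - 1) + fib(n - 2)


print(fib(30))  # 832040
```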
Bottom-Up
Solve the smallest sub-problems first and build up to the bigger problem. This method doesn’t rely on Recursion; instead it fills out a table where each entry is the solution to a smaller sub-problem.
Start with the base cases and iteratively compute up to the original problem. This avoids the overhead of Recursion and memoization, often leading to more efficient solutions.
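A bottom-up sketch of the Robot Coin Collecting example from the list above: the robot starts at the top-left cell, moves only right or down, and each table entry holds the most coins it can have collected on arrival at that cell. The board values and function name are my own illustration.

```python
def robot_coin_collection(coins):
    """Max coins collectable moving only right or down (bottom-up sketch).

    F[i][j] = coins[i][j] + max(F[i-1][j], F[i][j-1])
    """
    n, m = len(coins), len(coins[0])
    f = [[0] * m for _ in range(n)]
    for i in range(n):
        for j in range(m):
            up = f[i - 1][j] if i > 0 else 0
            left = f[i][j - 1] if j > 0 else 0
            f[i][j] = coins[i][j] + max(up, left)   # no recursion, just a table fill
    return f[n - 1][m - 1]


board = [[0, 0, 0, 0, 1],
         [0, 1, 0, 1, 0],
         [0, 0, 0, 1, 0],
         [0, 0, 1, 0, 0]]
print(robot_coin_collection(board))  # 3
```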
References
- Comp3760 Lecture 10: Dynamic Programming.