Dynamic Programming
Unlock one of the most powerful algorithmic techniques by learning to break down complex problems into simpler, reusable pieces.
What Is Dynamic Programming?
Dynamic Programming (DP) is a method for solving complex problems by breaking them down into simpler, overlapping subproblems. The key idea is to solve each subproblem only once and store its result. When the same subproblem occurs again, you simply look up the stored result instead of re-calculating it. This technique is all about "remembering" the past to make the future easier.
A problem is a good candidate for DP if it has two properties:
- Overlapping Subproblems: The problem can be broken down into smaller subproblems that are reused several times (e.g., calculating `fib(5)` requires `fib(3)`, and so does calculating `fib(4)`; see the sketch after this list).
- Optimal Substructure: The optimal solution to the overall problem can be constructed from the optimal solutions of its subproblems.
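To see why overlapping subproblems matter, here is a minimal sketch in TypeScript of the naive recursive Fibonacci (the function name `fib` and the example values are illustrative):

```typescript
// Naive recursive Fibonacci: demonstrates overlapping subproblems.
// fib(5) calls fib(4) and fib(3); fib(4) calls fib(3) again, so the
// same subproblems are recomputed over and over (exponential work).
function fib(n: number): number {
  if (n <= 1) return n;
  return fib(n - 1) + fib(n - 2);
}

console.log(fib(10)); // 55 -- fast here, but fib(40) is already noticeably slow
```

DP removes that repeated work by storing each subproblem's answer the first time it is computed.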
There are two main approaches to DP:
- Memoization (Top-Down): You write a standard recursive function, but before any computation, you check if the result for the current state is already in your cache. If it is, you return it. If not, you compute it, store it, and then return it.
  Real-world Frontend Example: In React, `useMemo` is a direct application of memoization. You wrap an expensive calculation in `useMemo`, and it only re-runs when its dependencies change, so the work is not repeated on every re-render. Similarly, caching API responses to avoid re-fetching the same data is a form of memoization.
- Tabulation (Bottom-Up): You solve the problem iteratively, starting with the smallest possible subproblem and building up to the final solution. You typically use an array or matrix to store the results of subproblems in order. Both approaches are sketched in the code below.
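Here is a rough sketch of both approaches applied to the same Fibonacci problem in TypeScript (the function names `fibMemo` and `fibTab`, and the choice of a `Map` as the cache, are illustrative, not prescriptive):

```typescript
// Memoization (top-down): recurse as usual, but cache each result
// so every subproblem is computed at most once.
function fibMemo(n: number, cache: Map<number, number> = new Map()): number {
  if (n <= 1) return n;
  const cached = cache.get(n);
  if (cached !== undefined) return cached; // reuse the stored result
  const result = fibMemo(n - 1, cache) + fibMemo(n - 2, cache);
  cache.set(n, result); // store it before returning
  return result;
}

// Tabulation (bottom-up): fill a table from the smallest subproblem up,
// so every value is already available when it is needed.
function fibTab(n: number): number {
  if (n <= 1) return n;
  const table: number[] = new Array(n + 1).fill(0);
  table[1] = 1;
  for (let i = 2; i <= n; i++) {
    table[i] = table[i - 1] + table[i - 2];
  }
  return table[n];
}

console.log(fibMemo(40)); // 102334155, returned almost instantly
console.log(fibTab(40));  // 102334155
```

Both versions run in linear time instead of exponential time. Note that the tabulated version only ever reads the previous two table entries, so in this particular case you could keep just those two values and drop the memory cost to a constant.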