Thursday, August 1, 2013

Dynamic Programming

Algorithms for optimisation require a proof that they always return the optimal solution. Greedy algorithms are typically efficient, but they do not guarantee optimality. On the other hand, exhaustive-search algorithms do guarantee optimality, but are usually very expensive to run. Dynamic Programming (DP) combines the best of both worlds. For a very good coverage of DP, check the wonderful book by Prof. Skiena.

The basic idea behind DP is to search for all possible solutions (correctness) while storing computed results along the way to avoid duplicate work (efficiency). Thus, in a nutshell, DP is a technique for efficiently implementing a recursive algorithm by storing partial results.

What is the TRICK behind DP? To determine whether the recursive algorithm we are analysing computes the same sub-problems over and over again. If so, storing the answers to these sub-problems in a table can lead to a very efficient algorithm. Once this fact has been established, proceed by applying the following three-step approach:

  1. START with a working recursive algorithm
  2. IDENTIFY repeated computations
  3. DERIVE the DP-based solution

In this post, we show an example of this approach by deriving a DP-based implementation of the Fibonacci function.


Recall that the Fibonacci function is an inherently recursive function defined as:

fib(0) = 0
fib(1) = 1

fib(n) = fib(n - 1) + fib(n - 2)
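
Expanding this definition by hand already hints at the repeated work: fib(5) computes fib(3) twice and fib(2) three times. The following sketch (class and field names are my own, not from the post) makes the duplication visible by counting how often each fib(k) is evaluated:

```java
// Sketch: count how many times the naive recursion evaluates each fib(k),
// to make the repeated computations (step 2) visible. Names are illustrative.
public class FibonacciCallCounter {

    static int[] calls; // calls[k] = number of times fib(k) was evaluated

    static long fib(int n) {
        calls[n]++;
        if (n == 0) return 0;
        if (n == 1) return 1;
        return fib(n - 1) + fib(n - 2);
    }

    public static void main(String[] args) {
        int n = 10;
        calls = new int[n + 1];
        fib(n);
        for (int k = 0; k <= n; k++)
            System.out.printf("fib(%d) evaluated %d times%n", k, calls[k]);
    }
}
```

For n = 10, fib(2) is evaluated 34 times and fib(1) 55 times: the call counts themselves grow like the Fibonacci sequence, which is exactly the duplicate work DP eliminates.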


START with a working recursive algorithm

package com.medina.toolbox.dynamicprogramming;

public class FibonacciRecursion {

    /*
     * fib(0) = 0
     * fib(1) = 1
     *
     * fib(n) = fib(n - 1) + fib(n - 2)
     */
    long fib(int n) {
        if (n == 0) return 0;
        if (n == 1) return 1;
        return fib(n - 1) + fib(n - 2);
    }

    public static void main(String[] args) {
        FibonacciRecursion fr = new FibonacciRecursion();
        int n = 10;
        long f = fr.fib(n);
        System.out.printf("FREC(%d) : %d%n", n, f);
    }
}
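
Completing the remaining steps, one possible DP-based solution (step 3) keeps the recursion intact but stores each partial result in a memo table so every sub-problem is computed at most once. The class name FibonacciDP and its structure are my own sketch, not code from the post:

```java
import java.util.Arrays;

// Sketch of the DP-based (memoized) version: same recursion as before,
// but each fib(k) is computed once and then reused from the memo table.
public class FibonacciDP {

    private long[] memo;

    long fib(int n) {
        memo = new long[n + 1];
        Arrays.fill(memo, -1); // -1 marks "not yet computed"
        return fibMemo(n);
    }

    private long fibMemo(int n) {
        if (n == 0) return 0;
        if (n == 1) return 1;
        if (memo[n] != -1) return memo[n]; // reuse a stored partial result
        memo[n] = fibMemo(n - 1) + fibMemo(n - 2);
        return memo[n];
    }

    public static void main(String[] args) {
        FibonacciDP fd = new FibonacciDP();
        int n = 50;
        System.out.printf("FDP(%d) : %d%n", n, fd.fib(n));
    }
}
```

Note that fib(50) returns instantly here, while the naive recursion above would take billions of calls; the memo table turns exponential time into linear time at the cost of linear space.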
