1.   Analysis of Algorithms


This chapter considers the general motivations for algorithmic analysis and relationships among various approaches to studying performance characteristics of algorithms.


1.1 Why Analyze an Algorithm?

The most straightforward reason for analyzing an algorithm is to discover its characteristics in order to evaluate its suitability for various applications or compare it with other algorithms for the same application. Moreover, the analysis of an algorithm can help us understand it better, and can suggest informed improvements. Algorithms tend to become shorter, simpler, and more elegant during the analysis process.

1.2 Computational Complexity.

The branch of theoretical computer science where the goal is to classify algorithms according to their efficiency and computational problems according to their inherent difficulty is known as computational complexity. Paradoxically, such classifications are typically not useful for predicting performance or for comparing algorithms in practical applications because they focus on order-of-growth worst-case performance. In this book, we focus on analyses that can be used to predict performance and compare algorithms.

1.3 Analysis of Algorithms.

A complete analysis of the running time of an algorithm involves the following steps:

- Implement the algorithm completely.
- Determine the time required for each basic operation.
- Identify unknown quantities that can be used to describe the frequency of execution of the basic operations.
- Develop a realistic model for the input to the program.
- Analyze the unknown quantities, assuming the modeled input.
- Calculate the total running time by multiplying the time by the frequency for each operation, then adding all the products.

Classical algorithm analysis on early computers could result in exact predictions of running times. Modern systems and algorithms are much more complex, but modern analyses are informed by the idea that exact analysis of this sort could be performed in principle.

1.4 Average-Case Analysis.

Elementary probability theory gives a number of different ways to compute the average value of a quantity. While they are quite closely related, it will be convenient for us to explicitly identify two different approaches to computing the mean. In the distributional approach, we find the complete distribution of the quantity of interest (the probability that it takes on each possible value) and compute the mean from that distribution. In the cumulative approach, we compute the total (cumulated) cost over all inputs and divide by the number of inputs.

The distributional approach gives complete information, which can be used directly to compute the standard deviation and other moments. Indirect (often simpler) methods are also available for computing moments when using the cumulative approach, as we will see. In this book, we consider both approaches, though our tendency will be towards the cumulative method, which ultimately allows us to consider the analysis of algorithms in terms of combinatorial properties of basic data structures.
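In symbols, write $\Pi_N$ for the number of possible inputs of size $N$, $\Pi_{Nk}$ for the number of those inputs that cause the algorithm to incur cost $k$, and $\Sigma_N$ for the total cost over all inputs of size $N$ (notation adopted here for illustration). Then the two computations of the mean are $$\hbox{distributional:}\quad \sum_{k\ge 0}k\,{\Pi_{Nk}\over\Pi_N}\qquad\qquad \hbox{cumulative:}\quad {\Sigma_N\over\Pi_N}{\quad\rm where\quad}\Sigma_N=\sum_{k\ge 0}k\,\Pi_{Nk}.$$ For quicksort under the randomly-ordered-input model used below, for example, $\Pi_N = N!$, and the cumulative approach computes the mean number of compares as the total number of compares used over all $N!$ permutations, divided by $N!$.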

1.5 Example: Analysis of quicksort.

The classical quicksort algorithm was invented by C.A.R. Hoare in 1960:
public class Quick
{
   // Partition a[lo..hi] around the element a[lo]; return its final index.
   private static int partition(Comparable[] a, int lo, int hi)
   {
      int i = lo, j = hi+1;
      while (true)
      {
         while (less(a[++i], a[lo])) if (i == hi) break;  // scan right past smaller elements
         while (less(a[lo], a[--j])) if (j == lo) break;  // scan left past larger elements
         if (i >= j) break;                               // stop when the pointers cross
         exch(a, i, j);
      }
      exch(a, lo, j);   // put the partitioning element into position
      return j;
   }

   private static void sort(Comparable[] a, int lo, int hi)
   {
      if (hi <= lo) return;
      int j = partition(a, lo, hi);
      sort(a, lo, j-1);   // sort the left subarray
      sort(a, j+1, hi);   // sort the right subarray
   }

   public static void sort(Comparable[] a)
   {  sort(a, 0, a.length - 1);  }

   private static boolean less(Comparable v, Comparable w)
   {  return v.compareTo(w) < 0;  }

   private static void exch(Comparable[] a, int i, int j)
   {  Comparable t = a[i]; a[i] = a[j]; a[j] = t;  }
}
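As a minimal usage sketch (the client class QuickTest is ours), any array of Comparable items can be sorted in place:

public class QuickTest
{
   public static void main(String[] args)
   {
      String[] a = { "sort", "these", "strings", "in", "order" };
      Quick.sort(a);             // sort the full array in place
      for (String s : a)
         System.out.println(s);  // prints the strings in ascending order
   }
}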

To analyze this algorithm, we start by defining a cost model (running time) and an input model (randomly ordered distinct elements). To separate the analysis from the implementation, we define $C_N$ to be the number of compares to sort $N$ elements and analyze $C_N$ (hypothesizing that the running time for any implementation will be $\sim aC_N$ for some implementation-dependent constant $a$). Note the following properties of the algorithm:

- The partitioning stage uses $N+1$ compares.
- Under the input model, the partitioning element is equally likely to be the $k$th smallest for each $k$ from $1$ to $N$, an event that leaves subarrays of sizes $k-1$ and $N-k$.
- The two subarrays left after partitioning are themselves randomly ordered, so the same analysis applies recursively to each.

These imply a mathematical expression (a recurrence relation) that derives directly from the recursive program (valid for $N>0$, with $C_0=0$): $$C_N = N+1 + \sum_{0\le k \le N-1}{1\over N}(C_k + C_{N-k-1}).$$

This equation is easily solved with a series of simple albeit mysterious algebraic steps. First, apply symmetry, multiply by $N$, subtract the same equation for $N-1$, and rearrange terms to get a simpler recurrence: $$\eqalign{ C_N &= N+1 + {2\over N}\sum_{0\le k \le N-1}C_k\\ NC_N &= N(N+1) + 2\sum_{0\le k \le N-1}C_k\\ NC_N - (N-1)C_{N-1} &= N(N+1) -(N-1)N + 2C_{N-1}\\ NC_N &= (N+1)C_{N-1} +2N.\\ }$$

Note that this simpler recurrence gives an efficient algorithm to compute the exact answer. To solve it, divide both sides by $N(N+1)$ and telescope: $$\eqalign{ NC_N &= (N+1)C_{N-1} + 2N {\quad\rm for\quad} N > 1 {\quad\rm with\quad} C_1 = 2\\ {C_N\over N+1} &= {C_{N-1}\over N} + {2\over N+1}\\ &= {C_{N-2}\over N-1} + {2\over N} + {2\over N+1}\\ &\;\;\vdots\\ &= {C_1\over 2} + \sum_{3\le k\le N+1}{2\over k} = 2H_{N+1} - 2\\ C_N &= 2(N+1)H_{N+1} - 2(N+1) = 2(N+1)H_N - 2N. }$$ The result is an exact expression in terms of the Harmonic numbers $H_N = \sum_{1\le k\le N}1/k$.
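As a quick sanity check, the exact formula agrees with the original recurrence for small $N$: for instance, $$C_2 = 2\cdot 3\cdot H_2 - 2\cdot 2 = 6\cdot{3\over2} - 4 = 5,$$ which matches $C_2 = 3 + {2\over 2}(C_0 + C_1) = 3 + 0 + 2 = 5$ computed directly.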

1.6 Asymptotic Approximations.

The Harmonic numbers can be approximated by an integral (see Chapter 3), $$H_N \sim \ln N,$$ leading to the simple asymptotic approximation $$C_N \sim 2N\ln N.$$ It is always a good idea to validate our math with a program. This code
public class QuickCheck
{
   public static void main(String[] args)
   {
      int maxN = Integer.parseInt(args[0]);

      // Compute exact values of C_N from the linear recurrence
      // N*C_N = (N+1)*C_{N-1} + 2N, rewritten as C_N = (N+1)*C_{N-1}/N + 2.
      double[] c = new double[maxN+1];
      c[0] = 0;
      for (int N = 1; N <= maxN; N++)
         c[N] = (N+1)*c[N-1]/N + 2;

      // Print the exact values alongside the asymptotic approximation.
      for (int N = 10; N <= maxN; N *= 10)
      {
         double approx = 2*N*Math.log(N) - 2*N;
         StdOut.printf("%10d %15.2f %15.2f\n", N, c[N], approx);
      }
   }
}
produces this output.
% java QuickCheck 100000
        10           44.44           26.05
       100          847.85          721.03
      1000        12985.91        11815.51
     10000       175771.70       164206.81
    100000      2218053.41      2102585.09
The discrepancy in the table is explained by our dropping the $2\gamma N$ term, where $\gamma \approx .57721$ is Euler's constant (and our not using a more accurate approximation to the integral).
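Carrying that term through the exact solution quantifies the gap. Using the expansion $H_N = \ln N + \gamma + O(1/N)$, $$C_N = 2(N+1)H_{N+1} - 2(N+1) = 2N\ln N - 2(1-\gamma)N + O(\log N),$$ so the exact values in the second column exceed the approximations in the third by about $2\gamma N \approx 1.154N$.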

1.7 Distributions.

It is possible to use similar methods to find the standard deviation and other moments. The standard deviation of the number of compares used by quicksort is $\sqrt{7 - 2\pi^2/3}\,N \approx .6482776N$, which, being asymptotically smaller than the mean $\sim 2N\ln N$, implies that the number of compares is not likely to be far from the mean for large $N$. Does the number of compares obey a normal distribution? No. Characterizing this distribution is a difficult research challenge.
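To make "not likely to be far from the mean" precise, let $C$ denote the (random) number of compares for a random permutation, so that $C_N$ is its mean. Chebyshev's inequality with the standard deviation above gives, for any $t>0$, $$\Pr\{|C - C_N|\ge tN\}\le\left({.6483\over t}\right)^{2},$$ so deviations on the order of the mean $2N\ln N$ itself become vanishingly unlikely as $N$ grows.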

1.8 Probabilistic Algorithms.

Is our assumption that the input array is randomly ordered a valid input model? Yes, because we can randomly order the array before the sort. Doing so turns quicksort into a randomized algorithm whose good performance is guaranteed by the laws of probability.
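A minimal sketch of this idea (the wrapper class RandomizedQuick and the use of java.util.Random are ours; any method that produces each permutation with equal probability will do):

import java.util.Random;

public class RandomizedQuick
{
   // Knuth (Fisher-Yates) shuffle: produces each of the N! orderings
   // with equal probability, enforcing the random-order input model.
   private static void shuffle(Comparable[] a)
   {
      Random rng = new Random();
      for (int i = a.length - 1; i > 0; i--)
      {
         int r = rng.nextInt(i + 1);                 // uniform in [0, i]
         Comparable t = a[i]; a[i] = a[r]; a[r] = t;
      }
   }

   public static void sort(Comparable[] a)
   {
      shuffle(a);       // now the analysis above applies to any input
      Quick.sort(a);    // the quicksort implementation from Section 1.5
   }
}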

It is always a good idea to validate our models and analysis by running experiments. Detailed experiments by many people on many computers have done so for quicksort over the past several decades.
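As a sketch of such an experiment (an instrumented copy of the Section 1.5 code on primitive int keys, with class and variable names of our choosing), the following program counts compares for one random permutation of each size and prints the count next to the exact prediction $2(N+1)H_N - 2N$. The observed values should track the predictions closely: the discrepancy grows only linearly in $N$, while both quantities grow as $N\log N$.

import java.util.Random;

public class QuickExperiment
{
   private static long compares;   // incremented by every key comparison

   private static boolean less(int v, int w)
   {  compares++;  return v < w;  }

   private static int partition(int[] a, int lo, int hi)
   {  // same partitioning logic as in Section 1.5
      int i = lo, j = hi+1;
      while (true)
      {
         while (less(a[++i], a[lo])) if (i == hi) break;
         while (less(a[lo], a[--j])) if (j == lo) break;
         if (i >= j) break;
         int t = a[i]; a[i] = a[j]; a[j] = t;
      }
      int t = a[lo]; a[lo] = a[j]; a[j] = t;
      return j;
   }

   private static void sort(int[] a, int lo, int hi)
   {
      if (hi <= lo) return;
      int j = partition(a, lo, hi);
      sort(a, lo, j-1);
      sort(a, j+1, hi);
   }

   private static double harmonic(int N)   // H_N = 1 + 1/2 + ... + 1/N
   {
      double h = 0.0;
      for (int k = 1; k <= N; k++) h += 1.0/k;
      return h;
   }

   public static void main(String[] args)
   {
      Random rng = new Random();
      for (int N = 1000; N <= 1000000; N *= 10)
      {
         int[] a = new int[N];
         for (int k = 0; k < N; k++) a[k] = k;   // distinct keys
         for (int k = N-1; k > 0; k--)           // Knuth shuffle, as above
         {  int r = rng.nextInt(k+1); int t = a[k]; a[k] = a[r]; a[r] = t;  }
         compares = 0;
         sort(a, 0, N-1);
         System.out.printf("%8d %15d %15.0f\n",
                           N, compares, 2.0*(N+1)*harmonic(N) - 2.0*N);
      }
   }
}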

In this case, a flaw in the model for some applications is that the array items need not be distinct. Faster implementations are possible for this case, using three-way partitioning.
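One standard version of that idea is Dijkstra's three-way partitioning (a sketch, using the same less/exch conventions as the Quick class above):

public class Quick3way
{
   // Partition a[lo..hi] into three parts: keys less than, equal to,
   // and greater than v = a[lo], then recur on the "less" and "greater"
   // parts only. Equal keys are never touched again, so performance
   // improves when duplicate keys abound.
   private static void sort(Comparable[] a, int lo, int hi)
   {
      if (hi <= lo) return;
      int lt = lo, i = lo+1, gt = hi;
      Comparable v = a[lo];
      while (i <= gt)
      {
         int cmp = a[i].compareTo(v);
         if      (cmp < 0) exch(a, lt++, i++);
         else if (cmp > 0) exch(a, i, gt--);
         else              i++;
      }  // now a[lo..lt-1] < v = a[lt..gt] < a[gt+1..hi]
      sort(a, lo, lt-1);
      sort(a, gt+1, hi);
   }

   public static void sort(Comparable[] a)
   {  sort(a, 0, a.length - 1);  }

   private static void exch(Comparable[] a, int i, int j)
   {  Comparable t = a[i]; a[i] = a[j]; a[j] = t;  }
}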


Selected Exercises

1.14

Follow through the steps above to solve the recurrence $$A_N=1+{2 \over N} \sum_{1\le j\le N} A_{j-1} {\quad\rm for\quad} N>0$$ with $A_0=0$. This is the number of times quicksort is called with hi≥lo.


1.15

Show that the average number of exchanges used during the first partitioning stage (before the pointers cross) is $(N-2)/6$. (Thus, by linearity of the recurrences, the average number of exchanges used by quicksort is ${1\over6}C_N-{1\over2}A_N$.)


1.16

How many subarrays of size $k>0$ are encountered, on the average, when sorting a random array of size $N$ with quicksort?


1.17

If we change the first line in the quicksort implementation above to call insertion sort when hi-lo <= M then the total number of comparisons to sort $N$ elements is described by the recurrence $$ C_N=\begin{cases}N+1+\displaystyle{1\over N} \sum_{1\le j\le N} (C_{j-1}+C_{N-j})&N>M;\\ {1\over4}N(N-1)&N\le M\\ \end{cases}$$ Solve this recurrence.


1.18

Ignoring small terms (those significantly less than $N$) in the answer to the previous exercise, find a function $f(M)$ so that the number of comparisons is approximately $$2N\ln N+f(M)N.$$ Plot the function $f(M)$, and find the value of $M$ that minimizes the function.


Review Questions

Q1.1

Given the recurrence $F_N=t_N+{2 \over N} \sum_{1\le j\le N} F_{j-1} {\ \rm for\ } N>0$ with $F_0=0$, give the order of growth of $F_N$ (constant, linear, linearithmic, quadratic, or cubic) for each of the following choices of the "toll function" $t_N$: (a) 0 (b) 1 (c) $N$ (d) $2N+1$ (e) $N^2$


Q1.2

In a particular (fictitious) sorting application with cloud computing, the cost of sorting files of size less than 1 million is negligible. Otherwise, the cost of comparisons is such that the budget only can cover $10^{12}$ comparisons. Of the following, which is the largest randomly-ordered file of distinct keys that you can expect to be able to sort within budget, using the standard Quicksort algorithm with cutoff to a free sort for files of size less than 1 million: $10^9$, $10^{10}$, $10^{11}$, $10^{12}$, $10^{13}$, or $10^{14}$?


Q1.3

Let $B_{Nk}$ be the average number of subfiles of size $k$ or less encountered when quicksort is used to sort a randomly ordered file of $N$ distinct elements, with $B_{Nk}=N$ for $0\le N\le k$. What is the value of $B_{Nk}$ for $N>k$?