1. Analysis of Algorithms
This chapter considers the general motivations for algorithmic
analysis and relationships among various approaches to studying
performance characteristics of algorithms.
1.1 Why Analyze an Algorithm?
The most straightforward reason for analyzing an algorithm is to discover its characteristics in order to evaluate its suitability for various applications or compare it with other algorithms for the same application. Moreover, the analysis of an algorithm can help us understand it better, and can suggest informed improvements. Algorithms tend to become shorter, simpler, and more elegant during the analysis process.
1.2 Computational Complexity.
The branch of theoretical computer science where the goal is to classify algorithms according to their efficiency and computational problems according to their inherent difficulty is known as computational complexity. Paradoxically, such classifications are typically not useful for predicting performance or for comparing algorithms in practical applications because they focus on order-of-growth worst-case performance. In this book, we focus on analyses that can be used to predict performance and compare algorithms.
1.3 Analysis of Algorithms.
A complete analysis of the running time of an algorithm involves the following steps (a small sketch illustrating several of these steps on a simple sort follows the list):
- Implement the algorithm completely.
- Determine the time required for each basic operation.
- Identify unknown quantities that can be used to describe the frequency of execution of the basic operations.
- Develop a realistic model for the input to the program.
- Analyze the unknown quantities, assuming the modelled input.
- Calculate the total running time by multiplying the time by the frequency for each operation, then adding all the products.
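As a small illustration of these steps (not from the text; insertion sort, the counter, and the timing code are illustrative choices), the following sketch implements a simple sort, counts the frequency of its basic operation (compares), times a run on a random input, and divides to estimate the time per basic operation:

import java.util.Random;

public class StepCountSketch
{
   private static long compares = 0;   // frequency of the basic operation

   // Insertion sort, instrumented to count compares (the basic operation).
   private static void insertionSort(double[] a)
   {
      for (int i = 1; i < a.length; i++)
         for (int j = i; j > 0; j--)
         {
            compares++;
            if (a[j] < a[j-1]) { double t = a[j]; a[j] = a[j-1]; a[j-1] = t; }
            else break;
         }
   }

   public static void main(String[] args)
   {
      int N = Integer.parseInt(args[0]);
      double[] a = new double[N];
      Random rnd = new Random();
      for (int i = 0; i < N; i++) a[i] = rnd.nextDouble();   // input model: random values

      long start = System.nanoTime();
      insertionSort(a);
      long elapsed = System.nanoTime() - start;

      // total running time ~ (time per basic operation) x (frequency of the operation)
      System.out.printf("compares: %d, total time: %d ns, ns per compare: %.2f%n",
                        compares, elapsed, (double) elapsed / compares);
   }
}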
1.4 Average-Case Analysis.
Elementary probability theory gives a number of different ways to compute the average value of a quantity. While they are quite closely related, it will be convenient for us to explicitly identify two different approaches to compute the mean.
- Distributional. Let $\Pi_N$ be the number of possible inputs of size $N$ and $\Pi_{Nk}$ be the number of inputs of size $N$ that cause the algorithm to have cost $k$, so that $\Pi_N=\sum_k\Pi_{Nk}$. Then the probability that the cost is $k$ is $\Pi_{Nk}/\Pi_N$ and the expected cost is $${1\over \Pi_N}\sum_k k\Pi_{Nk}.$$ The analysis depends on "counting." How many inputs are there of size $N$ and how many inputs of size $N$ cause the algorithm to have cost $k$? These are the steps to compute the probability that the cost is $k$, so this approach is perhaps the most direct from elementary probability theory.
- Cumulative. Let $\Sigma_N$ be the total (or cumulated) cost of the algorithm on all inputs of size $N$. (That is, $\Sigma_N=\sum_kk\Pi_{Nk}$, but the point is that it is not necessary to compute $\Sigma_N$ in that way.) Then the average cost is simply $\Sigma_N/\Pi_N$. The analysis depends on a less specific counting problem: what is the total cost of the algorithm, on all inputs? We will be using general tools that make this approach very attractive.
The distributional approach gives complete information, which can be used directly to compute the standard deviation and other moments. Indirect (often simpler) methods are also available for computing moments when using the other approach, as we will see. In this book, we consider both approaches, though our tendency will be towards the cumulative method, which ultimately allows us to consider the analysis of algorithms in terms of combinatorial properties of basic data structures.
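As a concrete (and purely illustrative) example of the two approaches, the following sketch enumerates every permutation of $\{0,\ldots,N-1\}$ for a small $N$, takes the cost of an input to be its number of inversions, tallies the counts $\Pi_{Nk}$, and computes the mean both ways:

public class AverageCostSketch
{
   // Illustrative cost: the number of inversions in the permutation
   // (pairs i < j with a[i] > a[j]).
   private static int cost(int[] a)
   {
      int c = 0;
      for (int i = 0; i < a.length; i++)
         for (int j = i+1; j < a.length; j++)
            if (a[i] > a[j]) c++;
      return c;
   }

   // Enumerate all permutations of a[k..N-1], accumulating counts[k] = Pi_{Nk}.
   private static void enumerate(int[] a, int k, long[] counts)
   {
      if (k == a.length) { counts[cost(a)]++; return; }
      for (int i = k; i < a.length; i++)
      {
         int t = a[k]; a[k] = a[i]; a[i] = t;
         enumerate(a, k+1, counts);
         t = a[k]; a[k] = a[i]; a[i] = t;
      }
   }

   public static void main(String[] args)
   {
      int N = Integer.parseInt(args[0]);              // keep N small (say, N <= 8)
      int[] a = new int[N];
      for (int i = 0; i < N; i++) a[i] = i;
      long[] counts = new long[N*(N-1)/2 + 1];        // Pi_{Nk} for each possible cost k
      enumerate(a, 0, counts);

      long piN = 0, sigmaN = 0;
      for (int k = 0; k < counts.length; k++)
      {  piN += counts[k]; sigmaN += (long) k * counts[k];  }

      // Distributional: sum over k of k * Pr{cost = k}. Cumulative: Sigma_N / Pi_N.
      double distributional = 0;
      for (int k = 0; k < counts.length; k++)
         distributional += k * ((double) counts[k] / piN);
      double cumulative = (double) sigmaN / piN;

      System.out.printf("inputs: %d, distributional mean: %.4f, cumulative mean: %.4f%n",
                        piN, distributional, cumulative);
   }
}

Both computations give the same value, of course; for inversions the common value is $N(N-1)/4$ (for example, $3$ when $N=4$).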
1.5 Example: Analysis of quicksort.
The classical quicksort algorithm was invented by C.A.R. Hoare in 1962:
public class Quick
{
   // Compare and exchange helpers used throughout the book's sorting code;
   // included here so that the class stands alone.
   private static boolean less(Comparable v, Comparable w)
   {  return v.compareTo(w) < 0;  }

   private static void exch(Comparable[] a, int i, int j)
   {  Comparable t = a[i]; a[i] = a[j]; a[j] = t;  }

   // Partition a[lo..hi] around a[lo]; return the final position of the
   // partitioning element.
   private static int partition(Comparable[] a, int lo, int hi)
   {
      int i = lo, j = hi+1;
      while (true)
      {
         while (less(a[++i], a[lo])) if (i == hi) break;   // scan from the left
         while (less(a[lo], a[--j])) if (j == lo) break;   // scan from the right
         if (i >= j) break;                                // pointers cross
         exch(a, i, j);
      }
      exch(a, lo, j);
      return j;
   }

   private static void sort(Comparable[] a, int lo, int hi)
   {
      if (hi <= lo) return;
      int j = partition(a, lo, hi);
      sort(a, lo, j-1);    // sort the left subarray
      sort(a, j+1, hi);    // sort the right subarray
   }

   public static void sort(Comparable[] a)
   {  sort(a, 0, a.length - 1);  }
}
To analyze this algorithm, we start by defining a cost model (running time) and an input model (randomly ordered distinct elements). To separate the analysis from the implementation, we define $C_N$ to be the number of compares to sort $N$ elements and analyze $C_N$ (hypothesizing that the running time for any implementation will be $\sim aC_N$ for some implementation-dependent constant $a$). Note the following properties of the algorithm:
- $N+1$ compares are used for partitioning.
- The probability that the partitioning element is the $(k+1)$st smallest is $1/N$, for each $k$ between $0$ and $N-1$.
- In that case, the sizes of the two subarrays to be sorted are $k$ and $N-k-1$.
- The two subarrays are randomly ordered after partitioning.
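Taken together, these properties give a recurrence for $C_N$. The following is a brief sketch of the standard derivation (the techniques involved are developed in detail in later chapters): $$C_N=N+1+{1\over N}\sum_{1\le j\le N}\bigl(C_{j-1}+C_{N-j}\bigr)=N+1+{2\over N}\sum_{1\le j\le N}C_{j-1}{\quad\rm for\quad}N>0,\ {\rm with\ }C_0=0.$$ Multiplying both sides by $N$ and subtracting the same equation for $N-1$ eliminates the sum, leaving $NC_N=(N+1)C_{N-1}+2N$. Dividing by $N(N+1)$ and telescoping gives $${C_N\over N+1}={C_{N-1}\over N}+{2\over N+1}=2(H_{N+1}-1),\qquad{\rm so}\qquad C_N=2(N+1)(H_{N+1}-1),$$ where $H_N=1+{1\over2}+\cdots+{1\over N}$ is the $N$th Harmonic number. The telescoped form $C_N=(N+1)C_{N-1}/N+2$ is the recurrence used by the validation program in the next section.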
1.6 Asymptotic Approximations.
The Harmonic numbers can be approximated by an integral (see Chapter 3), $$H_N \sim \ln N,$$ leading to the simple asymptotic approximation $$C_N \sim 2N\ln N.$$ It is always a good idea to validate our math with a program. The code below produces the output that follows it.
public class QuickCheck
{
   public static void main(String[] args)
   {
      int maxN = Integer.parseInt(args[0]);
      double[] c = new double[maxN+1];
      c[0] = 0;
      // Exact values of C_N, computed from the recurrence C_N = (N+1)C_{N-1}/N + 2.
      for (int N = 1; N <= maxN; N++)
         c[N] = (N+1)*c[N-1]/N + 2;
      // Compare against the asymptotic approximation for N = 10, 100, 1000, ...
      for (int N = 10; N <= maxN; N *= 10)
      {
         double approx = 2*N*Math.log(N) - 2*N;
         StdOut.printf("%10d %15.2f %15.2f\n", N, c[N], approx);   // StdOut is the booksite's standard output library
      }
   }
}
The discrepancy in the table is explained by our dropping the $2N$ term (and our not using a more accurate approximation to the integral).
% java QuickCheck 1000000
        10           44.44           26.05
       100          847.85          721.03
      1000        12985.91        11815.51
     10000       175771.70       164206.81
    100000      2218053.41      2102585.09
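The size of the discrepancy can be made precise with a sketch that uses the more accurate expansion of the Harmonic numbers in terms of Euler's constant $\gamma$: $$H_N=\ln N+\gamma+O\Bigl({1\over N}\Bigr),\qquad\gamma\approx.57722,$$ so that $$C_N=2(N+1)(H_{N+1}-1)=2N\ln N-(2-2\gamma)N+O(\log N)\approx2N\ln N-.84557N.$$ The difference between the two columns in the table is therefore about $2\gamma N$, the term dropped when using $H_N\sim\ln N$.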
1.7 Distributions.
It is possible to use similar methods to find the standard deviation and other moments. The standard deviation of the number of compares used by quicksort is $\sqrt{7 - 2\pi^2/3}N \approx .6482776 N$, which implies that the number of compares is not likely to be far from the mean for large $N$ (the standard deviation is linear in $N$, while the mean grows as $2N\ln N$). Does the number of compares obey a normal distribution? No. Characterizing this distribution is a difficult research challenge.
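One way to develop intuition about this distribution is simulation. The following sketch (illustrative, not from the text; the class and variable names are ad hoc) sorts many random permutations with an instrumented version of the quicksort above, counting element compares (which match the analysis up to small per-stage terms), and prints the sample mean and standard deviation for comparison with $2N\ln N$ and $.6482776N$:

import java.util.Random;

public class QuickCompares
{
   private static long compares;   // element compares in the current sort

   private static boolean less(int v, int w)
   {  compares++; return v < w;  }

   private static void exch(int[] a, int i, int j)
   {  int t = a[i]; a[i] = a[j]; a[j] = t;  }

   private static int partition(int[] a, int lo, int hi)
   {  // same partitioning scheme as Quick above, specialized to int keys
      int i = lo, j = hi+1;
      while (true)
      {
         while (less(a[++i], a[lo])) if (i == hi) break;
         while (less(a[lo], a[--j])) if (j == lo) break;
         if (i >= j) break;
         exch(a, i, j);
      }
      exch(a, lo, j);
      return j;
   }

   private static void sort(int[] a, int lo, int hi)
   {
      if (hi <= lo) return;
      int j = partition(a, lo, hi);
      sort(a, lo, j-1);
      sort(a, j+1, hi);
   }

   public static void main(String[] args)
   {
      int N = Integer.parseInt(args[0]);
      int trials = Integer.parseInt(args[1]);
      Random rnd = new Random();
      double sum = 0, sumsq = 0;
      for (int t = 0; t < trials; t++)
      {
         int[] a = new int[N];
         for (int i = 0; i < N; i++) a[i] = i;
         for (int i = N-1; i > 0; i--)            // random permutation (Knuth shuffle)
         {  int r = rnd.nextInt(i+1); int tmp = a[i]; a[i] = a[r]; a[r] = tmp;  }
         compares = 0;
         sort(a, 0, N-1);
         sum += compares; sumsq += (double) compares * compares;
      }
      double mean = sum / trials;
      double stddev = Math.sqrt(sumsq/trials - mean*mean);
      System.out.printf("N = %d: mean %.1f (2N ln N = %.1f), stddev %.1f (.648N = %.1f)%n",
                        N, mean, 2*N*Math.log(N), stddev, .6482776*N);
   }
}

For large $N$ and enough trials, the sample statistics should track the theoretical values above.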
1.8 Probabilistic Algorithms.
Is our assumption that the input array is randomly ordered a valid input model? Yes, because we can randomly order the array before the sort. Doing so turns quicksort into a randomized algorithm whose good performance is guaranteed by the laws of probability. It is always a good idea to validate our models and analysis by running experiments. Detailed experiments by many people on many computers have done so for quicksort over the past several decades.
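A minimal sketch of the shuffle-then-sort idea, assuming the Quick class above (with its public sort() method); the shuffle is the standard Knuth shuffle:

import java.util.Random;

public class RandomizedQuick
{
   // Randomly reorder a[], then sort it; the analysis under the random-input
   // model now applies regardless of the original order of the input.
   public static void sort(Comparable[] a)
   {
      Random rnd = new Random();
      for (int i = a.length - 1; i > 0; i--)   // Knuth (Fisher-Yates) shuffle
      {
         int r = rnd.nextInt(i + 1);
         Comparable t = a[i]; a[i] = a[r]; a[r] = t;
      }
      Quick.sort(a);                           // quicksort from the implementation above
   }
}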
In this case, a flaw in the model for some applications is that the array items need not be distinct. Faster implementations are possible for this case, using three-way partitioning.
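For reference, the following is a sketch of quicksort with three-way partitioning in the style of Dijkstra's method, which gathers all keys equal to the partitioning element into place so that they are excluded from both recursive calls:

public class Quick3way
{
   private static void sort(Comparable[] a, int lo, int hi)
   {
      if (hi <= lo) return;
      Comparable v = a[lo];
      int lt = lo, gt = hi, i = lo + 1;
      while (i <= gt)
      {  // invariant: a[lo..lt-1] < v, a[lt..i-1] == v, a[gt+1..hi] > v
         int cmp = a[i].compareTo(v);
         if      (cmp < 0) exch(a, lt++, i++);
         else if (cmp > 0) exch(a, i, gt--);
         else              i++;
      }
      sort(a, lo, lt - 1);   // keys equal to v are already in position
      sort(a, gt + 1, hi);
   }

   private static void exch(Comparable[] a, int i, int j)
   {  Comparable t = a[i]; a[i] = a[j]; a[j] = t;  }

   public static void sort(Comparable[] a)
   {  sort(a, 0, a.length - 1);  }
}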
Selected Exercises
1.14
Follow through the steps above to solve the recurrence $$A_N=1+{2 \over N} \sum_{1\le j\le N} A_{j-1} {\quad\rm for\quad} N>0$$ with $A_0=0$. This is the number of times quicksort is called with hi≥lo.
1.15
Show that the average number of exchanges used during the first partitioning stage (before the pointers cross) is $(N-2)/6$. (Thus, by linearity of the recurrences, the average number of exchanges used by quicksort is ${1\over6}C_N-{1\over2}A_N$.)
1.16
How many subarrays of size $k>0$ are encountered, on the average, when sorting a random array of size $N$ with quicksort?
1.17
If we change the first line in the quicksort implementation above to call insertion sort when hi-lo <= M then the total number of comparisons to sort $N$ elements is described by the recurrence $$ C_N=\begin{cases}N+1+\displaystyle{1\over N} \sum_{1\le j\le N} (C_{j-1}+C_{N-j})&N>M;\\ {1\over4}N(N-1)&N\le M\\ \end{cases}$$ Solve this recurrence.
1.18
Ignoring small terms (those significantly less than $N$) in the answer to the previous exercise, find a function $f(M)$ so that the number of comparisons is approximately $$2N\ln N+f(M)N.$$ Plot the function $f(M)$, and find the value of $M$ that minimizes the function.
Review Questions
Q1.1
Given the recurrence $F_N=t_N+{2 \over N} \sum_{1\le j\le N} F_{j-1} {\ \rm for\ } N>0$ with $F_0=0$, give the order of growth of $F_N$ (constant, linear, linearithmic, quadratic, or cubic) for each of the following choices of the "toll function" $t_N$: (a) 0 (b) 1 (c) $N$ (d) $2N+1$ (e) $N^2$
Q1.2
In a particular (fictitious) sorting application with cloud computing, the cost of sorting files of size less than 1 million is negligible. Otherwise, the cost of comparisons is such that the budget can cover only $10^{12}$ comparisons. Of the following, which is the largest randomly-ordered file of distinct keys that you can expect to be able to sort within budget, using the standard quicksort algorithm with a cutoff to a free sort for files of size less than 1 million: $10^9$, $10^{10}$, $10^{11}$, $10^{12}$, $10^{13}$, or $10^{14}$?