CS161, Lecture 4: Median, Selection, and the Substitution Method

Scribes: Albert Chen and Juliana Cook (2015), Sam Kim (2016), Gregory Valiant (2017)
Date: January 23, 2017

1 Introduction
Last lecture, we covered the master theorem, which can be used for recurrences of a certain form. Recall that if we have a recurrence T(n) = a · T(n/b) + O(n^d) where a ≥ 1, b > 1, then

T(n) = O(n^d log n)      if a = b^d
T(n) = O(n^d)            if a < b^d
T(n) = O(n^{log_b a})    if a > b^d

Many algorithms that result from the divide-and-conquer paradigm yield recurrence relations for their runtimes that have the above form—namely, algorithms that divide the problem into equal-sized sub-pieces at each recursion.
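For example, MergeSort satisfies T(n) = 2 · T(n/2) + O(n), so a = 2, b = 2, d = 1; since a = b^d, the theorem gives T(n) = O(n log n).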
Today, we will introduce a problem where the master theorem cannot be applied: the problem of finding
the kth smallest element in an unsorted array. First, we show it can be done in O(n log n) time via sorting
and that any correct algorithm must run in Ω(n) time. However, it is not obvious that a linear-time selection
algorithm exists. We present a linear-time selection algorithm, with an intuition for why it has the desired
properties to achieve O(n) running time. The two high-level goals of this lecture are 1) to cover a really cool and surprising algorithm, and 2) to illustrate that some divide-and-conquer algorithms yield recurrence relations that cannot be analyzed via the “Master Method/Theorem”, yet one can (often) still successfully analyze them.

2 Selection
The selection problem is to find the kth smallest number in an array A.

Input: array A of n numbers, and an integer k ∈ {1, · · · , n}.


Output: the kth smallest number in A.

One approach is to sort the numbers in ascending order, and then return the kth number in the sorted list. This takes O(n log n) time total: O(n log n) for the sort (e.g., by MergeSort) and O(1) to return the kth number.
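As a quick illustration, this sorting-based approach is a one-liner in Python (a minimal sketch; the name select_by_sorting is ours, not from the lecture):

def select_by_sorting(A, k):
    # Return the kth smallest element of A (1-indexed): O(n log n) for the
    # sort, O(1) for the lookup.
    return sorted(A)[k - 1]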

2.1 Minimum Element


As always, we ask if we can do better (i.e. faster in big-O terms). In the special case where k = 1, selection
is the problem of finding the minimum element. We can do this in O(n) time by scanning through the array
and keeping track of the minimum element so far. If the current element is smaller than the minimum so
far, we update the minimum.
In fact, this is the best running time we could hope for.
Definition 2.1. A deterministic algorithm is one which, given a fixed input, always performs the same
operations (as opposed to an algorithm which uses randomness).
Claim 1. Any deterministic algorithm for finding the minimum has runtime Ω(n).

Algorithm 1: SelectMin(A)
m ← ∞;
n ← length(A);
for i = 1 to n do
    if A(i) < m then
        m ← A(i);
return m;
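In Python, the same linear scan might look like the following (a direct translation of Algorithm 1; the name select_min is ours):

import math

def select_min(A):
    # One pass over A, tracking the smallest element seen so far: O(n) time.
    m = math.inf
    for x in A:
        if x < m:
            m = x
    return m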

Proof of Claim 1. Intuitively, the claim holds because any algorithm for the minimum must look at all the
elements, each of which could be the minimum. Suppose a correct deterministic algorithm does not look at
A(i) for some i. Then the output cannot depend on A(i), so the algorithm returns the same value whether
A(i) is the minimum element or the maximum element. Therefore the algorithm is not always correct, which
is a contradiction. So there is no sublinear deterministic algorithm for finding the minimum. 
So for k = 1, we have an algorithm which achieves the best running time possible. By similar reasoning,
this lower bound of Ω(n) applies to the general selection problem. So ideally we would like to have a
linear-time selection algorithm in the general case.

3 Linear-Time Selection
In fact, a linear-time selection algorithm does exist. Before showing the linear time selection algorithm, it’s
helpful to build some intuition on how to approach the problem. The high-level idea will be to try to do a
Binary Search over an unsorted input. At each step, we hope to divide the input into two parts, the subset
of smaller elements of A, and the subset of larger elements of A. We will then determine whether the kth
smallest element lies in the first part (with the “smaller” elements) or the part with larger elements, and
recurse on exactly one of those two parts.
How do we decide how to partition the array into these two pieces? Suppose we have a black-box algorithm ChoosePivot that chooses some element in the array A, and we use this pivot to define the two sets: any A[i] less than the pivot is in the set of “smaller” values, and any A[i] greater than the pivot is in the other part. We will figure out precisely how to specify this subroutine ChoosePivot a bit later, after specifying the high-level algorithm structure. For clarity we’ll assume all elements are distinct from now on, but the idea generalizes easily. Let n be the size of the array and assume we are trying to find the kth element.

Algorithm 2: Select(A, n, k)
if n == 1 then
    return A[1];
p ← ChoosePivot(A, n);
A< ← {A(i) | A(i) < p};
A> ← {A(i) | A(i) > p};
if |A<| = k − 1 then
    return p;
else if |A<| > k − 1 then
    return Select(A<, |A<|, k);
else if |A<| < k − 1 then
    return Select(A>, |A>|, k − |A<| − 1);

At each iteration, we use the element p to partition the array into two parts: all elements smaller than
the pivot and all elements larger than the pivot, which we denote A< and A> , respectively.
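For concreteness, here is a Python sketch of Select, with the pivot rule passed in as a black box (the names select and choose_pivot are ours; as in the lecture, we assume all elements are distinct):

def select(A, k, choose_pivot):
    # Return the kth smallest element of A (1-indexed).
    if len(A) == 1:
        return A[0]
    p = choose_pivot(A)
    smaller = [x for x in A if x < p]   # A_<
    larger = [x for x in A if x > p]    # A_>
    if len(smaller) == k - 1:
        return p                                  # the pivot is the answer
    elif len(smaller) > k - 1:
        return select(smaller, k, choose_pivot)   # answer lies in A_<
    else:
        # Discard A_< and the pivot, adjusting the rank accordingly.
        return select(larger, k - len(smaller) - 1, choose_pivot)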

Depending on the sizes of the resulting sub-arrays, the runtime can differ. For example, if one of these sub-arrays has size n − 1, then each iteration decreases the size of the problem by only 1, resulting in a total running time of O(n^2). If the array is split into two equal parts, then the size of the problem reduces by half at each iteration, resulting in a linear-time solution. (We assume ChoosePivot runs in O(n).)
Claim 2. If the pivot p is chosen to be the minimum or maximum element, then Select runs in Θ(n^2) time.
Proof. At each iteration, the number of elements decreases by 1. Since running ChoosePivot and creating A< and A> takes linear time, the recurrence for the runtime is T(n) = T(n − 1) + Θ(n). Expanding this,

T(n) ≤ c1·n + c1·(n − 1) + c1·(n − 2) + · · · + c1 = c1·n(n + 1)/2

and

T(n) ≥ c2·n + c2·(n − 1) + c2·(n − 2) + · · · + c2 = c2·n(n + 1)/2.

We conclude that T(n) = Θ(n^2).
Claim 3. If the pivot p is chosen to be the median element, then Select runs in O(n) time.
Proof. Intuitively, the running time is linear since we remove half of the elements from consideration each
iteration. Formally, each recursive call is made on inputs of half the size, namely, T (n) ≤ T (n/2) + cn.
Expanding this, the runtime is T (n) ≤ cn + cn/2 + cn/4 + ... + c ≤ 2cn, which is O(n). 
So how do we design a ChoosePivot subroutine that chooses a pivot in linear time? In the following, we describe three ideas.

3.1 Idea #1: Choose a random pivot


As we saw earlier, depending on the pivot chosen, the worst-case runtime can be O(n^2) if we are unlucky in the choice of the pivot at every iteration. As you might expect, it is extremely unlikely to be this unlucky, and one can prove that the expected runtime is O(n) provided the pivot is chosen uniformly at random from the set of elements of A. In practice, this randomized algorithm is what is implemented, and the hidden constant in the O(n) runtime is very small. We cover the rather clever analysis of this randomized algorithm in CS265 (“Randomized Algorithms and Probabilistic Analysis”).
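In terms of the select sketch above, the randomized rule is a one-liner (our illustration, not code from the lecture):

import random

def random_pivot(A):
    # A uniformly random pivot gives expected O(n) runtime for select.
    return random.choice(A)

Calling select(A, k, random_pivot) is essentially the classic QuickSelect algorithm.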

3.2 Idea #2: Choose a pivot that creates the most “balanced” split
Consider a ChoosePivot that returns the pivot creating the most “balanced” split, which would be the median of the array. However, this is exactly the selection problem we are trying to solve, with k = n/2! As long as we do not know how to find the median in linear time, we cannot use this procedure as ChoosePivot.

3.3 Idea #3: Find a pivot “close enough” to the median


Given a linear-time median algorithm, we can solve the selection problem in linear time (and vice versa).
Although ideally we would want to find the median, notice that as far as correctness goes, there was nothing
special about partitioning around the median. We could use this same idea of partitioning and recursing on
a smaller problem even if we partition around an arbitrary element. To get a good runtime, however, we
need to guarantee that the subproblems get smaller quickly. In 1973, Blum, Floyd, Pratt, Rivest, and Tarjan came up with the Median of Medians algorithm. It is similar to the previous algorithm, but rather than partitioning around the exact median, it uses a surrogate: the “median of medians”. We update ChoosePivot accordingly.
What is this algorithm doing? First it divides A into segments of size 5. Within each group, it finds the median by first sorting the elements with MergeSort. Recall that MergeSort sorts in O(n log n) time. However, since each group has a constant number of elements, it takes constant time to sort.

Algorithm 3: ChoosePivot(A, n)
Split A into g = ⌈n/5⌉ groups p1, . . . , pg;
for i = 1 to g do
    pi ← MergeSort(pi);
C ← {median of pi | i = 1, . . . , g};
p ← Select(C, g, ⌈g/2⌉);
return p;

Then it makes a recursive call to Select to find the median of C, the median of medians. Intuitively, by partitioning around
this value, we are able to find something that is close to the true median for partitioning, yet is ‘easier’ to
compute, because it is the median of g = ⌈n/5⌉ elements rather than n. The last part is as before: once we
have our pivot element p, we split the array and recurse on the proper subproblem, or halt if we found our
answer.
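In the same Python sketch, the median-of-medians pivot rule might look as follows (our translation of Algorithm 3; it reuses the select function from earlier, with Python’s sorted standing in for MergeSort on each constant-size group):

def median_of_medians_pivot(A):
    # Split A into g = ceil(n/5) groups and take each group's median;
    # sorting a group of at most 5 elements takes constant time.
    groups = [A[i:i + 5] for i in range(0, len(A), 5)]
    medians = [sorted(grp)[(len(grp) - 1) // 2] for grp in groups]
    # Recursively select the median of the g medians.
    return select(medians, (len(medians) + 1) // 2, median_of_medians_pivot)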
We have devised a slightly complicated method to determine which element to partition around, but the
algorithm remains correct for the same reasons as before. So what is its running time? As before, we’re going
to show this by examining the size of the recursive subproblems. As it turns out, by taking the median of
medians approach, we have a guarantee on how much smaller the problem gets each iteration. The guarantee
is good enough to achieve O(n) runtime.

3.3.1 Running Time


Lemma 3.1. |A<| ≤ 7n/10 + 5 and |A>| ≤ 7n/10 + 5.
Proof of Lemma 3.1. p is the median of p1, · · · , pg. Because p is the median of g = ⌈n/5⌉ elements, the medians of ⌈g/2⌉ − 1 groups pi are smaller than p. If p is larger than a group median, it is larger than at least three elements in that group (the median and the two smaller numbers). This applies to all groups except the remainder group, which might have fewer than 5 elements. Accounting for the remainder group, p is greater than at least 3 · (⌈g/2⌉ − 2) elements of A. By symmetry, p is less than at least the same number of elements.
Now,

|A>| = # of elements greater than p
     ≤ (n − 1) − 3 · (⌈g/2⌉ − 2)
     = n + 5 − 3 · ⌈g/2⌉
     ≤ n − 3n/10 + 5
     = 7n/10 + 5.

By symmetry, |A<| ≤ 7n/10 + 5 as well.
Intuitively, about 60% of the elements in half of the groups are less than the pivot, which is about 30% of the total number of elements n. Therefore, at most 70% of the elements are greater than the pivot; hence |A>| ≈ 7n/10. We can make the same argument for |A<|.
The recursive call used to find the median of medians has input of size ⌈n/5⌉ ≤ n/5 + 1. The other work in the algorithm takes linear time: constant time on each of the ⌈n/5⌉ groups for MergeSort (linear time total for that part), and O(n) time scanning A to create A< and A>.
Thus, we can write the full recurrence for the runtime:

T(n) ≤ c1·n + T(n/5 + 1) + T(7n/10 + 5)   if n > 5
T(n) ≤ c2                                 if n ≤ 5

How do we prove that T (n) = O(n)? The master theorem does not apply here. Instead, we will prove
this using the substitution method.

4 The Substitution Method
Recursion trees can get quite messy when attempting to solve complex recurrences. With the substitution method, we can guess what the runtime is, plug the guess into the recurrence, and see if it works out.
Given a recurrence T(n) ≤ c·f(n) + Σ_{i=1}^{k} T(n_i), we can guess that the solution to the recurrence is

T(n) ≤ d · g(n0)   if n = n0
T(n) ≤ d · g(n)    if n > n0

for some constants d > 0 and n0 ≥ 1 and a function g(n). We are essentially guessing that T(n) = O(g(n)). For the base case, we must show that we can pick some d such that T(n0) ≤ d · g(n0). For example, this can follow from our standard assumption that T(1) = 1.
Next we assume that our guess is correct for everything smaller than n, meaning T(n′) ≤ d · g(n′) for all n′ < n. Using the inductive hypothesis, we prove the guess for n. We must pick some d such that

c·f(n) + Σ_{i=1}^{k} d · g(n_i) ≤ d · g(n),   whenever n ≥ n0.

Typically the way this works is that you first try to prove the inductive step starting from the inductive
hypothesis, and then from this obtain a condition that d needs to obey. Using this condition you try to
figure out the base case, i.e., what n0 should be.

4.1 Solving the Recurrence of Select using the Substitution Method


For simplicity, we consider the recurrence T (n) ≤ T (n/5) + T (7n/10) + cn instead of the exact recurrence
of Select.
To prove that T (n) = O(n), we guess:

T(n) ≤ d · n0   if n = n0
T(n) ≤ d · n    if n > n0

For the base case, we pick n0 = 1 and use the standard assumption that T (1) = 1 ≤ d. For the inductive
hypothesis, we assume that our guess is correct for any n < k, and we prove our guess for k. That is, consider
d such that for all n0 ≤ n < k, T (n) ≤ dn.
To prove the claim for n = k, we need the following chain of inequalities to hold:

T(k) ≤ T(k/5) + T(7k/10) + ck ≤ dk/5 + 7dk/10 + ck ≤ dk.

The last inequality holds if and only if

(9/10)d + c ≤ d  ⟺  c ≤ d/10  ⟺  d ≥ 10c.
Therefore, we can choose d = max(1, 10c), which is a constant. This completes the induction, and by the definition of big-O we conclude that T(n) = O(n).
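As a numerical sanity check on this bound, one can evaluate the recurrence directly (our illustration; we take c = 1 and round subproblem sizes down):

from functools import lru_cache

@lru_cache(maxsize=None)
def T(n):
    # T(n) = T(n/5) + T(7n/10) + n, with a constant base case.
    if n <= 5:
        return 1
    return T(n // 5) + T(7 * n // 10) + n

for n in [10**3, 10**4, 10**5, 10**6]:
    print(n, T(n) / n)  # the ratio stays bounded, consistent with d = 10c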

4.2 Issues when using the Substitution Method


Now we will try out an example where our guess is incorrect. Consider the recurrence T(n) = 2T(n/2) + n (similar to MergeSort). We will guess that the runtime is linear:

T(n) ≤ d · n0   if n = n0
T(n) ≤ d · n    if n > n0

We try the inductive step: we try to pick some d such that for all n ≥ n0,

n + Σ_{i=1}^{k} d · g(n_i) ≤ d · g(n).

Here both subproblems have size n/2 and g(n) = n, so we need

n + 2 · d · (n/2) ≤ dn
n + dn ≤ dn
n ≤ 0.

However, the last line can never be true for n ≥ n0 ≥ 1, and there is no choice of d that works! Thus our guess was incorrect.
This time the guess was incorrect since MergeSort takes superlinear time. Sometimes, however, the
guess can be asymptotically correct but the induction might not work out. Consider for instance T (n) ≤
2T (n/2) + 1.
We know that the runtime is O(n) so let’s try to prove it with the substitution method. Let’s guess that
T (n) ≤ cn for all n ≥ n0 .
First we do the induction step: We assume that T (n/2) ≤ cn/2 and consider T (n). We want that
2 · cn/2 + 1 ≤ cn, that is, cn + 1 ≤ cn. However, this is impossible.
This doesn’t mean that T(n) is not O(n); in this case we simply chose the wrong linear function. We could guess instead that T(n) ≤ cn − 1. Now for the induction we get 2 · (cn/2 − 1) + 1 = cn − 1, which holds for all c. For the base case, T(1) = 1 ≤ c · 1 − 1 holds as long as c ≥ 2.
