
3.3 Complexity of Algorithms: Exercises


Proof. Suppose that f₁(x) is O(g₁(x)) and that f₂(x) is O(g₂(x)). Then there exist constants C₁, C₂, k₁, k₂ such that

|f₁(x)| ≤ C₁|g₁(x)| for x > k₁

and

|f₂(x)| ≤ C₂|g₂(x)| for x > k₂.

Let k = max(k₁, k₂). Then

|f₁(x)f₂(x)| = |f₁(x)||f₂(x)| ≤ C₁|g₁(x)|C₂|g₂(x)| = (C₁C₂)|g₁(x)g₂(x)| for x > k.

Hence f₁(x)f₂(x) is O(g₁(x)g₂(x)), with witnesses C = C₁C₂ and k.

Remark. The goal in using big-O notation to estimate a function f(x) is to choose a function g(x) that is as simple as possible and grows as slowly as possible, subject to f(x) being O(g(x)).

Example 1. Give a big-O estimate for f (n) = 3n log(n!) + (n2 + 3) log n, where n is a positive
integer.

Solution. Since 3n is O(n) and log(n!) is O(n log n), 3n log(n!) is O(n² log n). Since n² + 3
is O(n²) and log n is O(log n), (n² + 3) log n is also O(n² log n). Therefore, f(n) = 3n log(n!) + (n² + 3) log n is O(n² log n).
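
As a quick numerical sanity check (an illustration added here, not part of the text), the ratio f(n)/(n² log n) should stay bounded as n grows if the estimate is correct. The standard-library function math.lgamma(n + 1) computes log(n!) accurately enough for this purpose:

```python
import math

def f(n):
    # f(n) = 3n * log(n!) + (n^2 + 3) * log n, using lgamma(n + 1) = log(n!)
    return 3 * n * math.lgamma(n + 1) + (n ** 2 + 3) * math.log(n)

# If f(n) is O(n^2 log n), these ratios stay below some fixed constant C.
ratios = [f(n) / (n ** 2 * math.log(n)) for n in (10, 100, 1000, 10000)]
print(ratios)
```

The ratios climb toward 4, since 3n log(n!) contributes roughly 3·n² log n and (n² + 3) log n roughly n² log n; a witness such as C = 4 works for large n.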

Example 2. Give a big-O estimate for f(x) = (x + 1) log(x² + 1) + 3x².

Solution. First of all,

log(x² + 1) ≤ log(2x²) = log 2 + 2 log x ≤ 3 log x

for x > 2. This shows that log(x² + 1) is O(log x). Thus (x + 1) log(x² + 1) is O(x log x). Since 3x²
is O(x²), we have f(x) = (x + 1) log(x² + 1) + 3x² is O(max(x log x, x²)) = O(x²).

Exercises
Page 216: 1-20

3.3 Complexity of Algorithms


When does an algorithm provide a satisfactory solution to a problem? First, it must always
produce the correct answer. Second, it should be efficient. The efficiency of algorithms will be
discussed in this section.
One measure of efficiency is the time used by a computer to solve a problem using the algorithm,
when input values are of a specified size. A second measure is the amount of computer memory
required to implement the algorithm when input values are of a specified size.
Questions such as these involve the computational complexity of the algorithm. An analysis
of the time required to solve a problem of a particular size involves the time complexity of the
algorithm. An analysis of the computer memory required involves the space complexity of the
algorithm. Considerations of the time and space complexity of an algorithm are essential when
algorithms are implemented. It is obviously important to know whether an algorithm will produce
an answer in a microsecond, a minute, or a billion years. Likewise, the required memory must be
available to solve a problem, so that space complexity must be taken into account. We will restrict
our attention to time complexity.

Time Complexity
The time complexity of an algorithm can be expressed in terms of the number of operations
used by the algorithm when the input has a particular size. The operations used to measure time
complexity can be the comparison of integers, the addition of integers, the multiplication of integers,
the division of integers, or any other basic operation.
Time complexity is described in terms of the number of operations required instead of actual
computer time because of the difference in time needed for different computers to perform basic
operations. Moreover, it is quite complicated to break all operations down to the basic bit
operations that a computer uses. Furthermore, the fastest computers in existence can perform basic bit
operations (for instance, adding, multiplying, comparing, or exchanging two bits) in 10⁻¹¹ second
(10 picoseconds), but personal computers may require 10⁻⁸ second (10 nanoseconds), which is 1000
times as long, to do the same operations.
Example 1. How many additions of integers and multiplications of integers are used by the following
algorithm to multiply two n × n matrices with integer entries?

ALGORITHM (Multiply two n × n matrices)

procedure matrix multiplication (A, B : n × n matrices)

for i := 1 to n
    for j := 1 to n
        cij := 0
        for q := 1 to n
            cij := cij + aiq bqj
return C {C = [cij] is the product of A and B}

Solution. There are n² entries in the product of A and B. To find each entry requires a total of n
multiplications and n − 1 additions. Hence, a total of n³ multiplications and n²(n − 1) additions are
used.
There are more efficient algorithms for matrix multiplication than the one given above.
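
A runnable sketch of the algorithm above in Python (my rendering of the pseudocode, with counters added to verify the operation counts):

```python
def matrix_multiply(A, B):
    """Multiply two n x n integer matrices, counting the integer
    multiplications and additions performed."""
    n = len(A)
    mults = adds = 0
    C = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for q in range(n):
                C[i][j] += A[i][q] * B[q][j]
                mults += 1
                # Summing n products takes n - 1 additions: adding the
                # first product to the initial 0 is not counted.
                if q > 0:
                    adds += 1
    return C, mults, adds

C, mults, adds = matrix_multiply([[1, 2], [3, 4]], [[5, 6], [7, 8]])
print(C, mults, adds)   # n = 2: n^3 = 8 multiplications, n^2(n - 1) = 4 additions
```

For any n, the counters come out to exactly n³ multiplications and n²(n − 1) additions, matching the solution.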
Example 2. Describe the time complexity of Algorithm 1 of Section 3.1 for finding the maximum
element in a finite set of integers.
Solution. The number of comparisons will be used as the measure of the time complexity of the
algorithm, because comparisons are the basic operations used.
To find the maximum element of a set with n elements, listed in an arbitrary order, the temporary
maximum is first set equal to the initial term in the list. Then, after a comparison i ≤ n has been
done to determine that the end of the list has not yet been reached, the temporary maximum and
second term are compared, updating the temporary maximum to the value of the second term if it
is larger. This procedure is continued, using two additional comparisons for each term of the list: one
comparison i ≤ n, to determine that the end of the list has not been reached, and another comparison
max < ai, to determine whether to update the temporary maximum. Because two comparisons are used for each of the
second through the nth elements and one more comparison is used to exit the loop when i = n + 1,
exactly 2(n − 1) + 1 = 2n − 1 comparisons are used whenever this algorithm is applied. Hence, the
algorithm for finding the maximum of a set of n elements has time complexity O(n), measured in
terms of the number of comparisons used. Note that for this algorithm the number of comparisons
is independent of the particular input of n numbers.
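
The count 2n − 1 can be checked with a small Python sketch of the algorithm (my rendering, counting comparisons explicitly):

```python
def find_max(a):
    """Find the maximum of a nonempty list, counting comparisons
    exactly as in the analysis above."""
    comparisons = 0
    maximum = a[0]
    i = 1
    while True:
        comparisons += 1          # the i <= n loop test
        if i >= len(a):
            break
        comparisons += 1          # the max < a_i test
        if maximum < a[i]:
            maximum = a[i]
        i += 1
    return maximum, comparisons

m, c = find_max([3, 1, 4, 1, 5, 9, 2, 6])
print(m, c)   # n = 8 elements: maximum 9, 2n - 1 = 15 comparisons
```

The comparison count is the same for every list of length n, illustrating the note that it is independent of the particular input.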

Example 3. Describe the time complexity of the linear search algorithm (Algorithm 2 of Section
3.1).

Solution. The number of comparisons used by Algorithm 2 in Section 3.1 will be taken as the
measure of the time complexity. At each step of the loop in the algorithm, two comparisons are
performed: one comparison i ≤ n, to see whether the end of the list has been reached, and one comparison x ≠ ai, to compare
the element x with a term of the list. Finally, one more comparison i ≤ n is made outside the loop.
Consequently, if x = ai, 2i + 1 comparisons are used. The most comparisons, 2n + 2, are required
when the element is not in the list. In this case, 2n comparisons are used to determine that x is not
ai, for i = 1, 2, · · · , n, an additional comparison is used to exit the loop, and one comparison is made
outside the loop. So when x is not in the list, a total of 2n + 2 comparisons are used. Hence, a linear
search requires O(n) comparisons in the worst case, because 2n + 2 is O(n).
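
A Python sketch of linear search with the same comparison accounting (my rendering of the loop structure, not the textbook's code):

```python
def linear_search(x, a):
    """Return the 1-based location of x in list a (0 if absent),
    together with the number of comparisons used."""
    comparisons = 0
    i = 0
    while True:
        comparisons += 1            # the i <= n loop test
        if i >= len(a):
            break
        comparisons += 1            # the comparison of x with a term of the list
        if x == a[i]:
            break
        i += 1
    comparisons += 1                # the final i <= n test outside the loop
    location = i + 1 if i < len(a) else 0
    return location, comparisons

loc, c = linear_search(9, [3, 1, 4, 1, 5])
print(loc, c)   # not in the list: location 0, 2n + 2 = 12 comparisons
```

A successful search at position i uses 2i + 1 comparisons, and an unsuccessful search always uses the worst-case 2n + 2.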

Worst-Case Complexity and Average-Case Complexity


By the worst-case performance of an algorithm, we mean the largest number of operations needed
to solve the given problem using this algorithm on inputs of a specified size. Worst-case analysis tells
us how many operations an algorithm requires to guarantee that it will produce a solution.
By the average-case analysis of an algorithm, we mean the average number of operations needed
to solve the given problem over all possible inputs of a given size. Average-case time complexity
analysis is usually much more complicated than worst-case analysis.
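
As a small illustration (added here, under the assumption that x is in the list and is equally likely to be at each of the n positions), the average case for linear search follows from the worst-case analysis: a successful search at position i uses 2i + 1 comparisons, and averaging over i = 1, . . . , n gives n + 2, which is still O(n):

```python
def average_comparisons(n):
    # Average of 2i + 1 over the n equally likely positions i = 1, ..., n,
    # which simplifies to (n^2 + 2n) / n = n + 2.
    return sum(2 * i + 1 for i in range(1, n + 1)) / n

for n in (5, 10, 100):
    print(n, average_comparisons(n))   # each value equals n + 2
```
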
Exercises
Page 229: 1-9
References
[1] Kenneth H. Rosen, Discrete Mathematics and Its Applications, Seventh Edition, McGraw-Hill
(2012)

