Asymptotic Notation Example, CPSC 331, Winter 2017



 Using Asymptotic Notation

About These Examples

As discussed in class, asymptotic notation is useful for making statements about the rates of growth of different functions of the integers. It is also very useful for making statements about the “asymptotic running times” of algorithms, that is, the rate of growth of running time as the size of the input increases: it can be used to simplify the analysis of the running time of an algorithm, and it also makes it possible to give simpler expressions for these running times (or for bounds on them) than would otherwise be possible.

The following examples illustrate various things (concerning asymptotic notation) that you might be expected to do in this course.

First Example: Direct Proof Using a Definition

Sometimes you will be expected to prove something by applying a definition and then supplying a little bit of extra information that relates specifically to the problem you have been asked to solve.

Suppose, for example, that you have been asked to prove that

1 + √n ∈ O(n)

A proof of this result is as follows:

It follows by the definition of big-Oh notation that, in order to prove that 1 + √n ∈ O(n), it is necessary (and sufficient) to show that there exists a positive constant c and a nonnegative constant N such that

(1 + √n) ≤ cn     for every integer n ≥ N.

Let c = 2 and let N = 1; notice that if n ≥ N = 1 then 1 ≤ √n ≤ n. It follows that if n ≥ N then

1 + √n ≤ n + n (since 1 ≤ √n ≤ n if n ≥ N)
= 2n
= cn (since c = 2)

as required. Therefore, 1 + √n ∈ O(n).

Notice that this proof starts off by saying what needs to be done, and that it ends with a conclusion. I recommend that you include both in your proofs too. Saying what needs to be done, at the beginning, helps anyone who is reading your proof, and it can also be very useful in getting started (as you try to find the proof). Ending with a conclusion is “good style.”

A variety of other problems can be solved in the same kind of way, because they involve forms of asymptotic notation whose definitions are similar to the definition of “big-Oh” notation. In particular, it is often possible to give a proof whose structure resembles the above to prove that f ∈ Ω(g) for a given pair of functions f and g. You can often give a direct proof (working from the definition) to prove that f ∈ Θ(g) for a given pair of functions f and g as well.
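Although it is not part of a proof, it can be reassuring to check an inequality like this numerically before (or after) proving it. The following Java sketch (the class name and the range of values tested are chosen here purely for illustration) compares 1 + √n with cn, for the constants c = 2 and N = 1 used above, over the first several values of n:

// A quick numeric sanity check of the inequality 1 + sqrt(n) <= 2n for n >= 1.
// This does not replace the proof; it only checks finitely many cases.
public class BigOhCheck {
    public static void main(String[] args) {
        int c = 2;   // the constant c chosen in the proof
        int N = 1;   // the constant N chosen in the proof
        for (int n = N; n <= 20; n++) {
            double lhs = 1 + Math.sqrt(n);   // 1 + sqrt(n)
            double rhs = c * (double) n;     // c * n
            System.out.printf("n = %2d: 1 + sqrt(n) = %6.3f <= %6.3f = cn? %b%n",
                              n, lhs, rhs, lhs <= rhs);
        }
    }
}

Of course, no finite check of this kind establishes the claim for every n ≥ N; only the proof does that.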

Second Example: A Proof by Contradiction

Sometimes it will be useful to prove something by assuming that it is false and then obtaining a contradiction, as shown below.

Suppose, now, that you are asked to prove that √n is not in Ω(n).

A proof of this is as follows.

Suppose, in order to obtain a contradiction, that √n ∈ Ω(n). Then it follows by the definition of “big-Omega” that there exists a positive constant c and a nonnegative constant N such that

√n ≥ cn     for every integer n ≥ N.

In particular, the above inequality must be satisfied when

n = ⌈ max(N, (1/c)²) ⌉ + 1

because this is an integer that is greater than or equal to N.

However, if n has the above value then n > (1/c)², so that √n > (1/c). It follows that

cn = c × √n × √n
> c × (1/c) × √n (since √n > 1/c > 0 and since c and √n are positive)
= √n.

That is, √n < cn for this value of n, contradicting the fact (given above) that √n ≥ cn for every integer n ≥ N.

Since only one assumption was made, above, and a contradiction was obtained, the assumption must be false. Therefore, √n is not in Ω(n).

It is probably not very obvious how I discovered this proof! That is because the organization of the proof does not really show the steps that were taken to discover it.

I found the proof by realizing that it was necessary to find an integer n such that n ≥ N and such that √n < cn, for the constants c and N included in the definition of “big-Omega.”

Manipulating the inequality “√n < cn” by dividing both sides by √n, then dividing both sides by (the positive constant) c, one can see that the first inequality holds if and only if √n > 1/c. Since both the left hand side and the right hand side of this inequality are positive numbers, the left side is greater than the right side if and only if the square of the left side is greater than the square of the right side, that is, if and only if n > (1/c)².
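Written out as a chain of equivalences (for a positive constant c and an integer n ≥ 1), this manipulation is as follows:

√n < cn
⟺ 1 < c√n     (dividing both sides by √n, which is positive)
⟺ √n > 1/c     (dividing both sides by c, which is positive)
⟺ n > (1/c)²     (squaring both sides, which are positive).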

I then chose the value ⌈ max(N, (1/c)²) ⌉ + 1 in order to make sure that n was an integer (which is why the “ceiling” operator is used in the expression) that satisfies the above inequality and is greater than or equal to N.

After that, since I had been working (in my head) from “what I wanted to prove” toward “things that implied this,” I reversed the order of the various claims that were involved, so that the resulting argument works from “what we know,” to “other things that are implied by what we know,” eventually reaching what we needed to show. This is, generally, a kind of argument that is easier for another person to read.

Recommendation: If you are asked to prove something like this on an assignment, make sure that you allow yourself enough time to do the same thing, that is, to discover a proof and then write it out again (sometimes, several times) in order to produce a version that is as direct and easy to read as you can make it.

Arguments like the one above should be considered if you are asked to prove that a given function f is not in O(g) for a given function g. Even though such a proof does not involve a contradiction, you will likely need to do something similar, working with an arbitrary positive constant c, if you are asked to prove that f ∈ o(g) or that f ∈ ω(g) for a given pair of functions f and g (and if “limit tests” cannot be used).
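To see the choice of n in the proof “in action,” the following Java sketch (again, the class name and the sample constants are chosen only for illustration) computes n = ⌈max(N, (1/c)²)⌉ + 1 for a few values of c and N and confirms that √n < cn for each of them:

// Illustration of the choice of n in the proof that sqrt(n) is not in Omega(n).
// For each sample pair (c, N), the integer n = ceil(max(N, (1/c)^2)) + 1
// satisfies sqrt(n) < c*n, which is the inequality needed for the contradiction.
public class NotBigOmegaCheck {
    public static void main(String[] args) {
        double[] cs = {0.5, 0.1, 0.01};   // sample positive constants c
        int[] Ns = {0, 10, 100};          // sample nonnegative constants N
        for (double c : cs) {
            for (int N : Ns) {
                long n = (long) Math.ceil(Math.max(N, 1.0 / (c * c))) + 1;
                double lhs = Math.sqrt(n);
                double rhs = c * n;
                System.out.printf("c = %5.2f, N = %3d, n = %6d: sqrt(n) = %8.3f < %8.3f = cn? %b%n",
                                  c, N, n, lhs, rhs, lhs < rhs);
            }
        }
    }
}

No such check replaces the proof, but trying a few concrete constants is often a good way to convince yourself that a choice like this one actually works before writing the argument down.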

Third Example: Simplifying an Analysis of Worst-Case Running Time

The previous lecture supplement included an analysis of the worst-case running time of a program that included a nested loop. While the analysis could be completed, it was somewhat more complicated than necessary. It also made use of a formula for the sum of the first n−1 positive integers for a given positive integer n, and it would have been difficult to complete the analysis without this.

We will focus attention on the analysis of the outer loop. This is executed n−1 times, for values of a variable i ranging from 1 to n−1, where n is the length of the array that is given as input.

When we were trying to find an upper bound for the worst-case running time of the algorithm, we used techniques from class to argue that the number of steps used by the body of this loop (for a given value of i) was at most 5i+3.

In order to use this to conclude that the worst-case running time of the loop is in O(n²), notice that if 1 ≤ i ≤ n−1 (as is the case, here) and n ≥ 1, then the number of steps used by any one execution of the loop body is at most

5i+3 ≤ 5(n−1)+3 = 5n−2 ≤ 5n.

Since the loop body is executed at most n−1 times, we can conclude (using material from the lectures on this topic) that the total number of steps used by this loop is at most

(n−1)×5n + n×1 = 5n² − 4n ≤ 5n².

It should be easy, by now, for you to use this to show that the total number of steps used by this loop, in the worst case, is in O(n²).
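If you would like to see concretely how far apart the exact count and the bound are, the following Java sketch can be used. The class name is invented for illustration, and the cost assumptions are those suggested above: each execution of the loop body is charged the maximum possible 5i+3 steps, and the n×1 term is treated as the cost of the n tests of the loop condition.

// Comparison of the step count of the outer loop with the bound 5n^2, assuming
// the i-th execution of the loop body uses 5i + 3 steps and each of the n tests
// of the loop condition uses 1 step.
public class LoopBoundCheck {
    public static void main(String[] args) {
        for (int n = 1; n <= 1000; n *= 10) {
            long steps = 0;
            for (int i = 1; i <= n - 1; i++) {
                steps += 5L * i + 3;   // cost of the loop body for this value of i
            }
            steps += n;                // cost of the n tests of the loop condition
            long bound = 5L * n * n;   // the upper bound 5n^2
            System.out.printf("n = %5d: steps = %9d, 5n^2 = %9d, steps <= 5n^2? %b%n",
                              n, steps, bound, steps <= bound);
        }
    }
}

The bound is not tight, but it does not need to be: any bound of the form (constant)×n² is enough to conclude that the worst-case running time of the loop is in O(n²).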

In order to find a lower bound on the worst-case running time, we described a family of inputs that included an input Iₙ of size n, for every positive integer n, such that the loop body was executed for every value of i between 1 and n−1 and, furthermore, the number of steps used by the loop body (for a given value of i) was at least (indeed, exactly) 5i+3.

Now, we could certainly try to repeat the process that we followed when getting an upper bound — considering the smallest value that i can take to get a lower bound on the number of steps used each time the loop body is executed. Unfortunately this would give us a very poor result: Since we would have to consider the value i=1 and work from there, all that we would get is a linear lower bound (on the worst-case running time of the loop) instead of a quadratic one.

In order to get a better result it is possible to use a technique called splitting the sum that will be introduced in CPSC 413. Using this method, one can prove that the total number of steps used by the loop on input Iₙ is greater than or equal to (5/6)n² whenever n ≥ 3. The worst case running time of this loop is therefore in Ω(n²).
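For the curious, the following is a sketch of how such an argument might go (the details presented in CPSC 413 may differ). On input Iₙ the total number of steps used by the loop is at least

(5×1+3) + (5×2+3) + … + (5(n−1)+3).

Discarding the terms with i < ⌈n/2⌉, and noting that each of the remaining ⌊n/2⌋ terms is at least 5⌈n/2⌉+3 ≥ 5n/2, this sum is at least

⌊n/2⌋ × (5n/2) ≥ ((n−1)/2) × (5n/2) = (5/4)(n² − n),

and (5/4)(n² − n) ≥ (5/6)n² whenever (5/12)n² ≥ (5/4)n, that is, whenever n ≥ 3.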

Since the worst case running time of the loop is in both O(n²) and Ω(n²), it is in Θ(n²) as well.

Easy Exercise: Extend the above argument to show that the worst case running time of the entire program is in Θ(n²).

Note that no particular effort has been made to make the “best” choice of constants (c and N) in this last example: These do not really affect the result. Instead, values have been chosen to make the proofs simple.

It is not clear that these constants are very meaningful anyway! The kind of analysis that has been described here is generally not very effective in measuring the value of such “hidden multiplicative constants,” and, indeed, if one is measuring “running time” in seconds (or milliseconds, or nanoseconds) then their values might vary from machine to machine.

Additional tools that can be used to measure such things will be discussed in a later exercise.

