
Lecture Chapter 2 Part 2

Sections 2.3 – 2.4


Fundamentals of the Analysis
of Algorithm Efficiency
(cont’d)
Mathematical Analysis of Nonrecursive Algorithms
• EXAMPLE 1
• Consider the problem of finding the value of
the largest element in a list of n numbers.
• For simplicity, we assume that the list is
implemented as an array.
• The pseudocode for the standard algorithm for solving this problem follows.
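The pseudocode slide itself is not reproduced in this text. As a stand-in, here is a minimal Python sketch of the standard algorithm (the name max_element and the demo line are mine):

def max_element(A):
    # Scan the array once, keeping the largest value seen so far.
    maxval = A[0]
    for i in range(1, len(A)):      # i runs from 1 to n - 1
        if A[i] > maxval:           # basic operation: the comparison
            maxval = A[i]
    return maxval

print(max_element([3, 7, 2, 9, 4]))   # prints 9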
Analysis
• The obvious measure of an input’s size here is the number of elements in
the array, i.e., n.
• The operations that are going to be executed most often are in the
algorithm’s for loop.
• There are two operations in the loop’s body:
– the comparison A[i] > maxval and
– the assignment maxval ← A[i].
• Which of these two operations should we consider basic?
• Since the comparison is executed on each repetition of the loop and the
assignment is not, we should consider the comparison to be the basic
operation.
• Note that the number of comparisons will be the same for all arrays of size
n;
• therefore, in terms of this metric, there is no need to distinguish among the
worst, average, and best cases here.
• Let us denote by C(n) the number of times this comparison is executed.
• We want to find a formula expressing it as a function of the size n.
• The algorithm makes one comparison on each execution of the loop, which is repeated once for each value of the loop’s variable i within the bounds 1 and n − 1, inclusive.
• Therefore, we get the following sum for C(n):

C(n) = Σ_{i=1}^{n−1} 1 = 1 + 1 + … + 1 (n − 1 terms) = n − 1
General Plan for Analyzing the Time Efficiency of Nonrecursive Algorithms

1. Decide on a parameter (or parameters) indicating an input’s size.


2. Identify the algorithm’s basic operation. (As a rule, it is located in the
innermost loop.)
3. Check whether the number of times the basic operation is executed depends
only on the size of an input.
- If it also depends on some additional property,
– the worst-case, average-case, and, if necessary, best-case efficiencies have to be
investigated separately.
4. Set up a sum expressing the number of times the algorithm’s basic operation
is executed.
– Sometimes, an analysis of a nonrecursive algorithm requires setting up not a sum
but a recurrence relation for the number of times its basic operation is executed.

5. Using standard formulas and rules of sum manipulation, either find a closed form
formula for the count or
– at the very least, establish its order of growth.
Some summation examples
• Exercise (a) and (b): [the summation formulas from this slide are not reproduced in this text]
• EXAMPLE 2 Consider the element uniqueness
problem: check whether all the elements in a
given array of n elements are distinct.
• This problem can be solved by the following
straightforward algorithm.
– (Next slide)
Algorithm UniqueElements
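The pseudocode slide is not reproduced here. A minimal Python sketch of the straightforward algorithm (the name unique_elements and the demo lines are mine) might look like this:

def unique_elements(A):
    # Compare every pair of distinct positions; stop at the first repeated value.
    n = len(A)
    for i in range(n - 1):              # i from 0 to n - 2
        for j in range(i + 1, n):       # j from i + 1 to n - 1
            if A[i] == A[j]:            # basic operation: element comparison
                return False
    return True

print(unique_elements([1, 2, 3]))   # True
print(unique_elements([1, 2, 1]))   # False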
• The natural measure of the input’s size here is again
n, the number of elements in the array.
• Since the innermost loop contains a single operation
(the comparison of two elements), we should
consider it as the algorithm’s basic operation.
• Note, however, that the number of element
comparisons depends not only on n but also on
whether there are equal elements in the array and, if
there are, which array positions they occupy.
• We will limit our investigation to the worst case only.
Analysis Method 1
• When i = 0, there are n – 1 comparisons in the inner loop,
• When i = 1, there are n – 2 comparisons in the inner loop,
• When i = 2, there are n – 3 comparisons in the inner loop,
• …
• When i = m, we can see that there are (n − 1) − m comparisons in the inner loop.
• Hence, when i = n − 2, there is (n − 1) − (n − 2) = 1 comparison in the inner loop.
• This gives Cworst(n) = (n − 1) + (n − 2) + (n − 3) + … + 3 + 2 + 1
= n(n − 1)/2 (the sum of an arithmetic progression)
∈ Θ(n²)
Analysis Method 2
• Recall that the worst case input is an array for which the
number of element comparisons Cworst(n) is the largest
among all arrays of size n.
• In this case there are two kinds of worst-case inputs
– Inputs for which the algorithm does not exit the loop
prematurely
• arrays with no equal elements and
• arrays in which the last two elements are the only pair of equal
elements.
– For such input, one comparison is made for each repetition of
the innermost loop, i.e., for each value of the loop variable j
between its limits i + 1 and n − 1;
– this is repeated for each value of the outer loop, i.e., for each
value of the loop variable i between its limits 0 and n − 2.
EXAMPLE 3

• Given two n × n matrices A and B, find the time efficiency of the definition-based algorithm for computing their product C = AB.
• By definition, C is an n × n matrix whose elements are computed as the scalar (dot) products of the rows of matrix A and the columns of matrix B:

C[i, j] = A[i, 0]B[0, j] + … + A[i, k]B[k, j] + … + A[i, n − 1]B[n − 1, j]
Matrix multiplication
Algorithm MatrixMultiplication
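The pseudocode slide is not included in this text. A minimal Python sketch of the definition-based algorithm (assuming the matrices are given as lists of lists; the name matrix_multiplication is mine) is:

def matrix_multiplication(A, B):
    # Definition-based algorithm: C[i][j] is the dot product of row i of A and column j of B.
    n = len(A)
    C = [[0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            for k in range(n):                  # innermost loop: one multiplication and one addition per pass
                C[i][j] += A[i][k] * B[k][j]
    return C

print(matrix_multiplication([[1, 2], [3, 4]], [[5, 6], [7, 8]]))   # [[19, 22], [43, 50]]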
Analysis
• We measure an input’s size by matrix order n.
• There are two arithmetical operations in the innermost loop here
– multiplication and addition
• Which of these two is the algorithm’s basic operation?
• We do not have to choose between them, because on each repetition of the innermost loop each of the two is executed exactly once.
• So by counting one we automatically count the other.
• Still, following a well-established tradition, we consider multiplication as
the basic operation (see Section 2.1).
• Let us set up a sum for the total number of multiplications M(n) executed
by the algorithm.
• (Since this count depends only on the size of the input matrices, we do not
have to investigate the worst-case, average-case, and best-case efficiencies
separately.)
• Now, there is just one multiplication executed
on each repetition of the algorithm’s
innermost loop, which is governed by the
variable k ranging from the lower bound 0 to
the upper bound n − 1.
• Therefore, the number of multiplications made for every pair of specific values of the variables i and j is equal to

Σ_{k=0}^{n−1} 1 = n

• Hence the total number of multiplications M(n) is expressed by the triple sum

M(n) = Σ_{i=0}^{n−1} Σ_{j=0}^{n−1} Σ_{k=0}^{n−1} 1

• This simplifies to

M(n) = Σ_{i=0}^{n−1} Σ_{j=0}^{n−1} n = Σ_{i=0}^{n−1} n² = n³
• This example is simple enough so that we could get
this result without all the summation machinations.
• How?
• The algorithm computes n² elements of the product matrix. Each of the product’s elements is computed as the scalar (dot) product of an n-element row of the first matrix and an n-element column of the second matrix, which takes n multiplications.
• So the total number of multiplications is n · n² = n³.
• You should not have the erroneous impression that
the plan outlined (for analyzing algorithms) always
succeeds in analyzing a nonrecursive algorithm.
– An irregular change in a loop variable
– a sum too complicated to analyze, and
– the difficulties intrinsic to the average case analysis are
just some of the obstacles that can prove to be
insurmountable.
• These caveats notwithstanding, the plan does work
for many simple nonrecursive algorithms
EXAMPLE 4 The following algorithm finds the number
of binary digits in the binary representation of a
positive decimal integer.
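The algorithm’s pseudocode slide is not reproduced here. A minimal Python sketch of the standard while-loop version (the name bin_digits is mine) is:

def bin_digits(n):
    # Count the binary digits of a positive integer n.
    count = 1
    while n > 1:          # this comparison is the most frequently executed operation
        count += 1
        n = n // 2        # n is (roughly) halved on each repetition
    return count

print(bin_digits(13))   # 13 is 1101 in binary, so this prints 4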
Analysis
• First, notice that the most frequently executed
operation here is not inside the while loop but
rather the comparison n > 1 that determines
whether the loop’s body will be executed.
• Since the number of times the comparison will
be executed is larger than the number of
repetitions of the loop’s body by exactly 1, the
choice is not that important.
• A more significant feature of this example is that the loop
variable takes on only a few values between its lower and
upper limits;
• Therefore, we have to use an alternative way of computing
the number of times the loop is executed.
• Since the value of n is about halved on each repetition of the loop, the answer should be about log₂ n.
• The exact formula for the number of times the comparison n > 1 will be executed is actually ⌊log₂ n⌋ + 1, which is the number of bits in the binary representation of n.

• We could also get this answer by applying the analysis technique based on recurrence relations (more on that later).
Mathematical Analysis of Recursive Algorithms
• EXAMPLE 1
• Compute the factorial function F(n) = n! for
an arbitrary nonnegative integer n.
• Since n! = 1 · 2 · … · (n − 1) · n = (n − 1)! · n for n ≥ 1, and 0! = 1 by definition,
• we can compute F(n) = F(n − 1) · n with the following recursive algorithm (next slide).
Algorithm Factorial(n)
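The pseudocode slide is not reproduced in this text. A minimal recursive Python sketch (the name factorial and the demo line are mine) is:

def factorial(n):
    # Computes F(n) = n! recursively for a nonnegative integer n.
    if n == 0:                          # stopping condition: 0! = 1, no multiplication performed
        return 1
    return factorial(n - 1) * n         # one multiplication per recursive call

print(factorial(5))   # 120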
Analysis

• We consider n itself as an indicator of this algorithm’s input size.
• The basic operation of the algorithm is
multiplication,
– Alternatively, we could count the number of times the
comparison n = 0 is executed,
– this is the same as counting the total number of calls
made by the algorithm
• The number of multiplications executed is denoted by M(n).
• Since the function F(n) is computed according to the formula F(n) = F(n − 1) · n for n > 0, the number of multiplications M(n) needed to compute it must satisfy the equality

M(n) = M(n − 1) + 1 for n > 0

(M(n − 1) multiplications to compute F(n − 1), plus one more multiplication by n).
• This equation defines M(n) not explicitly, i.e., as a function of n, but implicitly as a function of its value at another point, namely n − 1.
• Such equations are called recurrence relations
or, simply, recurrences.
• Recurrence relations play an important role
in the analysis of algorithms
• Our goal now is to solve the recurrence
relation M(n) = M(n − 1) + 1, i.e., to find an
explicit formula for M(n) in terms of n only.
• A recurrence relation has infinitely many solutions
(sequences that satisfy the recurrence.)
• To determine a solution uniquely, we need an initial
condition that tells us the value with which the sequence
starts.
• This is obtained by inspecting the condition that makes the algorithm stop its recursive calls:
– i.e., if n = 0 return 1.
• This tells us two things.
– First, since the calls stop when n = 0, the smallest value of n for
which this algorithm is executed and hence M(n) defined is 0.
– Second, by inspecting the pseudocode’s exiting line, we can see
that when n = 0, the algorithm performs no multiplications
• The initial condition is thus M(0) = 0.
• Thus, we get the recurrence relation and initial
condition for the algorithm’s number of
multiplications M(n):

M(n) = M(n − 1) + 1 for n > 0, and M(0) = 0.


• There are several techniques available for
solving recurrence relations
• we use what can be called the method of
backward substitutions.
• M(n) = M(n − 1) + 1
– substitute M(n − 1) = M(n − 2) + 1
• M(n) = [M(n − 2) + 1]+ 1
= M(n − 2) + 2
– substitute M(n − 2) = M(n − 3) + 1
• M(n) = [M(n − 3) + 1]+ 2
= M(n − 3) + 3.
• After inspecting the first three lines, we see an
emerging pattern,
• This makes it possible to predict not only the next
line (what would it be?) but also a general formula
for the pattern:
M(n) = M(n − i) + i.
• Strictly speaking, the correctness of this formula
should be proved by mathematical induction,
• it is easier to get to the solution as follows (next
slide) and then verify its correctness.
• Since initial condition is specified for n = 0, we
have to substitute i = n in the pattern’s formula
to get the ultimate result of our backward
substitutions:
• M(n) = M(n − 1) + 1
= . . . = M(n − i) + i
=...
= M(n − n) + n
= n.
• Exercise: Explain the following statement.
• A simple iterative algorithm that accumulates the product of n consecutive integers requires the same number of multiplications, and it does so without the overhead of time and space used for maintaining the recursion’s stack. (A sketch of such an iterative algorithm follows below.)
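A minimal Python sketch of such an iterative version (not part of the original slides; the name factorial_iterative is mine) is:

def factorial_iterative(n):
    # Accumulates the product 1 * 2 * ... * n in a loop:
    # the same number of multiplications as the recursive version,
    # but no recursion stack to maintain.
    product = 1
    for i in range(1, n + 1):
        product *= i
    return product

print(factorial_iterative(5))   # 120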
General Plan for Analyzing the Time Efficiency of Recursive
Algorithms

1. Decide on a parameter (or parameters) indicating an input’s size.
2. Identify the algorithm’s basic operation.
3. Check whether the number of times the basic operation is
executed can vary on different inputs of the same size; if it
can, the worst-case, average-case, and best-case efficiencies
must be investigated separately.
4. Set up a recurrence relation, with an appropriate initial
condition, for the number of times the basic operation is
executed.
5. Solve the recurrence or, at least, ascertain the order of
growth of its solution
EXAMPLE 2: Tower of Hanoi puzzle
• In this puzzle, we (or mythical monks, if you do not like
to move disks) have n disks of different sizes that can
slide onto any of three pegs.
• Initially, all the disks are on the first peg in order of size,
the largest on the bottom and the smallest on top.
• The goal is to move all the disks to the third peg, using
the second one as an auxiliary, if necessary.
• We can move only one disk at a time, and it is
forbidden to place a larger disk on top of a smaller one.
Solution
• The problem has an elegant recursive solution, which
is illustrated in the next slide
• To move n>1 disks from peg 1 to peg 3 (with peg 2 as
auxiliary), we first move recursively n − 1 disks from
peg 1 to peg 2 (with peg 3 as auxiliary), then move
the largest disk directly from peg 1 to peg 3, and,
• finally, move recursively n − 1 disks from peg 2 to peg
3 (using peg 1 as auxiliary).
• Of course, if n = 1, we simply move the single disk
directly from the source peg to the destination peg.
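The illustration slide is not reproduced here. A minimal Python sketch of the recursive solution just described (all names and the print statements are mine) is:

def hanoi(n, source, destination, auxiliary):
    # Move n disks from the source peg to the destination peg,
    # using the auxiliary peg as a spare.
    if n == 1:
        print(f"move disk 1 from peg {source} to peg {destination}")
        return
    hanoi(n - 1, source, auxiliary, destination)                       # move the top n - 1 disks out of the way
    print(f"move disk {n} from peg {source} to peg {destination}")     # move the largest disk
    hanoi(n - 1, auxiliary, destination, source)                       # put the n - 1 disks on top of it

hanoi(3, 1, 3, 2)   # solves the 3-disk puzzle in 2^3 - 1 = 7 moves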
Analysis
• Let us apply the general plan outlined above to the Tower of Hanoi problem.
• The number of disks n is the obvious choice for the input’s
size indicator,
• moving one disk is the algorithm’s basic operation.
• The number of moves M(n) depends on n only, and we get
the following recurrence equation for it:
M(n) = M(n − 1) + 1+ M(n − 1) for n > 1.
• The obvious initial condition is M(1) = 1, so we have the
following recurrence relation for the number of moves
M(n):
M(n) = 2M(n − 1) + 1 for n > 1, with M(1) = 1.
Solving the Recurrence using Backwards
Substitution
• M(n) = 2M(n − 1) + 1
– substitute M(n − 1) = 2M(n − 2) + 1
• M(n) = 2[2M(n − 2) + 1] + 1 = 2^2 M(n − 2) + 2 + 1
– substitute M(n − 2) = 2M(n − 3) + 1
• M(n) = 2^2 [2M(n − 3) + 1] + 2 + 1 = 2^3 M(n − 3) + 2^2 + 2 + 1.
• The pattern of the first three sums suggests that the next one will be
2^4 M(n − 4) + 2^3 + 2^2 + 2 + 1, and generally, after i substitutions, we get
• M(n) = 2^i M(n − i) + 2^(i−1) + 2^(i−2) + … + 2 + 1
= 2^i M(n − i) + 2^i − 1.
• Since the initial condition is specified for n = 1, which is achieved for i = n − 1, we get the following formula for the solution to the recurrence:
• M(n) = 2^(n−1) M(n − (n − 1)) + 2^(n−1) − 1
= 2^(n−1) M(1) + 2^(n−1) − 1
= 2^(n−1) + 2^(n−1) − 1
= 2^n − 1.
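As a quick sanity check (not part of the original slides), the recurrence can be compared with the closed form for small values of n:

def moves(n):
    # Number of moves defined by the recurrence M(n) = 2M(n - 1) + 1, M(1) = 1.
    return 1 if n == 1 else 2 * moves(n - 1) + 1

for n in range(1, 11):
    assert moves(n) == 2**n - 1   # agrees with the closed-form solution M(n) = 2^n - 1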
Comments on the solution
• We have obtained an exponential algorithm,
• It will run for an unimaginably long time even for moderate
values of n
• This is not due to the fact that this particular algorithm is poor;
in fact, it is not difficult to prove that this is the most efficient
algorithm possible for this problem.
• It is the problem’s intrinsic difficulty that makes it so
computationally hard.

• This example makes an important general point:
– One should be careful with recursive algorithms because their succinctness may mask their inefficiency.
Using trees for recursion
• When a recursive algorithm makes more than a single
call to itself, it can be useful for analysis purposes to
construct a tree of its recursive calls.
• In this tree, nodes correspond to recursive calls, and we
can label them with the value of the parameter (or, more
generally, parameters) of the calls.
• For the Tower of Hanoi example, the tree is given in the
next slide
• By counting the number of nodes in the tree, we can get
the total number of calls made by the Tower of Hanoi
algorithm
• The total number of calls C(n) made by the Tower of Hanoi algorithm is given by

C(n) = Σ_{l=0}^{n−1} 2^l = 2^n − 1,

where l is the level of a node in the tree of recursive calls.
• This is simply the formula for counting the number of nodes in the tree.
EXAMPLE 3: A recursive version of the algorithm
BinRec(n): an algorithm for finding the number
of binary digits in n’s binary representation
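The pseudocode for BinRec is not reproduced in this text. A minimal Python sketch of the recursive version (the name bin_rec is mine) is:

def bin_rec(n):
    # Recursively counts the binary digits of a positive integer n.
    if n == 1:
        return 1
    return bin_rec(n // 2) + 1   # one addition beyond the recursive call

print(bin_rec(13))   # 4, since 13 is 1101 in binary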
Analysis
• We now set up a recurrence and an initial condition for
the number of additions A(n) made by the algorithm.
• The number of additions made in computing BinRec(⌊n/2⌋) is A(⌊n/2⌋), plus one more addition is made by the algorithm to increase the returned value by 1.
• This leads to the recurrence
A(n) = A(⌊n/2⌋) + 1 for n > 1, with A(1) = 0.
• We get A(1) = 0, since the recursive calls end when n is
equal to 1 and there are no additions made.
• The presence of ⌊n/2⌋ in the function’s argument makes the method of backward substitutions awkward for values of n that are not powers of 2.
• Therefore, the standard approach to solving such a recurrence is to solve it only for n = 2^k, and
• The smoothness rule (Appendix B of text), allows
us to apply our answer to all values of n (arbitrary
value of n).
• (Alternatively, for arbitrary values, after getting a
solution for powers of 2, we can sometimes fine-
tune this solution to get a formula valid for an
arbitrary n.)
• So assume that n = 2^k.
• The recurrence takes the form
• A(2^k) = A(2^(k−1)) + 1 for k > 0, with A(2^0) = 0.

• Now backward substitutions encounter no problems:

A(2^k) = A(2^(k−1)) + 1                      substitute A(2^(k−1)) = A(2^(k−2)) + 1
= [A(2^(k−2)) + 1] + 1 = A(2^(k−2)) + 2      substitute A(2^(k−2)) = A(2^(k−3)) + 1
= [A(2^(k−3)) + 1] + 2 = A(2^(k−3)) + 3
...
= A(2^(k−i)) + i
...
= A(2^(k−k)) + k.
• Thus, we end up with
A(2^k) = A(1) + k = k,
• or, after returning to the original variable n = 2^k and hence k = log₂ n,

A(n) = log₂ n ∈ Θ(log n).

• In fact, one can prove that the exact solution for an arbitrary value of n is given by just a slightly more refined formula:
A(n) = ⌊log₂ n⌋.
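As a quick check (not part of the original slides), the recurrence A(1) = 0, A(n) = A(⌊n/2⌋) + 1 can be compared against ⌊log₂ n⌋ for small n:

import math

def additions_in_bin_rec(n):
    # A(1) = 0, A(n) = A(n // 2) + 1: the number of additions made by BinRec.
    return 0 if n == 1 else additions_in_bin_rec(n // 2) + 1

for n in range(1, 100):
    assert additions_in_bin_rec(n) == math.floor(math.log2(n))   # A(n) = floor(log2 n)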
• Read Sections 2.3 and 2.4.
