01-Slides
Mohammad GANJTABESH
Grading
Final exam: 40%
Mid-term exam (1395/ / ): 30%
Exercises: 20%
Seminar: 10%
Algorithm
Definition (Algorithm)
An algorithm is a finite set of precise instructions for performing a
calculation or solving a problem.
Important properties of algorithms:
1 Input
2 Output
3 Definiteness
4 Correctness
5 Finiteness
6 Effectiveness
7 Generality
Algorithm
The following steps should be done:
1 How to devise algorithms?
2 How to express algorithms?
3 How to validate algorithms?
4 How to analyze algorithms?
5 How to test algorithms?
Examples:
For an array ⟹ the size can be considered as the number of its elements.
For an integer ⟹ the size can be considered as the number of bits in its binary representation.
For a graph ⟹ the size can be considered as the number of vertices or edges.
Analysis of Algorithms
Predicting the amount of resources that the algorithm requires.
We expect the amount of resources used by any algorithm to grow with the input size.
We measure the required time (time complexity) and memory (space complexity).
Best case: measuring the required amount of resources for easy instances.
Worst case: measuring the required amount of resources for hard instances.
Average case: measuring the required amount of resources over all instances, divided by the number of instances.
The analysis should be independent of the current technology (hardware, programming languages, etc.).
It should also be independent of the way the algorithm is implemented.
By analyzing several candidate algorithms for a specific problem, the most efficient one can be identified.
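These three cases can be made concrete by counting operations. The sketch below is hypothetical (`linear_search` is not part of the slides): it counts comparisons made by a linear search over 10 elements in its best, worst, and average case.

```python
# Comparison counts for linear search, illustrating best/worst/average case.
def linear_search(a, target):
    comparisons = 0
    for i, x in enumerate(a):
        comparisons += 1
        if x == target:
            return i, comparisons
    return -1, comparisons

a = list(range(10))
_, best = linear_search(a, 0)      # easiest instance: 1 comparison
_, worst = linear_search(a, 99)    # hardest instance: n comparisons
avg = sum(linear_search(a, t)[1] for t in a) / len(a)  # (n+1)/2 on average
print(best, worst, avg)  # 1 10 5.5
```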
Classifying the Functions
There are infinitely many functions. In order to compare them with
each other, we prefer to classify them, without loss of their essential
properties. In fact, we want to choose one simple representative function
for each class of functions.
[Figure: examples of functions grouped by growth class, e.g. 4n + 10 and 70n − 1000 in the class of n; 5n² + 3 and 10000n² + 2000 in the class of n²; and so on.]
Suppose that f (n) and g (n) are two functions, defined for
non-negative integers n and with values in the set of real numbers.
Θ Notation
Definition
f(n) = Θ(g(n)) if and only if
∃c₁, c₂ > 0, ∃n₀ > 0 such that ∀n ≥ n₀, 0 < c₁·g(n) ≤ f(n) ≤ c₂·g(n).
[Figure: f(n) lies between c₁g(n) and c₂g(n) for all n ≥ n₀.]
Big-O Notation
Definition
f(n) = O(g(n)) if and only if
∃c > 0, ∃n₀ > 0 such that ∀n ≥ n₀, 0 < f(n) ≤ c·g(n).
[Figure: f(n) is bounded above by c·g(n) for all n ≥ n₀.]
Ω Notation
Definition
f(n) = Ω(g(n)) if and only if
∃c > 0, ∃n₀ > 0 such that ∀n ≥ n₀, 0 < c·g(n) ≤ f(n).
[Figure: f(n) is bounded below by c·g(n) for all n ≥ n₀.]
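The definitions above can also be checked numerically. The sketch below verifies the Θ definition for f(n) = 4n + 10 and g(n) = n; the constants c₁ = 4, c₂ = 5, n₀ = 10 are one assumed valid choice, not the only one.

```python
# Numerically checking the Θ definition: c1*g(n) <= f(n) <= c2*g(n) for n >= n0.
def f(n): return 4*n + 10
def g(n): return n

c1, c2, n0 = 4, 5, 10
ok = all(0 < c1*g(n) <= f(n) <= c2*g(n) for n in range(n0, 10001))
print(ok)  # True, so f(n) = Theta(n) with these constants
```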
2 Conquer: Solve the subproblems recursively (if they are small enough, solve them in a straightforward manner).
3 Combine: Combine the solutions to the subproblems into a solution for the original problem.
The running time can then be written as T(n) = c for small n, and otherwise
T(n) = a·T(n/b) + D(n) + C(n)
where:
T(n): the time required for an input of size n
n: the size of the problem
c: a constant number
a: the number of subproblems
n/b: the size of each subproblem
D(n): the time needed for Divide
C(n): the time needed for Combine
Example: Analysing Merge Sort
T(n) = Θ(1) if n ≤ 1,
T(n) = 2T(n/2) + O(1) + O(n) otherwise.
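The recurrence can be read off directly from an implementation. A minimal merge sort sketch (illustrative, not taken from the slides):

```python
# Merge sort: divide is O(1), the merge (combine) step is O(n),
# matching T(n) = 2T(n/2) + O(1) + O(n).
def merge_sort(a):
    if len(a) <= 1:                      # base case: T(n) = Theta(1)
        return a
    mid = len(a) // 2                    # divide: O(1)
    left = merge_sort(a[:mid])           # 2T(n/2)
    right = merge_sort(a[mid:])
    out, i, j = [], 0, 0                 # combine: merge in O(n)
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            out.append(left[i]); i += 1
        else:
            out.append(right[j]); j += 1
    return out + left[i:] + right[j:]

print(merge_sort([5, 2, 9, 1, 5, 6]))  # [1, 2, 5, 5, 6, 9]
```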
II. If f(n) = Θ(n^(log_b(a))), then T(n) = Θ(n^(log_b(a)) · log(n)).
III. If f(n) = Ω(n^(log_b(a)+ε)) for some constant ε > 0, and if a·f(n/b) ≤ c·f(n)
for some constant c < 1 and all sufficiently large n, then
T(n) = Θ(f(n)).
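When f(n) = n^d, the case is decided by comparing d against log_b(a). The helper below is a hypothetical sketch of that classification (it is not from the slides, and it assumes the regularity condition of case III holds, which is true for polynomial f):

```python
import math

# Classify T(n) = a*T(n/b) + Theta(n^d) by the master theorem.
def master(a, b, d):
    crit = math.log(a, b)                    # critical exponent log_b(a)
    if d < crit:
        return f"Theta(n^{crit:g})"          # case I: recursion dominates
    if d == crit:
        return f"Theta(n^{crit:g} log n)"    # case II: balanced
    return f"Theta(n^{d})"                   # case III: f(n) dominates

print(master(2, 2, 1))  # merge sort, T(n) = 2T(n/2) + O(n): Theta(n^1 log n)
```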
Quick Sort
[Figure: Partition rearranges the unsorted elements around a pivot x; the elements on each side of x are then sorted recursively.]
Quick Sort: Algorithm
QuickSort(A, s, e){
    if (s < e){
        m ← Partition(A, s, e);
        QuickSort(A, s, m − 1);
        QuickSort(A, m + 1, e);
    }
}
Partition(A, s, e){
    x ← A[s]; i ← s + 1; j ← e;
    do{
        while (A[i] < x) i++;
        while (A[j] > x) j−−;
        if (i < j) swap(A[i], A[j]);
    } while (i < j);
    swap(A[s], A[j]);
    return(j);
}
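A runnable translation of the pseudocode above (0-based indices; the inner scan adds a bounds guard that the pseudocode leaves implicit):

```python
# Partition around the first element; returns the pivot's final index.
def partition(A, s, e):
    x = A[s]; i = s + 1; j = e
    while True:
        while i <= e and A[i] < x:       # scan right past small elements
            i += 1
        while A[j] > x:                  # scan left past large elements
            j -= 1
        if i < j:
            A[i], A[j] = A[j], A[i]
        else:
            break
    A[s], A[j] = A[j], A[s]              # place the pivot at its final position
    return j

def quick_sort(A, s, e):
    if s < e:
        m = partition(A, s, e)
        quick_sort(A, s, m - 1)
        quick_sort(A, m + 1, e)

a = [7, 3, 9, 1, 4, 8]
quick_sort(a, 0, len(a) - 1)
print(a)  # [1, 3, 4, 7, 8, 9]
```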
Quick Sort: Analysis
Best case: when Partition divides the input array into two
subarrays of almost equal length.
T(n) = 2T(n/2) + O(n) ⟹ T(n) = O(n·log(n)).
Worst case: when Partition fails to divide the input array into two
subarrays.
T(n) = T(n − 1) + O(n) ⟹ T(n) = Θ(n²).
Quick Sort: Average case analysis

T(n) = (2/n) · ∑_{i=0}^{n−1} T(i) + cn

Now we prove that T(n) = O(n·log(n)) (this is just a guess!). The
proof is based on induction:
Basis: n = 2 ⟹ T(2) = T(1) + 2c = O(1). ✓
Hypothesis: ∀i < n ⟹ T(i) = O(i·log(i)) ≤ c′·i·log(i). ✓
Induction step: prove the statement for n:

T(n) ≤ (2/n) · ∑_{i=0}^{n−1} c′·i·log(i) + cn
     ≤ (2c′/n) · ( ∑_{i=0}^{n/2} i·log(n/2) + ∑_{i=n/2+1}^{n−1} i·log(n) ) + cn
     ≤ (2c′/n) · ( ∑_{i=0}^{n/2} i·log(n) − ∑_{i=0}^{n/2} i + ∑_{i=n/2+1}^{n−1} i·log(n) ) + cn
     ≤ (2c′/n) · ( (n(n−1)/2)·log(n) − (n/2)(n/2 − 1)/2 ) + cn
     = O(n·log(n)). ✓
Sorting Lower Bound
[Figure: decision tree for sorting three elements a₁, a₂, a₃; each internal node compares two elements (Y/N branches) and each of the n! terminal nodes is one possible ordering.]
In general, any comparison-based sorting algorithm corresponds to a decision tree with n! terminal nodes, so its depth is at least Log(n!).
n! ≃ √(2πn) · (n/e)ⁿ · (1 + Θ(1/n))   (Stirling's formula)
⟹ log₂(n!) ≃ log₂(√(2πn)) + n·log₂(n/e) + log₂(1 + Θ(1/n))
⟹ log₂(n!) = Ω(n·log₂(n)).
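The bound can be observed numerically: the ratio log₂(n!) / (n·log₂(n)) grows toward 1 as n increases. A small sketch:

```python
import math

# Compare log2(n!) against n*log2(n); the ratio approaches 1,
# consistent with log2(n!) = Omega(n log2 n).
for n in (10, 100, 1000):
    log_fact = math.log2(math.factorial(n))
    print(n, round(log_fact / (n * math.log2(n)), 3))
```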
Techniques for the design of Algorithms
Definition
Suppose that we want to calculate the following matrix multiplication,
where each Mᵢ is a d_{i−1} × dᵢ matrix:
M = M1 × M2 × M3 × · · · × Mn
Example
For M = M1 × M2 × M3 × M4 where d0 = 10, d1 = 20, d2 = 50, d3 = 1,
and d4 = 100, the following orders are possible:
[Figure: the five possible parenthesization orders of M1 × M2 × M3 × M4, drawn as binary trees.]
Dynamic Programming for Matrix Chain Multiplication
The chain Mi × Mi+1 × · · · × Mj is split at some index k into
(Mi × Mi+1 × · · · × Mk) × (Mk+1 × Mk+2 × · · · × Mj),
and each part is parenthesized optimally. Letting m[i, j] denote the minimum
number of scalar multiplications for Mi × · · · × Mj, this gives m[i, j] = 0
if i = j, and m[i, j] = min_{i≤k<j} { m[i, k] + m[k+1, j] + d_{i−1}·d_k·d_j } otherwise.
Example
For M = M1 × M2 × M3 × M4 where d0 = 10, d1 = 20, d2 = 50, d3 = 1,
and d4 = 100, the following order is optimal:

m   1    2      3     4        k   1  2  3  4
1   0  10000  1200  2200       1   0  1  1  3
2   -    0    1000  3000       2   -  0  2  3
3   -    -      0   5000       3   -  -  0  3
4   -    -      -     0        4   -  -  -  0

The optimal order is (M1 × (M2 × M3)) × M4, with cost m[1, 4] = 2200.
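The tables above can be reproduced by implementing the recurrence directly (a sketch; the name `matrix_chain` is illustrative):

```python
import sys

# Matrix-chain DP: m[i][j] is the minimum multiplication count for Mi..Mj,
# k[i][j] the split index achieving it (1-based, matching the slides).
def matrix_chain(d):
    n = len(d) - 1
    m = [[0]*(n+1) for _ in range(n+1)]
    k = [[0]*(n+1) for _ in range(n+1)]
    for length in range(2, n+1):
        for i in range(1, n - length + 2):
            j = i + length - 1
            m[i][j] = sys.maxsize
            for s in range(i, j):        # try every split (Mi..Ms)(Ms+1..Mj)
                cost = m[i][s] + m[s+1][j] + d[i-1]*d[s]*d[j]
                if cost < m[i][j]:
                    m[i][j], k[i][j] = cost, s
    return m, k

m, k = matrix_chain([10, 20, 50, 1, 100])
print(m[1][4], k[1][4])  # 2200 3, i.e. (M1 x M2 x M3) x M4
```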
Optimal Binary Tree Construction
Properties
∀Node ∈ BST .Nodes,
Key (Node.Left ) < Key (Node) < Key (Node.Right ).
Inorder traversal of any BST is sorted.
log(n) ≤ Depth(BST ) ≤ n.
[Figure: an example BST with root 50; its inorder traversal 7, 10, 22, 45, 50, 53, 67, 69, 70, 74, 90 is sorted.]
Optimal Binary Tree Construction
Given an array of n sorted integers and a searching probability for
each element, i.e.:
Integers x1 x2 ··· xn
Probabilities p1 p2 ··· pn
Recall that the number of different binary trees with exactly n nodes can
be expressed by the Catalan number:
Cₙ = (1/(n+1)) · C(2n, n)
Example
Consider the following instance:
Integers       1    2    3    4     5     6     7     8
Probabilities  0.2  0.1  0.3  0.07  0.15  0.04  0.08  0.06

[Figure: two BSTs over these keys. The balanced tree rooted at 5 has Cost = 2.52, while the tree rooted at 3 (with subtrees rooted at 1 and 5) has Cost = 2.15.]
Optimal Binary Tree Construction
Solution
Let Ci,j be the minimum cost for the elements xi, xi+1, · · · , xj, where the
cost of a tree is:
Ci,j = ∑_{t=i}^{j} pt × (Depth(xt) + 1).
Also let pi,j = ∑_{t=i}^{j} pt.
Now, Ci,j can be computed as follows:
Ci,j = 0 if i > j,
Ci,j = pi if i = j,
Ci,j = min_{i≤k≤j} { Ci,k−1 + Ck+1,j + pi,j } if i < j.
Optimal Binary Tree Construction
[Figure: choosing xk as the root places xi < xi+1 < · · · < xk−1 in the left subtree and xk+1, · · · , xj in the right subtree; every node in both subtrees becomes one level deeper, which accounts for the pi,j term.]
Example
Integers       1    2     3     4     5
Probabilities  0.3  0.05  0.08  0.45  0.12

C   1    2    3     4     5        K   1  2  3  4  5
1  0.3  0.4  0.61  1.49  1.73      1   1  1  1  4  4
4   -    -    -    0.45  0.69      4   -  -  -  4  4
5   -    -    -     -    0.12      5   -  -  -  -  5
(rows 2 and 3 omitted)

[Figure: the resulting optimal BST: root 4 (since K[1, 5] = 4), with left subtree rooted at 1 and right child 5.]
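The C and K tables can be reproduced by implementing the recurrence directly (a sketch; `optimal_bst` is an illustrative name, and indices are 1-based to match the tables):

```python
# Optimal BST cost via C[i][j] = min over roots k of C[i][k-1] + C[k+1][j] + p(i..j).
def optimal_bst(p):
    n = len(p)
    prob = [0.0] + p                     # 1-based probabilities
    C = [[0.0]*(n+2) for _ in range(n+2)]
    K = [[0]*(n+2) for _ in range(n+2)]
    for length in range(1, n+1):
        for i in range(1, n - length + 2):
            j = i + length - 1
            pij = sum(prob[i:j+1])
            best, root = float("inf"), i
            for k in range(i, j+1):      # try each element as the root
                c = C[i][k-1] + C[k+1][j] + pij
                if c < best:
                    best, root = c, k
            C[i][j], K[i][j] = best, root
    return C, K

C, K = optimal_bst([0.3, 0.05, 0.08, 0.45, 0.12])
print(round(C[1][5], 2), K[1][5])  # 1.73 4
```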
Techniques for the design of Algorithms
Definition
Suppose that we have n objects, say oi (i = 1, 2, · · · , n), each with a
corresponding weight (wi) and profit (pi), and a weight bound b. The
goal of this problem is to find an X = (x1, x2, · · · , xn) that maximizes
∑_{i=1}^{n} xi·pi subject to ∑_{i=1}^{n} xi·wi ≤ b.
If xi ∈ {0, 1} then this problem is called 0/1-Knapsack.
If xi ∈ [0, 1] then this problem is called fractional-Knapsack.
Knapsack Problem
0/1-Knapsack Problem: Dynamic Programming
[Figure: the DP table, with rows i = 0, · · · , n indexing objects and columns j = 0, · · · , b indexing capacities; row 0 is all zeros, and entry (i, j) is filled from row i − 1, looking back wi columns when object i is taken.]
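A sketch of filling this table bottom-up (0-based Python indices; `knapsack_01` is an illustrative name), tried on the instance p = (25, 24, 15), w = (18, 15, 10), b = 20 used in the fractional example below:

```python
# 0/1-Knapsack DP: P[i][j] is the best profit using the first i objects
# with capacity j; the answer is P[n][b].
def knapsack_01(p, w, b):
    n = len(p)
    P = [[0]*(b+1) for _ in range(n+1)]
    for i in range(1, n+1):
        for j in range(b+1):
            P[i][j] = P[i-1][j]                       # skip object i
            if w[i-1] <= j:                           # or take it
                P[i][j] = max(P[i][j], P[i-1][j - w[i-1]] + p[i-1])
    return P[n][b]

print(knapsack_01([25, 24, 15], [18, 15, 10], 20))  # 25
```

Note that no pair of objects fits within b = 20 here, so the 0/1 optimum (25) is strictly worse than the fractional optimum (31.5).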
Example of fractional-Knapsack Problem
Example
Suppose that n = 3, b = 20, (p1, p2, p3) = (25, 24, 15), and
(w1, w2, w3) = (18, 15, 10). Then some of the feasible solutions are as
follows:

x1    x2    x3    ∑ xi·wi   ∑ xi·pi
1/2   1/3   1/4   16.5      24.25
1     2/15  0     20        28.2
0     2/3   1     20        31
0     1     1/2   20        31.5
Greedy-fractional-Knapsack-Algorithm(p, w, b){
    X ← 0;
    rw ← b;
    for i ← 1 to n do{
        if (wi ≤ rw) then{
            xi ← 1;
            rw ← rw − wi;
        }
        else break;
    }
    if (i ≤ n) then xi ← rw / wi;
    return(X);
}
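A runnable Python version of the greedy procedure above (here the sorting by profit density required by the theorem below is done explicitly, rather than assumed):

```python
# Greedy fractional Knapsack: take objects in decreasing order of p_i/w_i,
# splitting the first object that does not fit.
def fractional_knapsack(p, w, b):
    order = sorted(range(len(p)), key=lambda i: p[i]/w[i], reverse=True)
    x = [0.0]*len(p)
    rw = b                                # remaining capacity
    for i in order:
        if w[i] <= rw:
            x[i] = 1.0
            rw -= w[i]
        else:
            x[i] = rw / w[i]              # take a fraction of object i
            break
    return x, sum(x[i]*p[i] for i in range(len(p)))

x, profit = fractional_knapsack([25, 24, 15], [18, 15, 10], 20)
print(x, profit)  # [0.0, 1.0, 0.5] 31.5
```

This reproduces the best row of the example table: x = (0, 1, 1/2) with profit 31.5.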
Greedy Algorithm for fractional-Knapsack Problem
Theorem
If p1/w1 ≥ p2/w2 ≥ · · · ≥ pn/wn then the procedure
Greedy-fractional-Knapsack-Algorithm always returns an optimal solution, and its time
complexity is O(n log n).
Proof.
It is obvious that the time complexity is O(n log n) (it is dominated by
sorting the objects by pi/wi). The greedy solution X has the form
X = (1, · · · , 1, xk, 0, · · · , 0) for some index k.
Proof (cont.)

∑_{i=1}^{n} zi·pi = ∑_{i=1}^{n} yi·pi + (zk − yk)·pk − ∑_{i=k+1}^{n} (yi − zi)·pi
                 = ∑_{i=1}^{n} yi·pi + (zk − yk)·wk·(pk/wk) − ∑_{i=k+1}^{n} (yi − zi)·wi·(pi/wi)
                 ≥ ∑_{i=1}^{n} yi·pi + (pk/wk)·[ (zk − yk)·wk − ∑_{i=k+1}^{n} (yi − zi)·wi ]
                 ≥ ∑_{i=1}^{n} yi·pi
Backtracking Algorithms
[Figure: a problem P has infinitely many instances; each instance x of P has its own solution space, which is explored as a tree, level by level (k = 0, 1, 2, 3).]
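A generic backtracking sketch for exploring such a solution space level by level; here it enumerates 0/1 weight vectors, pruning subtrees that already exceed a bound b (illustrative only, not the slides' algorithm):

```python
# Backtracking over the 0/1 solution space: at depth k, branch on x_k = 1
# then x_k = 0, pruning any branch whose accumulated weight exceeds b.
def backtrack(w, b, k=0, cur_w=0, x=None, solutions=None):
    if x is None:
        x, solutions = [], []
    if k == len(w):
        solutions.append(list(x))         # a leaf: one complete solution
        return solutions
    for choice in (1, 0):
        if cur_w + choice*w[k] <= b:      # prune infeasible subtrees
            x.append(choice)
            backtrack(w, b, k+1, cur_w + choice*w[k], x, solutions)
            x.pop()
    return solutions

sols = backtrack([18, 15, 10], 20)
print(len(sols))  # 4 feasible 0/1 vectors
```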
0/1-Knapsack
Review
Suppose that we have n objects, say oi (i = 1, 2, · · · , n), each with a
corresponding weight (wi) and profit (pi), and a weight bound b. The
goal of this problem is to find an X = (x1, x2, · · · , xn) that maximizes
∑_{i=1}^{n} xi·pi subject to ∑_{i=1}^{n} xi·wi ≤ b.
If xi ∈ {0, 1} then this problem is called 0/1-Knapsack.
If xi ∈ [0, 1] then this problem is called fractional-Knapsack.
Review
The fractional-Knapsack problem can be solved efficiently by using the
greedy approach, and the answer is optimal.
0/1-Knapsack
[Figure: the first levels of the search tree; at level i the algorithm branches on xi = 1 (left) or xi = 0 (right).]
[Figure: part of the search tree with bounds. Root: X = [], B = 52.625, curW = 0. Branching on x1 gives X = [1] (B = 52.625, curW = 11) and X = [0] (B = 50.14, curW = 0). From X = [1]: X = [1, 1] (B = 52.625, curW = 23) and X = [1, 0] (B = 51, curW = 11). The left path continues X = [1, 1, 0] (B = 52.57), X = [1, 1, 0, 0] (B = 52.33), X = [1, 1, 0, 0, 0] with P = 47 ⇒ optP = 47. The right path continues X = [1, 0, 1] (curW = 19), X = [1, 0, 1, 1] (curW = 26), X = [1, 0, 1, 1, 0] with P = 51 > optP ⇒ optP = 51.]
Comparison of the previous algorithms
The following table represents the worst-case size of the search space
over random instances, each executed 5 times.