
Advanced Algorithms

Mohammad GANJTABESH

[email protected]

Department of Mathematics, Statistics and Computer Science,


University of Tehran.
References
1 Algorithmics for Hard Problems, J. Hromkovic, 2003.
2 Algorithm Design, J. Kleinberg, and E. Tardos, 2008.
3 Introduction to Algorithms, T. H. Cormen, C. E. Leiserson,
R. L. Rivest, and C. Stein, MIT Press, 2001.

Grading
Final exam: 40%
Mid-term exam (1395/ / ): 30%
Exercises: 20%
Seminar: 10%
Algorithm

Definition (Algorithm)
An algorithm is a finite set of precise instructions for performing a
calculation or solving a problem.
Important properties of an algorithm:
1 Input
2 Output
3 Definiteness
4 Correctness
5 Finiteness
6 Effectiveness
7 Generality
Algorithm
The following steps should be carried out:
1 How to devise algorithms?
2 How to express algorithms?
3 How to validate algorithms?
4 How to analyze algorithms?
5 How to test algorithms?

The following techniques are usually employed:


1 Divide and Conquer
2 Dynamic Programming
3 Greedy Algorithms
4 Backtracking Algorithms
5 Branch and Bound Algorithms
6 etc.
Analysis of Algorithms

Different inputs require different amounts of resources. Instead of dealing with the input itself, we prefer to deal with the Size of Input.

Definition (Size of Input)


The amount of memory required for representing the input with respect
to a fixed coding scheme.

Examples:
For an array ⇒ the size can be taken as the number of its elements.
For an integer ⇒ the size can be taken as the number of bits in its binary representation.
For a graph ⇒ the size can be taken as the number of vertices or edges.
Analysis of Algorithms
Predicting the amount of resources that the algorithm requires.
We expect the amount of resources used by any algorithm to grow with the input size.
We measure the required time (time complexity) and memory (space complexity).
Best case: the amount of resources required on the easiest instances.
Worst case: the amount of resources required on the hardest instances.
Average case: the total amount of resources over all instances divided by the number of instances.
The analysis should be independent of the current technology (hardware, programming languages, etc.).
It should also be independent of the way the algorithm is implemented.
By analyzing several candidate algorithms for a specific problem, the most efficient one can then be identified.
Classifying the functions
There are infinitely many functions. In order to compare them with each other, we prefer to classify them, without losing their essential properties; in fact, we want to choose one simple representative function for each class.

[Figure: functions grouped by growth rate, e.g. constants (1, 7, 40, 300, 10000, 10^100), linear functions (n, 4n + 10, n + 500, 70n − 1000, 5^20·n), quadratic functions (n^2, 5n^2 + 3, n^2 + 2000, 10^20·n^2 − 2^200), and so on.]

Suppose that f(n) and g(n) are two functions, defined for non-negative integers n and taking real values.
Θ Notation

Definition
f(n) = Θ(g(n)) if and only if
∃ c1, c2 > 0 and ∃ n0 > 0 such that ∀ n ≥ n0: 0 < c1·g(n) ≤ f(n) ≤ c2·g(n).

[Figure: f(n) lies between c1·g(n) and c2·g(n) for all n ≥ n0.]

Θ gives an upper bound and a lower bound at the same time.
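As a quick worked example with one of the functions above: 4n + 10 = Θ(n), since for all n ≥ 10 we have 4n ≤ 4n + 10 ≤ 5n, so the definition is satisfied with c1 = 4, c2 = 5 and n0 = 10.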


Big-O Notation

Definition
f(n) = O(g(n)) if and only if
∃ c > 0 and ∃ n0 > 0 such that ∀ n ≥ n0: 0 < f(n) ≤ c·g(n).

[Figure: f(n) lies below c·g(n) for all n ≥ n0.]

Asymptotic upper bound.


Ω Notation

Definition
f(n) = Ω(g(n)) if and only if
∃ c > 0 and ∃ n0 > 0 such that ∀ n ≥ n0: 0 < c·g(n) ≤ f(n).

[Figure: f(n) lies above c·g(n) for all n ≥ n0.]

Asymptotic lower bound.


Techniques for the design of Algorithms

The classical techniques are as follows:


1 Divide and Conquer
2 Dynamic Programming
3 Greedy Algorithms
4 Backtracking Algorithms
5 Branch and Bound Algorithms
Divide and Conquer

This approach involves three steps:

1 Divide: Break the problem down into two or more subproblems. These subproblems should be similar to the original problem, but smaller in size.
2 Conquer: Recursively solve the subproblems (if they are small enough, just solve them in a straightforward manner).
3 Combine: Combine the solutions of the subproblems into a solution for the original problem (optional).
[Figure: a problem instance is split (Divide) into Subproblem 1 and Subproblem 2, each is solved recursively (Conquer) into Solution 1 and Solution 2, and these are merged (Combine) into the final solution.]
Example: Merge Sort
[Figure: merge sort applied to a small array, showing the recursive splitting and merging steps.]
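Since the merge-sort figures do not survive in this text version, a minimal Python sketch of the algorithm (not taken from the slides) may serve as a substitute:

def merge_sort(a):
    # Divide: split the array in half; Conquer: sort each half recursively;
    # Combine: merge the two sorted halves.
    if len(a) <= 1:
        return a
    mid = len(a) // 2
    left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]

print(merge_sort([5, 2, 4, 7, 1, 3, 2, 6]))   # prints [1, 2, 2, 3, 4, 5, 6, 7]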
Analysing the Divide-and-Conquer Algorithms

In general we have the following recurrence equation:

T(n) = Θ(1)                          if n ≤ c,
T(n) = a·T(n/b) + D(n) + C(n)        otherwise,

where:
T(n): the time required for an input of size n,
n: the size of the problem,
c: a constant,
a: the number of subproblems,
n/b: the size of each subproblem,
D(n): the time needed to Divide,
C(n): the time needed to Combine.
Example: Analysing Merge Sort

T(n) = Θ(1)                          if n ≤ 1,
T(n) = 2T(n/2) + O(1) + O(n)         otherwise.

What is the explicit (closed-form) formula for T(n)?

How can we find it?
Solving the recurrence equations

There are different approaches to do this:


Constructing Recursion Tree
Performing Substitution
Using Induction
Master Theorem
Generating Functions
Master Theorem

Theorem (Master Theorem (without proof))


Let a ≥ 1 and b > 1 be constants, let f(n) be a function, and let T(n) be defined on the nonnegative integers by the recurrence

T(n) = a·T(n/b) + f(n).

Then T(n) can be bounded asymptotically as follows:

I. If f(n) = O(n^(log_b(a) − ε)) for some constant ε > 0, then T(n) = Θ(n^(log_b(a))).
II. If f(n) = Θ(n^(log_b(a))), then T(n) = Θ(n^(log_b(a))·log(n)).
III. If f(n) = Ω(n^(log_b(a) + ε)) for some constant ε > 0, and if a·f(n/b) ≤ c·f(n) for some constant c < 1 and all sufficiently large n, then T(n) = Θ(f(n)).
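As a quick application: the merge-sort recurrence T(n) = 2T(n/2) + Θ(n) has a = 2 and b = 2, so n^(log_b(a)) = n; the divide-and-combine work f(n) is Θ(n), case II applies, and T(n) = Θ(n·log(n)).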
Quick Sort

[Figure: Partition rearranges the unsorted elements around a pivot x, placing the elements < x to its left and the elements > x to its right; Quick Sort is then applied recursively to each side, yielding (sorted, x, sorted).]
Quick Sort: Algorithm
Quick Sort(A, s, e){
  if (s < e){
    m ← Partition(A, s, e);        // m: final position of the pivot
    Quick Sort(A, s, m − 1);
    Quick Sort(A, m + 1, e);
  }
}

Partition(A, s, e){
  x ← A[s];                        // the pivot is the first element
  i ← s + 1; j ← e;
  do{
    while (A[i] < x) i++;
    while (A[j] > x) j−−;
    if (i < j) swap(A[i], A[j]);
  } while (i < j);
  swap(A[s], A[j]);                // place the pivot in its final position
  return j;
}
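For example, the call Quick Sort(A, 1, n) sorts A[1..n]. Note that since the pivot is always A[s], an already sorted array produces the most unbalanced partitions, which is exactly the worst case analysed on the next slides.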
Quick Sort: Analysis
Best case: Partition divides the input array into two subarrays of almost equal length.
T(n) = 2T(n/2) + O(n) ⇒ T(n) = O(n·Log(n)).

Worst case: Partition fails to divide the input array into two non-empty subarrays.
T(n) = T(n − 1) + O(n) ⇒ T(n) = Θ(n²).

Average case: average over all possible lengths of the subarrays:

T(n) = T(0) + T(n − 1) + O(n)
T(n) = T(1) + T(n − 2) + O(n)
⋮
T(n) = T(n − 2) + T(1) + O(n)
T(n) = T(n − 1) + T(0) + O(n)

Summing these n equations: n·T(n) = ∑_{i=0}^{n−1} T(i) + ∑_{i=0}^{n−1} T(i) + n·O(n).
Quick Sort: Average case analysis

T(n) = (2/n)·∑_{i=0}^{n−1} T(i) + c·n.

Now we prove that T(n) = O(n·Log(n)) (this is just a guess!). The proof is by induction:

Base case: n = 2 ⇒ T(2) = T(1) + 2c = O(1).

Hypothesis: ∀ i < n: T(i) = O(i·Log(i)) ≤ c′·i·Log(i).

Induction step: prove the statement for n:

T(n) ≤ (2/n)·∑_{i=0}^{n−1} c′·i·Log(i) + c·n
     ≤ (2c′/n)·[ ∑_{i=0}^{n/2} i·Log(n/2) + ∑_{i=n/2+1}^{n−1} i·Log(n) ] + c·n
     ≤ (2c′/n)·[ ∑_{i=0}^{n/2} i·Log(n) − ∑_{i=0}^{n/2} i + ∑_{i=n/2+1}^{n−1} i·Log(n) ] + c·n
     ≤ (2c′/n)·[ Log(n)·n(n − 1)/2 − (n/2)(n/2 − 1)/2 ] + c·n
     = O(n·Log(n)).
Sorting Lower Bound

At least how many comparisons do we need to sort three items?

[Figure: a binary decision tree for sorting a1, a2, a3. Each internal node is a comparison (a1 < a2?, a2 < a3?, a1 < a3?), and each of the 3! = 6 leaves is one of the possible orders a1 < a2 < a3, a2 < a1 < a3, a1 < a3 < a2, a3 < a1 < a2, a2 < a3 < a1, a3 < a2 < a1.]

We need at least 3 = ⌈Log2(3!)⌉ comparisons to sort three items.


Sorting Lower Bound
In general we have:

[Figure: a comparison decision tree of depth Log(n!) with n! terminal nodes, one per permutation.]

n! ≈ √(2πn) · (n/e)^n · (1 + Θ(1/n))      (Stirling's formula)
⇒ log2(n!) ≈ log2(√(2πn)) + n·log2(n/e) + log2(1 + Θ(1/n))
⇒ log2(n!) = Ω(n·log2(n)).
Techniques for the design of Algorithms

The classical techniques are as follows:


1 Divide and Conquer
2 Dynamic Programming
3 Greedy Algorithms
4 Backtracking Algorithms
5 Branch and Bound Algorithms
Matrix Chain Multiplication

Definition
Suppose that we want to calculate the matrix product

M = M1 × M2 × M3 × ⋯ × Mn,

where Mi has d_{i−1} rows and d_i columns. The goal of this problem is to determine an order for these matrix multiplications such that the overall number of scalar multiplications is minimized.

How many different orders exist for multiplying n matrices?

How can we determine the best one?
Matrix Chain Multiplication

Example
For M = M1 × M2 × M3 × M4 where d0 = 10, d1 = 20, d2 = 50, d3 = 1,
and d4 = 100, the following orders are possible:

((M1 × M2) × M3) × M4  ⇒ cost = 11500
(M1 × (M2 × M3)) × M4  ⇒ cost = 2200
(M1 × M2) × (M3 × M4)  ⇒ cost = 65000
M1 × ((M2 × M3) × M4)  ⇒ cost = 23000
M1 × (M2 × (M3 × M4))  ⇒ cost = 125000
Matrix Chain Multiplication

[Figure: the five binary expression trees corresponding to the five parenthesizations above.]
Dynamic Programming for Matrix Chain Multiplication

[Figure: the product M_i × M_{i+1} × ⋯ × M_j is split at position k into (M_i × ⋯ × M_k) and (M_{k+1} × ⋯ × M_j).]

Suppose that m_{i,j} denotes the minimum cost for multiplying M_i × M_{i+1} × ⋯ × M_j. We can express it as follows:

m_{i,j} = 0                                                          if i = j,
m_{i,j} = min_{i ≤ k < j} { m_{i,k} + m_{k+1,j} + d_{i−1}·d_k·d_j }     if i < j.
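A minimal Python sketch of this dynamic program (not from the slides; the function names are illustrative). It fills the cost table m and the split table k bottom-up and then reconstructs the optimal parenthesization from k:

def matrix_chain_order(dims):
    # dims = [d0, d1, ..., dn]; matrix M_i has dims[i-1] rows and dims[i] columns.
    n = len(dims) - 1
    INF = float("inf")
    m = [[0] * (n + 1) for _ in range(n + 1)]   # m[i][j]: minimum cost for M_i ... M_j
    k = [[0] * (n + 1) for _ in range(n + 1)]   # k[i][j]: best split position
    for length in range(2, n + 1):              # length of the chain M_i ... M_j
        for i in range(1, n - length + 2):
            j = i + length - 1
            m[i][j] = INF
            for s in range(i, j):               # split into M_i..M_s and M_{s+1}..M_j
                cost = m[i][s] + m[s + 1][j] + dims[i - 1] * dims[s] * dims[j]
                if cost < m[i][j]:
                    m[i][j], k[i][j] = cost, s
    return m, k

def print_order(k, i, j):
    # Rebuild the optimal parenthesization from the split table k.
    if i == j:
        return "M%d" % i
    s = k[i][j]
    return "(" + print_order(k, i, s) + " x " + print_order(k, s + 1, j) + ")"

m, k = matrix_chain_order([10, 20, 50, 1, 100])
print(m[1][4], print_order(k, 1, 4))   # 2200 ((M1 x (M2 x M3)) x M4)

On the instance of the following example this reproduces the optimal cost 2200 and the order (M1 × (M2 × M3)) × M4.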
Matrix Chain Multiplication

Example
For M = M1 × M2 × M3 × M4 where d0 = 10, d1 = 20, d2 = 50, d3 = 1, and d4 = 100, the following order is optimal:

(M1 × (M2 × M3 )) × M4 =⇒ cost = 2200

Table m (minimum multiplication costs):
        1      2      3      4
 1      0  10000   1200   2200
 2      0      0   1000   3000
 3      0      0      0   5000
 4      0      0      0      0

Table k (optimal split positions):
        1   2   3   4
 1      0   1   1   3
 2      0   0   2   3
 3      0   0   0   3
 4      0   0   0   0
Optimal Binary Tree Construction
Properties
∀ Node ∈ BST.Nodes: Key(Node.Left) < Key(Node) < Key(Node.Right).
The inorder traversal of any BST is sorted.
log(n) ≤ Depth(BST) ≤ n.

[Figure: an example BST with root 50; 45 and 70 are its children, 10, 67 and 74 are on the next level, and 7, 22, 53, 69, 90 are the leaves.]
Optimal Binary Tree Construction
Given an array of n sorted integers and a searching probability for each element, i.e.:

Integers       x1   x2   ⋯   xn
Probabilities  p1   p2   ⋯   pn

where ∑_{i=1}^{n} p_i = 1, construct a Binary Search Tree that minimizes

Cost = ∑_{i=1}^{n} p_i × (Depth(x_i) + 1).

Recall that the number of different binary trees with exactly n nodes is given by the Catalan number

C_n = (1/(n + 1)) · (2n choose n).
Example
Consider the following instance:

Integers       1    2    3    4     5     6     7     8
Probabilities  0.2  0.1  0.3  0.07  0.15  0.04  0.08  0.06

Two possible Binary Search Trees are as follows:

[Figure: the first tree is rooted at 5, with children 3 and 7, grandchildren 2, 4, 6, 8, and 1 below 2; its cost is 2.52. The second tree is rooted at 3, with children 1 and 5, then 2 below 1, 4 and 7 below 5, and 6 and 8 below 7; its cost is 2.15.]
Optimal Binary Tree Construction

Solution
Let C_{i,j} be the minimum cost for the elements x_i, x_{i+1}, ⋯, x_j, i.e.

C_{i,j} = ∑_{t=i}^{j} p_t × (Depth(x_t) + 1).

Also let p_{i,j} = ∑_{t=i}^{j} p_t.
Now, C_{i,j} can be computed as follows:

C_{i,j} = 0                                                           if i > j,
C_{i,j} = p_i                                                         if i = j,
C_{i,j} = min_{i ≤ k ≤ j} { C_{i,k−1} + C_{k+1,j} + p_{i,j} }            if i < j.
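A minimal Python sketch of this recurrence (not from the slides; names are illustrative). Prefix sums supply p_{i,j}, and a second table records which key is chosen as the root:

def optimal_bst(p):
    # p[1..n] are the search probabilities (p[0] is a dummy entry).
    n = len(p) - 1
    INF = float("inf")
    C = [[0.0] * (n + 2) for _ in range(n + 2)]   # C[i][j]: minimum expected cost
    root = [[0] * (n + 2) for _ in range(n + 2)]  # root[i][j]: chosen root key
    prefix = [0.0] * (n + 1)
    for i in range(1, n + 1):
        prefix[i] = prefix[i - 1] + p[i]
        C[i][i] = p[i]
        root[i][i] = i
    for length in range(2, n + 1):
        for i in range(1, n - length + 2):
            j = i + length - 1
            pij = prefix[j] - prefix[i - 1]       # p_{i,j}
            C[i][j] = INF
            for k in range(i, j + 1):             # try every key x_k as the root
                cost = C[i][k - 1] + C[k + 1][j] + pij
                if cost < C[i][j]:
                    C[i][j], root[i][j] = cost, k
    return C, root

C, root = optimal_bst([0, 0.3, 0.05, 0.08, 0.45, 0.12])
print(C[1][5], root[1][5])   # expected: approximately 1.73, with key 4 at the root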

Optimal Binary Tree Construction

[Figure: choosing x_k as the root of the subtree on x_i < x_{i+1} < ⋯ < x_j puts x_i, ⋯, x_{k−1} in the left subtree and x_{k+1}, ⋯, x_j in the right subtree, each one level deeper.]

{ C_{i,k−1} + p_{i,k−1} } + { C_{k+1,j} + p_{k+1,j} } + p_k = C_{i,k−1} + C_{k+1,j} + p_{i,j}.

Example
Consider the following instance:

Integers       1    2     3     4     5
Probabilities  0.3  0.05  0.08  0.45  0.12

Table C (minimum expected costs):
        1     2     3     4     5
 1    0.3   0.4  0.61  1.49  1.73
 2      0  0.05  0.18  0.76     1
 3      0     0  0.08   0.6  0.86
 4      0     0     0  0.45  0.69
 5      0     0     0     0  0.12

Table K (chosen roots):
        1   2   3   4   5
 1      0   2   1   4   4
 2      0   0   3   4   4
 3      0   0   0   3   4
 4      0   0   0   0   4
 5      0   0   0   0   0
[Figure: the optimal BST reconstructed from table K, with key 4 at the root, keys 1 and 5 as its children, and the remaining keys below 1.]
Techniques for the design of Algorithms

The classical techniques are as follows:


1 Divide and Conquer
2 Dynamic Programming
3 Greedy Algorithms
4 Backtracking Algorithms
5 Branch and Bound Algorithms
Greedy Algorithms

Always make the choice that looks best at the moment.
They make a locally optimal choice in the hope that this choice will lead to a globally optimal solution.
They have no opportunity to go back and fix a partial solution.
They do not always yield optimal solutions, but for many problems they do.
Greedy Algorithms
Suppose that the following helper functions are available:
Select(C): pick the most promising remaining candidate.
Feasible(S): check whether the partial solution S can still be extended to a solution.
Solution(S): check whether S is a complete solution.
Any Greedy Algorithm can be expressed as:
Greedy-Algorithm(Set C){
  S ← ∅;
  while (not Solution(S) and C ≠ ∅){
    x ← Select(C);
    C ← C − {x};
    if (Feasible(S ∪ {x}))
      S ← S ∪ {x};
  }
  if (Solution(S)) return S;
  else return "no solution";
}
Techniques for the design of Algorithms

The classical techniques are as follows:


1 Divide and Conquer
2 Dynamic Programming
3 Greedy Algorithms
4 Backtracking Algorithms
5 Branch and Bound Algorithms
Knapsack Problem

Definition
Suppose that we have n objects, say o_i (i = 1, 2, ⋯, n), each with a corresponding weight (w_i) and profit (p_i), and a weight bound b. The goal of this problem is to find an X = (x_1, x_2, ⋯, x_n) that maximizes ∑_{i=1}^{n} x_i·p_i subject to ∑_{i=1}^{n} x_i·w_i ≤ b.
If x_i ∈ {0, 1}, then this problem is called the 0/1-Knapsack problem.
If x_i ∈ [0, 1], then this problem is called the fractional-Knapsack problem.

For any optimization problem, we have two kinds of conditions:

Feasibility: asks whether a solution is feasible.
Optimality: asks whether a solution is optimal.
0/1-Knapsack Problem: Dynamic Programming
Suppose that P[i, j] denotes the maximum profit obtainable from the first i objects when the sum of their weights is at most j. The goal is to calculate P[n, b]. To do this, the following formula can be used:

P[i, j] = −∞                                              if j < 0,
P[i, j] = 0                                               if i = 0,
P[i, j] = max{ P[i−1, j], P[i−1, j − w_i] + p_i }          otherwise.

[Figure: the (n+1) × (b+1) table of P[i, j] values; row 0 is all zeros, and entry P[i, j] is computed from row i−1 using columns j and j − w_i.]
0/1-Knapsack Problem: Dynamic Programming

Dynamic-Programming-Knapsack(n, p[1⋯n], w[1⋯n], b){
  for j ← 0 to b do{
    P[0, j] ← 0;
  }
  for i ← 1 to n do{
    for j ← 0 to b do{
      if (w_i ≤ j) then
        P[i, j] ← max{ P[i−1, j], P[i−1, j − w_i] + p_i };
      else
        P[i, j] ← P[i−1, j];
    }
  }
  return P[n, b];
}
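A brief remark (not on the slides): the table has (n + 1)(b + 1) entries and each entry is filled in constant time, so the running time is O(n·b). This is polynomial in the value of b but not in its size (b is represented with only about log2(b) bits), so the algorithm is pseudo-polynomial.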
Example of fractional-Knapsack Problem
Example
Suppose that n = 3, b = 20, (p1, p2, p3) = (25, 24, 15), and (w1, w2, w3) = (18, 15, 10). Then some of the feasible solutions are as follows:

 x1    x2    x3     ∑ x_i·w_i    ∑ x_i·p_i
 1/2   1/3   1/4    16.5         24.25
 1     2/15  0      20           28.2
 0     2/3   1      20           31
 0     1     1/2    20           31.5

Different selection strategies:

Random selection.
Selecting by maximum profit (producing the order 1, 2, 3).
Selecting by minimum weight (producing the order 3, 2, 1).
Selecting by maximum profit per unit of weight (producing the order 2, 3, 1).
Greedy Algorithm for fractional-Knapsack Problem

Greedy-fractional-Knapsack-Algorithm(n, b, p[1⋯n], w[1⋯n]){
  Sort the objects in nonincreasing order of profit per unit of weight,
    so that p_i / w_i ≥ p_{i+1} / w_{i+1} for all i;
  X ← [0, 0, ⋯, 0];
  rw ← b;                          // rw: remaining capacity
  for i ← 1 to n do{
    if (w_i ≤ rw) then{
      x_i ← 1;                     // take the whole object
      rw ← rw − w_i;
    }
    else{
      x_i ← rw / w_i;              // take only a fraction of this object and stop
      break;
    }
  }
  return X;
}
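For instance, on the example above (p = (25, 24, 15), w = (18, 15, 10), b = 20), sorting by profit per unit of weight gives the order 2, 3, 1; the algorithm takes all of object 2 (remaining capacity 5), then the fraction 5/10 = 1/2 of object 3, producing X = (0, 1, 1/2) with profit 31.5, the best row of the table above.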
Greedy Algorithm for fractional-Knapsack Problem
Theorem
If p1/w1 ≥ p2/w2 ≥ ⋯ ≥ pn/wn, then the procedure Greedy-fractional-Knapsack-Algorithm always returns an optimal solution, and its time complexity is O(n·log(n)).

Proof.
It is obvious that the time complexity is O(n·log(n)), dominated by the sorting step. The form of X is as follows:

X = [1, 1, ⋯, 1, x_j, 0, 0, ⋯, 0], where 0 ≤ x_j < 1.

Suppose that Y = [y1, y2, ⋯, yn] is an optimal solution. Let k be the smallest index such that x_k ≠ y_k. First we prove that y_k ≤ x_k:
k < j: in this case x_k = 1, and so y_k ≤ x_k.
k = j: since ∑_{i=1}^{n} x_i·w_i = b, we must have y_k ≤ x_k (otherwise ∑_{i=1}^{n} y_i·w_i > b).
k > j: same as the previous case.
Increasing y_k to x_k (and decreasing later components so that the total weight is unchanged) produces another solution Z = [z1, z2, ⋯, zn], where z_k = x_k and

(z_k − y_k)·w_k = ∑_{i=k+1}^{n} (y_i − z_i)·w_i.
Greedy Algorithm for fractional-Knapsack Problem

Proof (cont.)

∑_{i=1}^{n} z_i·p_i = ∑_{i=1}^{n} y_i·p_i + (z_k − y_k)·p_k − ∑_{i=k+1}^{n} (y_i − z_i)·p_i
                    = ∑_{i=1}^{n} y_i·p_i + (z_k − y_k)·w_k·(p_k/w_k) − ∑_{i=k+1}^{n} (y_i − z_i)·w_i·(p_i/w_i)
                    ≥ ∑_{i=1}^{n} y_i·p_i + (p_k/w_k)·[ (z_k − y_k)·w_k − ∑_{i=k+1}^{n} (y_i − z_i)·w_i ]
                    ≥ ∑_{i=1}^{n} y_i·p_i.

Since Y is an optimal solution, we have ∑_{i=1}^{n} z_i·p_i = ∑_{i=1}^{n} y_i·p_i. We can repeat the same argument for the remaining indices and finally obtain X = Y.
Techniques for the design of Algorithms

The classical techniques are as follows:


1 Divide and Conquer
2 Dynamic Programming
3 Greedy Algorithms
4 Backtracking Algorithms
5 Branch and Bound Algorithms
Backtracking Algorithms

[Figure: a problem P has infinitely many instances; each instance x of P has its own solution space, which backtracking searches.]

Backtracking is a systematic way to search the configurations of the solution space.
Each possible configuration must be generated exactly once.
Backtracking Algorithms

In general, we assume our solution is a vector v = (a1, a2, ⋯, an).
At each step, we try to extend a partial solution a = (a1, a2, ⋯, ak) by adding another element at the end.
Then we test whether what we now have is a solution: if so, we should print it or count it.
If not, we check whether the partial solution is still potentially extendible to some complete solution.
A backtracking algorithm is modeled by a tree of partial solutions, where each node represents a partial solution.
Backtracking Algorithms
Backtrack(A, k){
  if A = (a1, a2, ⋯, ak) is a solution, then report it;
  else{
    k ← k + 1;
    compute S_k;                   // S_k: the set of candidate values for a_k
    while (S_k ≠ ∅){
      a_k ← an element of S_k;
      S_k ← S_k − {a_k};
      Backtrack(A, k);
    }
  }
}

Backtracking ensures correctness by checking all possibilities.


It ensures efficiency by never visiting a configuration more than
once.
Backtracking Algorithms

[Figure: the tree of partial solutions; level k contains the partial solutions of length k (k = 0, 1, 2, 3).]
0/1-Knapsack

Review
Suppose that we have n objects, say o_i (i = 1, 2, ⋯, n), each with a corresponding weight (w_i) and profit (p_i), and a weight bound b. The goal of this problem is to find an X = (x_1, x_2, ⋯, x_n) that maximizes ∑_{i=1}^{n} x_i·p_i subject to ∑_{i=1}^{n} x_i·w_i ≤ b.
If x_i ∈ {0, 1}, then this problem is called the 0/1-Knapsack problem.
If x_i ∈ [0, 1], then this problem is called the fractional-Knapsack problem.

Review
The fractional-Knapsack problem can be solved efficiently using the greedy approach, and the answer is optimal.
0/1-Knapsack

To design a backtracking algorithm for this problem, we should generate all vectors in {0, 1}^n (i.e., all subsets of the objects) and check which feasible one is optimal.

[Figure: the binary tree of choices for n = 3. At level ℓ the two branches set x_ℓ = 1 or x_ℓ = 0; the eight leaves correspond to the subsets {1,2,3}, {1,2}, {1,3}, {1}, {2,3}, {2}, {3}, ∅.]
0/1-Knapsack: Backtracking Algorithms
Backtrack-Knapsack(X, optX, optP, ℓ){
  if ℓ = n + 1 then{
    if ∑_{i=1}^{n} x_i·w_i ≤ b then{        // the complete vector is feasible
      curP ← ∑_{i=1}^{n} x_i·p_i;
      if curP ≥ optP then{
        optP ← curP;
        optX ← [x1, x2, ⋯, xn];
      }
    }
  }
  else{
    x_ℓ ← 1;
    Backtrack-Knapsack(X, optX, optP, ℓ + 1);
    x_ℓ ← 0;
    Backtrack-Knapsack(X, optX, optP, ℓ + 1);
  }
}
Techniques for the design of Algorithms

The classical techniques are as follows:


1 Divide and Conquer
2 Dynamic Programming
3 Greedy Algorithms
4 Backtracking Algorithms
5 Branch and Bound Algorithms

Backtracking Algorithms + reducing the search space ⇒ Branch and Bound Algorithms
0/1-Knapsack: Branch and Bound Algorithms

B&B-Knapsack1(X, optX, optP, ℓ, curW){
  if ℓ = n + 1 then{
    if ∑_{i=1}^{n} x_i·p_i ≥ optP then{
      optP ← ∑_{i=1}^{n} x_i·p_i;
      optX ← [x1, x2, ⋯, xn];
    }
  }
  else{
    if (curW + w_ℓ ≤ b) then C_ℓ ← {1, 0};   // x_ℓ = 1 is allowed only if it still fits
    else C_ℓ ← {0};
    for each x ∈ C_ℓ do{
      x_ℓ ← x;
      B&B-Knapsack1(X, optX, optP, ℓ + 1, curW + x_ℓ·w_ℓ);
    }
  }
}
0/1-Knapsack: Branch and Bound Algorithms (more pruning)
B&B-Knapsack2(X, optX, optP, ℓ, curW){
  if ℓ = n + 1 then{
    if ∑_{i=1}^{n} x_i·p_i ≥ optP then{
      optP ← ∑_{i=1}^{n} x_i·p_i;
      optX ← [x1, x2, ⋯, xn];
    }
  }
  else{
    if (curW + w_ℓ ≤ b) then C_ℓ ← {1, 0};
    else C_ℓ ← {0};
    // Upper bound on the best profit reachable from this partial solution:
    B ← ∑_{i=1}^{ℓ−1} x_i·p_i + GFK(p_ℓ, p_{ℓ+1}, ⋯, p_n, w_ℓ, w_{ℓ+1}, ⋯, w_n, b − curW);
    if (B ≤ optP) then return;               // prune this branch
    for each x ∈ C_ℓ do{
      x_ℓ ← x;
      B&B-Knapsack2(X, optX, optP, ℓ + 1, curW + x_ℓ·w_ℓ);
    }
  }
}
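GFK is not defined on these slides; presumably it is the greedy fractional-Knapsack procedure from earlier, applied to the remaining objects o_ℓ, ⋯, o_n and the remaining capacity b − curW, so that B is an upper bound on every profit reachable below the current node. A minimal Python sketch under that assumption (the name gfk_bound and its interface are illustrative):

def gfk_bound(p, w, capacity):
    # Fractional-knapsack profit of the remaining objects; assumes they are
    # already sorted by non-increasing profit/weight ratio.
    profit = 0.0
    for pi, wi in zip(p, w):
        if wi <= capacity:
            profit += pi                     # take the whole object
            capacity -= wi
        else:
            profit += pi * capacity / wi     # take only a fraction and stop
            break
    return profit

# e.g. gfk_bound([23, 24, 15, 13, 16], [11, 12, 8, 7, 9], 26) = 52.625,
# which is the bound B at the root of the example tree below.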
Example for B&B-Knapsack2 algorithm
Example
Suppose that P = [23, 24, 15, 13, 16], W = [11, 12, 8, 7, 9], and b = 26.
The algorithm B&B-Knapsack2 works as follows:

[Figure: the explored search tree, shown as (partial solution X, bound B, current weight curW):

X = []              B = 52.625   curW = 0
X = [1]             B = 52.625   curW = 11      X = [0]            B = 50.14   curW = 0
X = [1,1]           B = 52.625   curW = 23      X = [1,0]          B = 51      curW = 11
X = [1,1,0]         B = 52.57    curW = 23      X = [1,0,1]        B = 51      curW = 19
X = [1,1,0,0]       B = 52.33    curW = 23      X = [1,0,1,1]      B = 51      curW = 26
X = [1,1,0,0,0]     P = 47 ⇒ optP = 47          X = [1,0,1,1,0]    P = 51 > optP ⇒ optP = 51 ]
Comparision of previous algorithms

The following table reports the worst-case size of the search space over random instances, each executed 5 times.

n Backtrack-Knapsack B&B-Knapsack1 B&B-Knapsack2


8 511 333 78
12 8191 4988 195
16 131071 78716 601
20 2097151 1257745 480
24 33554431 19814875 755
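Note that the Backtrack-Knapsack column is exactly 2^(n+1) − 1 (for example, 511 = 2^9 − 1 for n = 8 and 33554431 = 2^25 − 1 for n = 24): plain backtracking visits the entire tree of partial solutions, while the bound used in B&B-Knapsack2 prunes it down to a few hundred nodes.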
