
Design and Analysis of Algorithms - Session 0
Matthew Huang
Jan 7, 2025
Self introduction

• B.Sc. (China), M.Sc. (Canada), Ph.D. (U.S.A.) in computer science


• 20 years of industrial experience across Wireless, Utilities, ERP/CRM, Cloud
Platform, Security and Privacy, Artificial Intelligence, and Distributed
Systems, in a wide range of roles
Textbook: Foundations of Algorithms, 5th Edition (ISBN 9781284049190)


Course Schedule
(Resources & References for each lecture week: Lecture Notes, Online Resources)

Week 1: Analysis of Algorithms
Week 2: Divide and Conquer
Week 3: Dynamic Programming
Week 4: Greedy Algorithm
Week 5: Backtracking
Week 6: Midterm
Week 7: Reading Week
Week 8: Branch and Bound
Week 9: Introduction of Computational Complexity: The Sorting Problem
Week 10: Computational Complexity: The Searching Problem
Week 11: Introduction of the NP Theory / Project Presentation
Week 12: Genetic Algorithms and Genetic Programming
Week 13: Number-Theoretic Algorithms
Week 14: Introduction of Parallel Algorithms
Week 15: Study Break
Student Evaluation
Assignments (20%)
There will be a few assignments throughout the semester.

Midterm (30%)
This will be an in-class exam.

Project (20%)
This will be an individual or group project (or projects).

Final (30%)
This will be an in-class exam.


Overview

• Algorithms
• The Importance of Developing Efficient Algorithms
• Analysis of Algorithms
• Order
• Outline of This Book
Problem, Solution, Algorithm
• A problem is a question to which we seek an answer.
• Sort a list S of n numbers in nondecreasing order.
• Answer: the numbers in sorted sequence.
• e.g. S = [10; 7; 11; 5; 13; 8] and n = 6 → [5, 7, 8, 10, 11, 13].
• Determine whether the number x is in the list S of n numbers.
• Answer: yes if x is in S and no if it is not.
• e.g. S = [10; 7; 11; 5; 13; 8]; n = 6; and x = 5 → "yes, x is in S."

• To produce a computer program that can solve all instances of a problem, we
must specify a general step-by-step procedure for producing the solution to
each instance. This step-by-step procedure is called an algorithm. We say that
the algorithm solves the problem.
• Please read pages 2–9 to understand how the textbook defines pseudocode.
The Importance of Developing Efficient Algorithms
• Sequential Search Versus Binary Search
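To see the gap the slide points to, here is a minimal Python sketch (not the
textbook's pseudocode; all names are illustrative) that counts the basic
operation, the comparison of x with an item of S, for both searches on a
sorted list:

def sequential_search(S, x):
    """Scan left to right: up to len(S) comparisons."""
    comparisons = 0
    for item in S:
        comparisons += 1
        if item == x:
            return True, comparisons
    return False, comparisons

def binary_search(S, x):
    """Halve a sorted list each step: about log2(len(S)) comparisons."""
    comparisons = 0
    low, high = 0, len(S) - 1
    while low <= high:
        mid = (low + high) // 2
        comparisons += 1
        if S[mid] == x:
            return True, comparisons
        elif S[mid] < x:
            low = mid + 1
        else:
            high = mid - 1
    return False, comparisons

S = list(range(1_000_000))
print(sequential_search(S, 999_999))   # (True, 1000000)
print(binary_search(S, 999_999))       # (True, 20) or so

On a million-item sorted list the sequential scan can need a million
comparisons while binary search needs about 20, which is why the choice of
algorithm matters far more than the speed of the machine running it.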
The Fibonacci Sequence
• The nth term of the Fibonacci sequence is defined recursively as follows:
• f(0) = 0
• f(1) = 1
• f(n) = f(n-1) + f(n-2) for n >= 2
• Computing the first few terms, we have
• f(2) = f(1) + f(0) = 1 + 0 = 1
• f(3) = f(2) + f(1) = 1 + 1 = 2
• f(4) = f(3) + f(2) = 2 + 1 = 3
• f(5) = f(4) + f(3) = 3 + 2 = 5; etc.
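Translating this definition directly into code gives the naive
divide-and-conquer algorithm whose recursion tree is analyzed next (a Python
sketch, not the textbook's pseudocode):

def fib(n):
    """Naive recursive Fibonacci, mirroring the definition above."""
    if n <= 1:                        # base cases f(0) = 0 and f(1) = 1
        return n
    return fib(n - 1) + fib(n - 2)    # f(n) = f(n-1) + f(n-2)

Each call spawns two more calls, and the same terms f(n-2), f(n-3), ... are
recomputed again and again, which is the source of the exponential blow-up
described below.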
• The number of terms in the recursion tree more than doubles every time n
increases by 2: e.g., there are 9 terms in the tree when n = 4 and 25 terms
when n = 6. If we let T(n) be the number of terms in the recursion tree for n,
then, since the number of terms more than doubles every time n increases by 2,
we would have the following for n even:

T(n) > 2 ⨉ T(n-2) > 2² ⨉ T(n-4) > ... > 2^(n/2) ⨉ T(0)

Because T(0) = 1, this would mean T(n) > 2^(n/2). We use induction to show
that this is true for n >= 2 even if n is not even.
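The term counts quoted above can be checked with a small recurrence: the tree
for n consists of one node for the call itself plus the trees for n-1 and n-2.
A quick Python check (illustrative, not from the slides):

def tree_terms(n):
    """Number of terms in the recursion tree of the naive fib(n)."""
    if n <= 1:
        return 1
    return 1 + tree_terms(n - 1) + tree_terms(n - 2)

print(tree_terms(4))   # 9
print(tree_terms(6))   # 25
print(tree_terms(8))   # 67, again more than double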
Theorem 1.1: If n >= 2, then T(n) > 2^(n/2) for the recursive Fibonacci
algorithm above.
An iterative approach
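A minimal sketch of the iterative (bottom-up) approach, in Python with
illustrative names:

def fib_iterative(n):
    """Bottom-up Fibonacci: each term is computed exactly once, so the
    number of additions is n - 1 for n >= 2, linear rather than
    exponential in n."""
    if n <= 1:
        return n
    prev, curr = 0, 1                    # f(0) and f(1)
    for _ in range(2, n + 1):
        prev, curr = curr, prev + curr   # slide the window forward
    return curr

fib_iterative(50) performs 49 additions; the naive recursive version would
evaluate more than 2^25 (over 33 million) terms for the same input.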
Analysis of Algorithms
• When evaluating an algorithm's time efficiency, we don't focus on the specific
number of CPU cycles or instructions executed, as these vary based on the
computer, programming language, and programmer. Instead, we use a measure that
is independent of these variables.
• Generally, the running time of an algorithm increases with the input size and is
roughly proportional to the number of times a basic operation (like a comparison) is
performed. Therefore, we analyze an algorithm's efficiency by counting the number
of basic operations it performs relative to the input size.
• After figuring out the input size, we identify an instruction (or group of instructions)
that represents the bulk of the algorithm's work. This is called the basic operation.
• For example, in the problem “Determine whether the number x is in the list S of
n numbers”, comparing a value x with an item in S is a suitable basic operation.
By counting how many times this comparison happens for different input sizes,
we can understand how efficient the algorithms are compared to each other.
• In general, a time complexity analysis of an algorithm is the determination of
how many times the basic operation is done for each value of the input size.
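For instance, instrumenting that comparison (a hypothetical Python sketch)
shows that the count depends not only on the input size but also on the
particular instance:

def count_comparisons(S, x):
    """How many times the basic operation, comparing x with an item
    of S, is performed by a left-to-right search."""
    count = 0
    for item in S:
        count += 1
        if item == x:
            break
    return count

S = [10, 7, 11, 5, 13, 8]
print(count_comparisons(S, 10))   # 1: x is the first item
print(count_comparisons(S, 8))    # 6: x is the last item
print(count_comparisons(S, 4))    # 6: x is not in S at all

For a fixed n the count ranges from 1 to n, which is why the following slides
distinguish every-case, worst-case, average-case, and best-case analyses.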
Complexity Analysis
• Every-Case Time Complexity: T(n) is defined as the number of times
the algorithm does the basic operation for an instance of size n. T(n) is
called the every-case time complexity of the algorithm, and the
determination of T(n) is called an every-case time complexity analysis.
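A standard example of an algorithm that does have an every-case complexity is
adding all the members of an array: the basic operation runs the same number
of times on every instance of size n. A Python sketch (assuming the addition
inside the loop is the basic operation):

def sum_array(S):
    """The addition executes exactly len(S) times for every
    instance of size n, so T(n) = n."""
    total = 0
    for item in S:
        total = total + item   # basic operation
    return total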
Complexity Analysis - Cont.
In many scenarios, the basic operation is not done the same number of times for
all instances of size n, and such an algorithm does not have an every-case time
complexity. However, this does not mean that we cannot analyze such algorithms,
because there are other analysis techniques that can be tried.

Consider the maximum number of times the basic operation is done. For a given
algorithm, W(n) is defined as the maximum number of times the algorithm will ever do
its basic operation for an input size of n. So W(n) is called the worst-case time
complexity of the algorithm, and the determination of W(n) is called a worst-case time
complexity analysis.
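For sequential search, for example, the worst case occurs when x is the last
item in S or is not present at all, so W(n) = n under the comparison-count
convention used above.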
Worst-Case Time Complexity

Although the worst-case analysis informs us of the absolute maximum amount of
time consumed, in some cases we may be more interested in knowing how the
algorithm performs on the average.
Complexity Analysis - Cont.
Average-Case Time Complexity
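For sequential search, for example, if x is known to be in S and each of the n
positions is equally likely, locating x in position k costs k comparisons with
probability 1/n, so

A(n) = Σ k ⨉ (1/n) for k = 1, ..., n = (1/n) ⨉ n(n + 1)/2 = (n + 1)/2

that is, on average about half the list is examined.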
Complexity Analysis - Cont.
Best-Case Time Complexity
A final type of time complexity analysis is the determination of the smallest
number of times the basic operation is done. This is called the best-case
time complexity of the algorithm.
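For sequential search the best case is an instance in which x is the first
item of S, so B(n) = 1 regardless of n.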
Applying the Theory
• When analyzing algorithms, consider three types of instructions:
• Basic operations: Main tasks of the algorithm.
• Overhead instructions: Setup tasks that don’t depend on input size.
• Control instructions: Tasks like loop counters that depend on input size.
• These are properties of the algorithm, not the problem.
• Imagine two algorithms for the same problem:
• Algorithm 1: takes n time.
• Algorithm 2: takes n² time.
• Algorithm 1 seems faster. But if Algorithm 1’s basic operation takes 1,000
times longer than Algorithm 2’s, we need to compare their actual times:
• Algorithm 1: n ⨉ 1,000t
• Algorithm 2: n² ⨉ t
• To find when Algorithm 1 is faster, solve n² ⨉ t > n ⨉ 1,000t. Simplifying,
we get n > 1,000. So Algorithm 1 is faster only when n is greater than 1,000.
• If the input size is never larger than 1,000, use Algorithm 2. Sometimes it’s
not easy to determine which algorithm is faster, and we may need to
approximate. If overhead instructions aren’t negligible, they must also be
considered.
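A quick way to see the crossover (hypothetical Python, with t normalized to 1):

# Algorithm 1 costs n * 1000 (slow basic operation);
# Algorithm 2 costs n * n (fast basic operation).
for n in [100, 1_000, 1_001, 10_000]:
    alg1 = n * 1000
    alg2 = n * n
    winner = "Algorithm 1" if alg1 < alg2 else "Algorithm 2"
    print(f"n={n:>6}: {alg1:>10} vs {alg2:>12} -> {winner}")
# Algorithm 1 wins exactly when n > 1,000, matching the algebra above.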
Order
• Algorithms with time complexities such as n and 100n are called linear-time
algorithms because their time complexities are linear in the input size n,
whereas algorithms with time complexities such as n² and 0.01n² are called
quadratic-time algorithms because their time complexities are quadratic in the
input size n.
• Any linear-time algorithm is eventually more efficient than any
quadratic-time algorithm: for example, 0.01n² exceeds 100n as soon as
n > 10,000. In the theoretical analysis of an algorithm, we are interested in
this eventual behavior.
Definition
• Big O: For a given complexity function f(n), O(f(n)) is the set of complexity
functions g(n) for which there exists some positive real constant c and some
nonnegative integer N such that, for all n >= N,
• g(n) <= c ⨉ f(n)
• If g(n) ∊ O(f(n)), we say that g(n) is big O of f(n).
• Omega: For a given complexity function f(n), Ω(f(n)) is the set of complexity
functions g(n) for which there exists some positive real constant c and some
nonnegative integer N such that, for all n >= N,
• g(n) >= c ⨉ f(n)
• If g(n) ∊ Ω(f(n)), we say that g(n) is omega of f(n).
• Theta: For a given complexity function f(n), 𝚯(f(n)) = O(f(n)) ∩ Ω(f(n))
• i.e., there exist some positive real constants c and d and some nonnegative
integer N such that, for all n >= N,
• c ⨉ f(n) <= g(n) <= d ⨉ f(n)
• If g(n) ∊ 𝚯(f(n)), we say that g(n) is order of f(n).
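As a worked example (not from the slides): g(n) = n² + 10n is in 𝚯(n²). For
all n >= 1 we have n² <= n² + 10n, and for all n >= 10 we have
n² + 10n <= 2n², so the constants c = 1, d = 2, and N = 10 witness the
definition. A quick Python check of those witnesses:

for n in range(10, 1000):
    g = n**2 + 10*n
    assert 1 * n**2 <= g <= 2 * n**2   # c = 1, d = 2, N = 10
print("c ⨉ f(n) <= g(n) <= d ⨉ f(n) holds for 10 <= n < 1000")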
