EEB 435 Python LECTURE 3

This document discusses algorithm analysis and complexity analysis. It begins by defining an algorithm as a step-by-step procedure for solving a problem and provides examples like making a cup of tea. It then discusses analyzing algorithms to determine how efficiently they solve problems by examining time complexity and space complexity. Common time complexities described include constant, logarithmic, linear, quadratic, and exponential time. Understanding algorithm analysis is important for comparing algorithms and determining the most efficient solution.


EEB 435 COMPUTER PROGRAMMING II

Python Basics
By
Dr. Ebenezer Esenogho
• Algorithm Analysis
• Complexity analysis
• Big-O notation
• Amortised Cost
• Algorithm Analysis
• What is an Algorithm?

 An algorithm is a procedure used for solving a problem or performing a computation. Algorithms act as an exact list of instructions that conduct specified actions step by step in either hardware- or software-based routines.

 In other words, an algorithm is a step-by-step procedure or formula to solve a problem. Think of it like a recipe for magwinya (fat cakes): a set sequence of steps that, if you follow precisely, will produce a predictable outcome.

 Let's explore this through explanations and examples:


Basic Definition
 An algorithm must have:
 A clear set of well-defined inputs: These are the data you
give to the algorithm.
 A clear set of well-defined outputs: These are the results
you expect from the algorithm.
 Definiteness: Each step is precisely defined; there's no
ambiguity.
 Finiteness: After a certain number of steps, it will
terminate.
 Effectiveness: Every step is basic enough that it can be
carried out exactly and in a finite amount of time.
 Generality: It should be applicable to a broad category of
problems, not just a single specific instance.
Ways of Representing Algorithms
Before actual coding, algorithms are often represented using pseudo-code
(a high-level description) or flowcharts.

 Flow Charts: A visual representation using shapes like ovals for start/end, rectangles for processes, diamonds for decisions, and arrows to indicate flow.
 Pseudo-code: A structured, plain-language description of the steps, independent of any programming language.

Simple Examples
• Example 1: Making a Cup of Tea
• Imagine you're teaching an alien who has never seen a kitchen
before. Here's an algorithm for making a cup of tea:
• Input: Tea bag, water, cup, kettle, and optionally, sugar and milk.
• Steps:
– Fill the kettle with water.
– Turn on the kettle.
– Wait until the kettle boils.
– Place the tea bag in the cup.
– Pour the boiling water into the cup.
– Wait 3-5 minutes.
– Remove the tea bag.
– (Optional) Add sugar and/or milk to taste.
• Output: A cup of tea.
• Example 2: Finding the Maximum Number in a List
– Input: A list of numbers, e.g., [3, 1, 4, 1, 5, 9, 2, 6, 5].
– Steps:
– Assume the first number is the largest.
– For each number in the list, compare it to the current largest.
– If the current number is larger, it becomes the new largest.
– Continue until the end of the list.
– Output: The largest number (in this case, 9).
Pseudo code
Function FindMaximum(List)
    Set Max to List[0]
    For each Number in List
        If Number > Max
            Set Max to Number
    Return Max
End Function
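
A minimal Python version of this pseudo-code might look as follows (the function name find_maximum is our own choice):

def find_maximum(numbers):
    """Return the largest number in a non-empty list."""
    max_val = numbers[0]              # Assume the first number is the largest
    for number in numbers:            # Compare each number to the current largest
        if number > max_val:
            max_val = number          # A larger number becomes the new largest
    return max_val

print(find_maximum([3, 1, 4, 1, 5, 9, 2, 6, 5]))  # Output: 9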
Real-World Algorithmic Examples
 Binary Search: Given a sorted list of numbers and a target number, binary search is an algorithm that finds whether the target exists in the list. It does this by repeatedly dividing the search range in half until it either finds the target or the range has no numbers left. Its efficiency is O(log n).
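
A minimal Python sketch of binary search (assuming the list is sorted; the name binary_search is our own):

def binary_search(sorted_list, target):
    """Return the index of target in sorted_list, or -1 if it is absent."""
    low, high = 0, len(sorted_list) - 1
    while low <= high:
        mid = (low + high) // 2           # Inspect the middle element
        if sorted_list[mid] == target:
            return mid
        elif sorted_list[mid] < target:
            low = mid + 1                 # Discard the lower half
        else:
            high = mid - 1                # Discard the upper half
    return -1                             # Range is empty: target not present

print(binary_search([1, 2, 3, 5, 8, 13], 8))  # Output: 4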

 Merge Sort: A sorting algorithm that uses a divide-and-conquer approach. It splits the list in half, sorts each of those halves, and then merges them back together in sorted order. Its efficiency is O(n log n), where the letter "n" represents the input size.
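
A hedged sketch of merge sort in Python (one common way to write it, not the only one):

def merge_sort(items):
    """Sort a list with divide and conquer; O(n log n) in all cases."""
    if len(items) <= 1:                   # Base case: nothing to sort
        return items
    mid = len(items) // 2
    left = merge_sort(items[:mid])        # Sort each half...
    right = merge_sort(items[mid:])
    merged = []                           # ...then merge them in sorted order
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])               # One side may still have elements
    merged.extend(right[j:])
    return merged

print(merge_sort([3, 1, 4, 1, 5, 9, 2, 6, 5]))  # [1, 1, 2, 3, 4, 5, 5, 6, 9]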

 Dijkstra's Algorithm: Used in graph theory and computer networking, it finds the shortest path between nodes in a graph. Useful, for example, in GPS systems to find the shortest route between two locations.
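
A compact sketch of Dijkstra's algorithm using Python's heapq module (the adjacency-list representation and the names here are our assumptions):

import heapq

def dijkstra(graph, start):
    """Shortest distance from start to every node.
    graph maps each node to a list of (neighbour, edge_weight) pairs."""
    distances = {node: float("inf") for node in graph}
    distances[start] = 0
    queue = [(0, start)]                     # (distance so far, node)
    while queue:
        dist, node = heapq.heappop(queue)    # Closest unsettled node first
        if dist > distances[node]:           # Stale queue entry; skip it
            continue
        for neighbour, weight in graph[node]:
            new_dist = dist + weight
            if new_dist < distances[neighbour]:
                distances[neighbour] = new_dist
                heapq.heappush(queue, (new_dist, neighbour))
    return distances

roads = {"A": [("B", 4), ("C", 1)], "B": [("D", 1)],
         "C": [("B", 2), ("D", 5)], "D": []}
print(dijkstra(roads, "A"))  # {'A': 0, 'B': 3, 'C': 1, 'D': 4}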
Properties of Algorithms
For an algorithm to be effective, it should possess certain characteristics:
• Deterministic: Every step of the algorithm must be precisely defined
without any ambiguity.
• Feasible: The operations in the algorithm should be basic enough that
they can be executed in a finite amount of time and are practically
achievable.
• Independent: The algorithm should be stated in terms that are not dependent on any particular programming language.

 Note, algorithms are everywhere. Every time you follow a sequence of steps or use a set method to solve a problem, you're utilizing an algorithm. The study of algorithms, particularly in computer science, is about finding the most efficient way to solve large and complex problems, but the fundamental idea remains a structured approach to problem-solving.
Algorithm Analysis
So far we have discussed what an algorithm is. Now we will look at its analysis and why it is important. The analysis of an algorithm generally focuses on CPU (time) usage, memory usage, disk usage, and network usage. All are important; however, the main concern is CPU time. Be careful to differentiate between:

Performance: How much time/memory/disk/etc. is used when a program is run. This depends on the machine, compiler, interpreter, etc., as well as the code we write.

 Complexity: How the resource requirements of a program or algorithm scale, i.e. what happens as the size of the problem being solved by the code gets larger: linear, quadratic, higher-order polynomial, and so on.

 Algorithm analysis is the process of understanding the efficiency of an algorithm in terms of time and/or space. The goal is to determine how the performance of an algorithm grows as the input size grows.
• To predict the behaviour of an algorithm without implementing it on a
specific computer.
• It is much more convenient to have simple measures for the efficiency of an
algorithm than to implement the algorithm and test the efficiency every time a
certain parameter in the underlying computer system changes.
• It is impossible to predict the exact behaviour of an algorithm. There are too
many influencing factors.
• The analysis is thus only an approximation; it is not perfect.
• More importantly, by analysing different algorithms, we can compare them to
determine the best one for our purpose.

 Types of Algorithm Analysis:
Best case
Worst case
Average case

Worst case (most used): Defined by the input for which the algorithm takes the longest, or maximum, time. In the worst case we calculate the upper bound on an algorithm's running time. Example: in linear search, the worst case occurs when the search key is not present at all. When x is not present, the search() function compares it with all the elements of arr[] one by one. Therefore, the worst-case time complexity of linear search is O(n).

Best case: Defined by the input for which the algorithm takes the least, or minimum, time. In the best case we calculate the lower bound on an algorithm's running time. Example: in linear search, the best case occurs when x is present at the first location. The number of operations in the best case is constant (not dependent on n), so the best-case time complexity is Ω(1).

Average case: In the average case we take all possible inputs, calculate the computation time for each, and then divide by the total number of inputs:

Average case = (sum of running times over all inputs) / (number of inputs)
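
To make these cases concrete, here is a small sketch (the helper name linear_search is ours):

def linear_search(arr, x):
    """Return the index of x in arr, or -1 if it is not present."""
    for i, value in enumerate(arr):   # Compare elements one by one
        if value == x:
            return i
    return -1

data = list(range(1000))
print(linear_search(data, 0))    # Best case: found at the first position, Ω(1)
print(linear_search(data, -1))   # Worst case: not present, all n elements compared, O(n)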


 A) For some algorithms, all the cases (worst, best, average) are asymptotically the same; i.e., there are no separate worst and best cases.

Example 1: Merge Sort does Θ(n log n) operations in all cases.

B) Whereas most of the other sorting algorithms have distinct worst and best cases.

Example 2: In the typical implementation of Quick Sort (where the pivot is chosen as a corner element), the worst case occurs when the input array is already sorted, and the best case occurs when the pivot elements always divide the array into two halves.

Example 3: For insertion sort, the worst case occurs when the array is reverse sorted, and the best case occurs when the array is already sorted in the same order as the output.
Complexity analysis
Complexity analysis provides a high-level understanding of the efficiency of an algorithm. There are two primary types of complexity:

 Time Complexity: Represents the amount of time an algorithm takes to run as a function of the length of the input.

 Space Complexity: Represents the amount of memory an algorithm uses as a function of the length of the input.
Time Complexity:
Represents the amount of time an algorithm takes to run as a function of the length of the input.

•Time complexity is a measure of the amount of time an algorithm takes in terms of the amount of input to the algorithm. The common notations used to describe time complexity are:

•O(1) – Constant Time:
–The running time of the algorithm is constant regardless of the input size.
–Example: Accessing an array element by its index.

•O(log n) – Logarithmic Time:
–Common in algorithms that decrease the input size with each step. As the input size grows, the number of operations grows very slowly.
–Example: Binary Search.

•O(n) – Linear Time:
–The running time of the algorithm grows linearly with the input size: when the input doubles, so does the number of operations.
–Example: Simple search algorithms like linear search.

•O(n log n) – Linearithmic (Linear-Logarithmic) Time:
–When the input doubles, the number of operations slightly more than doubles.
–Common in advanced sorting algorithms. Example: merge sort, quick sort (on average).

•O(n^2), O(n^3), ... – Polynomial Time:
–The running time grows polynomially with the input size.
–Examples: Nested loops, bubble sort (O(n^2)), and matrix multiplication (O(n^3) with the naive algorithm).

•O(2^n) – Exponential Time:
–The running time grows exponentially with the input size. Such algorithms are generally considered infeasible for large inputs.
–Example: Naive solutions to problems like the Travelling Salesman Problem.

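The code slides for the two examples discussed next are not preserved in this copy; they were presumably along these lines (a sketch, using the max_val variable named below):

# First example: a single pass through the list -- O(n) time, O(1) extra space
def max_by_scan(numbers):
    max_val = numbers[0]
    for n in numbers:
        if n > max_val:
            max_val = n
    return max_val

# Second example: sort first, then take the last element -- O(n log n) time,
# and possibly O(n) extra space if the sort is not done in place
def max_by_sorting(numbers):
    return sorted(numbers)[-1]
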
For the first example, we only iterate through the list once,
making it linear time. For the second, we sort the list first (which
can take O(n log n) time with good algorithms) and then simply
select the last element. The space complexity is O(1) for the first
because we're using a constant amount of additional space (just
for the max_val variable), while for the second, if the sorting
isn't done in-place, it might require additional O(n) space.

 In practice, when choosing an algorithm, it's important to consider both time and space complexities. The best algorithm often strikes a balance between the two based on the constraints of the problem at hand.

Space Complexity
Space complexity is a measure of the amount of memory an algorithm uses in terms of the amount of input to the algorithm. Here are common space complexities:

 O(1) – Constant Space: The algorithm uses a fixed amount of memory regardless of the input size. Example: Swapping two numbers using a temporary variable.

 O(n) – Linear Space: The memory used by the algorithm grows linearly with the input size. Example: Implementing a queue with a linked list.

And so on…
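
Two small sketches of our own contrasting these space complexities (the second uses a simple list copy rather than the linked-list queue mentioned above):

# O(1) space: swapping two numbers needs only one temporary variable,
# regardless of the input values
def swap(a, b):
    temp = a
    a = b
    b = temp
    return a, b

# O(n) space: building a new list whose size grows with the input
def squares(items):
    result = []
    for item in items:
        result.append(item * item)   # One stored value per input element
    return result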

Linked-list vs. Arrays
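(The comparison slide is not reproduced in this copy. The usual points of contrast: an array gives O(1) access by index but O(n) insertion or deletion in the middle, because elements must shift; a linked list gives O(1) insertion or deletion at a known node but O(n) access by position, because nodes must be traversed from the head. Arrays are contiguous in memory, while linked lists spend extra memory on pointers.)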

Design Techniques
Several foundational techniques are employed for crafting algorithms:

Divide and Conquer: Break the problem into smaller sub-problems, solve each sub-problem, and combine the results. Examples include Merge Sort and Quick Sort.

Dynamic Programming: Break the problem into smaller sub-problems, solve each sub-problem, and store the results of these sub-problems to avoid duplicate work. Example: the Fibonacci sequence (see the sketch after this list).

Greedy Algorithms: Greedy algorithms aim to make the optimal choice at each given moment, without knowing the future. Example: the coin change problem for some specific denominations.

Backtracking: Solve the problem incrementally, backtracking upon hitting an obstacle, until a solution is found. Example: the N-Queens problem.

Brute Force: Try every possible solution until a satisfactory solution is found. It's often the most time-consuming approach. Example: finding the shortest path in a small graph by examining all possible paths.
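
As an illustration of the dynamic programming idea, here is a minimal sketch using Python's functools.lru_cache to store sub-problem results:

from functools import lru_cache

@lru_cache(maxsize=None)          # Remember the result of every sub-problem
def fib(n):
    """n-th Fibonacci number; each sub-problem is solved only once."""
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

print(fib(50))  # 12586269025 -- instant, while the naive version takes ages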

Big-O notation
Big-O notation is one of the most fundamental tools computer scientists use to analyse the cost of an algorithm.

 Big-O notation is a way to describe how long an algorithm takes to run, or how much space it uses, in relation to the size of its input. It gives us an upper limit on the growth rate of an algorithm's time or space requirements.

 That's why, before you install software or any application on your computer, it asks about processor speed and memory size.

Now, imagine you have a task, like searching for a friend in a line of people:

• If you have to look at every single person in the line to find your friend, and the
time it takes grows with the number of people, we might say this takes O(n)
time, where n is the number of people.

• If you knew your friend was in the middle of the line and went straight to them
every time, no matter how long the line is, we would say it takes O(1) time. This
is constant time because it doesn't change based on the size of the line.

 Therefore, the idea behind Big-O notation is to give a high-level understanding of how the time or space requirements of an algorithm grow as the input grows. It helps us understand and compare the efficiency of different algorithms.
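
In Python terms, the two situations might look like this (the names are hypothetical):

line = ["Ada", "Bee", "Chi", "Dee"]      # A line of people

# O(n): look at each person in turn until you find your friend
def find_friend(line, friend):
    for position, person in enumerate(line):
        if person == friend:
            return position
    return -1

# O(1): go straight to a known position, however long the line is
middle_person = line[len(line) // 2]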

Why Is Big-O Important?

 Comparing Algorithms: Big-O notation provides a high-level measure of an algorithm's efficiency, making it easier to compare different algorithms for the same task.

 Predicting Performance: Knowing an algorithm's Big-O can help predict how it will perform as the size of the input grows, ensuring it remains performant (functioning well, or as expected) for larger data sets.

 Resource Planning: In real-world scenarios, understanding time and space complexities helps in infrastructure planning and resource allocation.
Assignment:
An image is represented by a 2D array of pixels. If you use a nested
for loop to iterate through every pixel (that is, you have a for loop
going through all the columns, then another for loop inside to go
through all the rows), what is the time complexity of the algorithm
when the image is considered as the input?

Amortised Cost
What is Amortized Cost?
Amortized cost analysis is a method used to find the average time required per operation over a worst-case sequence of operations. Instead of considering each operation's worst case individually, we look at the overall performance across the entire sequence.

Simple Analogy:
Imagine you have a piggy bank. Most days, you put in just one coin, which takes a
second. But once a year, you count and wrap all the coins, which might take hours.
Even though this one event is time-consuming, on average (or "amortized" over the
year), the time you spend on the piggy bank is relatively low.
"Most days, you put in just one coin, which takes a second."

Meaning: Every day, you have a small, consistent task. Adding one coin to the
piggy bank is quick and simple, analogous to a basic operation in an algorithm or
data structure that typically runs fast.
"But once a year, you count and wrap all the coins, which might take hours."

Meaning: Occasionally, there's a much more time-consuming task. This is


analogous to an algorithm or data structure occasionally having to do a much more
time-consuming operation (e.g., resizing an array).
"Even though this one event is time-consuming, on average (or 'amortized' over the
year), the time you spend on the piggy bank is relatively low."

Meaning: Even though there's a long task once a year, if you spread out that "cost"
over the whole year, the average time you spend per day is low. Similarly, even if
an algorithm occasionally takes a long time for a particular operation, its "average"
or "amortized" cost might still be low if those expensive operations are rare.

Python Example: Dynamic Array
One of the classic examples of amortized analysis is the dynamic array
in Python, commonly known as the list.

When you append an element to a Python list, most of the time it's a fast
O(1) operation. However, occasionally, the underlying array runs out of
space, and Python needs to create a larger array, copy all the elements
from the old array to the new one, and then finally append the new
element. This resizing operation is costly compared to a regular append.

 However, the resizing doesn't happen often. Specifically, when Python resizes the list, it grows the underlying array by a multiplicative factor (often described as doubling). This means the next resizing won't happen until we've added roughly as many new elements as the list can currently hold. Because of this growth strategy, when you spread the cost of the resizing over all the appends that didn't require resizing, the "amortized" cost remains O(1) per append.
,
Now let's try something. Let's create a simple experiment to showcase this:
We'll append elements to a list repeatedly.
We'll time each append operation.
Though most appends are quick, occasionally there'll be a longer operation due to resizing.
import time

def append_to_list(n):
    """
    Append n items to a list and record the time taken for each append.
    """
    # Create an empty list
    my_list = []

    # List to store the time taken for each append
    times = []

    for _ in range(n):
        start_time = time.time()

        # Append an element, here just the integer 1
        my_list.append(1)

        end_time = time.time()
        times.append((end_time - start_time) * 1000)  # Time in milliseconds

    return times

# Let's append 10,000 items to the list and see the times.
times_taken = append_to_list(10000)

# Let's print out the 10 most time-consuming appends.
print(sorted(times_taken, reverse=True)[:10])

Sample output:
[1.0035037994384766, 0.9970664978027344, 0.9965896606445312, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]

When we run the code, you will observe that the vast majority of
append operations are extremely quick, but a few will stand out as
significantly longer. Those longer times are the moments when
Python is resizing the list behind the scenes.

However, when averaged out across all operations, the time for each
append remains consistently low, demonstrating the concept of an
amortized O(1) time complexity for the append operation on a Python
list.

Now, class, find the average and see:

Average = Sum / number of elements
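
Using the times_taken list from the experiment above, a one-line check might be:

average = sum(times_taken) / len(times_taken)
print(average)   # Typically a tiny fraction of a millisecond per append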
Thank you

Kea leboga (Thank you)
