
Analysis and Design of Algorithms (3150703)

Experiment No: 6

Implement a program for the randomized version of quick sort and compare its performance with the
normal version of quick sort using step counts on various inputs (1000 to 5000) of random nature,
ascending-order, and descending-order sorted data. Also, draw a comparative chart of input size
versus steps executed/time taken for each case (random, ascending, and descending).

Date:

Competency and Practical Skills: Algorithmic thinking, Programming Skills, Performance analysis

Relevant CO: CO1, CO2

Objectives: (a) Improve the performance of quick sort in the worst case.

(b) Compare the performance of both versions of quick sort on various inputs.

Equipment/Instruments: Computer System, Any C language editor

Theory:

To compare the performance of randomized quicksort and basic quicksort, we implement
both algorithms and analyze their behavior on datasets of varying sizes (1000 to 5000) under
three scenarios: random data, ascending order, and descending order. Basic quicksort always
selects a fixed pivot (here, the last element), while randomized quicksort selects a random
pivot, reducing the risk of worst-case performance. By counting the steps executed and
measuring the time taken, we can clearly observe the differences. On random data, both
algorithms typically perform similarly with \(O(N \log N)\) complexity, though randomized
quicksort often makes slightly fewer steps due to better pivot selection. However, on
ascending or descending ordered data, basic quicksort degrades significantly to
\(O(N^2)\), while randomized quicksort continues to perform efficiently, retaining its
\(O(N \log N)\) behavior in expectation.

In terms of performance comparison, we graph input size versus both the number of steps
executed and the time taken. For random data, the two algorithms remain broadly comparable
across the tested sizes, though the basic version may lag slightly at larger inputs in some
runs. For already sorted or reverse-sorted data, the basic version's performance worsens
considerably as input size increases, making more steps and taking more time due to its poor
pivot choices. In contrast, randomized quicksort maintains consistent performance across all
cases, demonstrating that its random pivot selection handles worst-case inputs more
gracefully. This comparison highlights that randomized quicksort is a more robust option,
particularly when input data is large and potentially ordered.

Implement a function for the randomized version of quick sort as per the above instructions, and
use the basic version of quick sort (which selects a fixed element, here the last, as the pivot).
Compare the step counts of both functions on various inputs ranging from 1000 to 5000 for each
case (random, ascending, and descending).


Code:
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

long long steps = 0;

// Swap function
void swap(int* a, int* b) {
    int t = *a;
    *a = *b;
    *b = t;
}

// Partition function for normal quicksort (last element as pivot)
int partition(int arr[], int low, int high) {
    int pivot = arr[high];
    int i = low - 1;

    for (int j = low; j <= high - 1; j++) {
        steps++; // Counting comparison step
        if (arr[j] < pivot) {
            i++;
            swap(&arr[i], &arr[j]);
            steps += 3; // Counting swap steps
        }
    }
    swap(&arr[i + 1], &arr[high]);
    steps += 3; // Counting swap steps
    return i + 1;
}

// Normal quicksort
void quickSort(int arr[], int low, int high) {
    if (low < high) {
        int pi = partition(arr, low, high);
        quickSort(arr, low, pi - 1);
        quickSort(arr, pi + 1, high);
    }
}

// Partition function for randomized quicksort: move a random element
// to the end, then reuse the normal partition
int randomizedPartition(int arr[], int low, int high) {
    int randomIndex = low + rand() % (high - low + 1);
    swap(&arr[randomIndex], &arr[high]);
    steps += 3; // Counting swap steps
    return partition(arr, low, high);
}

// Randomized quicksort
void randomizedQuickSort(int arr[], int low, int high) {
    if (low < high) {
        int pi = randomizedPartition(arr, low, high);
        randomizedQuickSort(arr, low, pi - 1);
        randomizedQuickSort(arr, pi + 1, high);
    }
}

// Function to generate random array
void generateRandomArray(int arr[], int n) {
    for (int i = 0; i < n; i++) {
        arr[i] = rand() % (n * 10);
    }
}

// Function to generate ascending order array (0 .. n-1)
void generateAscendingArray(int arr[], int n) {
    for (int i = 0; i < n; i++) {
        arr[i] = i;
    }
}

// Function to generate descending order array (n .. 1)
void generateDescendingArray(int arr[], int n) {
    for (int i = 0; i < n; i++) {
        arr[i] = n - i;
    }
}

// Function to copy array
void copyArray(int source[], int destination[], int n) {
    for (int i = 0; i < n; i++) {
        destination[i] = source[i];
    }
}

int main() {
    srand(time(0)); // Seed for randomization

    int n[] = {1000, 2000, 3000, 4000, 5000}; // Different input sizes
    int size = sizeof(n) / sizeof(n[0]);

    for (int i = 0; i < size; i++) {
        int len = n[i];
        int arr[len], copy[len];

        // Random data
        generateRandomArray(arr, len);
        copyArray(arr, copy, len);
        steps = 0;
        clock_t start = clock();
        quickSort(copy, 0, len - 1);
        clock_t end = clock();
        double normal_time_random = ((double)(end - start)) / CLOCKS_PER_SEC;
        long long normal_steps_random = steps;

        copyArray(arr, copy, len);
        steps = 0;
        start = clock();
        randomizedQuickSort(copy, 0, len - 1);
        end = clock();
        double random_time_random = ((double)(end - start)) / CLOCKS_PER_SEC;
        long long randomized_steps_random = steps;

        // Ascending order data
        generateAscendingArray(arr, len);
        copyArray(arr, copy, len);
        steps = 0;
        start = clock();
        quickSort(copy, 0, len - 1);
        end = clock();
        double normal_time_asc = ((double)(end - start)) / CLOCKS_PER_SEC;
        long long normal_steps_asc = steps;

        copyArray(arr, copy, len);
        steps = 0;
        start = clock();
        randomizedQuickSort(copy, 0, len - 1);
        end = clock();
        double random_time_asc = ((double)(end - start)) / CLOCKS_PER_SEC;
        long long randomized_steps_asc = steps;

        // Descending order data
        generateDescendingArray(arr, len);
        copyArray(arr, copy, len);
        steps = 0;
        start = clock();
        quickSort(copy, 0, len - 1);
        end = clock();
        double normal_time_desc = ((double)(end - start)) / CLOCKS_PER_SEC;
        long long normal_steps_desc = steps;

        copyArray(arr, copy, len);
        steps = 0;
        start = clock();
        randomizedQuickSort(copy, 0, len - 1);
        end = clock();
        double random_time_desc = ((double)(end - start)) / CLOCKS_PER_SEC;
        long long randomized_steps_desc = steps;

        printf("Input size: %d\n", len);
        printf("Normal QuickSort (Random Data): Steps: %lld, Time: %f seconds\n",
               normal_steps_random, normal_time_random);
        printf("Randomized QuickSort (Random Data): Steps: %lld, Time: %f seconds\n",
               randomized_steps_random, random_time_random);
        printf("Normal QuickSort (Ascending Data): Steps: %lld, Time: %f seconds\n",
               normal_steps_asc, normal_time_asc);
        printf("Randomized QuickSort (Ascending Data): Steps: %lld, Time: %f seconds\n",
               randomized_steps_asc, random_time_asc);
        printf("Normal QuickSort (Descending Data): Steps: %lld, Time: %f seconds\n",
               normal_steps_desc, normal_time_desc);
        printf("Randomized QuickSort (Descending Data): Steps: %lld, Time: %f seconds\n",
               randomized_steps_desc, random_time_desc);
        printf("\n");
    }

    return 0;
}

Output:

Observations:

Randomized QuickSort generally outperforms the normal version, especially in cases of sorted
(ascending or descending) data where normal QuickSort struggles due to poor pivot selection, leading
to more comparisons and swaps. Randomized QuickSort avoids this by selecting a random pivot,
distributing partition sizes more evenly and improving performance. On random input, both
algorithms perform similarly, but the randomized version tends to reduce the likelihood of worst-case
scenarios, offering more consistent efficiency. Overall, Randomized QuickSort is more robust and
efficient across various input types and sizes.

Result: Complete the table below based on your implementation of the two quick sort versions and
the steps executed by each function.

Prepare similar tables for descending order and ascending order sorted data.

Chart:

Conclusion:

Randomized QuickSort is more efficient and reliable than Normal QuickSort, especially for sorted
data, as it reduces the chances of worst-case performance through random pivot selection.

Quiz:

1. What is the time complexity of Randomized Quick Sort in worst case?


Answer:

2. What is the time complexity of basic version of Quick Sort on sorted data? Give reason of
your answer.
Answer:

3. Can we always ensure O(n*lg n) time complexity for Randomized Quick Sort?
Answer:

4. Which algorithm executes faster on ascending order sorted data?


Answer:

Suggested Reference:
1. "Introduction to Algorithms" by Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest,
and Clifford Stein
2. "Fundamentals of Algorithms" by E. Horowitz et al.

References used by the students:

Rubric wise marks obtained:

Rubrics                                  Good    Avg.    Poor    Marks
Understanding of problem (3)             (3)     (2-1)   (1-0)
Program Implementation (5)               (5-4)   (3-2)   (1-0)
Documentation & Timely Submission (2)    (2)     (1)     (0)
Total (10)
