DAA Miniproject
Project Report
on
Cache Memory Simulation
Abstract
The Cache Memory Simulation project focuses on the implementation and evaluation of different
cache replacement policies, specifically FIFO (First-In-First-Out), LRU (Least Recently Used),
and Random. Cache memory, a small but high-speed memory, plays a crucial role in reducing
latency by storing frequently accessed data closer to the CPU. This simulation evaluates how these
replacement policies influence cache performance through metrics such as hit rate and miss rate.
By analyzing the performance with varying cache sizes and associativity, this project provides
insights into the most efficient policies under different scenarios. The project’s findings are
essential for optimizing cache performance in computer architecture, aiding system designers in
making data storage and retrieval faster and more efficient.
Functions used
Following are the functions used in this project:
access_cache():
It calculates the block address, checks for a hit, and executes the selected replacement policy if
required.
Key Logic:
FIFO Replacement:
o Replaces the oldest entry in the cache. This is implemented by keeping track of the
insertion order and replacing the entry that was added first.
LRU Replacement:
o Keeps track of the least recently accessed block and replaces it. Each cache block
carries a timestamp that is updated on every access, which identifies the least
recently used block.
Random Replacement:
o Chooses a random cache block for replacement, without considering access history.
Simulation loop (in main()):
Runs the simulation for a set number of random memory accesses, invoking access_cache()
for each access and recording the result. It tracks the overall hit and miss counts and
calculates the hit rate from the total number of accesses.
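The difference between FIFO and LRU victim selection comes down to which timestamp is compared. A minimal sketch (the helper name below is illustrative, not taken from the project code):

```c
#include <assert.h>  /* for the example checks described below */

/* Return the index holding the smallest timestamp: pass insertion times
   to obtain FIFO's victim, or last-access times to obtain LRU's victim. */
int victim_index(const int times[], int n) {
    int idx = 0;
    for (int i = 1; i < n; i++)
        if (times[i] < times[idx])
            idx = i;
    return idx;
}
```

For a 3-line cache filled at times 0, 1, 2 where line 0 is later re-accessed at time 3, FIFO (insertion times {0, 1, 2}) evicts line 0, while LRU (last-access times {3, 1, 2}) evicts line 1.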
main() Function
The main function in this project initializes cache settings (size, block size, and policy choice) and
coordinates the cache memory simulation. It generates random memory accesses, calls the cache
access function for each access, and calculates performance metrics like hit rate and miss rate. At the
end of the simulation, it displays a comparative performance analysis of FIFO, LRU, and Random
policies, providing valuable insights into their effectiveness in reducing cache misses under different
conditions.
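The flow described above can be sketched as a small driver. The names here are hypothetical, and a stub stands in for the real cache-access routine, which is assumed to return 1 on a hit and 0 on a miss:

```c
#include <stdlib.h>  /* rand, srand */
#include <assert.h>  /* for the sanity checks below */

/* Stub standing in for the simulator's cache-access routine. */
int stub_access(int address) { return address % 4 != 0; }

/* Run n random accesses through one policy and return the hit count. */
int run_accesses(int n, unsigned seed) {
    int hits = 0;
    srand(seed);                      /* reproducible access sequence */
    for (int i = 0; i < n; i++) {
        int address = rand() % 1024;  /* random memory address */
        if (stub_access(address))
            hits++;
    }
    return hits;
}
```

The hit rate is then 100.0 * hits / n, and replaying the same seed under each policy gives a fair comparison.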
Requirements (Libraries)
stdio.h:
o Used for basic input and output functions such as printf and scanf.
o Essential for displaying simulation results and obtaining user input for cache configuration.
stdlib.h:
o Provides utility functions, including memory allocation (malloc and free) and random
number generation (rand), which is essential for the Random Replacement policy.
o Allows for dynamic memory allocation for cache blocks and setting up simulation data
structures.
time.h:
o Used to seed the random number generator (srand(time(NULL))) so that each simulation
run sees a fresh random access sequence.
o The last-accessed timestamps that drive the LRU policy are maintained as a logical counter
inside the simulation itself, so time.h is needed for seeding rather than for the LRU
bookkeeping.
math.h (optional):
o Could be used for rounding operations or logarithmic calculations if needed to determine
cache size and block relationships, though not always essential in basic simulations.
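A few lines showing how these headers combine in practice (example values; an 8-line cache is assumed, and the helper name is illustrative):

```c
#include <stdlib.h>  /* rand, srand */
#include <time.h>    /* time(NULL), used to seed the generator in main */
#include <assert.h>  /* for the sanity checks below */

/* Uniformly random victim line, as used by the Random policy. */
int random_victim(int cache_size) {
    return rand() % cache_size;
}
```

After seeding once with srand((unsigned)time(NULL)), random_victim(8) always returns an index between 0 and 7.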
CODE
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define CACHE_SIZE 16     // number of cache lines (example value)
#define BLOCK_SIZE 4      // bytes per block (example value)
#define NUM_ACCESSES 1000 // random memory accesses per run
#define FIFO 0
#define LRU 1
#define RANDOM 2

typedef struct {
    int valid;     // 1 if this line holds a block
    int tag;       // block address stored in this line
    int last_used; // time of most recent access (for LRU)
    int inserted;  // time of insertion (for FIFO)
} CacheLine;

// Initialize cache
void init_cache(CacheLine cache[]) {
    for (int i = 0; i < CACHE_SIZE; i++) {
        cache[i].valid = 0;
        cache[i].last_used = -1; // Set initial last_used time to -1 for LRU
        cache[i].inserted = -1;
    }
}

// Access the fully associative cache with a given memory address based on the
// selected policy; returns 1 on a hit, 0 on a miss.
int access_cache(CacheLine cache[], int address, int policy, int *time) {
    int block_address = address / BLOCK_SIZE; // Get block address
    int tag = block_address; // Fully associative: the block address is the tag
    for (int i = 0; i < CACHE_SIZE; i++) {
        if (cache[i].valid && cache[i].tag == tag) {
            cache[i].last_used = (*time)++; // refresh LRU timestamp
            return 1; // Cache hit
        }
    }
    // Miss: fill an empty line first if one exists
    int replace_index = -1;
    for (int i = 0; i < CACHE_SIZE; i++)
        if (!cache[i].valid) { replace_index = i; break; }
    if (replace_index == -1) {
        if (policy == FIFO) {
            // FIFO: Replace the entry that was inserted first
            replace_index = 0;
            for (int i = 1; i < CACHE_SIZE; i++)
                if (cache[i].inserted < cache[replace_index].inserted)
                    replace_index = i;
        } else if (policy == LRU) {
            // LRU: Replace the least recently used block
            replace_index = 0;
            for (int i = 1; i < CACHE_SIZE; i++)
                if (cache[i].last_used < cache[replace_index].last_used)
                    replace_index = i;
        } else {
            // RANDOM: Replace a random block
            replace_index = rand() % CACHE_SIZE;
        }
    }
    cache[replace_index].valid = 1;
    cache[replace_index].tag = tag;
    cache[replace_index].last_used = (*time)++;
    cache[replace_index].inserted = cache[replace_index].last_used;
    return 0; // Cache miss
}

int main() {
    const char *names[] = {"FIFO", "LRU", "Random"};
    int addresses[NUM_ACCESSES];
    srand((unsigned)time(NULL));
    for (int i = 0; i < NUM_ACCESSES; i++)
        addresses[i] = rand() % 1024; // random memory addresses
    // Replay the same access sequence under each policy for a fair comparison
    for (int policy = FIFO; policy <= RANDOM; policy++) {
        CacheLine cache[CACHE_SIZE]; // Cache with defined size
        init_cache(cache);           // Initialize cache
        int hits = 0, misses = 0, sim_time = 0;
        for (int i = 0; i < NUM_ACCESSES; i++) {
            if (access_cache(cache, addresses[i], policy, &sim_time))
                hits++;
            else
                misses++;
        }
        // Print results
        printf("%s:\nCache Hits: %d\nCache Misses: %d\nHit Rate: %.2f%%\n\n",
               names[policy], hits, misses, 100.0 * hits / (hits + misses));
    }
    return 0;
}
Output
Conclusion
This Cache Memory Simulation project provides a comprehensive analysis of the effectiveness of
different cache replacement policies—FIFO, LRU, and Random—in managing cache memory
efficiently. By simulating these policies and evaluating their performance metrics (cache hits,
misses, and hit rate), we demonstrated that the choice of replacement policy significantly impacts
overall system performance.
The simulation results generally indicate that the LRU (Least Recently Used) policy performs
better in retaining frequently accessed data, leading to a higher hit rate compared to FIFO and
Random. FIFO, while simpler to implement, may lead to more cache misses under certain access
patterns, as it ignores how recently blocks have been used. Random replacement, being independent
of access patterns, tends to perform unpredictably and may yield lower efficiency.
In conclusion, selecting an appropriate cache replacement policy is essential for optimizing cache
performance. This project underscores the importance of evaluating different policies in practical
scenarios, helping system architects make data-informed decisions on cache design and
management. The findings from this simulation are applicable to real-world computing systems,
where efficient cache management can significantly improve processing speed and resource
utilization.