
Cache Memory and Cache Mapping
Cache Memory
Cache memory is a special, very high-speed memory. It is
used to speed up access and synchronize with the high-speed CPU.
Cache memory is costlier than main memory or disk
memory but more economical than CPU registers. Cache
memory is an extremely fast memory type that acts as a
buffer between RAM and the CPU. It holds frequently
requested data and instructions so that they are immediately
available to the CPU when needed.

Cache Performance
When the processor needs to read or write a location in main
memory, it first checks for a corresponding entry in the cache.
■ If the processor finds that the memory location is in the cache,
a cache hit has occurred and the data is read from the cache.
■ If the processor does not find the memory location in the
cache, a cache miss has occurred. On a miss, the cache
allocates a new entry and copies in data from main memory;
the request is then fulfilled from the contents of the cache.

The performance of cache memory is frequently
measured in terms of a quantity called the hit ratio:

Hit ratio = hits / (hits + misses) = no. of hits / total no. of accesses

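As a concrete illustration of the hit-ratio formula, the following minimal Python sketch replays an access trace against a tiny cache. The FIFO eviction policy, cache size, and block numbers are hypothetical choices for this example, not part of the original slides.

```python
# Hit-ratio calculation for a cache access trace (illustrative sketch).
# The cache is modeled as a list of resident block numbers with a
# simple FIFO eviction policy (a hypothetical choice for this example).

def hit_ratio(trace, cache_size):
    cache, hits = [], 0
    for block in trace:
        if block in cache:
            hits += 1                 # cache hit
        else:                         # cache miss: allocate an entry
            if len(cache) == cache_size:
                cache.pop(0)          # evict the oldest block (FIFO)
            cache.append(block)
    return hits / len(trace)          # hits / (hits + misses)

print(hit_ratio([1, 2, 1, 3, 1, 2], cache_size=2))  # 1 hit out of 6 accesses
```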
Types of Cache Mapping
◇ Direct Mapping
◇ Associative Mapping
◇ Set Associative Mapping

Direct Mapping
Fields of Storage
For purposes of cache access, each main memory address can be
viewed as consisting of three fields. The least significant w bits
identify a unique word or byte within a block of main memory. The
most significant t bits form the tag, which stores the unique id
of the block mapped to that particular line. The remaining l bits in
between determine the line number to which a particular block will be
mapped.

Mapping Technique
In direct mapping, each memory block is assigned to a specific line
in the cache. Direct mapping's performance is directly proportional
to the hit ratio. If the jth block of main memory has to be placed
at the ith line of cache memory, then

i = j modulo m

where
i = cache line number
j = main memory block number
m = number of lines in the cache

Disadvantage
High conflict miss: if a line is already occupied by a memory block
and a new block that maps to the same line needs to be loaded, the
old block is evicted even when other lines in the cache are empty.

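The field split and the i = j mod m rule can be sketched as follows. The field widths (w = 2, l = 3) are hypothetical example values chosen for illustration, not taken from the slides.

```python
# Direct mapping: split an address into tag / line / word fields and
# map a block number to its cache line. Field widths are hypothetical.

W_BITS = 2                 # w: word offset within a block (4 words/block)
L_BITS = 3                 # l: line-number bits (m = 8 cache lines)
M_LINES = 1 << L_BITS

def split_address(addr):
    word = addr & ((1 << W_BITS) - 1)              # least significant w bits
    line = (addr >> W_BITS) & ((1 << L_BITS) - 1)  # l bits in between
    tag = addr >> (W_BITS + L_BITS)                # most significant t bits
    return tag, line, word

def cache_line(block):
    return block % M_LINES                         # i = j mod m

print(split_address(0b1101_011_10))  # tag 0b1101, line 0b011, word 0b10
print(cache_line(13))                # block 13 maps to line 13 mod 8 = 5
```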
Associative Mapping
Fields of Storage
To overcome the disadvantage of direct mapping,
associative mapping was introduced. The idea is to avoid
the high conflict miss. This is achieved by allowing any
block of main memory to be stored in any line of the cache
memory. So instead of storing l line bits, those bits become
part of the tag.

Advantage
0% chance of a conflict miss: since a block can be placed in any
cache line, a block is never evicted while other lines are still
empty.

Disadvantage
Every lookup must compare the tag against every line in the cache
(an entire cache search), so a very high number of tag comparisons
is required: no. of tag comparisons = no. of lines in the cache.

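The entire-cache search can be sketched as a linear scan over every stored tag. This is an illustrative sketch; the tag values in the example are arbitrary.

```python
# Fully associative lookup: a block may sit in any line, so the tag
# must be compared against every line (the entire cache search).

def associative_lookup(cache_tags, tag):
    for line, stored in enumerate(cache_tags):  # one comparison per line
        if stored == tag:
            return line      # hit: tag found at this line
    return None              # miss: no line holds this tag

print(associative_lookup([7, 42, 3, 99], 3))  # hit at line 2
print(associative_lookup([7, 42, 3, 99], 5))  # None: miss
```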
Set Associative Mapping
Features
◇ Overcomes the shortcomings of direct mapping and associative
mapping, i.e., reduces conflict misses and requires fewer
comparisons.
◇ Cache lines are divided into sets.
◇ The size of one set is always a power of two, i.e., 2, 4, 8, 16, etc.
◇ The storage fields are divided in three:
■ The least significant w word bits identify a unique word
within a block.
■ The most significant t tag bits are compared to distinguish
between blocks that belong to the same set.
■ The s bits in between identify the set number in which a
particular block will be placed.

Mapping Technique
If the ith block of main memory has to be stored in the jth
set of the cache memory, then

j = i mod (no. of sets in the cache memory)

A new block can be stored anywhere within its set. Usually it
occupies the first empty line encountered while searching.

Set associative mapping decreases the chance of conflict
misses and also requires fewer comparisons of tag bits.
It combines the good features of both mapping techniques
discussed previously and hence is the best of the three
mapping techniques.

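The set-indexed placement described above can be sketched as a minimal 2-way set-associative cache. The sizes and the evict-way-0 fallback are hypothetical simplifications for illustration, not a real replacement policy.

```python
# 2-way set-associative placement sketch (hypothetical sizes/policy).
# A block maps to set j = i mod num_sets and may occupy either way in
# that set; only that set's tags are compared on a lookup.

NUM_SETS, WAYS = 4, 2
cache = [[None] * WAYS for _ in range(NUM_SETS)]  # stored tags per set

def access(block):
    s = block % NUM_SETS                 # j = i mod (no. of sets)
    ways = cache[s]
    if block in ways:
        return "hit"                     # only WAYS comparisons needed
    # place in the first empty way; else evict way 0 (simplification)
    idx = ways.index(None) if None in ways else 0
    ways[idx] = block
    return "miss"

# Blocks 0, 4, 8 all map to set 0; two of them can now coexist,
# where direct mapping would evict on every alternating access.
print([access(b) for b in (0, 4, 0, 8, 4)])
```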
Thank You
Made By :
KASHAVI SHARMA
35614803118
4-I6
IT

