In the previous article, we discussed the cache and its properties. We also learnt some concepts and terminology related to caches. If you haven't read that article, I recommend reading it first, since this article uses many terms introduced there. In this article, we will learn about cache replacement strategies, cache mapping techniques and cache writing techniques.
Cache Replacement Strategies
In the previous article, we looked at the process of searching the cache for an address, but one step was missing from that process: we didn't look at the scenario of a cache miss. In case of a cache miss, we look for an invalid entry on that index page (note that we are talking about an index page, not the whole index table; recall the terminology from the previous article if you need to). We fill that entry with the missing data from main memory. But when there is no invalid entry on the page, we have to replace some existing entry to make room for the new one. Replacing an existing entry is a frequent task and thus plays an important role in determining cache efficiency. There are mainly two techniques for choosing the entry to be replaced by the new one.
Random Replacement
Using this strategy, we pick any random entry on that index page and replace it with the new one. This strategy has the advantage of being easy to implement, but it may also increase cache misses if we are not lucky. A better random algorithm can ensure some improvement, but it's still random.
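As a minimal sketch, random replacement just needs to pick a victim slot on the page. Here the "index page" is assumed to be a plain Python list of entries (a simplification for illustration):

```python
import random

def choose_victim_random(entries):
    """Pick a random slot on one index page (cache set) to evict."""
    return random.randrange(len(entries))

# Example: a 4-entry index page; any of the 4 slots may be chosen.
page = ["tag_a", "tag_b", "tag_c", "tag_d"]
victim = choose_victim_random(page)
```

The appeal is obvious from the code: no bookkeeping at all, just one random number per eviction.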
Least Recently Used (LRU) Replacement
This is the most widely adopted technique and is somewhat more intuitive. In this strategy, we select the entry on the chosen page which has not been used for the longest time. In other words, the least recently used entry is replaced by the new one. This is intuitive because there is no point in keeping an entry in the index which is lying stale. This strategy ensures better cache performance, but it is also complex to implement, especially in hardware. The algorithm used in this strategy can also be adapted to many other problem domains, such as recommended-product listings.
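A minimal sketch of LRU for a single index page, using Python's `OrderedDict` to track recency (the class and method names here are my own, just for illustration):

```python
from collections import OrderedDict

class LRUPage:
    """One index page (cache set) with LRU replacement."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = OrderedDict()  # tag -> data, least recently used first

    def access(self, tag, data=None):
        if tag in self.entries:                 # cache hit
            self.entries.move_to_end(tag)       # mark as most recently used
            return self.entries[tag]
        if len(self.entries) >= self.capacity:  # miss, page full
            self.entries.popitem(last=False)    # evict least recently used
        self.entries[tag] = data                # fill from main memory
        return data

page = LRUPage(2)
page.access("a", 1)
page.access("b", 2)
page.access("a")      # "a" becomes most recently used
page.access("c", 3)   # page is full, so "b" (the LRU entry) is evicted
```

Real hardware cannot afford a full ordered structure per set, which is why approximations (like pseudo-LRU bits) are used instead; this sketch only shows the idea.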
Cache Mapping Techniques
There are three mapping techniques in total, each with its own advantages and disadvantages, but they have many things in common, such as tag bits, line bits, word bits and byte-select bits. The only thing that differs between these techniques is the number of cache entries per index page. This also determines the total number of index pages, i.e. the total number of cache lines. The correct cache line is determined by the line bits. Below is a list of formulas which you can derive easily if you understood the previous article.
Formula for total number of cache lines (index pages) :
(cache_index_size)/(number_of_entries_per_page)
Formula for total number of byte select bits :
log2(word_size)
Formula for total number of word bits :
log2(number_of_words_per_block)
Formula for total number of line bits :
log2(number_of_cache_lines)
Formula for total number of tag bits :
total_address_bits - line_bits - word_bits - byte_select_bits
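The formulas above can be checked with a small helper. The function name and the example parameters (a 32-bit address, a 16-entry index, 2 entries per page, 4 words per block, 4-byte words) are hypothetical, chosen only to exercise the formulas:

```python
from math import log2

def address_bits(total_address_bits, cache_index_size, entries_per_page,
                 words_per_block, word_size):
    """Split an address into (tag, line, word, byte-select) bit counts
    using the formulas listed above."""
    num_lines = cache_index_size // entries_per_page
    byte_select_bits = int(log2(word_size))
    word_bits = int(log2(words_per_block))
    line_bits = int(log2(num_lines))
    tag_bits = total_address_bits - line_bits - word_bits - byte_select_bits
    return tag_bits, line_bits, word_bits, byte_select_bits

# 16-entry index, 2 entries per page -> 8 cache lines -> 3 line bits;
# 4 words per block -> 2 word bits; 4-byte words -> 2 byte-select bits;
# tag bits = 32 - 3 - 2 - 2 = 25
bits = address_bits(32, 16, 2, 4, 4)  # -> (25, 3, 2, 2)
```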
Direct Mapped Cache
In this technique, there is only one index entry per index page. This implies that we don't have to perform a linear search for our tag bits: once we find the correct page, we examine its tag bits to check whether this is the address we are looking for.
Set Associative Mapped Cache
In this technique, we explicitly define the number of entries in one index page. For example, in a 16-entry cache index, if we set the number of entries per page to 2, then there will be a total of 8 index pages (cache lines = 16/2). Thus we will have 3 line bits. Once we find the correct page, there will be 2 entries on it, and we will have to check both of them to determine whether we have a cache hit or a cache miss. If we denote the number of entries per line by N, the technique is called an N-way set associative mapping technique. Our previous example is a 2-way set associative mapped cache.
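The lookup described above can be sketched as follows. Here a cache is assumed to be a list of pages, each page a list of `(valid, tag)` pairs, and `offset_bits` lumps together the word and byte-select bits; all of this is a simplification for illustration:

```python
def lookup(cache, address, line_bits, offset_bits):
    """Set-associative lookup: select the page by the line bits,
    then scan all entries on that page for a matching tag."""
    line = (address >> offset_bits) & ((1 << line_bits) - 1)
    tag = address >> (offset_bits + line_bits)
    for valid, entry_tag in cache[line]:
        if valid and entry_tag == tag:
            return True   # cache hit
    return False          # cache miss

# 2-way cache with 8 lines (3 line bits) and 4 offset bits:
cache = [[(False, 0), (False, 0)] for _ in range(8)]
addr = 0x1234
line = (addr >> 4) & 0x7        # line bits of the address
tag = addr >> 7                 # remaining upper bits form the tag
cache[line][0] = (True, tag)    # pre-load one entry so the lookup hits
hit = lookup(cache, addr, line_bits=3, offset_bits=4)
```

A direct mapped cache is just the N = 1 case of this loop, and a fully associative cache is the case where `line_bits` is 0 and the single page holds every entry.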
Fully Associative Mapped Cache
In this technique, there is only one index page, which contains all the index entries. Thus we don't require line bits for this technique. This technique makes the most use of unoccupied entries, but it also introduces inefficiency due to the search needed to match each entry against the given address.
The Tedious Task of Cache Writing
Cache writing is the process of updating cache data after computations. Cache writing is somewhat more complex than reading because we need to update main memory as well to avoid data inconsistency.
Write Through
This is the technique in which we update main memory along with cache memory on each update. This technique keeps main memory always in sync with the cache. So basically, whenever we want to update the data at some address, we search for it in the cache, update it in the cache if there is a cache hit, and update it in main memory too. There are two variants of this technique based on what happens when there is a cache miss.
- No Allocate (write around): In case of a cache miss, we simply update main memory without writing anything to the cache.
- Allocate: In case of a cache miss, we store the index entry and data in the cache and update main memory as well.
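Both variants can be sketched in a few lines. The function name and the dict-based cache page and main memory are my own simplifications, not a real memory interface:

```python
def write_through(cache_page, main_memory, tag, data, allocate=True):
    """Write-through sketch: main memory is updated on every write;
    on a miss, either allocate the entry in cache or write around it."""
    main_memory[tag] = data       # main memory always stays in sync
    if tag in cache_page:         # cache hit: update the cached copy too
        cache_page[tag] = data
    elif allocate:                # write-allocate: bring the entry into cache
        cache_page[tag] = data
    # no-allocate (write-around): cache is left untouched on a miss

cache_page, memory = {}, {}
write_through(cache_page, memory, tag=7, data="x", allocate=False)
# miss with write-around: memory has the data but the cache does not
write_through(cache_page, memory, tag=7, data="y", allocate=True)
# miss with allocate: both cache and memory now hold the new data
```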
Write Back
In this technique, main memory is not updated until the cache entry needs to be replaced. To improve efficiency further, we maintain a dirty bit associated with each cache index entry. This dirty bit indicates whether the cache data has been modified. When the cache entry needs to be replaced, we update main memory only if the dirty bit is set to 1; otherwise we just replace the entry.
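A minimal write-back sketch for a single one-entry page (the class and method names are hypothetical, chosen for illustration):

```python
class WriteBackPage:
    """One-entry index page with write-back: main memory is touched
    only when a dirty entry is evicted."""
    def __init__(self, main_memory):
        self.memory = main_memory
        self.entry = None  # (tag, data, dirty) or None

    def write(self, tag, data):
        if self.entry and self.entry[0] == tag:
            self.entry = (tag, data, True)   # hit: update and set dirty bit
            return
        self.evict()                         # miss: make room first
        self.entry = (tag, data, True)       # fill entry, mark it dirty

    def evict(self):
        if self.entry and self.entry[2]:     # dirty bit set: write back
            self.memory[self.entry[0]] = self.entry[1]
        self.entry = None                    # clean entries are just dropped

memory = {}
page = WriteBackPage(memory)
page.write("a", 1)               # main memory is NOT updated yet
memory_before = dict(memory)     # still empty
page.write("b", 2)               # "a" is dirty, so it is written back now
```

Note how the write to main memory is deferred until eviction, which is exactly what makes write-back cheaper than write-through for addresses that are written repeatedly.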
This was all about caches and their related concepts, and we successfully summed it all up in two articles. This post is more theory and fewer pictures, so I won't blame you if you found it boring to read, but it covers all the basic fundamentals of caches and will make you ready to implement your own cache simulator. If you think this article helped you in some way, then do share it. Also, if you have any doubts, feel free to comment below.