MIT creates a novel cache management system


A team of researchers at MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) has created a much more efficient version of the cache management system. As explained in the published paper, this novel scheme adapts far better to the requirements of current processors while paving the way for a hypothetical generation of chips with thousands of cores.

As a reminder, the cache is the memory closest to the CPU, where a temporary copy of some data is kept in order to speed up the retrieval of information. In multi-core chips, each core has its own cache to hold its most frequently used data. On top of this, there is a large cache shared by all cores, with a directory that records which data each processing unit is storing in it.
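To make the directory's role concrete, here is a minimal sketch of a directory-based coherence scheme as described above: a shared structure tracks which cores hold a copy of each line, so a write can invalidate the others. All class and method names here are illustrative, not from the paper.

```python
class Directory:
    """Toy directory for a shared cache: tracks which cores hold each line."""

    def __init__(self, num_cores):
        self.num_cores = num_cores
        self.sharers = {}  # address -> set of core ids holding a copy

    def record_read(self, core, addr):
        # The directory notes that this core now caches the line.
        self.sharers.setdefault(addr, set()).add(core)

    def record_write(self, core, addr):
        # A write invalidates every other core's copy to keep caches coherent.
        invalidated = self.sharers.get(addr, set()) - {core}
        self.sharers[addr] = {core}
        return invalidated

d = Directory(num_cores=4)
d.record_read(0, 0x100)
d.record_read(1, 0x100)
print(d.record_write(2, 0x100))  # cores 0 and 1 lose their copies
```

This is the conventional invalidation-based approach that the MIT work aims to improve upon.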


The catch is that this directory occupies a large part of the shared memory, and its size grows with the number of cores. A concrete example: in a 64-core processor, around 12% of the shared cache goes to storing and updating this directory. If the number of cores grows to 128, 256 or 512, the system needs an even higher percentage just to hold the directory, so making cache coherence more efficient becomes imperative.
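A back-of-the-envelope calculation shows why the overhead scales this way. Assuming a conventional full bit-vector directory (one presence bit per core per cache line, an assumption for illustration) and 64-byte lines, the directory's share of storage grows linearly with the core count, landing close to the article's ~12% figure at 64 cores:

```python
LINE_BITS = 64 * 8  # a 64-byte cache line holds 512 bits of data

def directory_overhead(num_cores):
    """Fraction of shared-cache storage consumed by directory entries,
    assuming one presence bit per core for every cache line."""
    entry_bits = num_cores
    return entry_bits / (entry_bits + LINE_BITS)

for cores in (64, 128, 256, 512):
    print(cores, f"{directory_overhead(cores):.1%}")
# 64 cores -> ~11.1%, and by 512 cores half the storage is directory
```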

This is the point MIT has been working on. The main challenge lies in multi-core chips that execute instructions in parallel, since several cores may need to write to the same data at the same time. As team member Xiangyao Yu explains:

Let's say a core performs a write operation, and the next operation is a read. Under sequential consistency, I have to wait for the write to finish. If I can't find the data in the cache, I have to go to the central memory that manages the data's ownership.

What this new MIT system does is coordinate the cores' memory operations according to logical time rather than chronological time. Under this scheme, each data item in a memory bank carries its own timestamp, which also makes the system easy for manufacturers to implement, even though each of them has its own access rules.
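The logical-time idea can be sketched as a lease scheme: each read-only copy is valid up to some logical timestamp, and a writer simply advances its own logical clock past every outstanding lease instead of physically invalidating the copies. The class, the lease length, and the method names below are assumptions for illustration, not the paper's actual protocol:

```python
LEASE = 10  # logical ticks a read-only copy stays valid (arbitrary choice)

class TimestampCoherence:
    """Toy coherence via logical timestamps: writes jump past read leases."""

    def __init__(self):
        self.write_ts = {}  # addr -> logical time of the last write
        self.lease_ts = {}  # addr -> logical time up to which reads are leased

    def read(self, core_clock, addr):
        # A read happens no earlier than the last write and extends the lease.
        clock = max(core_clock, self.write_ts.get(addr, 0))
        self.lease_ts[addr] = max(self.lease_ts.get(addr, 0), clock + LEASE)
        return clock

    def write(self, core_clock, addr):
        # A write occurs at a logical time after every lease has expired,
        # so no cached copy has to be chased down and invalidated.
        clock = max(core_clock, self.lease_ts.get(addr, 0) + 1)
        self.write_ts[addr] = clock
        return clock

mem = TimestampCoherence()
t_read = mem.read(0, 0x40)    # a reader leases the line at logical time 0
t_write = mem.write(0, 0x40)  # the writer's clock jumps past the lease
print(t_read, t_write)        # the write is logically ordered after all reads
```

The key point is that ordering is settled by comparing timestamps, not by waiting for invalidation messages, which is what removes the coordination bottleneck as core counts grow.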

