The paper presents a technique for reducing competitive cache misses in multicore processors by giving each thread a virtual section of the first-level cache. On a hit, a thread can access data anywhere in the cache, but on a miss the incoming line is installed only in that thread's designated virtual section, which yields up to a 15% performance improvement. The authors note that rising core counts make cache misses an increasingly serious bottleneck, and they argue that their method improves memory-level parallelism without drastic changes to existing processor architectures.
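To make the hit/miss behavior concrete, here is a minimal simulator-style sketch of that policy, assuming an 8-way set-associative L1 statically split into 2-way virtual sections per thread with LRU replacement inside each section. All names (lookup, fill, access_cache) and parameters are illustrative assumptions, not taken from the paper.

```c
/* Sketch of per-thread virtual cache sections: hits see the whole set,
 * misses may only evict within the requesting thread's own ways.
 * Assumed geometry: 64 sets, 8 ways, 4 threads, 64-byte lines. */
#include <stdbool.h>
#include <stdint.h>

#define NUM_SETS        64
#define NUM_WAYS        8
#define NUM_THREADS     4
#define WAYS_PER_THREAD (NUM_WAYS / NUM_THREADS)

typedef struct {
    bool     valid;
    uint64_t tag;
    uint64_t lru_stamp;   /* larger value = more recently used */
} Line;

static Line     cache[NUM_SETS][NUM_WAYS];
static uint64_t clock_tick;

/* On a hit, any way in the set may supply the data, regardless of
 * which thread originally installed the line. */
static bool lookup(int set, uint64_t tag)
{
    for (int w = 0; w < NUM_WAYS; w++) {
        if (cache[set][w].valid && cache[set][w].tag == tag) {
            cache[set][w].lru_stamp = ++clock_tick;
            return true;                 /* hit: full cache visible */
        }
    }
    return false;                        /* miss */
}

/* On a miss, the incoming line may only replace a way inside the
 * requesting thread's virtual section, so one thread cannot displace
 * another thread's working set. */
static void fill(int set, uint64_t tag, int thread_id)
{
    int base   = thread_id * WAYS_PER_THREAD;
    int victim = base;

    for (int w = base; w < base + WAYS_PER_THREAD; w++) {
        if (!cache[set][w].valid) { victim = w; break; }   /* free slot */
        if (cache[set][w].lru_stamp < cache[set][victim].lru_stamp)
            victim = w;                  /* LRU within the section only */
    }
    cache[set][victim].valid     = true;
    cache[set][victim].tag       = tag;
    cache[set][victim].lru_stamp = ++clock_tick;
}

/* Access path: probe the whole set first; fall back to a partitioned fill. */
bool access_cache(uint64_t addr, int thread_id)
{
    int      set = (int)((addr >> 6) % NUM_SETS);  /* 64-byte lines assumed */
    uint64_t tag = addr >> 12;

    if (lookup(set, tag))
        return true;
    fill(set, tag, thread_id);
    return false;
}
```

The key design point the sketch tries to capture is the asymmetry between lookup and replacement: sharing on hits keeps effective capacity high, while restricting evictions to a thread's own section is what removes the inter-thread competition that causes the extra misses.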