Abstract:
Cache memory expedites data retrieval in the processors of a heterogeneous multi-core architecture and is a main factor affecting system performance and power consumption. Current cache replacement algorithms in heterogeneous multi-core environments are thread-blind, leading to low cache utilization. CPU and GPU applications have distinct characteristics: the CPU handles task execution and serial control logic, while the GPU excels at parallel computing, so CPU workloads are more sensitive to the availability of cache blocks than GPU workloads. With that in mind, this research incorporates thread priority into the cache replacement algorithm and proposes a novel strategy to improve the efficiency of the last-level cache (LLC), in which CPU and GPU applications share the LLC dynamically rather than in a strictly equal manner. Furthermore, our method switches between the LRU (Least Recently Used) and LFU (Least Frequently Used) policies by comparing the number of cache misses on the LLC, thereby taking both the recency and the frequency of cache-block accesses into consideration. Experimental results indicate that this optimization effectively improves system performance.
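The abstract only outlines the mechanism, so the Python sketch below illustrates one plausible reading of it: each cache set keeps tag-only shadow directories run under pure LRU and pure LFU, compares their miss counts at a fixed interval, adopts the winner, and weights CPU-owned lines during victim selection so that GPU blocks are evicted first. All names and parameters here (EPOCH, CPU_WEIGHT, ShadowDir, AdaptiveLLCSet, the way counts) are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch of a miss-driven LRU/LFU switching policy for a shared LLC.
# EPOCH and CPU_WEIGHT are assumed parameters, not values from the paper.

EPOCH = 1024        # assumed number of accesses between policy decisions
CPU_WEIGHT = 2      # assumed bias making CPU-owned lines harder to evict


class ShadowDir:
    """Tag-only shadow directory run under one fixed policy, so its miss
    count can be compared against the other policy's shadow directory."""

    def __init__(self, ways, use_lru):
        self.ways, self.use_lru = ways, use_lru
        self.meta = {}          # tag -> [last_use, frequency]
        self.clock = 0
        self.misses = 0

    def access(self, tag):
        self.clock += 1
        if tag in self.meta:
            self.meta[tag][0] = self.clock
            self.meta[tag][1] += 1
            return
        self.misses += 1
        if len(self.meta) >= self.ways:
            idx = 0 if self.use_lru else 1   # evict by recency or by frequency
            victim = min(self.meta, key=lambda t: self.meta[t][idx])
            del self.meta[victim]
        self.meta[tag] = [self.clock, 1]


class AdaptiveLLCSet:
    """One LLC set that adopts whichever of LRU/LFU missed less over the
    last epoch, and weights CPU-owned lines so GPU blocks are preferred
    as eviction victims (the CPU-sensitivity idea from the abstract)."""

    def __init__(self, ways=16):
        self.ways = ways
        self.lines = {}          # tag -> [last_use, frequency, is_cpu]
        self.clock = 0
        self.accesses = 0
        self.use_lru = True
        self.lru_shadow = ShadowDir(ways, use_lru=True)
        self.lfu_shadow = ShadowDir(ways, use_lru=False)

    def _victim(self):
        idx = 0 if self.use_lru else 1
        # Scaling the score by CPU_WEIGHT makes CPU lines look "younger"
        # (or more frequent), so GPU lines are chosen as victims first.
        return min(self.lines,
                   key=lambda t: self.lines[t][idx] *
                                 (CPU_WEIGHT if self.lines[t][2] else 1))

    def access(self, tag, is_cpu):
        self.clock += 1
        self.accesses += 1
        self.lru_shadow.access(tag)
        self.lfu_shadow.access(tag)
        hit = tag in self.lines
        if hit:
            self.lines[tag][0] = self.clock
            self.lines[tag][1] += 1
        else:
            if len(self.lines) >= self.ways:
                del self.lines[self._victim()]
            self.lines[tag] = [self.clock, 1, is_cpu]
        if self.accesses % EPOCH == 0:
            # Switch to the policy whose shadow directory missed less,
            # then reset both counters for the next epoch.
            self.use_lru = self.lru_shadow.misses <= self.lfu_shadow.misses
            self.lru_shadow.misses = self.lfu_shadow.misses = 0
        return hit


if __name__ == "__main__":
    llc = AdaptiveLLCSet(ways=4)
    for step in range(4096):
        # Interleave a small, hot CPU working set with a streaming
        # GPU pattern that never reuses a block.
        llc.access(step % 3, is_cpu=True)
        llc.access(100 + step, is_cpu=False)
    print("policy after warm-up:", "LRU" if llc.use_lru else "LFU")
```

The shadow-directory comparison stands in for whatever miss-counting bookkeeping the paper actually uses; it is the standard dueling idea, chosen here because it lets both policies' miss counts be measured simultaneously on the same reference stream.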
Source:
ACM INTERNATIONAL SYMPOSIUM ON CLUSTER, CLOUD AND GRID COMPUTING (CCGRID)
ISSN: 2376-4414
Year: 2017
Page: 723-
Language: English
Cited Count:
WoS CC Cited Count: 3
SCOPUS Cited Count: 7
ESI Highly Cited Papers on the List: 0