An Application Classification Guided Cache
Tuning Heuristic for Multi-core Architectures
PRESENTED BY:- DEBABRATA PAUL CHOWDHURY (14014081002), KHYATI RAJPUT (14014081007), M.TECH-CE (SEM-II)
GUIDED BY:- PROF. PRASHANT MODI (UVPCE)
Contents
• Introduction
• Multi core System Optimization
• Cache Tuning
• Cache Tuning Process
• Multi-core Architectural Layout
• Application Classification Guided Cache Tuning Heuristic
• Experimental Work
• Conclusion
Introduction
Basic Concepts
• Single Core:- In a single-core architecture, the computing component has one
independent processing unit.
Introduction(cont.)
• Multi Core:-
• A multi-core architecture is a single computing component with two or more
independent processing units (called "cores").
• The cores run multiple instructions of a program at the same time, increasing overall
speed for programs: "parallel computing".
Multi-Core System Optimization
•Previous multi-core cache optimizations focused only on improving performance (such
as the number of hits, misses, and write backs).
•Recent multi-core optimizations instead focus on reducing energy consumption by
tuning individual cores.
•Definition of multi-core system optimization
• Multi-core system optimization improves system performance and energy
consumption by tuning the system to the application’s runtime behavior and
resource requirements.
What is Cache Tuning?
•Cache tuning is the task of choosing the best configuration of cache design parameters
for a particular application, or for a particular phase of an application, such that
performance, power, and/or energy are optimized.
Cache Tuning Process
Step 1:- Execute the application for one tuning interval in each potential
configuration (tuning intervals must be long enough for the cache behavior to
stabilize).
Step 2:- Gather cache statistics, such as the number of accesses, misses, and
write backs, for each explored configuration.
Step 3:- Combine the cache statistics with an energy model to determine the
optimal cache configuration.
Step 4:- Fix the cache parameter values to the optimal cache configuration’s
parameter values.
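The four steps above can be sketched as a simple search loop. This is a minimal sketch: `run_interval` and `energy_model` are hypothetical stand-ins for the simulator run and the energy model described later, and the toy configurations and energies are illustrative.

```python
def tune_cache(configurations, run_interval, energy_model):
    """Return the configuration whose measured statistics minimize energy."""
    # Steps 1-2: execute one tuning interval per configuration and gather
    # cache statistics (accesses, misses, write backs).
    stats = {config: run_interval(config) for config in configurations}
    # Step 3: combine the statistics with the energy model.
    best = min(configurations, key=lambda c: energy_model(stats[c]))
    # Step 4: the caller fixes the cache parameters to `best`.
    return best

# Toy usage: three hypothetical configurations with canned miss counts.
canned = {"2KB_1way": 900, "4KB_2way": 400, "8KB_4way": 380}
best = tune_cache(
    canned.keys(),
    run_interval=lambda c: {"misses": canned[c], "size_kb": int(c.split("KB")[0])},
    energy_model=lambda s: s["misses"] * 2 + s["size_kb"] * 50,  # toy model
)
print(best)
```

With these toy numbers the mid-sized cache wins: the smallest cache misses too often and the largest pays too much per access.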
Multi-core Architectural Layout
Multi-core Architectural Layout(cont.)
• The multi-core architecture consists of:-
1. An arbitrary number of cores
2. A cache tuner
• Each core has a private L1 data cache.
• A global cache tuner is connected to each core’s private L1 data cache.
• The tuner executes the cache tuning heuristic: it gathers cache statistics, coordinates
cache tuning among the cores, and calculates each cache’s energy consumption.
Multi-core Architectural Layout(cont.)
Overheads in this Multi-core Architecture Layout
• During tuning, applications incur stall cycles while the tuner gathers cache statistics,
calculates energy consumption, and changes the cache configuration.
• These tuning stall cycles introduce Energy and Performance overhead.
• Our tuning heuristic considers these overheads incurred during the tuning stall cycles,
and thus minimizes the number of simultaneously tuned cores and the tuning energy
and performance overheads.
Multi-core Architectural Layout(cont.)
Multi-core Architectural Layout(cont.)
• The figure illustrates the similarities using actual data cache miss rates for an 8-core
system (the cores are denoted as P0 to P7).
•We evaluate cache miss rate similarity by normalizing the caches’ miss rates to the core
with the lowest miss rate.
•In the first figure, the normalized miss rates are nearly 1.0 for all cores, so all caches are
classified as having similar behavior.
•In the second figure, the normalized miss rates show that P1 has similar cache behavior
to P2 to P7 (P1 to P7’s normalized miss rates are all nearly 3.5), but P0 has different
cache behavior than P1 to P7.
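The normalization and grouping described above can be sketched as follows. The 10% similarity tolerance is an illustrative assumption (the slides do not give a threshold), and the miss rates mirror the second figure's shape rather than its exact data.

```python
def group_similar_cores(miss_rates, tolerance=0.10):
    """Group cores whose normalized miss rates lie within `tolerance` of each other."""
    # Normalize every core's miss rate to the core with the lowest miss rate.
    lowest = min(miss_rates.values())
    normalized = {core: rate / lowest for core, rate in miss_rates.items()}
    groups = []
    for core in sorted(normalized):
        for group in groups:
            ref = normalized[group[0]]
            # Join an existing group if within the relative tolerance.
            if abs(normalized[core] - ref) <= tolerance * ref:
                group.append(core)
                break
        else:
            groups.append([core])  # start a new behavior group
    return normalized, groups

# Shape of the second figure: P0 stands apart, P1-P7 behave alike.
rates = {"P0": 0.02, **{f"P{i}": 0.07 for i in range(1, 8)}}
normalized, groups = group_similar_cores(rates)
print(groups)
```

Here P0 normalizes to 1.0 and P1 to P7 all normalize to 3.5, so the cores split into two groups, matching the slide's reading of the figure.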
Application Classification Guided Cache
Tuning Heuristic
• Application classification is based on two things:-
1. Cache behaviour
2. Data-sharing or non-data-sharing application
• Cache accesses and misses are used to determine whether the cores’ data sets have
similar cache behavior.
•If coherence misses account for more than 5% of the total cache misses, the
application is classified as data-sharing; otherwise the application is
non-data-sharing.
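The two classification tests can be sketched in a few lines. The 5% coherence-miss threshold comes from the slide above; the 10% similarity tolerance for the cache-behavior test is an assumption for illustration.

```python
def classify(miss_rates, coherence_misses, total_misses, tolerance=0.10):
    """Return (similar_behavior, data_sharing) for an application."""
    # Condition 1: do all cores show similar normalized miss rates?
    lowest = min(miss_rates.values())
    normalized = [r / lowest for r in miss_rates.values()]
    similar_behavior = max(normalized) - min(normalized) <= tolerance
    # Condition 2: do coherence misses exceed 5% of total misses?
    data_sharing = coherence_misses / total_misses > 0.05
    return similar_behavior, data_sharing

# Two cores with near-identical miss rates, 4% coherence misses:
print(classify({"P0": 0.031, "P1": 0.030}, coherence_misses=40, total_misses=1000))
```

In this example the cores are classified as having similar behavior (normalized rates within 10%), and the application is non-data-sharing (4% < 5%).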
Application Classification Guided Cache
Tuning Heuristic(cont.)
Application Classification Guided Cache
Tuning Heuristic(cont.)
• Application classification guided cache tuning heuristic, which consists of three
main steps:
1) Application profiling and initial tuning
2) Application classification
3) Final tuning actions
Application Classification Guided Cache
Tuning Heuristic(cont.)
•Step 1 profiles the application to gather the cache statistics, which are used to determine
cache behavior and data sharing in Step 2.
•Step 1 is critical for avoiding redundant cache tuning in situations where the data sets have
similar cache behavior and similar optimal configurations.
•Condition 1 and Condition 2 classify the applications based on whether or not the cores have
similar cache behavior and/or exhibit data sharing, respectively.
•Evaluating these conditions determines the necessary cache tuning effort in Step 3.
•If Condition 1 evaluates to true, the cores have similar cache behavior and only a single
cache needs to be tuned.
•When the final configuration is obtained, it is applied to all other cores.
Application Classification Guided Cache
Tuning Heuristic(cont.)
•If the data sets have different cache behavior (Condition 1 is false), tuning is
more complex and several cores must be tuned.
•If the application does not share data (Condition 2 is false), the heuristic only
tunes one core from each group, and cores can be tuned independently without
affecting the behavior of the other cores.
•If the application shares data (Condition 2 is true), the heuristic still only tunes
one core from each group, but the tuning must be coordinated among the cores.
Experimental Results
• We quantified the energy savings and performance of our heuristic using the SPLASH-2
multithreaded applications.
• The SPLASH-2 suite is one of the most widely used collections of multithreaded
workloads.
• Experiments ran on the SESC simulator for 1-, 2-, 4-, 8- and 16-core systems. In SESC,
we modeled a heterogeneous system with configurable L1 data cache parameters.
•Since the L1 data cache has 36 possible configurations, our design space is 36^n, where
n is the number of cores in the system.
•The L1 instruction cache was fixed at the base configuration, and the unified L2 cache at
256 KB, 4-way set associative with a 64-byte line size. We modified SESC to identify
coherence misses.
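The exponential growth of the design space, and why exhaustive exploration quickly becomes infeasible, is easy to verify:

```python
# 36 L1 data cache configurations per core => 36^n total configurations.
for n in (1, 2, 4, 8, 16):
    print(f"{n:2d} cores: {36**n:.3e} configurations")
```

Already at 4 cores there are about 1.7 million configurations, and at 16 cores roughly 8e24, which is why a heuristic that prunes the space is needed.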
Experimental Results(cont.)
Energy Model for the multi-core system
• total_energy = ∑(energy consumed by each core)
• energy consumed by each core:
energy = dynamic_energy + static_energy + fill_energy + writeback_energy + CPU_stall_energy
• dynamic_energy: dynamic energy originates from logic-gate switching activity during
cache accesses.
dynamic_energy = dL1_accesses * dL1_access_energy
• static_energy: static (leakage) energy is consumed whenever the cache is powered,
regardless of activity, and is pure overhead.
static_energy = ((dL1_misses * miss_latency_cycles) + (dL1_hits * hit_latency_cycles) +
(dL1_writebacks * writeback_latency_cycles)) * dL1_static_energy
Experimental Results(cont.)
•fill_energy: the energy needed to fill the cache with a line from memory on a miss.
fill_energy = dL1_misses * (linesize / wordsize) * mem_read_energy_perword
• writeback_energy: the energy consumed writing a dirty cache line back to memory.
writeback_energy = dL1_writebacks * (linesize / wordsize) *
mem_write_energy_perword
•CPU_stall_energy: the energy consumed while the processor stalls on cache fills and
write backs.
CPU_stall_energy = ((dL1_misses * miss_latency_cycles) +
(dL1_writebacks * writeback_latency_cycles)) * CPU_idle_energy
• Our model calculates the dynamic and static energy of each data cache, the energy needed to
fill the cache on a miss, the energy consumed on a cache write back, and the energy consumed
when the processor is stalled during cache fills and write backs.
• We gathered dL1_misses, dL1_hits, and dL1_writebacks cache statistics using SESC.
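The per-core energy model above, transcribed term by term into a sketch. The statistic counts and per-event energies in the example call are illustrative placeholders, not the paper's technology parameters.

```python
def core_energy(dL1_accesses, dL1_hits, dL1_misses, dL1_writebacks,
                dL1_access_energy, dL1_static_energy,
                hit_latency_cycles, miss_latency_cycles, writeback_latency_cycles,
                linesize, wordsize,
                mem_read_energy_perword, mem_write_energy_perword,
                CPU_idle_energy):
    """Per-core energy: the five terms from the slide's energy equation."""
    dynamic_energy = dL1_accesses * dL1_access_energy
    static_energy = ((dL1_misses * miss_latency_cycles)
                     + (dL1_hits * hit_latency_cycles)
                     + (dL1_writebacks * writeback_latency_cycles)) * dL1_static_energy
    fill_energy = dL1_misses * (linesize // wordsize) * mem_read_energy_perword
    writeback_energy = dL1_writebacks * (linesize // wordsize) * mem_write_energy_perword
    CPU_stall_energy = ((dL1_misses * miss_latency_cycles)
                        + (dL1_writebacks * writeback_latency_cycles)) * CPU_idle_energy
    return (dynamic_energy + static_energy + fill_energy
            + writeback_energy + CPU_stall_energy)

# Illustrative call; total system energy is the sum of this over all cores.
total = core_energy(
    dL1_accesses=1000, dL1_hits=950, dL1_misses=50, dL1_writebacks=10,
    dL1_access_energy=1.0, dL1_static_energy=0.1,
    hit_latency_cycles=1, miss_latency_cycles=20, writeback_latency_cycles=20,
    linesize=64, wordsize=8,
    mem_read_energy_perword=2.0, mem_write_energy_perword=2.0,
    CPU_idle_energy=0.5)
print(total)
```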
Experimental Results(cont.)
• We assumed the core’s idle energy (CPU_idle_energy) and the static energy per cycle
to each be 25% of the cache’s dynamic energy.
• We used a tuning interval of 500,000 cycles.
• configuration_energy_per_cycle determines the energy consumed during each
500,000-cycle tuning interval and the energy consumed in the final configuration.
• Energy savings were calculated by normalizing the energy to the energy consumed
executing the application in the base configuration.
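The savings normalization amounts to a one-line calculation (the energies here are illustrative numbers, not measured results):

```python
# Energy savings = 1 - (tuned energy / base-configuration energy).
base_energy, tuned_energy = 100.0, 75.0
savings = 1.0 - tuned_energy / base_energy
print(f"{savings:.0%}")
```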
Results and Analysis
•The figures below depict the energy savings and performance, respectively, for the
optimal configuration determined via exhaustive design space exploration (optimal) for 2-
and 4-core systems, and for the final configuration found by our application classification
cache tuning heuristic (heuristic) for 2-, 4-, 8-, and 16-core systems, for each application
and averaged across all applications (Avg).
•Our heuristic achieved 26% and 25% energy savings, incurred 9% and 6% performance
penalties, and achieved average speedups, for the 8- and 16-core systems, respectively.
Results and Analysis(Cont..)
• Normalized performance for the optimal cache configuration (optimal) for 2- and 4-core
systems and the final configuration of the application classification cache tuning heuristic
for 2-, 4-, 8- and 16-core systems, as compared to the systems’ respective base
configurations.
Results and Analysis(Cont..)
• Energy Savings
• The figure shows the energy savings each application achieves relative to the base
configuration.
Conclusion
•Our heuristic classified applications based on data sharing and cache behavior, and used
this classification to identify which cores needed to be tuned and to reduce the number
of cores being tuned simultaneously.
Future Work
•Our heuristic searched at most 1% of the design space, yielded configurations within 2%
of the optimal, and achieved an average of 25% energy savings.
•In future work, we plan to investigate whether our heuristic remains applicable to larger
systems with hundreds of cores.