Don’t give up, You can... Cache!
...Reasoning about why Caching Systems are sometimes a pain...
Crafted Software Meetup - 30/01/2020
Hi!
● Stefano Fago
● Software Designer at UBI Banca
● Legacy Applications, Middleware and R&D Backend
https://www.linkedin.com/in/stefanofago/
https://github.com/stefanofago73
Where are the Caches?
Don’t give up, You can... Cache!
How do we face Caching?
Don’t give up, You can... Cache!
Why can Caches be a pain?
Why can Caches be a pain?
...Because we forget that...
● a Cache hosts our DATA
● a cache IS NOT JUST AN ASSOCIATIVE ARRAY
● a cache is NOT a BUFFER
● a cache is NOT a POOL
● a Business application != Twitter / Facebook / ... [Michael Plöd]
Caching is not what you think it is!
https://medium.com/@mauridb/caching-is-not-what-you-think-it-is-5104f8891b51
<< ...caching should be done to decrease costs needed to increase performance
and scalability and NOT to solve performance and scalability problems… >>
[Davide Mauri]
Why can Caches be a pain?
...Because we forget to set well-defined goals and trade-offs between:
– Offloading :
decrease the load on a system with limited and/or expensive
resources
– Performance :
decrease network/CPU usage
– Scale-out :
horizontal growth of systems, with data locality and working sets
ready
– Resilience :
service resilience with fallbacks, default values, reuse of errors
Why can Caches be a pain?
...Because we forget the important things...
What can we do?
What can we do?
In order not to suffer with Caching we should:
● Decide on the type of cache to use
● Decide on an adoption path (How can we introduce
Caches in our projects?)
● Know our data
● Decide on the trade-off between reads and writes
● Define trade-offs for Resilience and Security
Different Kind of Caches
Different Kind of Caches
● Local/Internal
● In-Process
● Near Cache
[Diagram: a Process with its own In-Process Cache; a Process with a Near Cache that fronts a cluster of Cache Servers]
Different Kind of Caches
● Remote/External
● Replicated
● Distributed(Partitioned)
[Diagram: a Process talking to a remote cache, either Replicated or Partitioned across several Cache nodes]
Different Kind of Caches
● In-Process : for reads and writes, small/medium size; it does not
scale because it is limited to the process
● Near-Cache : better for reads, small/medium size; scales along with
the cluster of which it is a local extension/expression
● Replicated : data consistency for reads, small size, limited
scalability
● Partitioned : for reads and writes, different sizes and the ability to
scale with fault tolerance
Different Kind of Caches : DEV
● Small Cache Read-Only/Timed (In-Process)
● Memoization (In-Process/Near Cache) — see the sketch after this list
● Cache (In-Process/Distributed-Partitioned)
● User Session/Working Set (In-Process/Distributed-
Partitioned + … or NoSql)
● Distributed Memory (IMDG)
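
As a minimal illustration of the memoization case above, here is a sketch of an in-process memoizer in Java; the Memoizer name and the usage example are hypothetical, not taken from any specific framework.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Hypothetical helper: wraps an expensive function with an in-process memoization cache.
final class Memoizer<K, V> implements Function<K, V> {
    private final Map<K, V> cache = new ConcurrentHashMap<>();
    private final Function<K, V> delegate;

    Memoizer(Function<K, V> delegate) {
        this.delegate = delegate;
    }

    @Override
    public V apply(K key) {
        // computeIfAbsent caches the result of the first call for each key
        return cache.computeIfAbsent(key, delegate);
    }
}

// Usage (hypothetical service): Function<String, BigDecimal> cachedRate = new Memoizer<>(rateService::expensiveLookup);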
Different Kind of Caches : Problems
● Cache Stampede/Thundering Herd ( concurrent calls for a
specific key that is not yet cached — see the sketch below )
● Cache Fault Tolerance ( error handling for the Caching
subsystem, hierarchical caches, network error
management, ...)
● Cache Security (privacy and security policies, regulatory
conformance, technical solutions)
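
A minimal sketch of one common mitigation for the stampede problem above, assuming a hypothetical loader function: concurrent callers for the same missing key share a single in-flight load instead of all hitting the backing store.

import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.function.Function;

// Minimal stampede protection: one load per missing key, shared by all concurrent callers.
final class StampedeGuard<K, V> {
    private final ConcurrentMap<K, CompletableFuture<V>> inFlight = new ConcurrentHashMap<>();
    private final Function<K, V> loader; // hypothetical slow loader (DB, remote service, ...)

    StampedeGuard(Function<K, V> loader) {
        this.loader = loader;
    }

    V get(K key) {
        CompletableFuture<V> future = inFlight.computeIfAbsent(key,
                k -> CompletableFuture.supplyAsync(() -> loader.apply(k)));
        try {
            // every concurrent caller waits on the same load;
            // in a real cache the loaded value would also be stored (Demand-Fill)
            return future.join();
        } finally {
            inFlight.remove(key, future);  // allow future refreshes once the load completes
        }
    }
}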
Adopt a Cache
Adopt a Cache
Adoption can follow two paths, depending on
whether we treat:
● the Cache as a First-Class Citizen of the
Software Architecture (Caching
Application Profiles)
● the Cache as an evolution of a pre-
existing system
The added value is in the creation of a data
model, to be evolved over time, born
from the evidence of the first phase. [Michael Plöd]
Adopt a Cache
Cache observability, especially if distributed:
● Hit : the value sought is available
● Miss : the value sought is not available
● Cold/Hot : the cache is empty/full
● Warm-Up : populating the cache
● Hit Ratio : Hits/(Hits + Misses)
● Hit Rate : Hits per second
● Items Size : number of elements in the cache
● Conc. Request/s : number of concurrent requests per second
● ...many others!
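
A minimal sketch of how the Hit Ratio above can be collected in application code; the CacheStats name is hypothetical, and real deployments would normally rely on the metrics exposed by the caching framework itself.

import java.util.concurrent.atomic.LongAdder;

// Minimal cache instrumentation: count hits and misses and expose the hit ratio.
final class CacheStats {
    private final LongAdder hits = new LongAdder();
    private final LongAdder misses = new LongAdder();

    void recordHit()  { hits.increment(); }
    void recordMiss() { misses.increment(); }

    double hitRatio() {
        long h = hits.sum();
        long m = misses.sum();
        long total = h + m;
        return total == 0 ? 0.0 : (double) h / total;   // Hit Ratio = Hits / (Hits + Misses)
    }
}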
Adopt a Cache
Have Operations support : collaboration/synergy is important
for network, metrics, deployment and emergency-management
aspects
Have an alternative Plan : prepare alternatives that keep the
system/service online in the event of widespread errors or
unavailability of the Caching System
Prepare a design where the Cache Provider is abstracted and its
implementation appropriately hidden, to avoid
unsolvable dependencies in the future!
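
A minimal sketch of such an abstraction, assuming a hypothetical CacheGateway port: the application depends only on the interface, while the concrete provider (in-process map, distributed cache client, ...) stays behind an adapter.

import java.util.Optional;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// Provider-neutral cache port: the application codes against this interface only.
interface CacheGateway<K, V> {
    Optional<V> get(K key);
    void put(K key, V value);
    void evict(K key);
}

// Trivial in-process adapter; a distributed adapter would wrap the vendor client instead.
final class InMemoryCacheGateway<K, V> implements CacheGateway<K, V> {
    private final ConcurrentMap<K, V> map = new ConcurrentHashMap<>();
    public Optional<V> get(K key)   { return Optional.ofNullable(map.get(key)); }
    public void put(K key, V value) { map.put(key, value); }
    public void evict(K key)        { map.remove(key); }
}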
Know the Data
Know the Data
What data should we put in the Cache?
● Most Used/Required
● Expensive to Calculate
● Expensive to Retrieve
● Common/Shareable Data
The best candidates are: read-only, frequently used and/or
expensive-to-calculate data
Know the Data
Which characteristics of the data should we consider?
● Data Type (better NOT the DTO, NOT the Business Object — see the sketch below)
● Data Format (Textual? Binary? Custom?)
● Lifetime of the Data (when it’s Stale/Fresh)
● Data volumes per Data Type
● Serialization/Deserialization issues
● Data Affinity
● Data Compression (...if you really have to...)
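
As an illustration of the “NOT the DTO, NOT the Business Object” point above, a sketch of a small, immutable, serializable cache entry; all names are hypothetical.

import java.io.Serializable;

// Cache a small, immutable projection instead of the full Business Object / DTO.
final class CustomerRateEntry implements Serializable {
    private static final long serialVersionUID = 1L;

    private final String customerId;     // part of the unique key
    private final long rateInBps;        // only the data actually needed by readers
    private final long loadedAtEpochMs;  // supports lifetime / staleness checks

    CustomerRateEntry(String customerId, long rateInBps, long loadedAtEpochMs) {
        this.customerId = customerId;
        this.rateInBps = rateInBps;
        this.loadedAtEpochMs = loadedAtEpochMs;
    }

    boolean isStale(long ttlMs, long nowMs) {
        return nowMs - loadedAtEpochMs > ttlMs;
    }
}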
Know the Data
What issues are related to the Data (areas):
● Cache Access
● Cache Eviction
● Cache Invalidation
● Data Search/Data Collections Management
● Definition of Unique Keys
● Cache Concurrency Support
● Storage (RAM, SSD, … )
● Security/Regulations
Know the Data : Eviction
Forgetting is difficult for a cache: we have to find the trade-off between the
usefulness of the data and the size of the cache!
The core concepts were born from the optimization of linear search (Self-Organizing List,
https://en.wikipedia.org/wiki/Self-organizing_list ):
● Move To Front (the ancestor of LRU — a minimal LRU sketch follows below)
● Transpose
● Counting (the ancestor of LFU)
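
A minimal LRU sketch in Java, built on LinkedHashMap’s access order (the Map analogue of Move To Front); it is not thread-safe, so a real cache would wrap or replace it.

import java.util.LinkedHashMap;
import java.util.Map;

// LRU cache: access-ordered LinkedHashMap evicts the least recently used entry
// once the configured capacity is exceeded.
final class LruCache<K, V> extends LinkedHashMap<K, V> {
    private final int capacity;

    LruCache(int capacity) {
        super(16, 0.75f, true);  // accessOrder = true -> "move to front" on every get
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > capacity;
    }
}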
Know the Data : Eviction
Frameworks, on a best-effort basis, essentially offer LRU, LFU and
the ability to create custom policies.
● LRU: (recency) deletes the least recently used items.
● LFU: (frequency) based on access frequency, eliminates the least frequently used items.
Research is moving in the direction of Adaptive Systems using AI or statistical
processing (of the data-access history); these can offer better results in the
trade-off between memory, concurrency and speed!
● https://arxiv.org/pdf/1512.00727.pdf
● https://www.cs.bgu.ac.il/~tanm201/wiki.files/S1P2%20Adaptive%20Software
%20Cache%20Management.pdf
Know the Data : Eviction
When LRU and LFU are not enough, which element can improve the
situation?
Time!
Applying timing policies or time windows that age the data
or restrict its validity helps achieve a better
degree of adaptability ... but there is more!
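
A minimal sketch of time-based aging, assuming a fixed TTL per entry and lazy expiry on read; names are hypothetical.

import java.time.Duration;
import java.time.Instant;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Every entry carries its expiry and is dropped lazily when read after it has aged out.
final class TtlCache<K, V> {
    private static final class Timed<V> {
        final V value;
        final Instant expiresAt;
        Timed(V value, Instant expiresAt) { this.value = value; this.expiresAt = expiresAt; }
    }

    private final Map<K, Timed<V>> map = new ConcurrentHashMap<>();
    private final Duration ttl;

    TtlCache(Duration ttl) { this.ttl = ttl; }

    void put(K key, V value) {
        map.put(key, new Timed<>(value, Instant.now().plus(ttl)));
    }

    V get(K key) {
        Timed<V> t = map.get(key);
        if (t == null) return null;
        if (Instant.now().isAfter(t.expiresAt)) {   // entry has aged out
            map.remove(key, t);
            return null;
        }
        return t.value;
    }
}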
Know the Data : Eviction
Know the Data : Problems
● Cache Thrashing
the pattern of data usage is such that the cache is useless
● Cold Cache
an empty cache takes time to be useful!
● Cache Security
like any system there are privacy and security issues (what about
GDPR?):
● Data anonymization
● Cache Penetration
● Cache Avalanche
Access Patterns
Access Patterns
Accessing or inserting Data into a cache also means choosing
its role and the trade-off between reads and writes...
● Cache-Aside
● Cache-Through
● Write-Around
● Refresh-Ahead
● Write-Back ( Write-Behind)
Access Patterns
Cache-Aside : the application is responsible for reads and writes to the storage
as well as to the cache, which sits alongside the storage
Access Patterns
Related to Cache-Aside are:
● Look-Aside : the value is searched for first in the cache and then in the storage
● Demand-Fill : implies that in the case of a MISS the value is not only returned
from the storage but also placed in the cache
Cache-Aside generally provides both LOOK-ASIDE and DEMAND-
FILL, but it is not mandatory that both are present: in a Pub/Sub system, Cache
and Storage can be subscribers of the same Publisher, each materializing the
data for a different reason.
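
A minimal Cache-Aside sketch (Look-Aside plus Demand-Fill), reusing the hypothetical CacheGateway and CustomerRateEntry types from the earlier sketches; CustomerRepository is also hypothetical.

// The application, not the cache, talks to the storage.
final class CustomerRateService {
    private final CacheGateway<String, CustomerRateEntry> cache;
    private final CustomerRepository storage;

    CustomerRateService(CacheGateway<String, CustomerRateEntry> cache, CustomerRepository storage) {
        this.cache = cache;
        this.storage = storage;
    }

    CustomerRateEntry rateFor(String customerId) {
        return cache.get(customerId)                                          // 1. Look-Aside: try the cache first
                .orElseGet(() -> {
                    CustomerRateEntry loaded = storage.loadRate(customerId);  // 2. MISS: go to the storage
                    cache.put(customerId, loaded);                            // 3. Demand-Fill: populate the cache
                    return loaded;
                });
    }
}

interface CustomerRepository {   // hypothetical storage port
    CustomerRateEntry loadRate(String customerId);
}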
Access Patterns
Cache-Through : Write-Through/Read-Through
The application treats the cache as if it were the main storage; reads /
writes take place through the cache and are propagated synchronously to
the storage
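
A minimal Cache-Through sketch, assuming hypothetical reader/writer functions standing in for the storage: the application talks only to the cache, which loads and persists synchronously.

import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.function.BiConsumer;
import java.util.function.Function;

// The cache (or this thin facade around it) reads from and writes to the storage synchronously.
final class ThroughCache<K, V> {
    private final ConcurrentMap<K, V> map = new ConcurrentHashMap<>();
    private final Function<K, V> reader;       // Read-Through loader
    private final BiConsumer<K, V> writer;     // Write-Through persister

    ThroughCache(Function<K, V> reader, BiConsumer<K, V> writer) {
        this.reader = reader;
        this.writer = writer;
    }

    V get(K key) {
        return map.computeIfAbsent(key, reader);   // miss -> load synchronously from the storage
    }

    void put(K key, V value) {
        writer.accept(key, value);                 // write synchronously to the storage first...
        map.put(key, value);                       // ...then update the cache
    }
}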
Access Patterns
Write-Around : the application reads from the cache but bypasses it
for writes. New data is written directly to
the storage: it is on reads that the cache is filled with
data. (Useful when there are many writes and few reads.)
Access Patterns
Refresh-Ahead : the cache is updated asynchronously, possibly on a schedule,
for recently accessed elements, before they expire
Access Patterns
Write-Back ( Write-Behind) : the application writes to the
cache but the propagation to the storage takes place
asynchronously (generally with a configured delay; it assumes a queuing
system; a trade-off between high throughput and data-consistency problems)
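
A minimal Write-Behind sketch, assuming a hypothetical storage writer: writes are visible in the cache immediately and drained to the storage by a background worker; batching, retries and shutdown are deliberately omitted.

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.function.BiConsumer;

// Writes land in the cache immediately and are propagated asynchronously by one worker.
final class WriteBehindCache<K, V> {
    private final ConcurrentMap<K, V> map = new ConcurrentHashMap<>();
    private final BlockingQueue<K> pending = new LinkedBlockingQueue<>();
    private final ExecutorService drainer = Executors.newSingleThreadExecutor();

    WriteBehindCache(BiConsumer<K, V> storageWriter) {
        drainer.submit(() -> {
            while (!Thread.currentThread().isInterrupted()) {
                K key = pending.take();                    // wait for the next dirty key
                storageWriter.accept(key, map.get(key));   // propagate to the storage later
            }
            return null;
        });
    }

    void put(K key, V value) {
        map.put(key, value);   // visible to readers immediately
        pending.offer(key);    // the storage catches up asynchronously (throughput vs consistency)
    }

    V get(K key) { return map.get(key); }
}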
Access Patterns : DEV
● Cache Aside
● Cache Through
● Cache Selective Bypass
● Cache Massive Load
● Cache Full Cleaning (+ Warmer)
Fight Cache Club
The Club Rules
1) Don’t speak about cache
2) Don’t speak about cache: if you do, make it not at the expense of
your services
3) Define the price you are willing to pay
4) If you change the rules of the game, you must be aware of it
5) Design in a simple way: start local, work on definable models
6) Measure, measure, measure: the cache gives you data and hints
7) Cache tuning takes time and changes over time
8) If you are in the Club because of Microservices ... You have to fight!
Microservices
Microservices & Caching
Microservices amplify the importance of Caching Systems; among
the characteristics that explain this, the following are worth mentioning:
● Microservices have their own data, and there are many of them
● Microservices need to communicate!
● Different microservices have different needs
● Caching becomes part of the Resilience policies
● Caching to support a different persistence vision
● Microservices involve a more complex and powerful infrastructure
Microservices (EVCache Netflix)
[Diagram: EVCache usage scenarios — Look-Aside cache, Primary Storage, High-Availability Transient Store]
Microservices
Microservices, from the infrastructural perspective, have highlighted the need for a
layer of mediation and coordination of communications, today known as a
Service Mesh.
Among the patterns deriving from the Service-Mesh vision, the Sidecar
has particular relevance: a container that aids a given Microservice.
Microservices
The Service Mesh defines new possible topologies for Caching Systems:
1) In-Process Cache per Microservice
2) Remote Cache (partitioned) external to the Service Mesh
3) Remote Cache (partitioned) with the Cache Client inside the Service Mesh
(Sidecar)
4) Remote Cache (partitioned) with the Caching System inside the Service Mesh
(using Operators/Agents/Sidecars)
Microservices
In these scenarios the concepts of Eventual Consistency and Idempotency are
strengthened. The importance of Streaming Systems and CDC Systems
emerges in their collaboration with the Caching System on important aspects,
among which:
● The persistence of Save Points / Critical Operations
● An alternative to 2PC Transactions
● Data Propagation to suitable Listener subsystems
https://debezium.io/blog/2018/12/05/automating-cache-invalidation-with-change-data-capture/
https://medium.com/trabe/cache-invalidation-using-mqtt-e3bd8f6c2cf5
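
A minimal sketch of CDC-driven invalidation along the lines of the first link above, reusing the hypothetical CacheGateway and CustomerRateEntry from the earlier sketches; the ChangeEvent shape is an assumption, not the Debezium event format.

// A consumer of change events (e.g. from a CDC topic) evicts the corresponding cache entries.
final class CdcCacheInvalidator {
    static final class ChangeEvent {            // hypothetical event shape
        final String key;
        final String operation;                 // "c"reate, "u"pdate, "d"elete
        ChangeEvent(String key, String operation) { this.key = key; this.operation = operation; }
    }

    private final CacheGateway<String, CustomerRateEntry> cache;

    CdcCacheInvalidator(CacheGateway<String, CustomerRateEntry> cache) { this.cache = cache; }

    void onEvent(ChangeEvent event) {
        // Whatever the operation, the cached copy is no longer trustworthy: evict it and let
        // the next read re-load it (Demand-Fill); an alternative is to rebuild the entry here.
        cache.evict(event.key);
    }
}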
Microservices (EVCache Netflix - Replication)
...and remember that...
<< ...Everyone knows WHAT they do,
Some know HOW they do it, Few
people know WHY they do it!... >>
That's All Folks!