Speed Layer
Architecture
April 2019
Rob Jackson and Pete Cracknell
Nationwide Building Society
Contents
1. Who is Nationwide Building Society?
2. What is the business challenge we’re responding to?
3. What is the Speed Layer?
4. Typical current state architecture
5. Target state architecture
6. How does data flow through the Speed Layer?
7. How we consume data from the Speed Layer
8. How the Speed Layer is deployed
9. Progress
10. Streaming assessment
11. Value achieved
12. Demo
Who is Nationwide Building Society?
• Formed in 1884 and renamed the Nationwide Building Society in 1970
• We're the largest building society in the world
• A major provider of mortgages, loans, savings and current accounts in the UK; we launched the first (or 2nd) Internet Banking service in 1997
• We recently announced an investment of an additional £1.4 billion (total £4.1bn) over 5 years to simplify, digitise and transform our IT estate
• Confluent and Kafka form the heart of an important part of that investment
What is the business challenge we're responding to?
• Regulation such as Open Banking
• Business growth
• 24 x 7 availability expectations from
customers and regulators
• Cloud adoption
• Capitalising on our data
• A need for agility and innovation
… and our existing platforms were making this difficult.
What is the Speed Layer?

DEFINITION
The Speed Layer will be the preferred source of data for high-volume, read-only data requests and event sourcing. It will deliver secure, near-real-time customer, account and transaction information from back-end systems to front-end systems with speed and resilience. It will be built on the latest cloud-native technologies, highly available and distributed, and will give NBS its first event-based, real-time data platform ready for digital.

FOUR KEY CHARACTERISTICS

SCALABILITY: The Speed Layer platform will be built on cloud-ready PaaS architecture to allow for significant, frictionless scaling that is cost efficient.

FAST AND AGILE: The Speed Layer will unlock data in systems of record, enabling digital and agile development teams to rapidly deliver new features and services.

RICH DATA SET: Provide a rich, accessible data set enhanced with data and analytics from Open Banking and social media. It will also future-proof the platform for other interactions such as IoT.

RESILIENT: Reduce the load on core systems and isolate them from the demands of the digital platforms: mobile, internet and Open Banking in particular. Built with proven, scalable, cloud-ready components for greater capacity and resilience.
As-is logical E2E Architecture

(Diagram) API Gateway → Channel Web Services → Enterprise Web Services → Back-end Services → Mainframes

Fairly normal, so is there a problem?
Target System Architecture

(Diagram) Writes: API Gateway → Protocol adapters → Channel Services → Enterprise Services → Mainframes + other sources of data. Reads: API Gateway → Microservices, fed via CDC → Kafka Topics → Stream Processing → Kafka Topics.
(Diagram) System of Record(s) [CDC Replication Engine + Source DB] → Kafka Raw Topic (raw data) → Stream-processing Microservice → Kafka Published Topic (processed data) → Materialisation Microservice → NoSQL tables → {REST APIs} → Consuming Applications
1. Change Data Capture (CDC) is deployed to the System of Record (SoR) and pushes changes from the source database to a Kafka topic.
2. Kafka raw topics contain data in the format of the source system, with one raw topic per replicated table. Data is typically held here for c.7 days.
3. Stream processing (the Kafka Streams framework) transforms raw data into processed data, made available to consumers through "Published Topics".
4. Kafka Published Topics retain data long term (in line with retention policies and GDPR) and can be used by many Speed Layer Microservices.
5. Speed Layer Microservices consume Kafka Published Topics and push the data they need into their own persistence store (NoSQL, in-memory, etc.).
6. APIs expose data to consumers.
7. Channel applications call Speed Layer Microservices to request data.
8. Note: applications can also subscribe to events and respond without materialising them in a database, e.g., a push notification to a device.
Data Flow Diagram
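The steps above can be sketched end to end in miniature. This is an illustrative sketch only: the record shape, field names (ACCT_NO, BAL_PENCE, CHG_TS) and the transform are hypothetical, and plain Python lists and dicts stand in for the Kafka topics and the NoSQL store.

```python
def transform(raw_change):
    """Step 3: reshape a raw CDC change (source-system format) into the
    published, consumer-friendly format. Field names are hypothetical."""
    return {
        "account_id": raw_change["ACCT_NO"],
        "balance": int(raw_change["BAL_PENCE"]) / 100,
        "updated_at": raw_change["CHG_TS"],
    }

def materialise(store, event):
    """Step 5: a materialisation microservice upserts the event into its
    own persistence store, keyed for fast reads."""
    store[event["account_id"]] = event

# Steps 1-2: CDC has pushed a change onto the raw topic.
raw_topic = [{"ACCT_NO": "12345678", "BAL_PENCE": "250050",
              "CHG_TS": "2019-04-01T09:00:00Z"}]

# Steps 3-5: transform onto the published topic, then materialise.
published_topic = [transform(c) for c in raw_topic]
nosql_store = {}
for event in published_topic:
    materialise(nosql_store, event)

# Steps 6-7: an API handler now serves reads from the store, not the mainframe.
def get_account(account_id):
    return nosql_store.get(account_id)

print(get_account("12345678")["balance"])  # 2500.5
```

The point of the sketch is the decoupling: once the change lands in the published topic, any number of microservices can materialise it to their own requirements without touching the SoR.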
There are three main approaches to consuming data from the Speed Layer:
1. Event Driven: messages are consumed immediately, in near real time.
2. Request Driven: usage-specific data sets are materialised and exposed through APIs.
3. Functional Service: functionally aligned, enterprise-level data stores are materialised.
EVENT DRIVEN: Kafka consumers listen and respond to messages arriving in near real time and take immediate action on receipt. In this pattern there is no need to materialise the data.

REQUEST DRIVEN: Consuming microservices subscribe to topics and materialise data to their own requirements.

FUNCTIONAL SERVICE: A set of functional microservices is created, for example an "account" microservice from which all consuming microservices and applications read account data when needed.
Legacy applications and/or services can be rewritten to consume data from the Speed Layer, improving performance and reducing compute demand on other systems.
Consumption Patterns Overview
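A minimal sketch of the Event Driven pattern described above: the consumer reacts to each message on arrival and never materialises it into a database. The event shape, handler name and notification text are hypothetical, and a hand-fed call stands in for a Kafka consumer poll loop.

```python
def send_push_notification(device_id, text):
    """Hypothetical side effect, e.g. a push notification to a device."""
    return f"push to {device_id}: {text}"

sent = []

def on_event(event):
    # React immediately; nothing is written to a local store.
    if event["type"] == "payment_received":
        sent.append(send_push_notification(
            event["device_id"], f"You received £{event['amount']}"))

# A real consumer would poll a Kafka topic; here we feed one event by hand.
on_event({"type": "payment_received", "device_id": "dev-42", "amount": 25})
print(sent[0])  # push to dev-42: You received £25
```

The Request Driven pattern differs only in the handler body: instead of triggering a side effect, it would upsert the event into the microservice's own store, as in the data-flow sketch earlier.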
Multi-site deployment and resilience

(Diagram) Primary DC for SoRs · Standby DC for SoRs · Cloud hosting

1. CDC writes to a local Kafka cluster, i.e., in the same DC as the mainframe.
2. Kafka topics are replicated to a separate Kafka cluster in our 2nd DC.
3. Independent database clusters run in each datacentre.
4. When required, Kafka topics are replicated to cloud providers using Confluent Replicator.
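The replication steps above can be illustrated in miniature. This is a sketch, not Replicator itself: dicts stand in for the two Kafka clusters, the topic name is hypothetical, and the copy loop only mimics what cross-cluster replication achieves.

```python
# Two "clusters", each holding the same topic; the standby starts empty.
primary_cluster = {"accounts.raw": [{"offset": 0, "value": "change-1"},
                                    {"offset": 1, "value": "change-2"}]}
standby_cluster = {"accounts.raw": []}

def replicate(src, dst, topic):
    """Step 2: copy any records the standby cluster has not yet seen."""
    behind = len(dst[topic])
    dst[topic].extend(src[topic][behind:])

replicate(primary_cluster, standby_cluster, "accounts.raw")

# Step 3: each DC materialises its own independent database from its local
# cluster, so reads keep working in the standby DC if the primary is lost.
standby_db = {r["offset"]: r["value"] for r in standby_cluster["accounts.raw"]}
print(standby_db[1])  # change-2
```

Because each datacentre materialises from its own cluster, failover is a matter of routing reads, not rebuilding state.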
Progress so far…
• Architectural PoC completed:
1. Initial logical proving
2. Functional and non-functional proving
3. Load testing/benchmarking in Azure and IBM labs
• Speed Layer project launched to deliver the production capability and first use cases
1. Split into three use cases; the first is code complete, and use cases 2 and 3 are progressing well
• Adopting Confluent Kafka across multiple LOBs
1. Speed Layer
2. Event-based designs for originations journeys
3. High-volume messaging in Payments
• Working on a Streaming Maturity Assessment with Confluent
Adopting an Enterprise Event-Streaming Platform is a Journey

Nationwide is nearly here, with the Speed Layer plus platforms for Mortgages & Payments, but there is more potential to share common ways of working and utilise a common platform for more use cases.

The five stages (increasing VALUE, from Projects to Platform):
1. Early interest: a developer downloads Kafka and experiments; pilot(s).
2. Identify a project / start to set up a pipeline: LOB(s); small teams experimenting; 1-3 basic pipeline use cases moved into production, but fragmented.
3. Mission-critical, but disparate LOBs: multiple mission-critical use cases in production with scale, DR & SLAs; streaming clearly delivering business value, with C-suite visibility, but fragmented across LOBs.
4. Mission-critical, connected LOBs: the streaming platform manages the majority of mission-critical data processes, globally, with multi-datacenter replication across on-prem and hybrid clouds.
5. Central Nervous System: all data in the organisation is managed through a single streaming platform; typically digital natives / digital pure players, probably using Machine Learning & AI.
Expected value (this time next year)
• Enables agility and autonomy in digital development teams.
• The first use case alone will remove c.7bn requests/year from the HPNS.
• Will help us maintain our service availability despite unprecedented demand.
• Kafka and streaming are being adopted across multiple lines of business.
• The move to microservices with Confluent Kafka enables Nationwide to onboard new use cases quickly and easily.
• Speed Layer, streaming and Kafka will help Nationwide head off the threat from agile challenger banks.

The Speed Layer will help Nationwide provide a better customer experience, leading to better customer retention and new revenue streams.
Demo of Speed Layer
• Why we did the Proof of Concept
• Functional walk-through
• Non-functional view
Introducing Events and Stream Processing into Nationwide Building Society
Editor's Notes
  • #3: Good morning all, and thanks for joining this webcast. I'm Rob Jackson, HO Application Architecture for Nationwide, and also from Nationwide we have Pete Cracknell, who'll be doing a demo for you today. Today we're going to talk about an architecture we've called the "Speed Layer". You might be familiar with the concept of a Speed Layer from Lambda architectures and the world of data architecture, but this isn't that; it's just a name that stuck, so sorry for any confusion the name has caused. I'll talk to you about the reasons why we're doing this architecture and contrast it with our current-state architecture, describe how we can consume data from the Speed Layer, cover a bit about how it's deployed, and outline where we're heading next. The best bit is the demo, and then we'll do a Q&A with Tim (Vincent) from Confluent. I hope that's ok!
  • #4: Formed in 1884, and renamed to the name we are now in 1970. We're the world's largest building society, and in the UK we're a major provider of mortgages, loans, etc. We launched our first Internet Banking service in 1997, the first or second in the UK. We recently announced a large investment of £4.1bn, and what I'm talking to you about today forms an important part of that investment.
  • #5: I think the main headline for why we're doing the Speed Layer is "digital disruption". Some might not see Open Banking as digital disruption, as it's a regulatory requirement. However, it means we have to expose our data through APIs, and if we don't offer good digital services through our own apps, customers will use other banks' and organisations' apps that use our APIs to disintermediate us. The other reason to mention Open Banking is that it was the catalyst for the work on the Speed Layer: OB had the potential for high and unpredictable read volumes, along with stringent requirements for availability. We knew the other CMA9 were building OB with similar data caches to protect their core SORs from this load, and we intended to do the same. I'm sure you'll recognise the other headings there: higher volumes, 24x7 expectations, people expecting to use their data in new ways (e.g., how easy is it to search your emails vs. your bank's transaction history?). The final point is that, despite all the good work we do with them, our core ledgers do not make it easy to use data in new ways, aggregate multiple data sources, push events to customers, or scale cost-effectively.
  • #6: The Speed Layer is one of our answers to digital disruption. First, looking at the definition: it's a source of data that can be queried for a near-real-time copy of mainframe data. On top of that, it introduces event sourcing and stream processing into the Society. It's built using modern technologies (Confluent Kafka, MongoDB, microservices, OpenShift) and will initially be deployed on-prem, but it is very much an enabler for cloud, which I'll come back to later. The four key characteristics: Resilient: the application is designed to tolerate infrastructure failure, with built-in data redundancy, horizontal scaling and automated recovery. Fast and agile: once we've extracted data from SORs, we can allow consumers to join, aggregate, structure, query and search data in ways the SORs do not easily allow. Rich data set: allows us to enrich SOR data with analytics, 3rd-party sources, etc. And scalability: scalability is designed into the technologies we're using; Kafka and MongoDB are both heavily used in internet-scale deployments. My favourite statistic for Kafka is that Alibaba use it to source events at a peak rate of 425 million TPS; a big number for us is a small proportion of that.
  • #7: This isn't really our current state; it's just a representative sample of one small part of our estate, and fairly common. But it allows me to describe a fairly normal transaction path. A REST/HTTP request for some data comes in from a device on the internet. That hits an API Gateway in our datacentre, which then makes onward HTTP requests until the request eventually finds its way to the data in the mainframe. If any of those layers is unresponsive, the request will time out, and of course if the mainframe is not available, it doesn't work at all. Thankfully, that doesn't happen very often. If we want to move any of those components out into the cloud, that's ok, but they still have to call back into our data centre to get to the mainframe. So it's not wrong, and it's how write requests will continue, perhaps with a simplified estate, fewer layers and modern technologies. However, for read requests, we can do something different.
  • #8: This shows how Speed layer for reads and event sourcing will sit alongside our enterprise middleware A write comes into the SOR by existing means: batch, middleware, legacy services, payments gateway, whatever… It’s picked up by CDC and pushed into Kafka where it’s processed and stored before being materialised, in our case that’s MongoDB. Using this pattern, read requests are removed from our SORs or even our Data Centres, replicated to where it’s needed and materialised to requirements. Going into that in a bit more detail…
  • #9: I’ll just talk you through the data flow… Step 1 is CDC on the mainframe. Of course, this enables multiple data sources, for example, batch files using Kafka Connect, applications creating events, but for us right now, it’s Change Data Capture on our Mainframes. So that’s how it works, next we’ll look at how consumers use it.
  • #10: These show the ways in which we can consume data from the Speed Layer. There are cases for all of these, but I'm very much looking forward to seeing the first two come to fruition. Event driven: consumers subscribe to topics and act on events. Request driven: data is materialised to requirements. Functional: these are our core shared services we expect to be re-used.
  • #11: This shows how SL is an enabler for cloud and a good place to describe how resilience is baked into the architecture. Pete will show this for real during the demo.
  • #12: Slide 10
  • #13: Slide 11. We're now getting into next steps. We're about to embark on a streaming assessment with Confluent's help. We're at around step 3: we're using Kafka and streaming in SL, Mortgages and Payments, but we're currently doing things slightly differently on different platforms. We want to look at new use cases, new demand, and what capabilities we need to create to support that demand. This will feed into the various roadmaps, including the SI squad but also the IT Strategy.
  • #14: Final slide before questions.
  • #15: Pete was heading up the architectural proving team when we did this. I approached him with an architecture I wanted to prove, and I think Pete's approach was to try to break it. He'll let you know how he got on in the Q&A. He can also cover the alternatives we looked at for stream processing and maybe some of the stuff we learnt along the way.