The computer science behind a
modern distributed data store
Max Neunhöffer
Malaga, 19 May 2017
www.arangodb.com
Overview
Topics
Resilience and Consensus
Sorting
Log-structured Merge Trees
Hybrid Logical Clocks
Distributed ACID Transactions
Bottom line: You need CompSci to implement a modern data store
Resilience and Consensus
The Problem
A modern data store is distributed because it needs to scale out and/or
be resilient.
Different parts of the system need to agree on things.
Consensus is the art of achieving this as well as possible in software.
This is relatively easy if things are good, but very hard if:
the network has outages,
the network has dropped or delayed or duplicated packets,
disks fail (and come back with corrupt data),
machines fail (and come back with old data),
racks fail (and come back with or without data).
(And we have not even talked about malicious attacks and enemy action.)
Paxos and Raft
Traditionally, one uses the Paxos Consensus Protocol (1998).
More recently, Raft (2013) has been proposed.
Paxos is a challenge to understand and to implement efficiently.
Various variants exist.
Raft is designed to be understandable.
My advice:
First try to understand Paxos for some time (do not implement it!), then
enjoy the beauty of Raft, but do not implement it either!
Use some battle-tested implementation you trust!
But most importantly: DO NOT TRY TO INVENT YOUR OWN!
Raft in a slide
An odd number of servers each keep a persisted log of events.
Everything is replicated to everybody.
They democratically elect a leader with absolute majority.
Only the leader may append to the replicated log.
An append only counts when a majority has persisted and confirmed it.
Very smart logic to ensure a unique leader and automatic recovery from failure.
It is all a lot of fun to get right, but it is proven to work.
One puts a key/value store on top; the log contains the changes.
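To make the majority rule concrete (a sketch, not code from the talk): a Raft leader may only count an appended entry as committed once a majority of servers have persisted it, which boils down to taking the median of the match indices the leader tracks. A minimal Go sketch, assuming an odd number of servers as on the slide and omitting the additional current-term check a real Raft leader performs:

```go
package raft

import "sort"

// CommitIndex returns the largest log index that is persisted on a majority
// of servers. matchIndex[i] is the highest index known to be replicated on
// server i (the leader counts itself). This sketch shows the majority rule
// only; real Raft additionally requires the entry at that index to belong to
// the leader's current term before it may be committed.
func CommitIndex(matchIndex []int) int {
	idx := append([]int(nil), matchIndex...) // copy, do not disturb the caller
	sort.Ints(idx)
	// With n servers (n odd), the entry at position n/2 (0-based, ascending)
	// is replicated on at least (n+1)/2 servers, i.e. an absolute majority.
	return idx[len(idx)/2]
}
```

For five servers with match indices 9, 3, 7, 5, 8 this returns 7: entries up to index 7 are persisted on at least three of the five servers.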
Raft demo
http://raft.github.io/raftscope/index.html
(by Diego Ongaro)
Sorting
The Problem
Data stores need indexes. In practice, we need to sort things.
Most published algorithms are rubbish on modern hardware.
The problem is no longer the comparison computations but the data movement.
Since I was a kid playing with an Apple IIe,
compute power in one core has increased by ×20000,
a single memory access only by ×40,
and now we have 32 cores in a CPU,
so computation has outpaced memory access by ×1280!
Idea for a parallel sorting algorithm: Merge Sort
(Diagram: sorted input runs feed their current heads into a min-heap, which produces the merged output)
Nearly all comparisons hit the L2 cache!
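The diagram boils down to a k-way merge: each sorted run contributes its current head to a small min-heap, and the smallest head is repeatedly moved to the output. Because the heap holds only one entry per run, the comparisons touch a tiny working set that stays in cache. A minimal Go sketch (not from the slides), merging integer runs:

```go
package sorting

import "container/heap"

// runHead is the current front element of one sorted run.
type runHead struct {
	value int // value being compared
	run   int // which run it came from
	pos   int // position of the next element in that run
}

// minHeap orders run heads by value; it holds one entry per run, so it is
// tiny and nearly all comparisons hit fast cache.
type minHeap []runHead

func (h minHeap) Len() int            { return len(h) }
func (h minHeap) Less(i, j int) bool  { return h[i].value < h[j].value }
func (h minHeap) Swap(i, j int)       { h[i], h[j] = h[j], h[i] }
func (h *minHeap) Push(x interface{}) { *h = append(*h, x.(runHead)) }
func (h *minHeap) Pop() interface{} {
	old := *h
	x := old[len(old)-1]
	*h = old[:len(old)-1]
	return x
}

// MergeRuns merges already sorted runs into one sorted slice using a min-heap,
// as in the k-way merge step of merge sort.
func MergeRuns(runs [][]int) []int {
	h := &minHeap{}
	total := 0
	for r, run := range runs {
		total += len(run)
		if len(run) > 0 {
			heap.Push(h, runHead{value: run[0], run: r, pos: 1})
		}
	}
	merged := make([]int, 0, total)
	for h.Len() > 0 {
		head := heap.Pop(h).(runHead)
		merged = append(merged, head.value)
		if head.pos < len(runs[head.run]) {
			heap.Push(h, runHead{value: runs[head.run][head.pos], run: head.run, pos: head.pos + 1})
		}
	}
	return merged
}
```

MergeRuns([][]int{{1, 4, 9}, {2, 3, 8}, {5, 7}}) yields 1 2 3 4 5 7 8 9; in a real sorter the runs would be produced in parallel and then streamed through the merge.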
Log structured merge trees (LSM-trees)
The Problem
People rightfully expect from a data store that it
can hold more data than the available RAM,
works well with SSDs and spinning rust,
allows fast bulk inserts into large data sets, and
provides fast reads in a hot set that fits into RAM.
Traditional B-tree based structures often fail to deliver on the last two.
Log structured merge trees (LSM-trees)
(Diagram omitted. Source: http://www.benstopford.com/2015/02/14/log-structured-merge-trees/, Author: Ben Stopford, License: Creative Commons)
Log structured merge trees (LSM-trees)
LSM-trees — summary
writes first go into memtables,
all files are sorted and immutable,
compaction happens in the background,
merge sort can be used,
all writes use sequential I/O,
Bloom filters or Cuckoo filters for fast reads,
⇒ good write throughput and reasonable read performance,
used in ArangoDB, BigTable, Cassandra, HBase, InfluxDB, LevelDB,
MongoDB, RocksDB, SQLite4 and WiredTiger, etc.
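The read path implied by these bullets can be sketched as follows (a toy illustration, not ArangoDB's or RocksDB's actual code; the sstable type and its map-based filter are hypothetical stand-ins): look in the memtable first, then consult the immutable sorted files from newest to oldest, skipping any file whose Bloom/Cuckoo filter rules the key out.

```go
package lsm

// sstable stands in for one immutable, sorted on-disk file. mayContain is the
// Bloom (or Cuckoo) filter check: false means "definitely not here", true
// means "possibly here, go look".
type sstable struct {
	filter map[string]bool   // stand-in for a real Bloom/Cuckoo filter
	data   map[string]string // stand-in for the sorted file contents
}

func (t *sstable) mayContain(key string) bool { return t.filter[key] }
func (t *sstable) get(key string) (string, bool) {
	v, ok := t.data[key]
	return v, ok
}

// store is a toy LSM tree: a mutable memtable in front of immutable tables
// ordered newest first. Compaction would merge-sort tables in the background.
type store struct {
	memtable map[string]string
	tables   []*sstable // tables[0] is the newest
}

// Get shows the LSM read path: memtable first, then newer-to-older files,
// skipping any file whose filter rules the key out.
func (s *store) Get(key string) (string, bool) {
	if v, ok := s.memtable[key]; ok {
		return v, true
	}
	for _, t := range s.tables {
		if !t.mayContain(key) {
			continue // filter says the key cannot be in this file
		}
		if v, ok := t.get(key); ok {
			return v, true
		}
	}
	return "", false
}
```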
Hybrid Logical Clocks (HLC)
The Problem
Clocks in different nodes of distributed systems are not in sync.
general relativity poses fundamental obstructions to clock synchronization,
in practice, clock skew happens,
Google can use atomic clocks,
even with NTP (network time protocol) we have to live with ≈ 20ms.
Therefore, we cannot compare time stamps from different nodes!
Why would this help?
establish “happened after” relationship between events,
e.g. for conflict resolution, log sorting, detecting network delays,
time to live could be implemented easily.
Hybrid Logical Clocks (HLC)
The Idea
Every computer has a local clock, and we use NTP to synchronize.
If two events on different machines are linked by causality, the cause
should have a smaller time stamp than the effect.
causality ⇐⇒ a message is sent
Send a time stamp with every message. The HLC always returns a value
> max(local clock, largest time stamp ever seen).
Causality is preserved; physical time can eventually “catch up” with the logical time.
http://muratbuffalo.blogspot.com.es/2014/07/hybrid-logical-clocks.html
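A minimal sketch of that rule in Go (not the published HLC algorithm, which keeps a separate logical counter next to the physical component to bound drift; this version folds both into a single integer for brevity):

```go
package hlc

import "sync"

// Clock is a minimal hybrid logical clock: it always hands out a value that
// is strictly larger than both the local physical clock and the largest
// timestamp it has ever seen.
type Clock struct {
	mu      sync.Mutex
	last    int64        // largest timestamp handed out or observed so far
	physNow func() int64 // injected physical clock, e.g. wall time in ms
}

func New(physNow func() int64) *Clock { return &Clock{physNow: physNow} }

// Now is called for a local event or before sending a message; the returned
// timestamp travels with the message.
func (c *Clock) Now() int64 {
	c.mu.Lock()
	defer c.mu.Unlock()
	return c.tick(0)
}

// Observe is called when a message carrying timestamp ts arrives, so that the
// effect receives a larger timestamp than its cause.
func (c *Clock) Observe(ts int64) int64 {
	c.mu.Lock()
	defer c.mu.Unlock()
	return c.tick(ts)
}

func (c *Clock) tick(seen int64) int64 {
	t := c.physNow()
	if c.last >= t {
		t = c.last
	}
	if seen >= t {
		t = seen
	}
	t++ // strictly greater than everything seen so far
	c.last = t
	return t
}
```

Call Now() before sending and attach the result to the message; the receiver calls Observe(ts), so its next timestamps are guaranteed to be larger than the cause's, which is exactly the causality property above.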
Distributed ACID Transactions
Atomic: either happens in its entirety or not at all
Consistent: reading sees a consistent state, writing preserves consistency
Isolated: concurrent transactions do not see each other
Durable: committed writes are preserved after shutdown and crashes
(All relatively doable when transactions happen one after another!)
Distributed ACID Transactions
The Problem
In a distributed system:
How to make sure that all nodes agree on whether
the transaction has happened? (Atomicity)
How to create a consistent snapshot across nodes? (Consistency)
How to hide ongoing activities until commit? (Isolation)
How to handle lost nodes? (Durability)
We have to take replication, resilience and failover into account.
Distributed ACID Transactions
WITHOUT
Distributed databases without ACID transactions:
ArangoDB, BigTable, Couchbase, Datastax, Dynamo, Elastic, HBase,
MongoDB, RethinkDB, Riak, and lots more . . .
WITH
Distributed databases with ACID transactions:
(ArangoDB,) CockroachDB, FoundationDB, Spanner
⇒ Very few distributed engines promise ACID, because this is hard!
Distributed ACID Transactions
Basic Idea
Use Multi Version Concurrency Control (MVCC), i.e. multiple revisions of
a data item are kept.
Perform writes and replication in a decentralized, distributed fashion, without
them becoming visible to other transactions.
Then have some place with a “switch” that decides when the
transaction becomes visible. These “switches” need to
be persisted somewhere (durability),
scale out (no bottleneck for commit/abort),
be replicated (no single point of failure),
be resilient in case of fail-over (fault-tolerance).
Transaction visibility needs to be implemented (MVCC); time stamps play
a crucial role.
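A toy sketch of the visibility rule this implies (all names hypothetical; real engines additionally deal with write intents, conflict detection and garbage collection of old versions): a reader at timestamp readTS sees the newest version of a key whose writing transaction has flipped its commit switch and whose timestamp is not newer than readTS.

```go
package mvcc

// version is one revision of a data item, stamped with the timestamp of the
// transaction that wrote it.
type version struct {
	value string
	ts    int64 // commit timestamp assigned to the writing transaction
	txn   int64 // id of the writing transaction
}

// store keeps multiple revisions per key (MVCC). committed plays the role of
// the replicated, persisted commit "switches": only once a transaction's
// switch is flipped do its versions become visible.
type store struct {
	versions  map[string][]version // versions per key, newest last
	committed map[int64]bool       // txn id -> switch flipped?
}

// Commit flips the switch: from now on every version written by txn is
// visible to readers whose read timestamp is late enough.
func (s *store) Commit(txn int64) { s.committed[txn] = true }

// Read returns the newest committed version of key that is visible at readTS.
// Versions written by uncommitted transactions, or committed after readTS,
// stay hidden (isolation); the switch decides, not the individual writes.
func (s *store) Read(key string, readTS int64) (string, bool) {
	vs := s.versions[key]
	for i := len(vs) - 1; i >= 0; i-- {
		v := vs[i]
		if v.ts <= readTS && s.committed[v.txn] {
			return v.value, true
		}
	}
	return "", false
}
```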
Links
http://the-paper-trail.org/blog/consensus-protocols-paxos
https://raft.github.io
https://en.wikipedia.org/wiki/Merge_sort
http://www.benstopford.com/2015/02/14/log-structured-merge-trees/
http://muratbuffalo.blogspot.com.es/2014/07/hybrid-logical-clocks.html
https://research.google.com/archive/spanner.html
https://www.cockroachlabs.com/docs/cockroachdb-architecture.html
https://www.arangodb.com
http://mesos.apache.org