Eventually-Consistent Data Structures

Sean Cribbs
@seancribbs #CRDT
StrangeLoop 2012
I work for Basho. We make Riak.
Riak is Eventually Consistent
So are Voldemort and Cassandra
No ACID!
Duals or Duels?

object-oriented / functional
static / dynamic
consistency / availability
throughput / latency
threaded / evented
safety / liveness
Safety / Liveness
Proving the Correctness of Multiprocess Programs - Leslie Lamport (March 1977)

• Safety: “nothing bad happens” (partial correctness)
• Liveness: “something good eventually happens” (termination)

“Safety and liveness: Eventual consistency is not safe” - Peter Bailis
http://www.bailis.org/blog/safety-and-liveness-eventual-consistency-is-not-safe/
Eventual Consistency

• Replicated
• Loose coordination
• Convergence
Eventual is Good

   ✔ Fault-tolerant
   ✔ Highly available
   ✔ Low-latency
Consistency?

No clear winner!
• Throw one out? → Cassandra
• Keep both? → Riak & Voldemort
Conflicts!
 A!   B!
Semantic Resolution

• Your app knows the domain - use business rules to resolve
• Amazon Dynamo’s shopping cart

“Ad hoc approaches have proven brittle and error-prone”
Conflict-Free Replicated Data Types

multiple independent copies
useful abstractions
resolves automatically toward a single value

http://db.cs.berkeley.edu/papers/UCB-lattice-tr.pdf
Bounded Join Semi-Lattices
〈S, ⊔, ⊥〉

‣ S is a set
‣ ⊔ is a least-upper-bound (join/merge) operator on S
‣ ⊥ ∈ S is the least element
‣ ∀x, y ∈ S: x ≤S y ⟺ x ⊔ y = y
‣ ∀x ∈ S: x ⊔ ⊥ = x
lmax Lattice

S ≔ ℝ
a ⊔ b ≔ max(a, b)
⊥ ≔ -∞
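A minimal sketch of the lmax lattice in Python (the names are mine; the talk defines the lattice only abstractly):

```python
# lmax lattice: S = the reals, join = max, bottom = negative infinity.
LMAX_BOTTOM = float("-inf")

def lmax_join(a, b):
    """Least upper bound of two lmax values."""
    return max(a, b)

# Join is commutative, associative, and idempotent,
# and bottom is the identity element:
assert lmax_join(10, 15) == lmax_join(15, 10) == 15
assert lmax_join(7, 7) == 7                    # idempotent
assert lmax_join(42, LMAX_BOTTOM) == 42        # x ⊔ ⊥ = x
```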
lset Lattice

S ≔ all finite sets
a ⊔ b ≔ a ∪ b
⊥ ≔ {}

(Hasse diagram: the singletons {a}, {b}, {c}, {d}, {e} at the bottom merge
upward over time through pairs and triples such as {a,b}, {b,c,d} toward
{a,b,c,d,e}; any merge order converges on the same set.)
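The lset merge can be sketched in Python as plain set union; because union is commutative, associative, and idempotent, any delivery order converges (this example is mine, not from the talk):

```python
# lset lattice: S = finite sets, join = set union, bottom = the empty set.
def lset_join(a, b):
    return a | b

# Two replicas receive the same merges in different orders
# and still converge on the same value.
left  = lset_join(lset_join({"a"}, {"b", "c"}), {"c", "d"})
right = lset_join({"c", "d"}, lset_join({"b", "c"}, {"a"}))
assert left == right == {"a", "b", "c", "d"}
assert lset_join({"a"}, set()) == {"a"}   # x ⊔ ⊥ = x
```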
CRDT Flavors

• Convergent: State
  • Weak messaging requirements
• Commutative: Operations
  • Reliable broadcast required
  • Causal ordering sufficient
Convergent CRDTs
Commutative
  CRDTs
Registers
A place to put your stuff

• Last-Write Wins (LWW-Register)
  • e.g. Columns in Cassandra
• Multi-Valued (MV-Register)
  • e.g. Objects (values) in Riak
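A hypothetical LWW-Register sketch (not from the talk): the register carries a (timestamp, value) pair, and merge keeps whichever write has the later timestamp.

```python
# LWW-Register sketch: a (timestamp, value) pair.
# Merge keeps the write with the later timestamp. Ties are broken
# arbitrarily here; a real system needs a deterministic tie-breaker
# (e.g. actor id), otherwise replicas may disagree on concurrent writes.
def lww_merge(a, b):
    return a if a[0] >= b[0] else b

r1 = (100, "red")
r2 = (250, "blue")
assert lww_merge(r1, r2) == (250, "blue")
assert lww_merge(r2, r1) == (250, "blue")   # order-independent
```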
Counters
 Keeping tabs
G-Counter
// Starts empty
[]

// A increments twice, forwarding state
[{a,1}] // == 1
[{a,2}] // == 2

             // B increments
             [{b,1}] // == 1

// Merging takes the per-actor max
[{a,2}, {b,1}]     [{a,1}, {b,1}]
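The G-Counter above can be sketched as a state-based CRDT in Python (a dict keyed by actor; the function names are mine):

```python
# G-Counter sketch: each actor increments only its own slot;
# merge takes the per-actor maximum; the value is the sum.
def gc_increment(state, actor):
    new = dict(state)
    new[actor] = new.get(actor, 0) + 1
    return new

def gc_merge(s1, s2):
    return {a: max(s1.get(a, 0), s2.get(a, 0))
            for a in s1.keys() | s2.keys()}

def gc_value(state):
    return sum(state.values())

a = gc_increment(gc_increment({}, "a"), "a")   # A increments twice
b = gc_increment({}, "b")                      # B increments once
merged = gc_merge(a, gc_merge(b, a))           # any merge order works
assert gc_value(merged) == 3
assert gc_merge(a, b) == gc_merge(b, a)        # merge is commutative
```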
PN-Counter
// A PN-Counter
{
  P = [{a,10},{b,2}],
  N = [{a,1},{c,5}]
}
// == (10+2)-(1+5) == 12-6 == 6
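A PN-Counter is just two G-Counters, one for increments (P) and one for decrements (N). A sketch matching the slide's example state (helper names are mine):

```python
# PN-Counter sketch: merge each G-Counter half independently;
# the value is sum(P) - sum(N).
def gc_merge(s1, s2):
    return {a: max(s1.get(a, 0), s2.get(a, 0))
            for a in s1.keys() | s2.keys()}

def pn_merge(c1, c2):
    return {"P": gc_merge(c1["P"], c2["P"]),
            "N": gc_merge(c1["N"], c2["N"])}

def pn_value(c):
    return sum(c["P"].values()) - sum(c["N"].values())

# The slide's example state:
c = {"P": {"a": 10, "b": 2}, "N": {"a": 1, "c": 5}}
assert pn_value(c) == 6          # (10+2) - (1+5)
```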
Sets
Members Only
G-Set
// Starts empty
{}

// A adds a and b, forwarding state
{a}
{a,b}

             // B adds c
             {c}

// Merging is set union
{a,b,c}        {a,c}
2P-Set
// Starts empty
{A={}, R={}}

// A adds a and b, forwarding state,
// then removes a
{A={a},   R={}}  // == {a}
{A={a,b}, R={}}  // == {a,b}
{A={a,b}, R={a}} // == {b}

              // B adds c
              {A={c}, R={}} // == {c}

// Merging unions the A sets and the R sets
{A={a,b,c}, R={a}}   {A={a,c}, R={}}
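The 2P-Set above can be sketched as two G-Sets, an add-set A and a remove-set R of tombstones (names are mine):

```python
# 2P-Set sketch: an element is present if added and never removed.
# Removes are tombstones: once removed, an element can never return.
def tp_merge(s1, s2):
    return {"A": s1["A"] | s2["A"], "R": s1["R"] | s2["R"]}

def tp_value(s):
    return s["A"] - s["R"]

replica_a = {"A": {"a", "b"}, "R": {"a"}}   # added a, b; removed a
replica_b = {"A": {"c"}, "R": set()}        # added c
merged = tp_merge(replica_a, replica_b)
assert tp_value(merged) == {"b", "c"}
assert tp_merge(replica_a, replica_b) == tp_merge(replica_b, replica_a)
```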
LWW-Element-Set
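The slide names the LWW-Element-Set without detail; a hedged sketch of the usual construction (my own, not from the talk): per-element add and remove timestamps, with an element present if its latest add beats its latest remove.

```python
# Hypothetical LWW-Element-Set sketch: timestamped add/remove maps.
# This version is "add-biased": on a timestamp tie, the add wins.
def lwwset_merge(s1, s2):
    def newer(m1, m2):
        return {e: max(m1.get(e, 0), m2.get(e, 0))
                for e in m1.keys() | m2.keys()}
    return {"adds": newer(s1["adds"], s2["adds"]),
            "removes": newer(s1["removes"], s2["removes"])}

def lwwset_value(s):
    return {e for e, t in s["adds"].items()
            if t >= s["removes"].get(e, -1)}

r1 = {"adds": {"x": 1, "y": 2}, "removes": {"x": 3}}
r2 = {"adds": {"x": 4}, "removes": {}}
merged = lwwset_merge(r1, r2)
assert lwwset_value(merged) == {"x", "y"}   # re-add at t=4 beats remove at t=3
```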
OR-Set
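The OR-Set (Observed-Remove Set) is also named without detail; a simplified sketch of the idea (my own): each add attaches a unique tag, and a remove deletes only the tags it has observed, so a concurrent add wins over a remove, unlike the 2P-Set.

```python
import uuid

# Simplified OR-Set sketch. A full state-based OR-Set also keeps
# removed tags as tombstones so merges cannot resurrect them;
# that bookkeeping is omitted here for brevity.
def or_add(state, elem):
    state.setdefault(elem, set()).add(uuid.uuid4().hex)

def or_remove(state, elem):
    state.pop(elem, None)          # drop only the locally observed tags

def or_merge(s1, s2):
    return {e: s1.get(e, set()) | s2.get(e, set())
            for e in s1.keys() | s2.keys()}

def or_value(state):
    return {e for e, tags in state.items() if tags}

a, b = {}, {}
or_add(a, "x")                     # replica A adds x
or_add(b, "x")                     # replica B concurrently adds x
or_remove(a, "x")                  # A removes the add it observed
merged = or_merge(a, b)
assert or_value(merged) == {"x"}   # B's concurrent add survives the remove
```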
Graphs

G = (V, E)
E ⊆ V × V
Use-Cases

• Social graph (OR-Set or a Graph)
• Web page visits (G-Counter)
• Shopping Cart (Modified OR-Set)
• “Like” button (U-Set)
Challenges: GC


• CRDTs are inefficient
• Synchronization may be required
Challenges: Responsibility

• Client
  • Erlang: mochi/statebox
  • Clojure: reiddraper/knockbox
  • Ruby: aphyr/meangirls, bkerley/hanover

• Server
Thanks

Editor's Notes

  • #2: \n
  • #3: \n
  • #4: \n
  • #5: There’s no ACID! But don’t worry, there’s no need to be upset, despite what you may have heard.\n
  • #6: I think the fear people have about giving up ACID is really just a tendency to see things in black and white, because subtlety is much harder to understand and accept. Everyday in the wider technical community and the Internet we are presented with binary choices which are often not really in conflict, but either orthogonal albeit related concepts, or simply different ends of a spectrum. We too often perceive a Hegelian dialectic when one doesn’t exist (without the synthesis part!). \n\nAn important pair we need to understand, but is not frequently discussed outside of academia is safety and liveness.\n
  • #7: I think the fear people have about giving up ACID is really just a tendency to see things in black and white, because subtlety is much harder to understand and accept. Everyday in the wider technical community and the Internet we are presented with binary choices which are often not really in conflict, but either orthogonal albeit related concepts, or simply different ends of a spectrum. We too often perceive a Hegelian dialectic when one doesn’t exist (without the synthesis part!). \n\nAn important pair we need to understand, but is not frequently discussed outside of academia is safety and liveness.\n
  • #8: I think the fear people have about giving up ACID is really just a tendency to see things in black and white, because subtlety is much harder to understand and accept. Everyday in the wider technical community and the Internet we are presented with binary choices which are often not really in conflict, but either orthogonal albeit related concepts, or simply different ends of a spectrum. We too often perceive a Hegelian dialectic when one doesn’t exist (without the synthesis part!). \n\nAn important pair we need to understand, but is not frequently discussed outside of academia is safety and liveness.\n
  • #9: I think the fear people have about giving up ACID is really just a tendency to see things in black and white, because subtlety is much harder to understand and accept. Everyday in the wider technical community and the Internet we are presented with binary choices which are often not really in conflict, but either orthogonal albeit related concepts, or simply different ends of a spectrum. We too often perceive a Hegelian dialectic when one doesn’t exist (without the synthesis part!). \n\nAn important pair we need to understand, but is not frequently discussed outside of academia is safety and liveness.\n
  • #10: I think the fear people have about giving up ACID is really just a tendency to see things in black and white, because subtlety is much harder to understand and accept. Everyday in the wider technical community and the Internet we are presented with binary choices which are often not really in conflict, but either orthogonal albeit related concepts, or simply different ends of a spectrum. We too often perceive a Hegelian dialectic when one doesn’t exist (without the synthesis part!). \n\nAn important pair we need to understand, but is not frequently discussed outside of academia is safety and liveness.\n
  • #11: I think the fear people have about giving up ACID is really just a tendency to see things in black and white, because subtlety is much harder to understand and accept. Everyday in the wider technical community and the Internet we are presented with binary choices which are often not really in conflict, but either orthogonal albeit related concepts, or simply different ends of a spectrum. We too often perceive a Hegelian dialectic when one doesn’t exist (without the synthesis part!). \n\nAn important pair we need to understand, but is not frequently discussed outside of academia is safety and liveness.\n
  • #12: Safety and Liveness were terms which were defined for concurrent programs in this 1977 paper by Leslie Lamport. Colloquially, safety means that in the course of running your program, “nothing bad will happen” and liveness means that “something good will eventually happen”. Both are desirable properties, but sometimes enforcing one property may cause you to give up the other. I thought Peter Bailis stated this eloquently in his recent blog post that Eventual Consistency is not safe by itself - but a trivially satisfiable liveness property. That is, it helps keep your system available, but doesn’t make any guarantees about whether correct answers will be given at all. His larger point was that practical systems like Riak/Voldemort/Cassandra do make safety guarantees but tend not to state them. It’s not all “garbage”.\n
  • #13: Safety and Liveness were terms which were defined for concurrent programs in this 1977 paper by Leslie Lamport. Colloquially, safety means that in the course of running your program, “nothing bad will happen” and liveness means that “something good will eventually happen”. Both are desirable properties, but sometimes enforcing one property may cause you to give up the other. I thought Peter Bailis stated this eloquently in his recent blog post that Eventual Consistency is not safe by itself - but a trivially satisfiable liveness property. That is, it helps keep your system available, but doesn’t make any guarantees about whether correct answers will be given at all. His larger point was that practical systems like Riak/Voldemort/Cassandra do make safety guarantees but tend not to state them. It’s not all “garbage”.\n
  • #14: Safety and Liveness were terms which were defined for concurrent programs in this 1977 paper by Leslie Lamport. Colloquially, safety means that in the course of running your program, “nothing bad will happen” and liveness means that “something good will eventually happen”. Both are desirable properties, but sometimes enforcing one property may cause you to give up the other. I thought Peter Bailis stated this eloquently in his recent blog post that Eventual Consistency is not safe by itself - but a trivially satisfiable liveness property. That is, it helps keep your system available, but doesn’t make any guarantees about whether correct answers will be given at all. His larger point was that practical systems like Riak/Voldemort/Cassandra do make safety guarantees but tend not to state them. It’s not all “garbage”.\n
  • #15: In an eventually consistent system, you tend to have multiple copies of the same datum, which means that it’s replicated. They also tend to allow loose coordination and things like sloppy quorums, since you don’t require expensive multi-phase commit protocols. This also makes them resilient to network partitions, which DO EXIST. Eventually consistent systems must also include means for state to move forward when staleness is detected. In Dynamo-like systems, this is usually done with read-repair, that is, writing the newer value to stale replicas when reading.\n
  • #16: While not as simple to understand as an ACID system, eventual consistency has many practical benefits. When encountering failures, especially network-related ones, the system can more often remain available to reads and writes despite the failures. In the same vein, relying on dynamic participation in operations lends itself to systems with low, consistent latency because only promptly-responding replicas need to be considered.\n
  • #17: Of course the tradeoff of those benefits, thanks to the CAP theorem, is that you sacrifice strict consistency. There is no total ordering of events in the system, you have no transactions, you have weak guarantees of delivery at best. This means it’s incredibly difficult to decide who wins when there are concurrent writes in the system. The solutions to the problem are both non-ideal, but they are generally: first, to throw one version out by applying an arbitrary ordering, usually a timestamp of sorts; second, to keep both values around and let the user decide. These are the approaches of Cassandra, and Riak/Voldemort respectively.\n
  • #18: Of course the tradeoff of those benefits, thanks to the CAP theorem, is that you sacrifice strict consistency. There is no total ordering of events in the system, you have no transactions, you have weak guarantees of delivery at best. This means it’s incredibly difficult to decide who wins when there are concurrent writes in the system. The solutions to the problem are both non-ideal, but they are generally: first, to throw one version out by applying an arbitrary ordering, usually a timestamp of sorts; second, to keep both values around and let the user decide. These are the approaches of Cassandra, and Riak/Voldemort respectively.\n
  • #19: So maybe you chose Riak or Voldemort, you get write conflicts (Riak calls them siblings). Now that you’ve got both values, how do you decide what the real state should be?\n
  • #20: One strategy, which I call “semantic resolution”, is to say that your application encodes the domain of the problem and so it can use business rules to resolve the conflict. This is the strategy implemented by the “shopping cart” described in the Amazon Dynamo paper. It merges toward the maximum quantity of each item in the cart; however, it exhibits some problems -- namely that sometimes items that were removed from the cart can reappear! From Amazon’s point of view this is okay because it might encourage the customer to buy more, but it is a bewildering user-experience!\n\nFortunately, there is some interesting recent research about a more rigorous approach to eventual consistency.\n\n\n
  • #21: ...and that is Conflict-Free Replicated Data Types. This basically means that instead of strictly opaque values, the datastore provides useful abstract data structures. Since we’re in an eventually consistent system, the data structure is replicated to multiple locations, all of which act independently. But by far the most compelling part is that these data structures have the ability to resolve automatically toward a single value, given any number of conflicting values at individual replicas. CRDTs provide a strong safety property for eventually consistent systems that doesn’t sacrifice liveness in the process.\n
  • #22: ...and that is Conflict-Free Replicated Data Types. This basically means that instead of strictly opaque values, the datastore provides useful abstract data structures. Since we’re in an eventually consistent system, the data structure is replicated to multiple locations, all of which act independently. But by far the most compelling part is that these data structures have the ability to resolve automatically toward a single value, given any number of conflicting values at individual replicas. CRDTs provide a strong safety property for eventually consistent systems that doesn’t sacrifice liveness in the process.\n
  • #23: ...and that is Conflict-Free Replicated Data Types. This basically means that instead of strictly opaque values, the datastore provides useful abstract data structures. Since we’re in an eventually consistent system, the data structure is replicated to multiple locations, all of which act independently. But by far the most compelling part is that these data structures have the ability to resolve automatically toward a single value, given any number of conflicting values at individual replicas. CRDTs provide a strong safety property for eventually consistent systems that doesn’t sacrifice liveness in the process.\n
  • #24: The theory behind what I’m going to talk about is the idea of bounded join semi-lattices, or “lattices” for short, and is rooted in the theory of monotonic logic. The definition I’m giving here comes from a recent paper by Neil Conway and others at UC-Berkeley.\n
  • #25: A lattice is a triple of a set, a function, and a value. S is a set (possibly infinite) representing the possible values of the lattice. The upside-down T is the “least element” of the set. The “square U” is a binary operator over S that produces a least-upper bound of its operands that is also a member of S, also called the “join” or “merge” operator. The merge operator is commutative, associative, and idempotent. Finally, a lattice has the property such that for any two members of the set S, the merge operator creates a partial ordering over the set. This also means that merging any element with the least element is an identity operation.\n
  • #26: A lattice is a triple of a set, a function, and a value. S is a set (possibly infinite) representing the possible values of the lattice. The upside-down T is the “least element” of the set. The “square U” is a binary operator over S that produces a least-upper bound of its operands that is also a member of S, also called the “join” or “merge” operator. The merge operator is commutative, associative, and idempotent. Finally, a lattice has the property such that for any two members of the set S, the merge operator creates a partial ordering over the set. This also means that merging any element with the least element is an identity operation.\n
  • #27: A lattice is a triple of a set, a function, and a value. S is a set (possibly infinite) representing the possible values of the lattice. The upside-down T is the “least element” of the set. The “square U” is a binary operator over S that produces a least-upper bound of its operands that is also a member of S, also called the “join” or “merge” operator. The merge operator is commutative, associative, and idempotent. Finally, a lattice has the property such that for any two members of the set S, the merge operator creates a partial ordering over the set. This also means that merging any element with the least element is an identity operation.\n
  • #28: A lattice is a triple of a set, a function, and a value. S is a set (possibly infinite) representing the possible values of the lattice. The upside-down T is the “least element” of the set. The “square U” is a binary operator over S that produces a least-upper bound of its operands that is also a member of S, also called the “join” or “merge” operator. The merge operator is commutative, associative, and idempotent. Finally, a lattice has the property such that for any two members of the set S, the merge operator creates a partial ordering over the set. This also means that merging any element with the least element is an identity operation.\n
  • #29: A lattice is a triple of a set, a function, and a value. S is a set (possibly infinite) representing the possible values of the lattice. The upside-down T is the “least element” of the set. The “square U” is a binary operator over S that produces a least-upper bound of its operands that is also a member of S, also called the “join” or “merge” operator. The merge operator is commutative, associative, and idempotent. Finally, a lattice has the property such that for any two members of the set S, the merge operator creates a partial ordering over the set. This also means that merging any element with the least element is an identity operation.\n
  • #30: A lattice is a triple of a set, a function, and a value. S is a set (possibly infinite) representing the possible values of the lattice. The upside-down T is the “least element” of the set. The “square U” is a binary operator over S that produces a least-upper bound of its operands that is also a member of S, also called the “join” or “merge” operator. The merge operator is commutative, associative, and idempotent. Finally, a lattice has the property such that for any two members of the set S, the merge operator creates a partial ordering over the set. This also means that merging any element with the least element is an identity operation.\n
  • #31: Just for the sake of illustration, let’s look at one of the simpler lattices defined in Conway’s paper, the “lmax” lattice. The set of values in the lattice are the Real numbers. The merge function is defined as taking the maximum of the two values. The minimum value is negative infinity. I hope you can see that this definition is a lattice: nothing is less than negative infinity, and the merging of any two values trends toward positive infinity, without exceeding the seen values.\n
  • #32: Let’s take another example for those who might be visual learners, the lset lattice. The set of values for the lattice are all simple sets, with the empty set being the minimum value. The merge function is set-union, which you should be able to see in this diagram, allow any ordering of operation delivery to eventually converge on the same value. This diagram doesn’t even show all of the possible orderings, in fact.\n\nNow why is this stuff important? Remember how we had conflicts and we needed a sane way to resolve those conflicts? Lattices are a generic type that give us determinism in how we merge our conflicts. In the case of the “lmax” lattice, if one value has 10 and another has 15, you pick 15 because it’s the larger one. This foundation gives us what we need to understand a larger study of the topic of conflict-resolution in eventual consistency.\n
  • #33: The primary work on this research has been done by two researchers at INRIA and their colleagues in Portugal. Marc Shapiro also gave a great talk on the subject at Microsoft Research called “Strong Eventual Consistency,” which you can easily find online.

The paper above is where I’ve gotten most of the content and diagrams, but I’ve tried to simplify it so that we can get through it in the scope of this talk. If you want the real thing, search for <title>; it’s free to download.
  • #34: As you might have noticed, there are two flavors of CRDTs. Both provide the same conflict-free property, but they differ in implementation strategy.

Convergent types are based on a local modification of state, followed by forwarding the resulting state downstream, where a merge operation is performed at the other replicas. The state itself encodes all information needed to converge. They are great for systems with weak message-delivery guarantees, for example a Dynamo-style system. Convergent types can also be resolved in clients, which is helpful for systems that do not provide rich datatypes.

Commutative types, on the other hand, replicate commutative operations rather than state, and tend to rely on systems with reliable broadcast (which assures operations reach all replicas). Operations are generally not required to have a total ordering; a local causal ordering is sufficient.
  • #35: This diagram from the paper shows the basic shape of a convergent, state-based CRDT. Note how the mutation is applied locally, then forwarded downstream as a merge operation. As long as all replicas eventually receive states that include all mutations, they converge on the same value. (The merge function here is essentially the lattice join.)
  • #36: Commutative types, again, forward operations to other replicas, not state. Obviously, if an operation is not delivered, or is applied out of order locally, the states don’t converge; so unlike the convergent types, a reliable broadcast channel is required. As long as functions f() and g() commute, state will converge.
  • #37: A register is the simplest type of data structure: a memory cell storing an opaque value. It supports only two operations, “assign” and “value” (set and get). Concurrent updates will not commute (who should win?). We’ve seen this problem before.
  • #38: The two approaches to concurrent resolution are the ones taken by Cassandra and Riak, respectively: Last-Write-Wins (the LWW-Register) and Multi-Valued (the MV-Register), which keeps all divergent values. For resolution, LWW-Registers tend to use timestamps with a reasonable guarantee of ordering (difficult in practice, but sufficient for some systems). MV-Registers, on the other hand, require the more expensive version vector to resolve conflicts, and produce the union of all divergent values (but they don’t behave like a set!)
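A minimal Python sketch of the LWW-Register (the class and method names are mine; it assumes timestamps give a usable ordering, with the replica id as a deterministic tie-breaker):

```python
class LWWRegister:
    """Last-Write-Wins register: the write with the greatest
    (timestamp, replica_id) pair wins on merge."""

    def __init__(self, replica_id):
        self.replica_id = replica_id
        self.value = None
        self.stamp = (0.0, replica_id)

    def assign(self, value, ts):
        # ts is supplied by the caller; a real system would use a clock
        # with reasonable cross-replica ordering guarantees.
        self.stamp = (ts, self.replica_id)
        self.value = value

    def merge(self, other):
        # Keep whichever write carries the greater stamp.
        if other.stamp > self.stamp:
            self.value = other.value
            self.stamp = other.stamp
```

Merge is commutative and idempotent, but correctness of the *outcome* still hangs on how trustworthy the timestamps are, which is the practical weakness noted above.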
  • #39: Counters are simply integers that are replicated and support the increment and decrement operations. Counters are useful for things like tracking the number of logged-in users, or click-throughs on an advertisement.

The simplest type of counter is a commutative, or operation-based, type: since add and subtract commute, any delivery order is sufficient (ignoring over- and underflow). The state-based counters are more interesting, so we’ll look at those.
  • #40: A G-Counter only counts up and is basically a version vector (vector clock). Each replica increments its own entry only; the value is computed by summing the counts of all replicas. Convergence is achieved by taking the maximum count for each replica. This is basically the Cassandra counters implementation.
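The G-Counter described above can be sketched as follows (a toy illustration, not any database’s actual implementation):

```python
class GCounter:
    """State-based grow-only counter: one slot per replica, and each
    replica increments only its own slot."""

    def __init__(self, replica_id):
        self.replica_id = replica_id
        self.counts = {}  # replica_id -> count

    def increment(self, by=1):
        self.counts[self.replica_id] = self.counts.get(self.replica_id, 0) + by

    def value(self):
        # The observed value is the sum over all replicas' slots.
        return sum(self.counts.values())

    def merge(self, other):
        # Pairwise maximum per replica, just like version-vector merge.
        for rid, n in other.counts.items():
            self.counts[rid] = max(self.counts.get(rid, 0), n)
```

Because each replica only ever grows its own slot, taking the per-slot maximum can never lose an increment, and merging twice is harmless (idempotence).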
  • #54: A PN-Counter is composed of two G-Counters: P for increments and N for decrements. The value is the difference between the values of the two G-Counters. Resolution is the pairwise resolution of the P and N counters.
  • #55: Sets constitute one of the most basic data structures. Containers, Maps, and Graphs are all based on Sets. There are two operations, add and remove.
  • #56: Like a G-Counter, a G-Set only grows in size; it doesn’t allow removal. Its merge operation is a simple set union, returning the maximal grouping without duplicates. Since add commutes with union, a G-Set can also be implemented as a commutative type. It’s not an incredibly useful data type on its own, but it can be part of another data structure.
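A G-Set is almost trivial to sketch, since merge is plain union:

```python
class GSet:
    """Grow-only set: add only, merge is set union."""

    def __init__(self):
        self.items = set()

    def add(self, element):
        self.items.add(element)

    def __contains__(self, element):
        return element in self.items

    def merge(self, other):
        self.items |= other.items
```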
  • #70: The second type of Set is the two-phase set, in which a removed set member cannot be re-added. It is basically two G-Sets, one for adds and one for removes. The removal set is sometimes called a tombstone set. To prevent spurious states (e.g. remove-before-add, making the add have no effect), remove has a precondition: the local state must already contain the member.

A special case of the 2P-Set is the U-Set. If the system can reasonably guarantee uniqueness, that is, an element will never be added again after removal, then the tombstone set is unnecessary. Uniqueness could be satisfied with a Lamport clock or a suitably large RNG space.
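The 2P-Set can be sketched as two grow-only sets, with remove guarded by the precondition mentioned above (names are mine):

```python
class TwoPSet:
    """Two-phase set: an add set plus a tombstone set. Once an element
    is tombstoned, it is gone for good and cannot be re-added."""

    def __init__(self):
        self.adds = set()
        self.tombstones = set()

    def add(self, element):
        self.adds.add(element)

    def remove(self, element):
        # Precondition: only remove elements visible in local state.
        if element in self:
            self.tombstones.add(element)

    def __contains__(self, element):
        return element in self.adds and element not in self.tombstones

    def merge(self, other):
        # Merge each half as a G-Set (union); tombstones always win.
        self.adds |= other.adds
        self.tombstones |= other.tombstones
```

The last assertion in a quick test shows the defining quirk: re-adding a removed element has no effect, because the tombstone masks it forever.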
  • #86: Tag each element in A (adds) and R (removes) with a timestamp; the greatest timestamp wins out for each individual element. This could be implemented with Cassandra super-columns.

Figure 12: LWW-element-Set; elements masked by one with a higher timestamp are elided (state-based)
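A sketch of the LWW-element-Set in Python (names are mine; the tie-breaking bias toward adds is my own design choice, not mandated by the paper):

```python
class LWWElementSet:
    """LWW-element-Set: per-element timestamps in the add and remove
    maps; for each element, the higher timestamp wins. Ties here are
    biased toward add."""

    def __init__(self):
        self.adds = {}     # element -> latest add timestamp
        self.removes = {}  # element -> latest remove timestamp

    def add(self, element, ts):
        self.adds[element] = max(self.adds.get(element, ts), ts)

    def remove(self, element, ts):
        self.removes[element] = max(self.removes.get(element, ts), ts)

    def __contains__(self, element):
        return (element in self.adds and
                self.adds[element] >= self.removes.get(element, float("-inf")))

    def merge(self, other):
        # Keep the greatest timestamp seen for each element, per map.
        for e, ts in other.adds.items():
            self.adds[e] = max(self.adds.get(e, ts), ts)
        for e, ts in other.removes.items():
            self.removes[e] = max(self.removes.get(e, ts), ts)
```

Unlike the 2P-Set, a removed element can come back: a later add simply outbids the old remove timestamp.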
  • #87: Tag each added element uniquely (without exposing the tags). When removing, remove all tags seen locally and forward the operation downstream with those tags. A state-based version would be based on the U-Set.
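This is the observed-remove set; a state-based sketch in Python follows (class and field names are mine). The key point is that a remove only tombstones the tags it has actually observed, so a concurrent re-add, which carries a fresh tag, survives:

```python
import uuid

class ORSet:
    """Observed-remove set (state-based sketch): each add is tagged
    uniquely; remove tombstones only the tags observed locally."""

    def __init__(self):
        self.adds = {}        # element -> set of unique tags
        self.tombstones = {}  # element -> set of removed tags

    def add(self, element):
        # A fresh unique tag per add operation.
        self.adds.setdefault(element, set()).add(uuid.uuid4().hex)

    def remove(self, element):
        # Tombstone only the tags this replica has seen.
        observed = self.adds.get(element, set())
        self.tombstones.setdefault(element, set()).update(observed)

    def __contains__(self, element):
        live = self.adds.get(element, set()) - self.tombstones.get(element, set())
        return bool(live)

    def merge(self, other):
        for e, tags in other.adds.items():
            self.adds.setdefault(e, set()).update(tags)
        for e, tags in other.tombstones.items():
            self.tombstones.setdefault(e, set()).update(tags)
```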
  • #88: You might notice we’re going up in complexity in terms of the types of data structures. Graphs are incredibly useful for many problems, but they also harbor a bunch of potential anomalies: concurrent adds/removes of vertices and edges may not converge, which is to say that global invariants can’t be guaranteed. For example, consider a DAG or linked list where elements can be removed or added concurrently. Some anomalies may be removed by restricting the semantics, for example making the graph add-only. I’m not going to go into detail about how graphs are implemented, but a simple one is the 2P2P-Graph, based on a pair of 2P-Sets, one for vertices and one for edges. When a vertex is removed, the most reliable (and intuitive) solution is to remove all attached edges, so the 2P-Set paradigm works well for the components of a generic graph.
  • #93: CRDTs tend to create a lot of garbage: tombstones grow and internal structures become unbalanced. In general, garbage collection is extremely difficult to do without synchronization. Luckily, this doesn’t impact correctness, only efficiency and performance.
  • #94: Client-side: you have to come up with a common representation across languages, allocation of actor IDs is problematic, and you can only use state-based CRDTs.
Server-side: no one really implements them yet (Cassandra’s counter has some anomalies), but we’re working hard to bring them to Riak.