NOSQL
THEORY, IMPLEMENTATIONS,
AN INTRODUCTION

FIRAT ATAGUN
firat@yahoo-inc.com
http://firatatagun.com
NoSQL
• What does it mean?
• Not Only SQL.
Use Cases
• Massive write performance.
• Fast key-value lookups.
• Flexible schema and data types.
• No single point of failure.
• Fast prototyping and development.
• Out-of-the-box scalability.
• Easy maintenance.
Motives Behind NoSQL
• Big data.
• Scalability.
• Data format.
• Manageability.
Big Data
• Collect.
• Store.
• Organize.
• Analyze.
• Share.

Data growth outruns the ability to manage it, so we need scalable solutions.
Scalability
• Scale up: vertical scalability.
  • Increasing server capacity.
  • Adding more CPU, RAM.
  • Managing is hard.
  • Possible downtime.
Scalability
• Scale out: horizontal scalability.
  • Adding servers to an existing system with little effort, aka elastic scalability.
  • Shared nothing.
  • Use of commodity/cheap hardware.
  • Heterogeneous systems.
  • Controlled concurrency (avoid locks).
  • Service-oriented architecture. Local states.
  • Bugs, hardware errors: things fail all the time.
  • It should become cheaper: cost efficiency.
  • Decentralized to reduce bottlenecks.
  • Avoid single points of failure.
  • Asynchrony.
  • Symmetry: you don't have to know what is happening; all nodes should be symmetric.
What is Wrong With RDBMS?
• Nothing. One size fits all? Not really.
• Impedance mismatch.
  • Object-relational mapping doesn't work quite well.
• Rigid schema design.
• Harder to scale.
• Replication.
• Joins across multiple nodes? Hard.
• How does an RDBMS handle data growth? Hard.
• Need for a DBA.
In its favor:
• Many programmers are already familiar with it.
• Transactions and ACID make development easy.
• Lots of tools to use.
ACID Semantics
• Atomicity: all or nothing.
• Consistency: consistent state of data and transactions.
• Isolation: transactions are isolated from each other.
• Durability: when the transaction is committed, its state will be durable.

Any data store can achieve atomicity, isolation and durability, but do you always need consistency? No. By giving up ACID properties, one can achieve higher performance and scalability.
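
A minimal illustration of atomicity with Python's built-in sqlite3 module (the table and values are made up for the example):

    import sqlite3

    conn = sqlite3.connect("bank.db")  # hypothetical database file
    conn.execute("CREATE TABLE IF NOT EXISTS accounts (id INTEGER PRIMARY KEY, balance INTEGER)")
    conn.execute("INSERT OR IGNORE INTO accounts VALUES (1, 100), (2, 0)")
    conn.commit()

    try:
        with conn:  # opens a transaction: commits on success, rolls back on any error
            conn.execute("UPDATE accounts SET balance = balance - 50 WHERE id = 1")
            conn.execute("UPDATE accounts SET balance = balance + 50 WHERE id = 2")
            # if anything above raises, neither UPDATE becomes visible: all or nothing
    except sqlite3.Error:
        pass  # after rollback the state is unchanged and still consistent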
Enter CAP Theorem
• Also known as Brewer's Theorem, after Prof. Eric Brewer (UC Berkeley), who presented it in 2000.
• "Of three properties of a shared data system: data consistency, system availability and tolerance to network partitions, only two can be achieved at any given moment."
• Later proven by Seth Gilbert and Nancy Lynch at MIT.
• http://www.cs.berkeley.edu/~brewer/cs262b-2004/PODC-keynote.pdf
CAP Semantics
• Consistency: clients should read the same data. There are many levels of consistency:
  • Strict consistency – RDBMS.
  • Tunable consistency – Cassandra.
  • Eventual consistency – Amazon Dynamo.
• Availability: data should be available.
• Partition tolerance: data may be partitioned across network segments due to network failures.
A Simple Proof
Consistent and available: no partition.
[Diagram: App reads and writes through nodes A and B; both hold the same Data.]
A Simple Proof
Available and partitioned: not consistent, we get back old data.
[Diagram: App gets current Data from node A but Old Data from partitioned node B.]
A Simple Proof
Consistent and partitioned: not available, waiting…
[Diagram: App writes New Data to node A; partitioned node B must wait for the new data.]
BASE, an ACID Alternative
Almost the opposite of ACID.
• Basically available: nodes in a distributed environment can go down, but the whole system shouldn't be affected.
• Soft state (scalable): the state of the system and data changes over time.
• Eventual consistency: given enough time, data will be consistent across the distributed system.
A Clash of Cultures
ACID:
• Strong consistency.
• Less availability.
• Pessimistic concurrency.
• Complex.
BASE:
• Availability is the most important thing. Willing to sacrifice consistency for it (CAP).
• Weaker consistency (eventual).
• Best effort.
• Simple and fast.
• Optimistic.
Distributed Transactions
• Two-phase commit.
  • See "Starbucks Does Not Use Two-Phase Commit" by Gregor Hohpe.
• Possible failures:
  • Network errors.
  • Node errors.
  • Database errors.
[Diagram: the coordinator asks participants to commit or roll back; participants acknowledge, complete the operation, and release their locks.]
• Problems:
  • One down node can leave the entire cluster locked.
  • Possible to implement timeouts.
  • Possible to use a quorum.
• Quorum: in a distributed environment, if there is a partition, the nodes vote to commit or roll back. A toy sketch of the protocol follows.
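
The sketch below walks the two phases with plain in-process Python objects standing in for real nodes; there is no networking, logging, or durability here:

    class Participant:
        def prepare(self) -> bool:
            # Phase 1 vote: a real participant durably logs a prepare record
            # and holds its locks until the coordinator's decision arrives.
            return True

        def commit(self):
            pass  # Phase 2: apply the change, release locks

        def rollback(self):
            pass  # Phase 2: undo the change, release locks

    def two_phase_commit(participants):
        # Phase 1: collect votes; a single "no" (or a timeout) aborts everyone.
        decision = "commit" if all(p.prepare() for p in participants) else "rollback"
        # If the coordinator dies right here, participants stay blocked holding
        # their locks -- exactly the "entire cluster locked" problem above.
        for p in participants:  # Phase 2: broadcast the decision.
            p.commit() if decision == "commit" else p.rollback()
        return decision

    print(two_phase_commit([Participant(), Participant()]))  # -> commit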
Consistent Hashing
• Solves the partitioning problem.
• Consistent hashing, Memcached.
• The naive approach:

    servers = [s1, s2, s3, s4, s5]
    serverToSendData = servers[hash(data) % servers.length]

  (adding or removing a server remaps almost every key)
• A New Hope: the continuum approach (see the sketch below).
  • Virtual nodes on a cycle.
  • Hash both objects and caches.
  • Easy replication.
  • Eventually consistent.
  • What happens if nodes fail? How do you add nodes?
• http://www.akamai.com/dl/technical_publications/ConsistenHashingandRandomTreesDistributedCachingprotocolsforrelievingHotSpotsontheworldwideweb.pdf
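
A minimal sketch of the continuum approach, assuming MD5 for hashing and 100 virtual nodes per server (both are arbitrary choices for the example):

    import bisect
    import hashlib

    class HashRing:
        def __init__(self, servers, vnodes=100):
            self.ring = []  # sorted list of (position, server) on the cycle
            for s in servers:
                for i in range(vnodes):  # virtual nodes smooth the key distribution
                    bisect.insort(self.ring, (self._hash(f"{s}#{i}"), s))

        @staticmethod
        def _hash(key):
            return int(hashlib.md5(key.encode()).hexdigest(), 16)

        def node_for(self, key):
            i = bisect.bisect(self.ring, (self._hash(key), "")) % len(self.ring)
            return self.ring[i][1]  # first server clockwise from the key's position

    ring = HashRing(["s1", "s2", "s3", "s4", "s5"])
    print(ring.node_for("some-key"))
    # Adding or removing a server only remaps keys adjacent to its virtual
    # nodes; hash(data) % len(servers) would remap almost every key instead.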
Concurrency Models
• Optimistic concurrency.
• Pessimistic concurrency.
• MVCC (multi-version concurrency control).
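
The models differ in when a conflict is handled: pessimistic takes locks up front, optimistic checks at write time, MVCC keeps versions side by side. A compare-and-set sketch of the optimistic model (in-memory and single-threaded, purely illustrative):

    class ConflictError(Exception):
        pass

    class OptimisticStore:
        """No locks are taken; writers detect conflicts at commit time."""
        def __init__(self):
            self.data = {}  # key -> (version, value)

        def get(self, key):
            return self.data.get(key, (0, None))

        def put(self, key, value, expected_version):
            version, _ = self.get(key)
            if version != expected_version:
                raise ConflictError(f"{key}: expected v{expected_version}, found v{version}")
            self.data[key] = (version + 1, value)

    store = OptimisticStore()
    v, _ = store.get("counter")
    store.put("counter", 1, v)  # succeeds and bumps the version
    # A second writer that also read version v would now get a
    # ConflictError and must re-read and retry.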
Vector Clocks
• Used for conflict detection of data.
• Timestamp-based resolution of conflicts is not enough.
[Timeline: Time 1: write. Time 2: replicated. Time 3: update on one replica. Time 4: concurrent update on another. Time 5: replicated again; conflict detection.]
Vector Clocks
• Node A writes Document.v1 with clock [A,1], then updates it: Document.v2 with clock [A,2].
• Document.v2 is replicated to B and C, which each update it independently:
  • At B: Document.v2 with clock [A,2],[B,1].
  • At C: Document.v2 with clock [A,2],[C,1].
• Neither clock descends from the other, so the concurrent updates are detected as a conflict.
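
A sketch of the comparison rule behind that detection; clocks are dicts of per-node update counts, and two clocks conflict when neither has seen everything the other has:

    def descends(a, b):
        """True if clock a has seen every update that clock b has."""
        return all(a.get(node, 0) >= n for node, n in b.items())

    def relation(a, b):
        if descends(a, b) and descends(b, a):
            return "equal"
        if descends(a, b):
            return "a is newer"
        if descends(b, a):
            return "b is newer"
        return "conflict"  # concurrent updates: must be reconciled

    clock_b = {"A": 2, "B": 1}  # Document.v2 updated at B
    clock_c = {"A": 2, "C": 1}  # Document.v2 updated at C
    print(relation(clock_b, clock_c))  # -> conflict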
Read Repair
[Diagram: the client issues GET(K, Q=2). Two replicas return Value = Data.v2; a third returns the stale Value = Data.v1. The newest value wins, and Update K = Data.v2 is written back to the stale replica.]
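
A sketch of that flow with replicas as plain dicts and versions as integers (real stores compare vector clocks or timestamps instead):

    def quorum_read(replicas, key, q=2):
        answers = [(r, r.get(key)) for r in replicas]
        assert len(answers) >= q, "not enough replicas for a quorum"
        newest = max(v for _, v in answers if v is not None)
        for replica, value in answers:  # read repair: fix stale copies
            if value != newest:
                replica[key] = newest
        return newest

    a, b, c = {"K": 2}, {"K": 2}, {"K": 1}  # c still holds stale Data.v1
    print(quorum_read([a, b, c], "K"))      # -> 2, and c is repaired in passing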
Gossip Protocol & Hinted Handoffs
• The most preferred communication protocol in a distributed environment is the gossip protocol.
• All the nodes talk to each other peer-wise.
• There is no global state.
• No single point of coordination.
• If one node goes down and there is a quorum, the load of that node is shared among the others.
• Self-managing system.
• If a new node joins, load is also redistributed.
• Hinted handoff: requests coming to F will be handled by the node that takes over F's load, say C, along with a hint that it took requests meant for F. When F becomes available again, F gets this information from C. A self-healing property.
Data Models
• Key/value pairs.
• Tuples (rows).
• Documents.
• Columns.
• Objects.
• Graphs.

There are corresponding data stores, ordered roughly by increasing complexity.
Key-Value Stores
• Memcached – key-value store.
• Membase – Memcached with persistence and improved consistent hashing.
• AppFabric Cache – multi-region cache.
• Redis – data structure server.
• Riak – based on Amazon's Dynamo.
• Project Voldemort – eventually consistent key-value store, auto-scaling.
Memcached
• Very easy to set up and use.
• Consistent hashing.
• Scales very well.
• In-memory caching, no persistence.
• LRU eviction policy.
• O(1) set/get/delete.
• Atomic operations set/get/delete.
• No iterators, or very difficult.
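
A usage sketch assuming the third-party pymemcache client and a memcached server on the default port:

    from pymemcache.client.base import Client  # pip install pymemcache

    mc = Client(("localhost", 11211))
    mc.set("user:1", b"firat", expire=60)  # O(1) set with a 60-second TTL
    print(mc.get("user:1"))                # O(1) get -> b'firat'
    mc.delete("user:1")                    # O(1) delete
    # There is no key iteration: design key names you can compute,
    # not ones you have to discover.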
Membase
• Easy management and monitoring via web console.
• Consistency and availability.
• Dynamic/linear scalability: add a node, hit "join cluster" and rebalance.
• Low latency, high throughput.
• Compatible with current Memcached clients.
• Data durability: persisted to disk asynchronously.
• Rebalancing (peer-to-peer replication).
• Failover (master/slave).
• vBuckets are used for consistent hashing.
• O(1) set/get/delete.
Redis
• Distributed data structure server.
• Consistent hashing at the client.
• Non-blocking I/O, single threaded.
• Values are binary-safe strings: byte strings.
• Strings: key/value pairs, set/get. O(1); many string operations.
• Lists: lpush, lpop, rpush, rpop. You can use them as a stack or queue, O(1). Publish/subscribe is available.
• Sets: collections of unique elements; add, pop, union, intersection and other set operations.
• Sorted sets: unique elements sorted by score, O(log n). Supports range operations.
• Hashes: multiple key/value pairs:
    HMSET user:1 username foo password bar age 30
    HGET user:1 age
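
The same structures driven from the redis-py client (assuming a local Redis server; the key names are illustrative):

    import redis  # pip install redis

    r = redis.Redis(host="localhost", port=6379)

    r.set("greeting", "hello")                 # string: O(1) set/get
    r.lpush("jobs", "job1", "job2")            # list used as a queue
    print(r.rpop("jobs"))                      # -> b'job1' (lpush + rpop = FIFO)
    r.sadd("tags", "nosql", "redis")           # set of unique elements
    r.zadd("scores", {"alice": 10, "bob": 7})  # sorted set, O(log n)
    print(r.zrange("scores", 0, -1))           # range query by rank
    r.hset("user:1", mapping={"username": "foo", "age": 30})  # hash
    print(r.hget("user:1", "age"))             # -> b'30'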
Microsoft AppFabric
• Add a node to the cluster easily: elastic scalability.
• Namespaces to organize different caches.
• LRU eviction policy.
• Timeout/time-to-live defaults to 10 minutes.
• No persistence.
• O(1) set/get/delete.
• Optimistic and pessimistic concurrency.
• Supports tagging.
Document Stores
• Schema-free.
• Usually a JSON-like interchange model.
• Query model: JavaScript or custom.
• Aggregations: Map/Reduce.
• Indexes are done via B-Trees.
Mongodb
• Data types: bool, int, double, string, object (BSON), oid, array, null, date.
• Databases and collections are created automatically.
• Lots of language drivers.
• Capped collections are fixed-size collections: buffers, very fast, FIFO, good for logs. No indexes.
• Object ids are generated by the client: 12 bytes of packed data (4-byte time, 3-byte machine, 2-byte pid, 3-byte counter).
• Possible to refer to other documents in different collections, but more efficient to embed documents.
• Replication is very easy to set up. You can read from slaves.
Mongodb
• Connection pooling is done for you. Sweet.
• Supports aggregation.
  • Map/Reduce with JavaScript.
• You have indexes, B-Trees. Ids are always indexed.
• Updates are atomic. Low-contention locks.
• Querying mongo is done with a document:
  • Lazy, returns a cursor.
  • Reducible to SQL: select, insert, update, limit, sort, etc.
• Several operators:
  • $ne, $and, $or, $lt, $gt, $inc and so on.
  • There is more: upsert (either inserts or updates).
• The Repository Pattern makes development very easy.
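
A query sketch with the pymongo driver (the database, collection, and field names are made up):

    from pymongo import MongoClient  # pip install pymongo

    client = MongoClient("mongodb://localhost:27017")  # pooling handled for you
    users = client.testdb.users  # database and collection created automatically

    users.insert_one({"name": "yoda", "age": 900, "dept": "IT"})

    # Querying is done with a document; find() is lazy and returns a cursor.
    for doc in users.find({"age": {"$gt": 100}}).sort("name").limit(10):
        print(doc["_id"], doc["name"])  # _id is always indexed

    # Atomic update via an operator; upsert=True inserts when nothing matches.
    users.update_one({"name": "yoda"}, {"$inc": {"age": 1}}, upsert=True)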
Mongodb - Sharding
• Config servers: keep the mapping.
• Mongos: routing servers.
• Mongod: master-slave replicas.
Couchdb
• Availability and partition tolerance.
• Views are used to query. Map/Reduce.
• MVCC – multiple concurrent versions. No locks.
  • A little overhead with this approach due to garbage collection.
  • Conflict resolution.
• Very simple, REST-based. Schema-free.
• Shared nothing; seamless peer-based bi-directional replication.
• Auto compaction (manual with Mongodb).
• Uses B-Trees.
• Documents and indexes are kept in memory and flushed to disk periodically.
• Documents have states; in case of a failure, recovery can continue from the state documents were left in.
• No built-in auto-sharding; there are open source projects.
• You can't define your indexes.
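
Because the interface is plain REST, a document round-trip needs nothing beyond HTTP; a sketch using the requests library (server URL and database name are assumptions):

    import requests  # pip install requests

    base = "http://localhost:5984/mydb"  # hypothetical database
    requests.put(base)  # create the database; returns an error if it exists

    # Create a document; CouchDB answers with its MVCC revision (_rev).
    resp = requests.put(f"{base}/doc1", json={"type": "talk", "title": "NoSQL"})
    rev = resp.json()["rev"]

    # An update must carry the current _rev; a stale _rev is a detected
    # conflict (HTTP 409) rather than a lock wait.
    requests.put(f"{base}/doc1",
                 json={"type": "talk", "title": "NoSQL v2", "_rev": rev})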
Object Stores
• Objectivity.
• Db4o.
Objectivity
• No need for ORM. Closer to OOP.
• Complex data modeling.
• Schema evolution.
• Scalable collections: List, Set, Map.
• Object relations.
  • Bi-directional relations.
• ACID properties.
• Blazingly fast, uses paging.
• Supports replication and clustering.
Column Stores
Row oriented – rows are stored contiguously:

  Id | Username | Email        | Department
  1  | John     | john@foo.com | Sales
  2  | Mary     | mary@foo.com | Marketing
  3  | Yoda     | yoda@foo.com | IT

Column oriented – the same table, stored column by column:

  Id:         1, 2, 3
  Username:   John, Mary, Yoda
  Email:      john@foo.com, mary@foo.com, yoda@foo.com
  Department: Sales, Marketing, IT
Cassandra
• Tunable consistency.
• Decentralized.
• Writes are faster than reads.
• No single point of failure.
• Incremental scalability.
• Uses consistent hashing (logical partitioning) when clustered.
• Hinted handoffs.
• Peer-to-peer routing (ring).
• Thrift API.
• Multi-data-center support.
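
Tunable consistency is chosen per statement; a sketch with the DataStax cassandra-driver for Python (keyspace and table are hypothetical, and modern clusters speak CQL rather than the Thrift API listed above):

    from cassandra import ConsistencyLevel  # pip install cassandra-driver
    from cassandra.cluster import Cluster
    from cassandra.query import SimpleStatement

    session = Cluster(["127.0.0.1"]).connect("demo")  # hypothetical keyspace

    # QUORUM writes plus QUORUM reads give W + R > N: strong consistency.
    write = SimpleStatement(
        "INSERT INTO users (id, name) VALUES (%s, %s)",
        consistency_level=ConsistencyLevel.QUORUM)
    session.execute(write, (1, "firat"))

    # ConsistencyLevel.ONE would trade consistency for latency/availability.
    read = SimpleStatement("SELECT name FROM users WHERE id = %s",
                           consistency_level=ConsistencyLevel.QUORUM)
    print(session.execute(read, (1,)).one())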
Cassandra at Netflix
http://techblog.netflix.com/2011/11/benchmarking-cassandra-scalability-on.html
Graph Stores
• Based on graph theory.
• Scale vertically, no clustering.
• You can use graph algorithms easily.
Neo4J
• Nodes, relationships.
• Traversals.
• HTTP/REST.
• ACID.
• Web admin.
• Not too much support for languages.
• Has transactions.
Which one to use?
• Key-value stores: processing a constant stream of small reads and writes.
• Document databases: natural data modeling. Programmer friendly. Rapid development. Web friendly, CRUD.
• Columnar: handles size well. Massive write loads. High availability. Multiple data centers. MapReduce.
• Data structure server: quirky stuff.
• OODBMS: complex object models.
• RDBMS: OLTP. SQL. Transactions. Relations.
• Graph: graph algorithms and relations.

Want more ideas?
http://highscalability.com/blog/2011/6/20/35-use-cases-for-choosing-your-nextnosql-database.html



Thank you.

Editor's Notes
• #12: N: number of nodes with a replica of the data. W: number of nodes that must acknowledge an update. R: minimum number of nodes that must respond for a successful read. W + R > N gives strong consistency; W + R <= N gives weak consistency.