Performance at Scale 
October 2014 
Amir Ish-Shalom 
Senior Solutions Architect
About Viber
The Viber service 
• Free, cross-platform text messaging 
• Free, cross-platform VoIP calls 
(voice and video) 
• Photo, video and location sharing 
• Stickers and Emoticons 
• Group communication platform 
(up to 100 participants) 
• Push To Talk
Monetization 
• Viber out 
• Sticker market
Simplicity and User Experience 
• No registration needed 
• User ID = your mobile number 
• Automatic friend detection 
(no “add a friend” step) 
• Always on. No battery impact 
• 32 languages 
• Multi-device experience: 
Mobile, Tablet and Desktop
Viber in numbers
Viber in numbers 
• Over 400 million users 
• 1 million new users added daily 
• Billions of messages every day 
• Billions of talking minutes every month
Viber Growth 
[Growth chart: monthly figures from 2011-02 through 2013-12] 
In 2013 vs. 2012 there was: 
• Over 3x growth in talking minutes 
• Over 5x growth in messages 
• Over 12x growth in group messages
Viber DB Architecture
Viber DB Architecture – 1st Generation 
[Diagram: Clients → Viber Application Servers → In-house in-memory DB]
Viber DB Architecture – 2nd Generation 
[Diagram: Clients → Viber Application Servers → Redis Sharders → Redis Cluster, Redis Cache and MongoDB Cluster]
2nd generation DB architecture advantages 
• Got us through the first few years of extreme growth 
• Never lost data from MongoDB 
• Redis performance
2nd generation DB architecture problems 
• MongoDB performance 
• MongoDB does not scale well with many application servers 
• Redis – In-memory database with no built-in sharding 
• Redis Sharder – Not manageable or robust enough
3rd generation DB architecture requirements 
• High performance 
• Large data sets 
The solution must be: 
• Scalable 
• Robust 
• Backed-up 
• Always on 
• Easy to monitor 
• Prefer single DB solution
Viber DB Architecture – 3rd Generation 
[Diagram: Clients → Viber Application Servers → Couchbase Clusters, replicated via XDCR to a backup Couchbase cluster]
Migrating from 2nd to 3rd generation DBs 
• Migrate a live system 
• Zero downtime 
• No data loss 
• Consistent data
How did we migrate? 
• Stage 1: Add the new CB cluster in parallel to the existing cluster; 
only delete keys from CB 
• Stage 2: Read only from MongoDB; 
write/delete to both CB & MongoDB (a sketch of this dual-write wrapper follows below) 
• Stage 3: Background process that copies all data from 
MongoDB to CB (only if it doesn’t already exist) 
• Stage 4: Validate data (both DBs should be identical) 
• Stage 5: Read only from CB; 
write/delete to both CB & MongoDB 
• Stage 6: Remove MongoDB and use only CB
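
The staged cut-over relies on a small data-access wrapper inside the application servers. Below is a minimal, hypothetical sketch (not Viber's actual code) of how Stages 2 and 5 can be expressed: a read-path flag selects the source of truth while writes and deletes always go to both stores. The store objects and method names are stand-ins for the real MongoDB and Couchbase clients.

```python
# Hypothetical illustration of the staged migration's dual-write wrapper.
# mongo_store / cb_store are assumed to expose get/set/delete; the real
# pymongo / Couchbase SDK calls would be plugged in behind them.

class DualWriteStore:
    def __init__(self, mongo_store, cb_store, read_from="mongo"):
        self.mongo = mongo_store
        self.cb = cb_store
        self.read_from = read_from      # "mongo" during Stage 2, "cb" during Stage 5

    def get(self, key):
        # Reads come from a single source of truth at any given stage.
        primary = self.mongo if self.read_from == "mongo" else self.cb
        return primary.get(key)

    def set(self, key, value):
        # Stages 2-5: every write goes to both stores so they stay in sync.
        self.mongo.set(key, value)
        self.cb.set(key, value)

    def delete(self, key):
        # Deletes also go to both stores (Stage 1 servers apply deletes to CB
        # only, so a key removed by a not-yet-upgraded server cannot linger in CB).
        self.mongo.delete(key)
        self.cb.delete(key)
```

Flipping read_from from "mongo" to "cb" is the only application-side change needed to move from Stage 2 to Stage 5, once the background copy (Stage 3) and validation (Stage 4) have completed.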
Couchbase Cluster with 60 nodes 
Migration
Back-end servers 
• Over 500 application servers 
• 2nd generation DB architecture: 
  • MongoDB – 1 cluster with 150 servers (master + 2 slaves) 
  • Redis – 3 clusters with a total of 144 servers (master + 1 slave) 
• 3rd generation DB architecture: 
  • 7 Couchbase clusters (up to 60 nodes each) 
  • 1–2 replicas, XDCR & external backup 
  • Total of less than 200 Couchbase servers 
Increased performance using fewer DB servers!
Interesting facts 
1. We have recently doubled our Couchbase instance sizes to 
cope with increased usage 
2. Total of ~1.5 million DB operations per second 
3. Bigger clusters don’t necessarily do more ops 
4. Highest performing clusters have 100% of their data in 
memory
Couchbase Cluster with 12 nodes & XDCR
Questions?
Thank you

Editor's Notes

  • #3: Viber was founded almost 4 years ago. It started as a free app for iPhones providing free VoIP calls. After a few months an Android version was released and text messaging was introduced. Since then many new features have been added, and today Viber is a social communications platform available for almost all mobile phone, tablet and desktop OSs. A few months ago we were bought by Rakuten, the largest Japanese e-commerce company.
  • #4: Viber provides reliable text messaging, giving you indications when a message was sent, delivered to the recipient and even when it was read. Groups of up to 100 users are supported, with all media options such as sending photos, videos, stickers, doodles and your location. Recently we added a new Push To Talk feature which sends your voice as you are talking, without waiting for the recording to finish. In a group conversation with PTT, you can broadcast your voice instantly to up to 100 people.
  • #5: In 2014 we started to monetize the Viber service. Viber out – VoIP calls from Viber to non-Viber phone numbers (landlines & mobile numbers) for very low rates. Stickers – both free and premium stickers that can be purchased. In addition to branded content such as Smurfs and Garfield, we have created Viber characters such as Violet, Eve, Freddy, Blu, Zoe & more that you can see in the pictures here.
  • #6: Viber is very easy to use. It uses your mobile number as your registration ID and detects which of your friends have Viber from your address book. In order to provide the best user experience, Viber clients are always on and connected to our servers, allowing for sub-second updates. We were able to provide this level of service without sacrificing battery life. MD: Viber is primarily a mobile application, but we also support both desktops and tablets. All your devices are registered under the same phone number and are fully synced with each other. All messages & calls are received by all devices, and if you read a message on one device it is automatically shown as read on the others. Sent messages from one device appear on all other devices instantly. Calls can be seamlessly transferred between devices without the other side even noticing.
  • #10: Next I would like to talk about what runs the Viber service: the back-end that allows sending billions of messages and talking minutes with sub-second latencies to hundreds of millions of users.
  • #11: At first Viber was a much smaller service, and for the first few months Viber used an in-house in-memory database solution. As Viber usage grew exponentially, we had to move to a more scalable solution. We decided to use a sharded NoSQL database to provide fast implementation and very easy scaling. In early 2011 this was not just cutting-edge technology but bleeding-edge technology: we initially ran on the beta of the very first MongoDB version that supported sharding. We were one of the first big MongoDB deployments back then (if not the biggest).
  • #12: All Viber servers run on AWS. The Redis Sharder was developed in-house by Viber because Redis does not support sharding. Redis runs in a master/slave configuration; MongoDB has 2 additional replicas for each node. MongoDB uses SSD-based instances for the active node and the 1st replica, and EBS for the 2nd replica. Redis is used both as a cache for MongoDB and as a stand-alone DB for either high-throughput activity or very large datasets (billions of keys).
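
  As a rough illustration of the cache-aside pattern implied by "Redis used as a cache for MongoDB" (not Viber's actual code), the sketch below shows a read path that checks Redis first and falls back to MongoDB, repopulating the cache on a miss. Host names, database/collection names, key format and TTL are all assumptions for illustration.

```python
# Cache-aside read: try Redis first, fall back to MongoDB and repopulate.
# Illustrative only; connection details, key naming and TTL are assumptions.
import json
import redis
from pymongo import MongoClient

r = redis.StrictRedis(host="redis-shard-01", port=6379)
mongo = MongoClient("mongodb://mongos-01:27017")
users = mongo.viberdb.users          # assumed database / collection names

CACHE_TTL = 3600  # seconds; assumed value

def get_user(user_id):
    cache_key = "user:%s" % user_id
    cached = r.get(cache_key)
    if cached is not None:
        return json.loads(cached)

    doc = users.find_one({"_id": user_id})
    if doc is not None:
        # Repopulate the cache so subsequent reads skip MongoDB.
        r.setex(cache_key, CACHE_TTL, json.dumps(doc, default=str))
    return doc
```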
  • #13: Got us this far – 3 years of extreme growth. Never lost data from MongoDB – even though we had many server failures, which even caused a few downtimes, we were always able to access the data at the end of the day. Redis performance – Redis is a very fast DB and was able to give us the speed we needed.
  • #14: MongoDB performance: it only provided tens of thousands of ops whereas we needed hundreds of thousands, and performance of databases with billions of keys dropped significantly. MongoDB scale: each application server had many worker threads, all of which would connect to a single MongoDB cluster. MongoDB would manage each connection with a separate thread and stack, wasting a lot of memory and CPU; when we reached hundreds of application servers this started to become a serious problem. Redis Sharder – built in-house and not a commercial-grade solution. It has VERY limited manageability, is not robust enough, and scaling is limited and must be done in powers of 2. The client implementation supports most of the Redis commands, but not bulk commands, hindering performance. Redis in-memory DB – Redis is an in-memory DB with limited persistence to disk, but because MongoDB could not perform fast enough, we used Redis for most of our DB operations, without MongoDB at all.
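
  The power-of-two scaling constraint mentioned above is characteristic of simple hash-modulo sharding. The sketch below is a guess at what such a client-side sharder might look like (the in-house Sharder's real scheme is not described beyond this constraint); shard host names are placeholders.

```python
# Illustrative client-side Redis sharder (not Viber's actual Sharder):
# pick a shard by hashing the key and taking it modulo the shard count.
# With hash % N, growing from N to 2N shards means each key either stays
# on its shard or moves to exactly one "partner" shard (h % N + N), which
# is why this style of sharding is naturally resized in powers of two.
import zlib
import redis

SHARD_HOSTS = ["redis-01", "redis-02", "redis-03", "redis-04"]  # placeholder names
shards = [redis.StrictRedis(host=h, port=6379) for h in SHARD_HOSTS]

def shard_for(key):
    h = zlib.crc32(key.encode("utf-8")) & 0xFFFFFFFF
    return shards[h % len(shards)]

def set_value(key, value):
    shard_for(key).set(key, value)

def get_value(key):
    return shard_for(key).get(key)
```

  It also hints at the bulk-command limitation noted above: an MGET spanning shards has to be split into one call per shard, which is slower than a single bulk command against one server.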
  • #15: When looking for a 3rd generation DB architecture, we were not looking to replace a standard RDBMS-based system with a NoSQL system like most companies; we were already using a NoSQL solution from one of the market leaders, and it was simply not working well enough. High performance – hundreds of thousands of ops at consistently low latencies. Large data sets – billions of keys. Scalable – easy to add server nodes without interrupting production. Robust – the solution should withstand node failures without any downtime, and data can be persisted to disk with a varying number of replicas and backups (each bucket/cluster has different robustness settings). Backed-up – daily / weekly backups that can be used to perform a full recovery in case of failure. Always on – no downtime, including during SW/HW upgrades, backups, etc. Easy to monitor – a good monitoring solution that shows both live and historical statistics; the interface should be a graphical UI but also accessible via an external interface to connect to our monitoring/alert system. Prefer a single DB solution (instead of cache + persistent DB).
  • #16: Several Couchbase clusters (up to 60 nodes each). Each cluster has different access patterns (mainly read, mainly write/delete, large data sets, heavy disk usage) – all with SSD drives, though, for very fast access. Different replica settings for each bucket, depending on data requirements. We are currently using CB v2.5.1. All clusters are spread evenly across 3 AZs for redundancy. Backup Couchbase cluster: synced using XDCR for specific buckets; it contains views for real-time data analytics and can be used as an alternative cluster in case of a full failure of the primary cluster. Daily / weekly backups are taken from most of the CB clusters; each backup is compressed and uploaded to S3.
  • #17: Migrate a live system – we need to migrate the back-end databases while the system is receiving millions of new users and hundreds of thousands of requests per second. Zero downtime – the system must continue running throughout the whole migration process without even a minute of downtime. We must make sure no data is lost during the migration. As data is constantly being updated, we must make sure we are migrating the most up-to-date data; data can be modified multiple times during the migration process. As we have hundreds of database servers, all in AWS, we need to take into consideration that several machines will probably fail during the migration, and this should not affect data migration or consistency. Because of the complexity of this process, it was probably the most time-consuming and delicate part of moving to Couchbase.
  • #18: As CB was divided into several clusters, we only introduced 1–2 new clusters at a time. Stage 1 – we need this stage to maintain data consistency because we have hundreds of application servers and upgrading them can take a few hours; when we move from stage 1 to stage 2, we need to make sure that if a server in stage 2 writes a key and a server still in stage 1 then deletes it, it will not appear in CB. Stage 2 – this stage makes sure that ongoing changes are written to CB. Stage 3 – we exported all data from MongoDB after all servers had been upgraded to stage 2; a background process reads all this data and inserts it into Couchbase only if the key does not already exist (if it exists, it is always newer). Stage 4 – after the background data migration is complete, both databases should be identical. To validate this we log all data-import transactions and live updates; if there are any errors during the import we can always re-import the data. We also compare the list of keys from the MongoDB export to the logs of the keys actually inserted to make sure we didn't miss anything, and we do a random check on a few tens of thousands of keys, comparing the data between MongoDB and CB to check for inconsistencies. If there are any problems we can always start the migration process again. Stage 5 – this stage is necessary just to maintain data consistency during the server upgrade (stage 4 servers are still reading from MongoDB).
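
  Stage 3's "copy only if the key does not exist" rule maps naturally onto an insert-only "add" operation that fails instead of overwriting when the key is already present. The following is a rough, hypothetical sketch of such a background copier; the client object and its method names are stand-ins, not the actual SDK calls used.

```python
# Hypothetical background copier for Stage 3 of the migration.
# mongo_export yields (key, value) pairs from the MongoDB dump;
# cb_client.add() stands in for an insert-only operation that is
# assumed to raise KeyExists when the key is already there
# (meaning a live write already produced a newer value, so we skip it).

class KeyExists(Exception):
    pass

def copy_all(mongo_export, cb_client, log):
    copied = skipped = failed = 0
    for key, value in mongo_export:
        try:
            cb_client.add(key, value)      # insert-only: never overwrites
            copied += 1
        except KeyExists:
            skipped += 1                   # live traffic already wrote a newer value
        except Exception as err:
            failed += 1
            log.error("import failed for %s: %s", key, err)   # can be re-imported later
    log.info("copied=%d skipped=%d failed=%d", copied, skipped, failed)
    return failed == 0
```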
  • #21: CB does not support nodes with different bucket sizes, so to increase size we had to replace all nodes with new, bigger instances and only then increase the bucket sizes. All this was done using the CB rebalance feature, without interrupting normal operations. Total DB ops is about 1.5M ops/sec, though it is not spread evenly across clusters; they range from 20K ops to 600K ops. The cluster performing the most ops is only a 21-node cluster, and the two lowest-ops clusters are actually the 2 biggest ones; the reason they are so large is that they hold much more data. To achieve the highest performance, data should be in memory and not read from disk, which impedes performance greatly.
  • #22: Daily oscillation between 100K and 350K ops. Over 2.5 billion keys using 2 replicas. This cluster is replicated to the backup cluster using XDCR; you can see that the latency for XDCR replication is about 1.5 ms.
  • #23: MongoDB supports updating documents, but since CB is so fast, we are able to retrieve the document, update it and set it back (using CAS to verify it wasn't changed) much faster than server-side updating. Redis supports server-side data structures at the key level; to achieve similar functionality with CB, we used several solutions. To simulate sets and lists we used the append function, which is an atomic operation and much faster than retrieving, updating and setting the key. The problem is that appending is not possible on valid JSON documents, so we append valid JSON objects with a delimiter between them, and to remove an object we append a minus with only the object key. To simulate large maps where we want to retrieve a single object fast – but don't want to put every object in a separate key, because that would create very large metadata – we break a single map key down into several keys, using our own hashing algorithm to know in which key a specific object is located. So instead of having 1 large value, we have 10 values, which is a good trade-off between speed and metadata size. Initially we processed the daily backups, which are stored in sqlite3 format, to serve large range queries – such as a list of all Viber phone numbers in a certain country or with certain data in their JSON object. We currently create views only on our backup cluster so as not to impede performance; we plan to move more toward using views so that we work with live data.
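
  The get/modify/set-with-CAS pattern described in this note amounts to a small optimistic retry loop. The sketch below is illustrative only; get_with_cas and replace_with_cas are hypothetical stand-ins for the SDK's CAS-aware get and replace calls, not the exact API used.

```python
# Hypothetical sketch of an optimistic read-modify-write using CAS.
# get_with_cas() is assumed to return (value, cas); replace_with_cas() is
# assumed to raise CasMismatch if another writer changed the document in between.

class CasMismatch(Exception):
    pass

def update_document(cb_client, key, mutate, max_retries=10):
    """Apply mutate(doc) -> doc to the document at `key`, retrying on CAS conflicts."""
    for _ in range(max_retries):
        doc, cas = cb_client.get_with_cas(key)
        try:
            cb_client.replace_with_cas(key, mutate(doc), cas)
            return True
        except CasMismatch:
            continue        # someone else updated the doc; re-read and retry
    return False

# Example (hypothetical field names): increment a counter inside a JSON document.
# ok = update_document(cb, "user:12345",
#                      lambda d: dict(d, msg_count=d.get("msg_count", 0) + 1))
```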