DISTRIBUTED STORAGE
SYSTEM
Mr. Dương Công Lợi
Company: VNG-Corp
Tel: +84989510016 / +84908522017
Email: loidc@vng.com.vn / loiduongcong@gmail.com
CONTENTS
 1. What is distributed-computing system?
 2. Principle of distributed database/storage
system
 3. Distributed storage system paradigm
 4. Canonical problems in distributed systems
 5. Common solution for canonical problems in
distributed systems
 6. UniversalDistributedStorage
 7. Appendix
1. WHAT IS DISTRIBUTED-COMPUTING
SYSTEM?
 Distributed-Computing is the process of solving a
computational problem using a distributed
system.
 A distributed system is a computing system in
which a number of components on multiple
computers cooperate by communicating over a
network to achieve a common goal.
DISTRIBUTED DATABASE/STORAGE
SYSTEM
 In a distributed database system, the database is
stored on several computers.
 A distributed database is a collection of multiple,
logically interrelated databases distributed over a
computer network.
DISTRIBUTED SYSTEM ADVANTAGES
 Advantages
 Avoid bottleneck & single-point-of-failure
 More Scalability
 More Availability
 Routing model
 Client routing: the client sends its request directly to the
appropriate server to read/write data
 Server routing: a server forwards the client's request to the
appropriate server and sends the result back to the client
* the two models above can be combined in one system
DISTRIBUTED STORAGE SYSTEM
 Store the data {1,2,3,4,6,7,8} on 1 server
 Or store it across 3 distributed servers
[Diagram: one server holding {1,2,3,4,6,7,8} versus three servers holding {1,2,3}, {4,6}, and {7,8}]
2. PRINCIPLE OF DISTRIBUTED
DATABASE/STORAGE SYSTEM
 Shard each data key and store it on the appropriate
server using a Distributed Hash Table (DHT)
 The DHT must use consistent hashing:
 Uniform distribution of generated values
 Consistent (the same key always maps to the same position)
 Jenkins and Murmur are good choices; others such as
MD5 and SHA are slower (see the comparison sketch below)
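As a rough illustration (a sketch, assuming the third-party mmh3 package for MurmurHash; hashlib ships with Python), both map a key to a stable position in the key space, Murmur just gets there faster than a cryptographic hash:

```python
import hashlib

import mmh3  # third-party MurmurHash bindings: pip install mmh3

key = "user:42"

# MurmurHash: fast, non-cryptographic, uniform; a common DHT choice
murmur_pos = mmh3.hash(key, signed=False) % (2 ** 32)

# MD5 works too (uniform and consistent) but is noticeably slower
md5_pos = int(hashlib.md5(key.encode()).hexdigest(), 16) % (2 ** 32)

# both are consistent: the same key always yields the same position
print(murmur_pos, md5_pos)
```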
3. DISTRIBUTED STORAGE SYSTEM
PARADIGM
 Data Hashing/Addressing
 Determine which server each piece of data is stored on
 Data Replication
 Store data on multiple server nodes for higher
availability and fault tolerance
DISTRIBUTED STORAGE SYSTEM
ARCHITECTURE
 Data Hashing/Addressing
 Use the DHT to map each server (by server name) to a
number, placing it on a circle called the key space
 Use the DHT to map each data key and find the server
that stores it: successor(k) = ceiling(address(k))
 successor(k): the server that stores k
[Diagram: hash ring starting at 0 with server1, server2, and server3 placed around the circle]
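A minimal sketch of the ring and the successor(k) lookup (Python; hashlib.md5 is used only because it ships with the standard library, and the Murmur or Jenkins functions recommended earlier would slot in the same way):

```python
import bisect
import hashlib

def address(name: str, space: int = 2 ** 32) -> int:
    """Map a server name or data key onto the key space (a circle of size 2^32)."""
    return int(hashlib.md5(name.encode()).hexdigest(), 16) % space

class Ring:
    def __init__(self, servers):
        # sorted (position, server) pairs form the circle
        self.points = sorted((address(s), s) for s in servers)

    def successor(self, key: str) -> str:
        # successor(k): first server at or after address(k), wrapping past 0
        i = bisect.bisect_left(self.points, (address(key), ""))
        return self.points[i % len(self.points)][1]

ring = Ring(["server1", "server2", "server3"])
print(ring.successor("photo:123"))  # the server that stores this key
```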
DISTRIBUTED STORAGE SYSTEM
ARCHITECTURE
 Addressing – Virtual nodes
 Each server node is expanded into multiple node-ids for a
more even distribution and better load balancing
Server1: n1, n4, n6
Server2: n2, n7
Server3: n3, n5, n8
[Diagram: hash ring starting at 0 with the virtual nodes n1-n8 of the three servers interleaved around the circle]
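A small sketch of how one physical server could be expanded into several virtual ring points by hashing the server name with an index suffix (the same naming scheme as Appendix 004):

```python
import hashlib

def address(name: str) -> int:
    return int(hashlib.md5(name.encode()).hexdigest(), 16) % (2 ** 32)

def virtual_nodes(server: str, count: int):
    # one physical server contributes `count` points to the ring,
    # e.g. photoTokyo1 .. photoTokyo200 for server 'photoTokyo'
    return [(address(f"{server}{i}"), server) for i in range(1, count + 1)]

# interleaving every server's virtual nodes evens out the key distribution
points = sorted(p for s in ("server1", "server2", "server3")
                for p in virtual_nodes(s, 8))
```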
DISTRIBUTED STORAGE SYSTEM
ARCHITECTURE
 Data Replication
 Data k1 is stored on server1 as the master copy and on
server2 as the slave copy
[Diagram: key k1 maps to server1 (master) on the ring; server2 holds the slave copy]
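A sketch of one plausible placement rule: the master is successor(k), and the slave is the next ring point owned by a different physical server (the deck shows only one master and one slave; the exact slave-selection rule here is an assumption):

```python
import bisect

def master_and_slave(points, key_pos):
    # points: sorted (position, server) ring, as built above
    i = bisect.bisect_left(points, (key_pos, ""))
    master = points[i % len(points)][1]
    # walk the ring until we hit a different physical server,
    # so the two replicas never share a machine
    for step in range(1, len(points)):
        candidate = points[(i + step) % len(points)][1]
        if candidate != master:
            return master, candidate
    return master, None  # degenerate one-server ring

ring = [(10, "server1"), (20, "server1"), (30, "server2"), (40, "server3")]
print(master_and_slave(ring, 15))  # -> ('server1', 'server2')
```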
4. CANONICAL PROBLEMS IN DISTRIBUTED
SYSTEMS
 Distributed transactions: the ACID (Atomicity,
Consistency, Isolation, Durability) requirements
 Distributed data independence
 Fault tolerance
 Transparency
5. COMMON SOLUTION FOR CANONICAL
PROBLEMS IN DISTRIBUTED SYSTEMS
 Atomicity and Consistency with the Two-Phase
Commit protocol
 Distributed data independence with a consistent
hashing algorithm
 Fault tolerance with leader election, multi-master
replication, and data replication
 Transparency via server routing: the client sees the
distributed system as a single server
TWO-PHASE COMMIT PROTOCOL
 What is this?
 Two-phase commit is a transaction protocol designed for
the complications that arise with distributed resource
managers.
 Two-phase commit technology is used for hotel and
airline reservations, stock market transactions, banking
applications, and credit card systems.
 With a two-phase commit protocol, the distributed
transaction manager employs a coordinator to manage
the individual resource managers. The commit process
proceeds as follows:
TWO-PHASE COMMIT PROTOCOL
 Phase 1: Obtaining a Decision
 Step 1  The coordinator asks all participants to prepare
to commit transaction Ti.
 Ci adds the record <prepare T> to the log and
forces the log to stable storage (the log is a file that
records all changes to the database)
 sends prepare T messages to all sites where T
executed
TWO-PHASE COMMIT PROTOCOL
 Phase 1: Making a Decision
 Step 2  Upon receiving the message, the transaction
manager at each site determines whether it can commit
the transaction
 if not:
add a record <no T> to the log and send an abort T
message to Ci
 if the transaction can be committed, then:
1) add the record <ready T> to the log
2) force all records for T to stable storage
3) send a ready T message to Ci
TWO-PHASE COMMIT PROTOCOL
 Phase 2: Recording the Decision
 Step 1  T can be committed if Ci received a ready T
message from all participating sites; otherwise T
must be aborted.
 Step 2  The coordinator adds a decision record, <commit
T> or <abort T>, to the log and forces the record onto stable
storage. Once the record is in stable storage, it cannot
be revoked (even if failures occur)
 Step 3  The coordinator sends a message to each
participant informing it of the decision (commit or abort)
 Step 4  Participants take the appropriate action locally.
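A condensed, in-memory sketch of the steps above; real participants would force every log record to stable storage before replying, and the coordinator would reach remote sites over the network:

```python
class Participant:
    def __init__(self, name):
        self.name, self.log = name, []

    def prepare(self, t):
        # Phase 1: decide locally, log <ready T> or <no T> before answering
        can_commit = True  # stands in for real lock/constraint checks
        self.log.append(("ready" if can_commit else "no", t))
        return can_commit

    def finish(self, t, decision):
        # Phase 2: apply the coordinator's decision locally
        self.log.append((decision, t))

def two_phase_commit(coordinator_log, participants, t):
    coordinator_log.append(("prepare", t))          # <prepare T>
    votes = [p.prepare(t) for p in participants]    # prepare T to all sites
    decision = "commit" if all(votes) else "abort"  # unanimous ready => commit
    coordinator_log.append((decision, t))           # irrevocable once logged
    for p in participants:
        p.finish(t, decision)
    return decision

log = []
print(two_phase_commit(log, [Participant("s1"), Participant("s2")], "T1"))
```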
TWO-PHASE COMMIT PROTOCOL
 Costs and Limitations
 If one database server is unavailable, none of the
servers gets the updates.
 This is correctable through network tuning and by
building the data distribution correctly with
database optimization techniques.
LEADER ELECTION
 Some leader election algorithms that can be used: LCR
(LeLann-Chang-Roberts), Peterson, HS
(Hirschberg-Sinclair)
LEADER ELECTION
 Bully Leader Election algorithm
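A minimal sketch of the Bully idea: a node that notices the leader is gone challenges every higher-id node and wins only if none of them answers (messages and timeouts are elided):

```python
def bully_election(node_id, all_ids, is_alive):
    # challenge every node with a higher id
    alive_higher = [i for i in all_ids if i > node_id and is_alive(i)]
    if not alive_higher:
        return node_id  # nobody higher answered: this node is the leader
    # otherwise the lowest alive higher node takes over the election
    return bully_election(min(alive_higher), all_ids, is_alive)

# nodes 1..5, but 4 and 5 have died; node 2 notices and starts an election
alive = {1, 2, 3}
print(bully_election(2, [1, 2, 3, 4, 5], lambda i: i in alive))  # -> 3
```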
MULTI MASTER
 Multi-master replication
 Problems of multi-master replication
MULTI MASTER
 Solution: two candidate models:
 Two-phase commit (always consistent)
 Asynchronous data sync among multiple nodes
 Stays active even if some nodes die
 Faster than 2PC
MULTI MASTER
 Asynchronous data sync
 Data is stored on the main master (called the sub-leader),
then posted to a queue to be synced to the other masters
(see the sketch below).
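A toy sketch of that write path, with plain dicts standing in for the other masters' stores and an in-process queue standing in for the sync queue:

```python
import queue
import threading

class SubLeader:
    def __init__(self, peers):
        self.store = {}
        self.peers = peers                 # stand-ins for the other masters
        self.sync_queue = queue.Queue()
        threading.Thread(target=self._sync_loop, daemon=True).start()

    def write(self, key, value):
        self.store[key] = value            # commit on the main master first
        self.sync_queue.put((key, value))  # replication happens off the write path

    def _sync_loop(self):
        while True:
            key, value = self.sync_queue.get()
            for peer in self.peers:        # a real system would make an RPC here
                peer[key] = value

peers = [{}, {}]                           # the other masters' stores
node = SubLeader(peers)
node.write("k1", "v1")                     # returns before the peers are updated
```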
MULTI MASTER
 Asynchronous data sync
[Diagram: req1 and req2 arrive at Server1 (the leader); req2 is forwarded, and the data reaches Server2 through the data queue]
UNIVERSALDISTRIBUTEDSTORAGE
a distributed storage system
6. UNIVERSALDISTRIBUTEDSTORAGE
 UniversalDistributedStorage is a distributed
storage system developed to provide:
 Distributed transactions (ACID)
 Distributed data independence
 Fault tolerance
 Leader election (decides when server nodes join or leave)
 Replication via multi-master replication
 Transparency
UNIVERSALDISTRIBUTEDSTORAGE
ARCHITECTURE
 Overview
[Diagram: three servers, each stacking a Business Layer, a Distributed Layer, and a Storage Layer]
UNIVERSALDISTRIBUTEDSTORAGE
ARCHITECTURE
 Internal Overview
[Diagram: client requests enter the Business Layer; the Distributed Layer locates data via dataLocate()/dataRemote(), using remote queuing for remote data; the Storage Layer serves local data via localData(); results flow back to the client]
UNIVERSALDISTRIBUTEDSTORAGE
FEATURES
 Data hashing/addressing
 Use the Murmur hash function
UNIVERSALDISTRIBUTEDSTORAGE
FEATURES
 Leader election
 Use the Bully leader election algorithm
UNIVERSALDISTRIBUTEDSTORAGE
FEATURES
 Multi-master replication
 Use asynchronous data sync among server nodes
UNIVERSALDISTRIBUTEDSTORAGE
STATISTICS
 System information:
 3 machines, 8 GB RAM, Core i5 3.2 GHz
 LAN/WAN network
 7 server nodes on the 3 machines above
 Concurrent writes: 16,500,000 items in 3,680 s, rate ~
4,480 req/sec (measured at the clients)
 Concurrent reads: 16,500,000 items in 1,458 s, rate ~
11,320 req/sec (measured at the clients)
* This is not the system's limit; the bottleneck is the
clients (the test used 3 client threads)
Q & A
Contact:
Duong Cong Loi
loidc@vng.com.vn
loiduongcong@gmail.com
https://www.facebook.com/duongcong.loi
7. APPENDIX
APPENDIX - 001
 How to join/leave server(s)
[Diagram: 1. a join/leave request arrives at a server; 2. it is forwarded to the leader server; 3. the leader processes the join/leave; 4. the result is broadcast to Servers A, B, and C]
APPENDIX - 002
 How to move data when server(s) join/leave
 Determine the data affected by the move
 Move the data asynchronously on a background thread,
controlling the speed of the move (see the sketch below)
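A minimal sketch of such a throttled mover; read and send are hypothetical stand-ins for the local fetch and the remote transfer:

```python
import time

def migrate(keys, read, send, items_per_sec=1000):
    # move each key to its new owner at a controlled rate so the
    # background move does not starve live traffic
    interval = 1.0 / items_per_sec
    for key in keys:
        send(key, read(key))
        time.sleep(interval)

# typically run on a background thread, e.g.:
# threading.Thread(target=migrate, args=(keys, read, send), daemon=True).start()
```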
APPENDIX - 003
 How to detect that the leader or a sub-leader has died
 Easily detected by polling the connection
APPENDIX - 004
 How to create multiple virtual nodes for one server
 Generate them by hashing the server name with an
index suffix
 Ex:
to make 200 virtual nodes for server ‘photoTokyo’,
use the hash values of photoTokyo1, photoTokyo2, …,
photoTokyo200
APPENDIX - 005
 For fast data moving
 Use a Bloom filter to detect whether a data-key's hash
exists (see the sketch below)
 Use a local store that keeps every data-key held by this
server
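A tiny Bloom filter sketch: it answers "definitely absent" or "maybe present", which is enough to skip keys this server never stored (the size and hash count below are arbitrary):

```python
import hashlib

class BloomFilter:
    def __init__(self, size_bits=8192, hashes=3):
        self.size, self.hashes, self.bits = size_bits, hashes, 0

    def _positions(self, key):
        # derive several bit positions by salting the key with an index
        for i in range(self.hashes):
            h = hashlib.md5(f"{i}:{key}".encode()).hexdigest()
            yield int(h, 16) % self.size

    def add(self, key):
        for p in self._positions(key):
            self.bits |= 1 << p

    def might_contain(self, key):
        # False => definitely absent; True => probably present
        return all(self.bits >> p & 1 for p in self._positions(key))

bf = BloomFilter()
bf.add("photo:123")
print(bf.might_contain("photo:123"), bf.might_contain("photo:999"))
```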
APPENDIX - 006
 How to avoid network hangs
 Use a client connection pool with a screening strategy;
this avoids many hanging connections when making
remote calls between two servers (see the sketch below)
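A sketch of such a pool: each idle socket is screened before being handed out, so a connection the far side already closed gets replaced instead of hanging the caller (the screening rule here is an assumption; the deck does not spell one out):

```python
import queue
import select
import socket

class ScreenedPool:
    def __init__(self, host, port, size=4, timeout=2.0):
        self.host, self.port, self.timeout = host, port, timeout
        self.idle = queue.Queue()
        for _ in range(size):
            self.idle.put(self._connect())

    def _connect(self):
        return socket.create_connection((self.host, self.port), self.timeout)

    def _alive(self, conn):
        # a healthy idle socket has nothing to read; a closed one reads EOF
        readable, _, _ = select.select([conn], [], [], 0)
        return not readable or conn.recv(1, socket.MSG_PEEK) != b""

    def acquire(self):
        conn = self.idle.get()
        if not self._alive(conn):
            conn.close()
            conn = self._connect()  # replace the dead connection
        return conn

    def release(self, conn):
        self.idle.put(conn)
```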