SLONY Replication and PG POOL II



                        Except where otherwise noted, this work is licensed under
                        http://creativecommons.org/licenses/by/3.0/

PG East, 2010                                          www.consistentstate.com
by Kevin Kempter                                    kevink@consistentstate.com
Session Topics

 ●   SLONY Replication
 ●   SLONY Walk Through
 ●   PG POOL II




SLONY Replication

 ●   SLONY is a “Master to Multiple Slaves”
     Replication System
 ●   Topics We'll Cover:
            –      Installation and General Info
            –      Creating and activating a replication set
            –      Methods for Failover & Switchover
 ●   Note: This is ONE way to set up SLONY, not
     necessarily the only way

SLONY Installation

●   Download
●   Uncompress
●   Configure
            –   ./configure [options]
            –   ./configure --with-pgconfigdir=<dir> \
                            --with-pgbindir=<dir> \
                            --with-pgsharedir=<dir>
●   Make
●   Make Install (as root)
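
A minimal sketch of that sequence, assuming the source tarball has already been
downloaded (the file name and pg_config location below are illustrative, not from the slides):

# file name and pg_config location are assumptions -- adjust for your environment
$ tar xjf slony1-2.0.x.tar.bz2
$ cd slony1-2.0.x
$ ./configure --with-pgconfigdir=/usr/local/pgsql/bin
$ make
$ sudo make install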

SLONY
The SLONIK Preamble

●   Each server to be included in the cluster must be prepared
●   Slony will create a schema on each node (i.e. if your replication
    cluster is named customer then slony will create a schema on each
    node called _customer.)
●   Allow slony to create this cluster schema itself (i.e. keep your hands
    off)
●   The same preamble must be included in each slonik script to manage
    the cluster :
cluster name = customer;
node 1 admin conninfo = 'dbname=custdb host=yoda user=slony';
node 2 admin conninfo = 'dbname=custdb host=r2d2 user=slony';
node 3 admin conninfo = 'dbname=custdb host=c3po user=slony';


SLONY
General Info


 ●   The database must exist on all nodes


 ●   The PL/pgSQL language must be installed on all nodes


 ●   Connection Strings and passwords (use trust or a .pgpass file)




SLONY
Create Variable File

 ●   The use of a variables file can help you manage the setup
     process
 ●   Example (slony_setup.env):
export CLUSTERNAME=sample_rep
export MASTERDBNAME=slony_test
export MASTERHOST=localhost
export MASTERPORT=5444
export SLAVEDBNAME=slony_test
export SLAVEHOST=localhost
export SLAVEPORT=5445
export REPUSER=postgres

SLONY
Create / Prepare Slave Node


 ●   Create the database on the slave node
 ●   Replicate users on the slave node
 ●   Create the database structures (DDL) on the slave node
 ●   Install PL/pgSQL on the slave node
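
A hedged sketch of those four steps, run from the master host with the variables file
from the previous slide sourced (it assumes trust or .pgpass authentication):

# assumes slony_setup.env from the previous slide and trust/.pgpass auth
. ./slony_setup.env

# create the database and install PL/pgSQL on the slave
createdb   -h $SLAVEHOST -p $SLAVEPORT -U $REPUSER $SLAVEDBNAME
createlang -h $SLAVEHOST -p $SLAVEPORT -U $REPUSER plpgsql $SLAVEDBNAME

# copy the roles, then the schema-only DDL, from master to slave
pg_dumpall -h $MASTERHOST -p $MASTERPORT -U $REPUSER --globals-only \
    | psql -h $SLAVEHOST -p $SLAVEPORT -U $REPUSER postgres
pg_dump -s -h $MASTERHOST -p $MASTERPORT -U $REPUSER $MASTERDBNAME \
    | psql -h $SLAVEHOST -p $SLAVEPORT -U $REPUSER $SLAVEDBNAME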




SLONY
Initialize the replication cluster


 ●   Initialize the replication cluster
 ●   Create a replication set
 ●   Add tables to the replication set
 ●   Store the slave node
 ●   Store paths
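
A hedged slonik sketch of those five steps for a two-node cluster, reusing the variables
file (the single table public.accounts is an illustrative assumption):

#!/bin/sh
# sketch only: node layout matches the variables file; public.accounts is a stand-in table
. ../etc/slony.env

slonik <<_EOF_
cluster name = $CLUSTERNAME ;
node 1 admin conninfo = 'dbname=$MASTERDBNAME host=$MASTERHOST port=$MASTERPORT user=$REPUSER';
node 2 admin conninfo = 'dbname=$SLAVEDBNAME host=$SLAVEHOST port=$SLAVEPORT user=$REPUSER';

# initialize the replication cluster on the origin
init cluster (id = 1, comment = 'master node');

# create a replication set and add tables to it
create set (id = 1, origin = 1, comment = 'replication set 1');
set add table (set id = 1, origin = 1, id = 1, fully qualified name = 'public.accounts');

# store the slave node and the paths between the nodes
store node (id = 2, comment = 'slave node', event node = 1);
store path (server = 1, client = 2, conninfo = 'dbname=$MASTERDBNAME host=$MASTERHOST port=$MASTERPORT user=$REPUSER');
store path (server = 2, client = 1, conninfo = 'dbname=$SLAVEDBNAME host=$SLAVEHOST port=$SLAVEPORT user=$REPUSER');
_EOF_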




SLONY
Start the slon daemons


 ●    $ slon [options] clustername [connection string]
Options:
     -h            print usage message and exit
     -v            print version and exit
     -d <debuglevel>      verbosity of logging (1..4)
     -s <milliseconds>    SYNC check interval (default 10000)
     -t <milliseconds>    SYNC interval timeout (default 60000)
     -o <milliseconds>     desired subscriber SYNC processing time
     -g <num>            maximum SYNC group size (default 6)
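
For example, one slon per node might be started as follows (the log file names are
illustrative; the connection strings reuse the variables file):

# one slon daemon per node; log file names are arbitrary
. ../etc/slony.env
slon -d 2 $CLUSTERNAME "dbname=$MASTERDBNAME host=$MASTERHOST port=$MASTERPORT user=$REPUSER" > slon_node1.log 2>&1 &
slon -d 2 $CLUSTERNAME "dbname=$SLAVEDBNAME host=$SLAVEHOST port=$SLAVEPORT user=$REPUSER" > slon_node2.log 2>&1 &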




SLONY
Start the slon daemons – More Options


More Options:
   -c <num>         how often to vacuum in cleanup cycles
   -p <filename>     slon pid file
   -f <filename>     slon configuration file
   -a <directory>    directory to store SYNC archive files (SLONY Log Shipping)
   -x <command>        program to run after writing archive file
   -q <num>         Terminate when this node reaches # of SYNCs
   -r <num>         # of syncs for -q option
   -l <interval>    this node should lag providers by this interval




SLONY
Start replication




 ●     Subscribe set
SUBSCRIBE SET (
     ID = 1,
     PROVIDER = 1,
     RECEIVER = 3,
     FORWARD = YES
);
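
Like every other slonik command, SUBSCRIBE SET runs inside a script that begins with the
cluster preamble; a hedged two-node sketch (receiver 2 is an assumption matching the
variables file, unlike the three-node example above):

#!/bin/sh
. ../etc/slony.env

slonik <<_EOF_
cluster name = $CLUSTERNAME ;
node 1 admin conninfo = 'dbname=$MASTERDBNAME host=$MASTERHOST port=$MASTERPORT user=$REPUSER';
node 2 admin conninfo = 'dbname=$SLAVEDBNAME host=$SLAVEHOST port=$SLAVEPORT user=$REPUSER';

# node 2 subscribes to set 1 from node 1
subscribe set (id = 1, provider = 1, receiver = 2, forward = yes);
_EOF_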




SLONY
Switchover



lock set (id = 1, origin = 1);
wait for event (origin = 1, confirmed = 2);
move set (id = 1, old origin = 1, new origin = 2);
wait for event (origin = 1, confirmed = 2);




SLONY
Failover



failover (id = 1, backup node = 2);


drop node (id = 1, event node = 2);




SLONY
DDL Execution


 ●   ALTER TABLE, CREATE INDEX, etc.
             –     EXECUTE

 ●   CREATE TABLE(s) and add to replication
             –     Merge Set




SLONY
EXECUTE


#!/bin/sh

. ../etc/slony.env

slonik <<_EOF_

cluster name = $CLUSTERNAME ;

node 1 admin conninfo = 'dbname=$MASTERDBNAME host=$MASTERHOST port=$MASTERPORT
user=$REPUSER';

node 2 admin conninfo = 'dbname=$SLAVEDBNAME host=$SLAVEHOST port=$SLAVEPORT
user=$REPUSER';



EXECUTE SCRIPT ( SET ID = 1, FILENAME = '/usr/local/pgsql/slony_init/test.sql', EVENT NODE = 1);



_EOF_



SLONY
Merge Set


#!/bin/sh


. ../etc/slony.env

slonik <<_EOF_

cluster name = $CLUSTERNAME ;


node 1 admin conninfo = 'dbname=$MASTERDBNAME host=$MASTERHOST port=$MASTERPORT
user=$REPUSER';

node 2 admin conninfo = 'dbname=$SLAVEDBNAME host=$SLAVEHOST port=$SLAVEPORT
user=$REPUSER';



merge set (id=1, add id=2, origin=1);


_EOF_


SLONY
Example / Walk through


 ●   SLONY Setup
 ●   IP alias setup / control
 ●   switchover




SLONY




                   Summary




PG POOL II




PG POOL II
Overview

 ●   Connection Pooling
            –      Connections beyond max_connections are
                    queued instead of rejected
 ●   Replication
 ●   Load Balancing
 ●   Parallel Query



PG POOL II
Installation Prerequisites

 ●   PostgreSQL header files
 ●   libpq
 ●   make




PG POOL II
Installation
 ●   Install PostgreSQL on all nodes
 ●   Install pgpool on node 1
            –      ./configure --prefix=<install dir> \
                               --with-pgsql=<path to top level pg install dir>
            –      e.g.:
                   ./configure --prefix=/usr/local/pgsql/pgpool \
                               --with-pgsql=/usr/local/pgsql/pg841
            –      $ make
            –      $ sudo make install
            –      $ sudo chown -R postgres:postgres <install dir>



PG POOL II
Configuration

 ●   <install dir>/etc/pgpool.conf (copied from pgpool.conf.sample)


# connections

listen_addresses = 'localhost'



# Port number for pgpool

port = 9999




PG POOL II
Configuration (cont)
# number of pre-forked child process

num_init_children = 32



# Number of connection pools allowed for a child process

max_pool = 4



# If idle for this many seconds, child exits. 0 means no timeout.

child_life_time = 300



# If idle for this many seconds, connection to PostgreSQL closes.

# 0 means no timeout.

connection_life_time = 0



PG POOL II
Configuration (cont)

# If child_max_connections connections were received, child exits.

# 0 means no exit.

child_max_connections = 0



# If client_idle_limit is n (n > 0), the client is forced to be

# disconnected whenever after n seconds idle (even inside an explicit

# transactions!)

# 0 means no disconnect.

client_idle_limit = 0




PG POOL II
Configuration (cont)

# Maximum time in seconds to complete client authentication.

# 0 means no timeout.

authentication_timeout = 60



# Logging directory

logdir = '/tmp'



# pid file name

#pid_file_name = '/var/run/pgpool/pgpool.pid'

pid_file_name = '/usr/local/pgsql/pgpool/etc/pgpool.pid'




PG POOL II
Configuration (cont)

# Replication mode

replication_mode = false



# Load balancing mode, i.e., all SELECTs are load balanced.

# This is ignored if replication_mode is false.

load_balance_mode = false



# if there's a data mismatch between master and secondary

# start degeneration to stop replication mode

replication_stop_on_mismatch = false




PG POOL II
Configuration (cont)

# If true, replicate SELECT statement when load balancing is disabled.

# If false, it is only sent to the master node.

replicate_select = false



# Semicolon separated list of queries to be issued at the end of a session

reset_query_list = 'ABORT; RESET ALL; SET SESSION AUTHORIZATION DEFAULT'

# for 8.3 or newer PostgreSQL versions DISCARD ALL can be used as

# follows. However beware that DISCARD ALL holds exclusive lock on

# pg_listener so it will be a serious performance problem if there are

# lots of concurrent sessions.

# reset_query_list = 'ABORT; DISCARD ALL'




PG POOL II
Configuration (cont)

# If true print timestamp on each log line.

print_timestamp = true



# If true, operate in master/slave mode.

master_slave_mode = false



# If true, cache connection pool.

connection_cache = true




PG POOL II
Configuration (cont)

# Health check timeout. 0 means no timeout.

health_check_timeout = 20



# Health check period. 0 means no health check.

health_check_period = 0



# Health check user

health_check_user = 'nobody'




PG POOL II
Configuration (cont)

# Execute command by failover.

# special values: %d = node id

#           %h = host name

#           %p = port number

#           %D = database cluster path

#           %m = new master node id

#           %M = old master node id

#           %% = '%' character

#

failover_command = ''




PG POOL II
Configuration (cont)

# Execute command by failback.

# special values: %d = node id

#           %h = host name

#           %p = port number

#           %D = database cluster path

#           %m = new master node id

#           %M = old master node id

#           %% = '%' character

#

failback_command = ''




PG POOL II
Configuration (cont)

# If true, automatically locks a table with INSERT statements to keep

# SERIAL data consistency. If the data does not have SERIAL data

# type, no lock will be issued. An /*INSERT LOCK*/ comment has the

# same effect. A /*NO INSERT LOCK*/ comment disables the effect.

insert_lock = true



# If true, ignore leading white spaces of each query while pgpool judges

# whether the query is a SELECT so that it can be load balanced. This

# is useful for certain APIs such as DBI/DBD, which are known to add an

# extra leading white space.

ignore_leading_white_space = true




PG POOL II
Configuration (cont)
# If true, print all statements to the log. Like the log_statement option
# to PostgreSQL, this allows for observing queries without engaging in full
# debugging.

# log_statement = false
log_statement = true



# If true, incoming connections will be printed to the log.

log_connections = false



# If true, hostname will be shown in ps status. Also shown in

# connection log if log_connections = true.

# Be warned that this feature will add overhead to look up hostname.

log_hostname = false



PG POOL II
Configuration (cont)

# if non 0, run in parallel query mode

parallel_mode = false



# if non 0, use query cache

enable_query_cache = false



#set pgpool2 hostname

pgpool2_hostname = '192.168.242.137'




PG POOL II
Configuration (cont)

# system DB info

#system_db_hostname = 'localhost'

#system_db_port = 5432

#system_db_dbname = 'pgpool'

#system_db_schema = 'pgpool_catalog'

#system_db_user = 'pgpool'

#system_db_password = ''




PG POOL II
Configuration (cont)

# backend_hostname, backend_port, backend_weight

# here are examples

backend_hostname0 = '192.168.242.138'

backend_port0 = 5432

backend_weight0 = 1

backend_data_directory0 = '/usr/local/pgsql/pg841/data'

#backend_hostname1 = 'host2'

#backend_port1 = 5433

#backend_weight1 = 1

#backend_data_directory1 = '/data1'




PG POOL II
Configuration (cont)

# - HBA -



# If true, use pool_hba.conf for client authentication. In pgpool-II

# 1.1, the default value is false. The default value will be true in

# 1.2.

enable_pool_hba = false




PG POOL II
Configuration (cont)
# - online recovery -

#            NOTE: these values are used to re-attach (after failure) a node or to attach a new node
#                      when pgpool replication is used
# online recovery user

recovery_user = 'nobody'

recovery_password = ''

recovery_1st_stage_command = ''

recovery_2nd_stage_command = ''

# maximum time in seconds to wait for the recovering node's postmaster

# start-up. 0 means no wait.

recovery_timeout = 90




Start PG POOL

 ●   $ cd <install dir>/bin

 ●   $ ./pgpool -n > log 2>&1 &
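
Clients then connect to the pgpool port (9999 in the configuration shown earlier) rather
than to PostgreSQL directly, e.g. (the database name is illustrative):

$ psql -h localhost -p 9999 -U postgres slony_test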




Controlling PG POOL

Usage:

 pgpool [ -c] [ -f CONFIG_FILE ] [ -F PCP_CONFIG_FILE ] [ -a HBA_CONFIG_FILE ]

     [ -n ] [ -d ]

 pgpool [ -f CONFIG_FILE ] [ -F PCP_CONFIG_FILE ] [ -a HBA_CONFIG_FILE ]

     [ -m SHUTDOWN-MODE ] stop

 pgpool [ -f CONFIG_FILE ] [ -F PCP_CONFIG_FILE ] [ -a HBA_CONFIG_FILE ] reload




Controlling PG POOL (cont)
Common options:

 -a HBA_CONFIG_FILE Sets the path to the pool_hba.conf configuration file

             (default: /usr/local/pgsql/pgpool/etc/pool_hba.conf)

 -f CONFIG_FILE      Sets the path to the pgpool.conf configuration file

             (default: /usr/local/pgsql/pgpool/etc/pgpool.conf)

 -F PCP_CONFIG_FILE Sets the path to the pcp.conf configuration file

             (default: /usr/local/pgsql/pgpool/etc/pcp.conf)

 -h          Prints this help




Controlling PG POOL (cont)
Start options:

 -c              Clears query cache (enable_query_cache must be on)

 -n              Don't run in daemon mode, does not detach control tty

 -d              Debug mode



Stop options:

 -m SHUTDOWN-MODE             Can be "smart", "fast", or "immediate"



Shutdown modes are:

 smart     quit after all clients have disconnected

 fast     quit directly, with proper shutdown

 immediate quit without complete shutdown; will lead to recovery on restart
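
For example, to shut down an instance installed under the prefix used earlier (the path is
an assumption):

$ cd /usr/local/pgsql/pgpool/bin
$ ./pgpool -m fast stop

A running instance can likewise be told to re-read pgpool.conf with ./pgpool reload.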



Configure for pcp commands
 ●   Use pg_md5 to create the password hash
                   $ /usr/bin/pg_md5 postgres
                   e8a48653851e28c69d0506508fb27fc5


 ●   pcp.conf
                   postgres:e8a48653851e28c69d0506508fb27fc5




pcp Commands

pcp_node_count

pcp_node_count _timeout_ _host_ _port_ _userid_ _passwd_



Displays the total number of nodes defined in pgpool.conf. It does not distinguish node status, i.e.
attached/detached: ALL nodes are counted.




pcp_proc_count

pcp_proc_count _timeout_ _host_ _port_ _userid_ _passwd_



Displays the list of pgpool-II child process IDs. If there is more than one process, the IDs are delimited by
white space.
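
A hedged invocation example, assuming pcp_port = 9898 in pgpool.conf and the postgres
entry created in pcp.conf above:

# timeout=10s, pcp port 9898 (assumed), user/password matching the pcp.conf entry
$ pcp_node_count 10 localhost 9898 postgres postgres
$ pcp_proc_count 10 localhost 9898 postgres postgres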




pcp Commands
pcp_node_info

pcp_node_info _timeout_ _host_ _port_ _userid_ _passwd_ _nodeid_



Displays the information for the given node ID. The result is in the following order:

Hostname      port number      status     load balance weight



Status is represented by a digit from 0 to 3.

0 - This state is only used during the initialization. PCP will never display it.

1 - Node is up. No connections yet.

2 - Node is up. Connections are pooled.

3 - Node is down.



pcp Commands

pcp_proc_info

pcp_proc_info _timeout_ _host_ _port_ _userid_ _passwd_ _processid_



Displays the information on the given pgpool-II child process ID.



pcp_systemdb_info

pcp_systemdb_info _timeout_ _host_ _port_ _userid_ _passwd_



Displays the System DB information.




pcp Commands
pcp_detach_node

pcp_detach_node _timeout_ _host_ _port_ _userid_ _passwd_ _nodeid_



Detaches the given node from pgpool-II.




pcp_attach_node

pcp_attach_node _timeout_ _host_ _port_ _userid_ _passwd_ _nodeid_



Attaches the given node to pgpool-II.




pcp Commands


pcp_stop_pgpool

pcp_stop_pgpool _timeout_ _host_ _port_ _userid_ _passwd_ _mode_



Terminates the pgpool-II process with the given shutdown mode. The available modes are as follows:



s    - smart mode

f    - fast mode

i    - immediate mode




End




