1 / 152
2 / 152
MySQL InnoDB Cluster & Group Replication in a
Nutshell: Hands-On Tutorial
 
Percona Live Europe 2017 - Dublin
Frédéric Descamps - MySQL Community Manager - Oracle
Kenny Gryp - MySQL Practice Manager - Percona
3 / 152
 
Safe Harbor Statement
The following is intended to outline our general product direction. It is intended for
information purposes only, and may not be incorporated into any contract. It is not a
commitment to deliver any material, code, or functionality, and should not be relied upon in
making purchasing decisions. The development, release and timing of any features or
functionality described for Oracle's products remains at the sole discretion of Oracle.
Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.
4 / 152
Who are we ?
Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.
5 / 152
Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.
6 / 152
Frédéric Descamps
@lefred
MySQL Evangelist
Managing MySQL since 3.23
devops believer
http://about.me/lefred
 
Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.
7 / 152
Kenny Gryp
@gryp
MySQL Practice Manager
 
Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.
8 / 152
get more at the conference
MySQL Group Replication
Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.
9 / 152
Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.
10 / 152
Agenda
Prepare your workstation
What are MySQL InnoDB Cluster & Group Replication ?
Migration from Master-Slave to GR
How to monitor ?
Application interaction
Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.
15 / 152
VirtualBox
Setup your workstation
Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.
16 / 152
Setup your workstation
Install VirtualBox 5
Copy PLeu17_GR.ova from the USB key to your laptop and double-click on it
Ensure you have a vboxnet2 network interface
(VirtualBox Preferences -> Network -> Host-Only Networks -> +)
Start all virtual machines (mysql1, mysql2, mysql3 & mysql4)
Install putty if you are using Windows
Try to connect to all VMs from your terminal or putty (the root password is X):
ssh -p 8821 root@127.0.0.1 to mysql1
ssh -p 8822 root@127.0.0.1 to mysql2
ssh -p 8823 root@127.0.0.1 to mysql3
ssh -p 8824 root@127.0.0.1 to mysql4
Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.
18 / 152
LAB1: Current situation
Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.
19 / 152
LAB1: Current situation
launch run_app.sh on mysql1 into a screen session
verify that mysql2 is a running slave (see the sketch below)
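A minimal sketch of what these two steps could look like (assuming run_app.sh is in root's PATH on mysql1, as it is used later in the labs):
[mysql1 ~]# screen -S app         # open a screen session named "app"
[mysql1 ~]# run_app.sh            # start the test workload against mysql1
[mysql2 ~]# mysql -e "SHOW SLAVE STATUS\G" | grep -E "Running|Master_Host"
# Slave_IO_Running and Slave_SQL_Running should both report Yes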
Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.
20 / 152
Summary
 
+--------+--------+----------+---------------+
| HOST   | ROLE   | SSH PORT | INTERNAL IP   |
+--------+--------+----------+---------------+
| mysql1 | master | 8821     | 192.168.56.11 |
| mysql2 | slave  | 8822     | 192.168.56.12 |
| mysql3 | n/a    | 8823     | 192.168.56.13 |
| mysql4 | n/a    | 8824     | 192.168.56.14 |
+--------+--------+----------+---------------+
Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.
21 / 152
Easy High Availability
MySQL InnoDB Cluster
Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.
22 / 152
InnoDB
cluster
Ease-of-Use
Extreme Scale-Out
Out-of-Box Solution
Built-in HA
High Performance
Everything Integrated
Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.
23 / 152
InnoDB Cluster's Architecture
[Architecture diagram: applications reach the InnoDB cluster through a MySQL Connector and MySQL Router; MySQL Shell is used to manage the cluster, which consists of one primary (Mp) and multiple members (M).]
Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.
24 / 152
Group Replication: heart of MySQL InnoDB
Cluster
Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.
25 / 152
MySQL Group Replication
but what is it ?!?
GR is a plugin for MySQL, made by MySQL and packaged with MySQL
GR is an implementation of Replicated Database State Machine theory
Paxos based protocol
GR allows writes on all Group Members (cluster nodes) simultaneously while
retaining consistency
GR implements conflict detection and resolution
GR allows automatic distributed recovery
Supported on all MySQL platforms !!
Linux, Windows, Solaris, OSX, FreeBSD
Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.
34 / 152
And for users ?
no longer necessary to handle server fail-over manually or with complicated scripts
GR provides fault tolerance
GR enables update-everywhere setups
GR handles crashes, failures, re-connects automatically
Allows an easy setup of a highly available MySQL service!
Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.
40 / 152
ready ?
Migration from Master-Slave to GR
Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.
41 / 152
The plan
Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.
42 / 152
1) We install and setup MySQL InnoDB Cluster on one of the new servers
The plan
Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.
43 / 152
2) We restore a backup
3) We setup asynchronous replication on the new server
The plan
Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.
44 / 152
4) We add a new instance to our group
The plan
Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.
45 / 152
5) We point the application to one of our new nodes
6) We wait and check that asynchronous replication is caught up
7) We stop those asynchronous slaves
The plan
Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.
46 / 152
8) We attach the mysql2 slave to the group
The plan
Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.
47 / 152
9) Use MySQL Router for directing traffic
The plan
Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.
48 / 152
LAB2: Prepare mysql3
Asynchronous slave
Latest MySQL 8.0.3-RC is already installed on mysql3.
Let's take a backup on mysql1:
[mysql1 ~]# xtrabackup --backup \
    --target-dir=/tmp/backup \
    --user=root \
    --password=X --host=127.0.0.1
[mysql1 ~]# xtrabackup --prepare \
    --target-dir=/tmp/backup
49 / 152
LAB2: Prepare mysql3 (2)
Asynchronous slave
Copy the backup from mysql1 to mysql3:
[mysql1 ~]# scp -r /tmp/backup mysql3:/tmp
And restore it:
[mysql3 ~]# systemctl stop mysqld
[mysql3 ~]# rm -rf /var/lib/mysql/*
[mysql3 ~]# xtrabackup --copy-back --target-dir=/tmp/backup
[mysql3 ~]# chown -R mysql. /var/lib/mysql
50 / 152
LAB3: mysql3 as asynchronous slave (2)
Asynchronous slave
Configure /etc/my.cnf with the minimal requirements:
[mysqld]
...
server_id=3
enforce_gtid_consistency = on
gtid_mode = on
#log_bin # new default
#log_slave_updates # new default
Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.
51 / 152
LAB2: Prepare mysql3 (3)
Asynchronous slave
Let's start MySQL on mysql3:
[mysql3 ~]# systemctl start mysqld
[mysql3 ~]# mysql_upgrade
Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.
53 / 152
LAB3: mysql3 as asynchronous slave (1)
find the GTIDs purged
change MASTER
set the purged GTIDs
start replication
 
Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.
54 / 152
LAB3: mysql3 as asynchronous slave (2)
Find the latest purged GTIDs:
[mysql3 ~]# cat /tmp/backup/xtrabackup_binlog_info
mysql-bin.000002 167646328 b346474c-8601-11e6-9b39-08002718d305:1-771
Connect to mysql3 and setup replication:
mysql> CHANGE MASTER TO MASTER_HOST="mysql1",
MASTER_USER="repl_async", MASTER_PASSWORD='Xslave',
MASTER_AUTO_POSITION=1;
mysql> RESET MASTER;
mysql> SET global gtid_purged="VALUE FOUND PREVIOUSLY";
mysql> START SLAVE;
Check that you receive the application´s traffic
55 / 152
Administration made easy and more...
MySQL-Shell
Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.
56 / 152
MySQL Shell
The MySQL Shell is an interactive Javascript, Python, or SQL interface supporting
development and administration for MySQL. MySQL Shell includes the AdminAPI--available
in JavaScript and Python--which enables you to set up and manage InnoDB clusters. It
provides a modern and fluent API which hides the complexity associated with configuring,
provisioning, and managing an InnoDB cluster, without sacrificing power, flexibility, or
security.
Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.
57 / 152
MySQL Shell (2)
As an example, the same operations as before but using the Shell:
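A minimal sketch of running the asynchronous replication setup of LAB3 through the Shell's \sql mode (same replication user, password and hosts as in LAB3 are assumed):
[mysql3 ~]# mysqlsh
mysql-js> \c root@mysql3:3306
mysql-js> \sql
mysql-sql> RESET MASTER;
mysql-sql> SET GLOBAL gtid_purged="VALUE FOUND PREVIOUSLY";
mysql-sql> CHANGE MASTER TO MASTER_HOST="mysql1", MASTER_USER="repl_async",
           MASTER_PASSWORD='Xslave', MASTER_AUTO_POSITION=1;
mysql-sql> START SLAVE;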
Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.
58 / 152
MySQL Shell (3)
Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.
59 / 152
LAB4: MySQL InnoDB Cluster
Create a single instance cluster
Time to use the new MySQL Shell !
[mysql3 ~]# mysqlsh
Let's verify if our server is ready to become a member of a new cluster:
mysql-js> dba.checkInstanceConfiguration('root@mysql3:3306')
Change the configuration !
mysql-js> dba.configureLocalInstance()
Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.
61 / 152
LAB4: MySQL InnoDB Cluster (2)
Restart mysqld to use the new configuration:
[mysql3 ~]# systemctl restart mysqld
Create a single instance cluster
[mysql3 ~]# mysqlsh
mysql-js> dba.checkInstanceConfiguration('root@mysql3:3306')
mysql-js> \c root@mysql3:3306
mysql-js> cluster = dba.createCluster('perconalive')
Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.
64 / 152
LAB4: Cluster Status
mysql-js> cluster.status()
{
"clusterName": "perconalive",
"defaultReplicaSet": {
"name": "default",
"primary": "mysql3:3306",
"ssl": "DISABLED",
"status": "OK_NO_TOLERANCE",
"statusText": "Cluster is NOT tolerant to any failures.",
"topology": {
"mysql3:3306": {
"address": "mysql3:3306",
"mode": "R/W",
"readReplicas": {},
"role": "HA",
"status": "ONLINE"
}
}
}
}
Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.
65 / 152
LAB5: add mysql4 to the cluster (1)
Add mysql4 to the Group:
restore the backup
set the purged GTIDs
use MySQL Shell
 
Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.
66 / 152
LAB5: add mysql4 to the cluster (2)
Copy the backup from mysql1 to mysql4:
[mysql1 ~]# scp -r /tmp/backup mysql4:/tmp
And restore it:
[mysql4 ~]# systemctl stop mysqld
[mysql4 ~]# rm -rf /var/lib/mysql/*
[mysql4 ~]# xtrabackup --copy-back --target-dir=/tmp/backup
[mysql4 ~]# chown -R mysql. /var/lib/mysql
Start MySQL on mysql4:
[mysql4 ~]# systemctl start mysqld
[mysql4 ~]# mysql_upgrade
67 / 152
LAB5: MySQL Shell to add an instance (3)
[mysql4 ~]# mysqlsh
Let's verify the config:
mysql-js> dba.checkInstanceConfiguration('root@mysql4:3306')
And change the configuration:
mysql-js> dba.configureLocalInstance()
Restart the service to enable the changes:
[mysql4 ~]# systemctl restart mysqld
Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.
70 / 152
LAB5: MySQL InnoDB Cluster (4)
Group of 2 instances
Find the latest purged GTIDs:
[mysql4 ~]# cat /tmp/backup/xtrabackup_binlog_info
mysql-bin.000002 167646328 b346474c-8601-11e6-9b39-08002718d305:1-77177
Connect to mysql4 and set GTID_PURGED
[mysql4 ~]# mysqlsh
mysql-js> \c root@mysql4:3306
mysql-js> \sql
mysql-sql> RESET MASTER;
mysql-sql> SET global gtid_purged="VALUE FOUND PREVIOUSLY";
Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.
71 / 152
LAB5: MySQL InnoDB Cluster (5)
mysql-sql> \js
mysql-js> dba.checkInstanceConfiguration('root@mysql4:3306')
mysql-js> \c root@mysql3:3306
mysql-js> cluster = dba.getCluster()
mysql-js> cluster.checkInstanceState('root@mysql4:3306')
mysql-js> cluster.addInstance("root@mysql4:3306")
mysql-js> cluster.status()
Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.
72 / 152
Cluster Status
mysql-js> cluster.status()
{
"clusterName": "perconalive",
"defaultReplicaSet": {
"name": "default",
"primary": "mysql3:3306",
"ssl": "DISABLED",
"status": "OK_NO_TOLERANCE",
"statusText": "Cluster is NOT tolerant to any failures. 1 member is not active"
"topology": {
"mysql3:3306": {
"address": "mysql3:3306",
"mode": "R/W",
"readReplicas": {},
"role": "HA",
"status": "ONLINE"
},
"mysql4:3306": {
"address": "mysql4:3306",
"mode": "R/O",
"readReplicas": {},
"role": "HA",
"status": "RECOVERING"
}
}
}
}
Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.
73 / 152
Recovering progress
On standard MySQL, monitor the group_replication_recovery channel to see
the progress:
mysql4> show slave status for channel 'group_replication_recovery'\G
*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
Master_Host: mysql3
Master_User: mysql_innodb_cluster_rpl_user
...
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
...
Retrieved_Gtid_Set: 6e7d7848-860f-11e6-92e4-08002718d305:1-6,
7c1f0c2d-860d-11e6-9df7-08002718d305:1-15,
b346474c-8601-11e6-9b39-08002718d305:1964-77177,
e8c524df-860d-11e6-9df7-08002718d305:1-2
Executed_Gtid_Set: 7c1f0c2d-860d-11e6-9df7-08002718d305:1-7,
b346474c-8601-11e6-9b39-08002718d305:1-45408,
e8c524df-860d-11e6-9df7-08002718d305:1-2
...
Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.
74 / 152
point the application
to the cluster
Migrate the application
Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.
75 / 152
LAB6: Migrate the application
Make sure the gtid_executed range on mysql2 is lower than or equal to the one on mysql3
mysql[2-3]> show global variables like 'gtid_executed'\G
When they are OK, stop asynchronous replication on mysql2 and mysql3:
mysql2> stop slave;
mysql3> stop slave;
Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.
77 / 152
LAB6: Migrate the application
Now we need to point the application to mysql3, this is the only downtime !
...
[ 21257s] threads: 4, tps: 12.00, reads: 167.94, writes: 47.98, response time: 18
[ 21258s] threads: 4, tps: 6.00, reads: 83.96, writes: 23.99, response time: 14
[ 21259s] threads: 4, tps: 7.00, reads: 98.05, writes: 28.01, response time: 16
[ 31250s] threads: 4, tps: 8.00, reads: 111.95, writes: 31.99, response time: 30
[ 31251s] threads: 4, tps: 11.00, reads: 154.01, writes: 44.00, response time: 13
[ 31252s] threads: 4, tps: 11.00, reads: 153.94, writes: 43.98, response time: 12
[ 31253s] threads: 4, tps: 10.01, reads: 140.07, writes: 40.02, response time: 17
^C
[mysql1 ~]# run_app.sh mysql3
Now they can forget about mysql1:
mysql[2-3]> reset slave all;
Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.
79 / 152
previous slave (mysql2) can now be part of the cluster
Add a third instance
Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.
80 / 152
LAB7: Add mysql2 to the group
We first upgrade to MySQL 8.0.3 :
[mysql2 ~]# systemctl stop mysqld
[mysql2 ~]# rpm -Uvh /root/rpms/mysql*rpm
[mysql2 ~]# systemctl start mysqld
[mysql2 ~]# mysql_upgrade
and then we validate the instance using MySQL Shell and we configure it:
[mysql2 ~]# mysqlsh
mysql-js> dba.checkInstanceConfiguration('root@mysql2:3306')
mysql-js> dba.configureLocalInstance()
[mysql2 ~]# systemctl restart mysqld
Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.
85 / 152
LAB7: Add mysql2 to the group (2)
Back in MySQL Shell we add the new instance:
[mysql2 ~]# mysqlsh
mysql-js> dba.checkInstanceConfiguration('root@mysql2:3306')
mysql-js> \c root@mysql3:3306
mysql-js> cluster = dba.getCluster()
mysql-js> cluster.addInstance("root@mysql2:3306")
mysql-js> cluster.status()
Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.
87 / 152
LAB7: Add mysql2 to the group (3)
{
"clusterName": "perconalive",
"defaultReplicaSet": {
"name": "default",
"primary": "mysql3:3306",
"status": "OK",
"statusText": "Cluster is ONLINE and can tolerate up to ONE failure.",
"topology": {
"mysql2:3306": {
"address": "mysql2:3306",
"mode": "R/O",
"readReplicas": {},
"role": "HA",
"status": "ONLINE"
},
"mysql3:3306": {
"address": "mysql3:3306",
"mode": "R/W",
"readReplicas": {},
"role": "HA",
"status": "ONLINE"
},
"mysql4:3306": {
"address": "mysql4:3306",
"mode": "R/O",
"readReplicas": {},Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.
88 / 152
writing to a single server
Single Primary Mode
Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.
89 / 152
Default = Single Primary Mode
By default, MySQL InnoDB Cluster enables Single Primary Mode.
mysql> show global variables like 'group_replication_single_primary_mode';
+---------------------------------------+-------+
| Variable_name | Value |
+---------------------------------------+-------+
| group_replication_single_primary_mode | ON |
+---------------------------------------+-------+
In Single Primary Mode, a single member acts as the writable master (PRIMARY) and the
rest of the members act as hot-standbys (SECONDARY).
The group itself coordinates and configures itself automatically to determine which
member will act as the PRIMARY, through a leader election mechanism.
Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.
92 / 152
Who´s the Primary Master ? old fashion style
As the Primary Master is elected, all nodes that are part of the group know which one was
elected. This value is exposed in status variables:
mysql> show status like 'group_replication_primary_member';
+----------------------------------+--------------------------------------+
| Variable_name | Value |
+----------------------------------+--------------------------------------+
| group_replication_primary_member | 28a4e51f-860e-11e6-bdc4-08002718d305 |
+----------------------------------+--------------------------------------+
mysql> select member_host as "primary master"
from performance_schema.global_status
join performance_schema.replication_group_members
where variable_name = 'group_replication_primary_member'
and member_id=variable_value;
+---------------+
| primary master|
+---------------+
| mysql3 |
+---------------+
Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.
95 / 152
Who´s the Primary Master ? new fashion style
mysql> select member_host
from performance_schema.replication_group_members
where member_role='PRIMARY';
+-------------+
| member_host |
+-------------+
| mysql3 |
+-------------+
Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.
96 / 152
Create a Multi-Primary Cluster:
It´s also possible to create a Multi-Primary Cluster using the Shell:
mysql-js> cluster=dba.createCluster('perconalive',{multiMaster: true})
A new InnoDB cluster will be created on instance 'root@mysql3:3306'.
The MySQL InnoDB cluster is going to be setup in advanced Multi-Master Mode.
Before continuing you have to confirm that you understand the requirements and
limitations of Multi-Master Mode. Please read the manual before proceeding.
I have read the MySQL InnoDB cluster manual and I understand the requirements
and limitations of advanced Multi-Master Mode.
Confirm [y|N]:
Or you can force it to avoid interaction (for automation) :
> cluster=dba.createCluster('perconalive',{multiMaster: true, force: true})
Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.
101 / 152
get more info
Monitoring
Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.
102 / 152
Performance Schema
Group Replication uses Performance_Schema to expose status
mysql3> SELECT * FROM performance_schema.replication_group_members\G
*************************** 1. row ***************************
CHANNEL_NAME: group_replication_applier
MEMBER_ID: ade14d5c-9e1e-11e7-b034-08002718d305
MEMBER_HOST: mysql4
MEMBER_PORT: 3306
MEMBER_STATE: ONLINE
MEMBER_ROLE: SECONDARY
MEMBER_VERSION: 8.0.3
*************************** 2. row ***************************
CHANNEL_NAME: group_replication_applier
MEMBER_ID: b9d01593-9dfb-11e7-8ca6-08002718d305
MEMBER_HOST: mysql3
MEMBER_PORT: 3306
MEMBER_STATE: ONLINE
MEMBER_ROLE: PRIMARY
MEMBER_VERSION: 8.0.3
Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.
103 / 152
mysql3> SELECT * FROM performance_schema.replication_connection_status\G
*************************** 1. row ***************************
CHANNEL_NAME: group_replication_applier
GROUP_NAME: 8fc848d7-9e1c-11e7-9407...
SOURCE_UUID: 8fc848d7-9e1c-11e7-9407...
THREAD_ID: NULL
SERVICE_STATE: ON
COUNT_RECEIVED_HEARTBEATS: 0
LAST_HEARTBEAT_TIMESTAMP: 0000-00-00 00:00:00
RECEIVED_TRANSACTION_SET: 8fc848d7-9e1c-11e7-9407...
b9d01593-9dfb-11e7-8ca6-08002718d305:1-21,
da2f0910-8767-11e6-b82d-08002718d305:1-164741
LAST_ERROR_NUMBER: 0
LAST_ERROR_MESSAGE:
LAST_ERROR_TIMESTAMP: 0000-00-00 00:00:00
LAST_QUEUED_TRANSACTION: 8fc848d7-9e1c-11e7-9407...
LAST_QUEUED_TRANSACTION_ORIGINAL_COMMIT_TIMESTAMP: 0000-00-00 00:00:00
LAST_QUEUED_TRANSACTION_IMMEDIATE_COMMIT_TIMESTAMP: 0000-00-00 00:00:00
LAST_QUEUED_TRANSACTION_START_QUEUE_TIMESTAMP: 2017-09-20 16:22:36.486...
LAST_QUEUED_TRANSACTION_END_QUEUE_TIMESTAMP: 2017-09-20 16:22:36.486...
QUEUEING_TRANSACTION:
QUEUEING_TRANSACTION_ORIGINAL_COMMIT_TIMESTAMP: 0000-00-00 00:00:00
QUEUEING_TRANSACTION_IMMEDIATE_COMMIT_TIMESTAMP: 0000-00-00 00:00:00
QUEUEING_TRANSACTION_START_QUEUE_TIMESTAMP: 0000-00-00 00:00:00
Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.
104 / 152
Member State
These are the different possible states for a node member:
ONLINE
OFFLINE
RECOVERING
ERROR: when a node is leaving but the plugin was not instructed to stop
UNREACHABLE
Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.
105 / 152
Status information & metrics
Members
mysql> SELECT member_host, member_state, member_role
FROM performance_schema.replication_group_members;
+-------------+--------------+-------------+
| member_host | member_state | member_role |
+-------------+--------------+-------------+
| mysql4 | ONLINE | SECONDARY |
| mysql3 | ONLINE | PRIMARY |
+-------------+--------------+-------------+
2 rows in set (0.00 sec)
Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.
107 / 152
Status information & metrics - connections
mysql> SELECT * FROM performance_schema.replication_connection_status\G
*************************** 1. row ***************************
CHANNEL_NAME: group_replication_applier
GROUP_NAME: 8fc848d7-9e1c-11e7-9407-...
SOURCE_UUID: 8fc848d7-9e1c-11e7-9407-...
THREAD_ID: NULL
SERVICE_STATE: ON
COUNT_RECEIVED_HEARTBEATS: 0
LAST_HEARTBEAT_TIMESTAMP: 0000-00-00 00:00:00
RECEIVED_TRANSACTION_SET: 8fc848d7-9e1c-11e7-9407-...
b9d01593-9dfb-11e7-8ca6-08002718d305:1-21,
da2f0910-8767-11e6-b82d-08002718d305:1-164741
LAST_ERROR_NUMBER: 0
LAST_ERROR_MESSAGE:
LAST_ERROR_TIMESTAMP: 0000-00-00 00:00:00
LAST_QUEUED_TRANSACTION: 8fc848d7-9e1c-11e7-9407-...
LAST_QUEUED_TRANSACTION_ORIGINAL_COMMIT_TIMESTAMP: 0000-00-00 00:00:00
LAST_QUEUED_TRANSACTION_IMMEDIATE_COMMIT_TIMESTAMP: 0000-00-00 00:00:00
LAST_QUEUED_TRANSACTION_START_QUEUE_TIMESTAMP: 2017-09-20 16:22:36.4864...
LAST_QUEUED_TRANSACTION_END_QUEUE_TIMESTAMP: 2017-09-20 16:22:36.4865...
QUEUEING_TRANSACTION:
QUEUEING_TRANSACTION_ORIGINAL_COMMIT_TIMESTAMP: 0000-00-00 00:00:00
QUEUEING_TRANSACTION_IMMEDIATE_COMMIT_TIMESTAMP: 0000-00-00 00:00:00
QUEUEING_TRANSACTION_START_QUEUE_TIMESTAMP: 0000-00-00 00:00:00
Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.
109 / 152
Status information & metrics
Previously there were only local node statistics; now they are exposed across the
whole Group
mysql> select * from performance_schema.replication_group_member_stats\G
************************** 1. row ***************************
CHANNEL_NAME: group_replication_applier
VIEW_ID: 15059231192196925:2
MEMBER_ID: ade14d5c-9e1e-11e7-b034-08002...
COUNT_TRANSACTIONS_IN_QUEUE: 0
COUNT_TRANSACTIONS_CHECKED: 27992
COUNT_CONFLICTS_DETECTED: 0
COUNT_TRANSACTIONS_ROWS_VALIDATING: 0
TRANSACTIONS_COMMITTED_ALL_MEMBERS: 8fc848d7-9e1c-11e7-9407-08002...
b9d01593-9dfb-11e7-8ca6-08002718d305:1-21,
da2f0910-8767-11e6-b82d-08002718d305:1-164741
LAST_CONFLICT_FREE_TRANSACTION: 8fc848d7-9e1c-11e7-9407-08002...
COUNT_TRANSACTIONS_REMOTE_IN_APPLIER_QUEUE: 0
COUNT_TRANSACTIONS_REMOTE_APPLIED: 27992
COUNT_TRANSACTIONS_LOCAL_PROPOSED: 0
COUNT_TRANSACTIONS_LOCAL_ROLLBACK: 0
Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.
111 / 152
************************** 2. row ***************************
CHANNEL_NAME: group_replication_applier
VIEW_ID: 15059231192196925:2
MEMBER_ID: b9d01593-9dfb-11e7-8ca6-08002...
COUNT_TRANSACTIONS_IN_QUEUE: 0
COUNT_TRANSACTIONS_CHECKED: 28000
COUNT_CONFLICTS_DETECTED: 0
COUNT_TRANSACTIONS_ROWS_VALIDATING: 0
TRANSACTIONS_COMMITTED_ALL_MEMBERS: 8fc848d7-9e1c-11e7-9407-08002...
b9d01593-9dfb-11e7-8ca6-08002718d305:1-21,
da2f0910-8767-11e6-b82d-08002718d305:1-164741
LAST_CONFLICT_FREE_TRANSACTION: 8fc848d7-9e1c-11e7-9407-08002...
COUNT_TRANSACTIONS_REMOTE_IN_APPLIER_QUEUE: 0
COUNT_TRANSACTIONS_REMOTE_APPLIED: 1
COUNT_TRANSACTIONS_LOCAL_PROPOSED: 28000
COUNT_TRANSACTIONS_LOCAL_ROLLBACK: 0
Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.
112 / 152
Performance_Schema
You can find GR information in the following Performance_Schema tables:
replication_applier_configuration
replication_applier_status
replication_applier_status_by_worker
replication_connection_configuration
replication_connection_status
replication_group_member_stats
replication_group_members
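For example, a quick sanity check on any member is to query the applier status directly (a minimal sketch; the exact set of columns can vary between versions):
mysql> SELECT CHANNEL_NAME, SERVICE_STATE
       FROM performance_schema.replication_applier_status;
# the group_replication_applier channel should report SERVICE_STATE = ON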
Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.
113 / 152
Status during recovery
mysql> SHOW SLAVE STATUS FOR CHANNEL 'group_replication_recovery'\G
*************************** 1. row ***************************
Slave_IO_State:
Master_Host: <NULL>
Master_User: gr_repl
Master_Port: 0
...
Relay_Log_File: mysql4-relay-bin-group_replication_recovery.000001
...
Slave_IO_Running: No
Slave_SQL_Running: No
...
Executed_Gtid_Set: 5de4400b-3dd7-11e6-8a71-08002774c31b:1-814089,
afb80f36-2bff-11e6-84e0-0800277dd3bf:1-5718
...
Channel_Name: group_replication_recovery
Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.
115 / 152
Sys Schema
The easiest way to detect if a node is a member of the primary component (for example when
your nodes are partitioned due to network issues), and therefore a valid candidate for
routing queries to it, is to use the sys schema.
Additional information for sys can be found at https://goo.gl/XFp3bt
On the primary node:
[mysql3 ~]# mysql < /root/addition_to_sys_mysql8.sql
Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.
116 / 152
Sys Schema
Is this node part of PRIMARY Partition:
mysql3> SELECT sys.gr_member_in_primary_partition();
+------------------------------------+
| sys.gr_node_in_primary_partition() |
+------------------------------------+
| YES |
+------------------------------------+
To use as healthcheck:
mysql3> SELECT * FROM sys.gr_member_routing_candidate_status;
+------------------+-----------+---------------------+----------------------+
| viable_candidate | read_only | transactions_behind | transactions_to_cert |
+------------------+-----------+---------------------+----------------------+
| YES | YES | 0 | 0 |
+------------------+-----------+---------------------+----------------------+
Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.
118 / 152
LAB8: Sys Schema - Health Check
On one of the non Primary nodes, run the following command:
mysql-sql> flush tables with read lock;
Now you can verify what the healthcheck exposes to you:
mysql-sql> SELECT * FROM sys.gr_member_routing_candidate_status;
+------------------+-----------+---------------------+----------------------+
| viable_candidate | read_only | transactions_behind | transactions_to_cert |
+------------------+-----------+---------------------+----------------------+
| YES | YES | 950 | 0 |
+------------------+-----------+---------------------+----------------------+
mysql-sql> UNLOCK TABLES;
Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.
121 / 152
application interaction
MySQL Router
Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.
122 / 152
MySQL Router
MySQL Router is lightweight middleware that provides transparent routing between your
application and backend MySQL Servers. It can be used for a wide variety of use cases,
such as providing high availability and scalability by effectively routing database traffic to
appropriate backend MySQL Servers.
MySQL Router doesn´t require any specific configuration. It configures itself automatically
(bootstrap) using MySQL InnoDB Cluster´s metadata.
Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.
124 / 152
LAB9: MySQL Router
We will now use mysqlrouter between our application and the cluster.
Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.
125 / 152
LAB9: MySQL Router (2)
Configure MySQL Router that will run on the app server (mysql1). We bootstrap it using
the Primary-Master:
[root@mysql1 ~]# mysqlrouter --bootstrap mysql3:3306 --user mysqlrouter
Please enter MySQL password for root:
WARNING: The MySQL server does not have SSL ...
Bootstrapping system MySQL Router instance...
MySQL Router has now been configured for the InnoDB cluster 'perconalive'.
The following connection information can be used to connect to the cluster.
Classic MySQL protocol connections to cluster 'perconalive':
- Read/Write Connections: localhost:6446
- Read/Only Connections: localhost:6447
X protocol connections to cluster 'perconalive':
- Read/Write Connections: localhost:64460
- Read/Only Connections: localhost:64470
[root@mysql1 ~]# chown -R mysqlrouter. /var/lib/mysqlrouter
Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.
126 / 152
LAB9: MySQL Router (3)
Now let's modify the configuration file to listen on port 3306:
in /etc/mysqlrouter/mysqlrouter.conf:
[routing:perconalive_default_rw]
-bind_port=6446
+bind_port=3306
We can stop mysqld on mysql1 and start mysqlrouter into a screen session:
[mysql1 ~]# systemctl stop mysqld
[mysql1 ~]# systemctl start mysqlrouter
Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.
129 / 152
LAB9: MySQL Router (4)
Before killing a member we will change systemd´s default behavior that restarts
mysqld immediately:
in /usr/lib/systemd/system/mysqld.service add the following under
[Service]
RestartSec=30
[mysql3 ~]# systemctl daemon-reload
Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.
131 / 152
LAB9: MySQL Router (5)
Now we can point the application to the router (back to mysql1):
[mysql1 ~]# run_app.sh
Check the app and kill mysqld on mysql3 (the Primary Master R/W node) !
[mysql3 ~]# kill -9 $(pidof mysqld)
mysql2> select member_host
from performance_schema.replication_group_members
where member_role='PRIMARY';
+-------------+
| member_host |
+-------------+
| mysql4 |
+-------------+
Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.
134 / 152
ProxySQL / HA Proxy / F5 / ...
3rd party router/proxy
Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.
135 / 152
3rd party router/proxy
MySQL InnoDB Cluster can also work with a third party router / proxy.
If you need some specific features that are not yet available in MySQL Router, like
transparent R/W splitting, then you can use your software of choice.
The important part of such an implementation is to use a good health check to verify if the
MySQL server you plan to route traffic to is in a valid state (see the sketch below).
MySQL Router implements that natively, and it's very easy to deploy.
ProxySQL also has native support for Group Replication, which makes it maybe the best
choice for advanced users.
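As an illustration only (not part of the lab), a trivial health check script for such a proxy could be built on the sys view used in LAB8; the script name and the way your proxy invokes it are assumptions:
#!/bin/bash
# gr_healthcheck.sh (hypothetical name): exit 0 when this member is a viable
# routing candidate that accepts writes, exit 1 otherwise.
RESULT=$(mysql -N -B -e "SELECT viable_candidate, read_only \
  FROM sys.gr_member_routing_candidate_status;" 2>/dev/null)
VIABLE=$(echo "$RESULT" | awk '{print $1}')
READ_ONLY=$(echo "$RESULT" | awk '{print $2}')
if [ "$VIABLE" = "YES" ] && [ "$READ_ONLY" = "NO" ]; then
    exit 0   # valid destination for R/W traffic
fi
exit 1       # do not route writes to this member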
Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.
140 / 152
operational tasks
Recovering Node
Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.
141 / 152
Recovering Nodes/Members
The old master (mysql3) got killed.
MySQL got restarted automatically by systemd
Let´s add mysql3 back to the cluster
Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.
144 / 152
LAB10: Recovering Nodes/Members
[mysql3 ~]# mysqlsh
mysql-js> \c root@mysql4:3306 # The current master
mysql-js> cluster = dba.getCluster()
mysql-js> cluster.status()
mysql-js> cluster.rejoinInstance("root@mysql3:3306")
Rejoining the instance to the InnoDB cluster. Depending on the original
problem that made the instance unavailable, the rejoin operation might not be
successful and further manual steps will be needed to fix the underlying
problem.
Please monitor the output of the rejoin operation and take necessary action if
the instance cannot rejoin.
Please provide the password for 'root@mysql3:3306':
Rejoining instance to the cluster ...
The instance 'root@mysql3:3306' was successfully rejoined on the cluster.
The instance 'mysql3:3306' was successfully added to the MySQL Cluster.
Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.
145 / 152
mysql-js> cluster.status()
{ "clusterName": "perconalive",
"defaultReplicaSet": {
"name": "default",
"primary": "mysql4:3306",
"status": "OK",
"statusText": "Cluster is ONLINE and can tolerate up to ONE failure.",
"topology": {
"mysql2:3306": {
"address": "mysql2:3306",
"mode": "R/O",
"readReplicas": {},
"role": "HA",
"status": "ONLINE" },
"mysql3:3306": {
"address": "mysql3:3306",
"mode": "R/O",
"readReplicas": {},
"role": "HA",
"status": "ONLINE" },
"mysql4:3306": {
"address": "mysql4:3306",
"mode": "R/W",
"readReplicas": {},
"role": "HA",
"status": "ONLINE" }
}
}
}
Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.
146 / 152
Recovering Nodes/Members (automatically)
This time before killing a member of the group, we will persist the configuration on disk in
my.cnf.
We will again use the same MySQL Shell command as previously,
dba.configureLocalInstance(), but this time when all nodes are already part
of the Group.
Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.
148 / 152
LAB10: Recovering Nodes/Members (2)
Verify that all nodes are ONLINE.
...
mysql-js> cluster.status()
Then on all nodes run:
mysql-js> dba.configureLocalInstance()
Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.
150 / 152
LAB10: Recovering Nodes/Members (3)
Kill one node again:
[mysql3 ~]# kill -9 $(pidof mysqld)
systemd will restart mysqld automatically; verify that the node rejoined the group.
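A quick way to verify, reusing the Shell commands from the previous labs (a sketch; any surviving member can be used as the connection target):
[mysql4 ~]# mysqlsh
mysql-js> \c root@mysql4:3306
mysql-js> cluster = dba.getCluster()
mysql-js> cluster.status()   // mysql3:3306 should come back as ONLINE once recovery finishes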
Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.
151 / 152
Thank you !
Any Questions ?
Copyright @ 2017 Oracle and/or its affiliates. All rights reserved.
152 / 152
NewMind AI Weekly Chronicles - August'25-Week II
Spectral efficient network and resource selection model in 5G networks
7 ChatGPT Prompts to Help You Define Your Ideal Customer Profile.pdf
Machine Learning_overview_presentation.pptx
Profit Center Accounting in SAP S/4HANA, S4F28 Col11
Programs and apps: productivity, graphics, security and other tools
20250228 LYD VKU AI Blended-Learning.pptx
Building Integrated photovoltaic BIPV_UPV.pdf
Per capita expenditure prediction using model stacking based on satellite ima...
MIND Revenue Release Quarter 2 2025 Press Release
A Presentation on Artificial Intelligence
A comparative analysis of optical character recognition models for extracting...
gpt5_lecture_notes_comprehensive_20250812015547.pdf
sap open course for s4hana steps from ECC to s4
Review of recent advances in non-invasive hemoglobin estimation
MYSQL Presentation for SQL database connectivity
cuic standard and advanced reporting.pdf

MySQL InnoDB Cluster and Group Replication in a Nutshell

  • 20. launch run_app.sh on mysql1 in a screen session and verify that mysql2 is a running slave LAB1: Current situation Copyright @ 2017 Oracle and/or its affiliates. All rights reserved. 20 / 152
  • 21. Summary   +--------+----------+--------------+-----------------+ | | ROLE | SSH PORT | INTERNAL IP | +--------+----------+--------------+-----------------+ | | | | | | mysql1 | master | 8821 | 192.168.56.11 | | | | | | | mysql2 | slave | 8822 | 192.168.56.12 | | | | | | | mysql3 | n/a | 8823 | 192.168.56.13 | | | | | | | mysql4 | n/a | 8824 | 192.168.56.14 | | | | | | +--------+----------+--------------+-----------------+ Copyright @ 2017 Oracle and/or its affiliates. All rights reserved. 21 / 152
  • 22. Easy High Availability MySQL InnoDB Cluster Copyright @ 2017 Oracle and/or its affiliates. All rights reserved. 22 / 152
  • 23. InnoDB cluster Ease-of-Use Extreme Scale-Out Out-of-Box Solution Built-in HA High Performance Everything Integrated Copyright @ 2017 Oracle and/or its affiliates. All rights reserved. 23 / 152
  • 24. InnoDB Cluster's Architecture Application MySQL Connector MySQL Router MySQL Shell InnoDB cluster Application MySQL Connector MySQL Router Mp M M Copyright @ 2017 Oracle and/or its affiliates. All rights reserved. 24 / 152
  • 25. Group Replication: heart of MySQL InnoDB Cluster Copyright @ 2017 Oracle and/or its affiliates. All rights reserved. 25 / 152
  • 26. Group Replication: heart of MySQL InnoDB Cluster Copyright @ 2017 Oracle and/or its affiliates. All rights reserved. 26 / 152
  • 27. MySQL Group Replication but what is it ?!? Copyright @ 2017 Oracle and/or its affiliates. All rights reserved. 27 / 152
  • 28. MySQL Group Replication but what is it ?!? GR is a plugin for MySQL, made by MySQL and packaged with MySQL Copyright @ 2017 Oracle and/or its affiliates. All rights reserved. 28 / 152
  • 29. MySQL Group Replication but what is it ?!? GR is a plugin for MySQL, made by MySQL and packaged with MySQL GR is an implementation of Replicated Database State Machine theory Copyright @ 2017 Oracle and/or its affiliates. All rights reserved. 29 / 152
  • 30. MySQL Group Replication but what is it ?!? GR is a plugin for MySQL, made by MySQL and packaged with MySQL GR is an implementation of Replicated Database State Machine theory Paxos based protocol Copyright @ 2017 Oracle and/or its affiliates. All rights reserved. 30 / 152
  • 31. MySQL Group Replication but what is it ?!? GR is a plugin for MySQL, made by MySQL and packaged with MySQL GR is an implementation of Replicated Database State Machine theory Paxos based protocol GR allows writing on all Group Members (cluster nodes) simultaneously while retaining consistency Copyright @ 2017 Oracle and/or its affiliates. All rights reserved. 31 / 152
  • 32. MySQL Group Replication but what is it ?!? GR is a plugin for MySQL, made by MySQL and packaged with MySQL GR is an implementation of Replicated Database State Machine theory Paxos based protocol GR allows writing on all Group Members (cluster nodes) simultaneously while retaining consistency GR implements conflict detection and resolution Copyright @ 2017 Oracle and/or its affiliates. All rights reserved. 32 / 152
  • 33. MySQL Group Replication but what is it ?!? GR is a plugin for MySQL, made by MySQL and packaged with MySQL GR is an implementation of Replicated Database State Machine theory Paxos based protocol GR allows writing on all Group Members (cluster nodes) simultaneously while retaining consistency GR implements conflict detection and resolution GR allows automatic distributed recovery Copyright @ 2017 Oracle and/or its affiliates. All rights reserved. 33 / 152
  • 34. MySQL Group Replication but what is it ?!? GR is a plugin for MySQL, made by MySQL and packaged with MySQL GR is an implementation of Replicated Database State Machine theory Paxos based protocol GR allows writing on all Group Members (cluster nodes) simultaneously while retaining consistency GR implements conflict detection and resolution GR allows automatic distributed recovery Supported on all MySQL platforms !! Linux, Windows, Solaris, OSX, FreeBSD Copyright @ 2017 Oracle and/or its affiliates. All rights reserved. 34 / 152
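Since GR ships as a plugin, you can quickly verify that it is available on a member before going any further. A minimal sketch, assuming the standard plugin library name shipped with the server packages (the Shell and the labs below normally take care of this for you):

mysql> SELECT plugin_name, plugin_status FROM information_schema.plugins WHERE plugin_name = 'group_replication';
mysql> INSTALL PLUGIN group_replication SONAME 'group_replication.so';   -- only if it is not loaded yet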
  • 35. And for users ? Copyright @ 2017 Oracle and/or its affiliates. All rights reserved. 35 / 152
  • 36. And for users ? no longer necessary to handle server fail-over manually or with a complicated script Copyright @ 2017 Oracle and/or its affiliates. All rights reserved. 36 / 152
  • 37. And for users ? no longer necessary to handle server fail-over manually or with a complicated script GR provides fault tolerance Copyright @ 2017 Oracle and/or its affiliates. All rights reserved. 37 / 152
  • 38. And for users ? no longer necessary to handle server fail-over manually or with a complicated script GR provides fault tolerance GR enables update-everywhere setups Copyright @ 2017 Oracle and/or its affiliates. All rights reserved. 38 / 152
  • 39. And for users ? no longer necessary to handle server fail-over manually or with a complicated script GR provides fault tolerance GR enables update-everywhere setups GR handles crashes, failures, re-connects automatically Copyright @ 2017 Oracle and/or its affiliates. All rights reserved. 39 / 152
  • 40. And for users ? no longer necessary to handle server fail-over manually or with a complicated script GR provides fault tolerance GR enables update-everywhere setups GR handles crashes, failures, re-connects automatically Allows an easy setup of a highly available MySQL service! Copyright @ 2017 Oracle and/or its affiliates. All rights reserved. 40 / 152
  • 41. ready ? Migration from Master-Slave to GR Copyright @ 2017 Oracle and/or its affiliates. All rights reserved. 41 / 152
  • 42. The plan Copyright @ 2017 Oracle and/or its affiliates. All rights reserved. 42 / 152
  • 43. 1) We install and setup MySQL InnoDB Cluster on one of the new servers The plan Copyright @ 2017 Oracle and/or its affiliates. All rights reserved. 43 / 152
  • 44. 2) We restore a backup 3) setup asynchronous replication on the new server. The plan Copyright @ 2017 Oracle and/or its affiliates. All rights reserved. 44 / 152
  • 45. 4) We add a new instance to our group The plan Copyright @ 2017 Oracle and/or its affiliates. All rights reserved. 45 / 152
  • 46. 5) We point the application to one of our new nodes. 6) We wait and check that asynchronous replication is caught up 7) we stop those asynchronous slaves The plan Copyright @ 2017 Oracle and/or its affiliates. All rights reserved. 46 / 152
  • 47. 8) We attach the mysql2 slave to the group The plan Copyright @ 2017 Oracle and/or its affiliates. All rights reserved. 47 / 152
  • 48. 9) Use MySQL Router for directing traffic The plan Copyright @ 2017 Oracle and/or its affiliates. All rights reserved. 48 / 152
  • 49. Latest MySQL 8.0.3-RC is already installed on mysql3. Let´s take a backup on mysql1: [mysql1 ~]# xtrabackup --backup --target-dir=/tmp/backup --user=root --password=X --host=127.0.0.1 [mysql1 ~]# xtrabackup --prepare --target-dir=/tmp/backup LAB2: Prepare mysql3 Asynchronous slave 49 / 152
  • 50. LAB2: Prepare mysql3 (2) Asynchronous slave Copy the backup from mysql1 to mysql3: [mysql1 ~]# scp -r /tmp/backup mysql3:/tmp And restore it: [mysql3 ~]# systemctl stop mysqld [mysql3 ~]# rm -rf /var/lib/mysql/* [mysql3 ~]# xtrabackup --copy-back --target-dir=/tmp/backup [mysql3 ~]# chown -R mysql. /var/lib/mysql 50 / 152
  • 51. LAB3: mysql3 as asynchronous slave (2) Asynchronous slave Configure /etc/my.cnf with the minimal requirements: [mysqld] ... server_id=3 enforce_gtid_consistency = on gtid_mode = on #log_bin # new default #log_slave_updates # new default Copyright @ 2017 Oracle and/or its affiliates. All rights reserved. 51 / 152
  • 52. LAB2: Prepare mysql3 (3) Asynchronous slave Let´s start MySQL on mysql3: [mysql3 ~]# systemctl start mysqld Copyright @ 2017 Oracle and/or its affiliates. All rights reserved. 52 / 152
  • 53. LAB2: Prepare mysql3 (3) Asynchronous slave Let´s start MySQL on mysql3: [mysql3 ~]# systemctl start mysqld [mysql3 ~]# mysql_upgrade Copyright @ 2017 Oracle and/or its affiliates. All rights reserved. 53 / 152
  • 54. find the GTIDs purged change MASTER set the purged GTIDs start replication LAB3: mysql3 as asynchronous slave (1)   Copyright @ 2017 Oracle and/or its affiliates. All rights reserved. 54 / 152
  • 55. LAB3: mysql3 as asynchronous slave (2) Find the latest purged GTIDs: [mysql3 ~]# cat /tmp/backup/xtrabackup_binlog_info mysql-bin.000002 167646328 b346474c-8601-11e6-9b39-08002718d305:1-771 Connect to mysql3 and setup replication: mysql> CHANGE MASTER TO MASTER_HOST="mysql1", MASTER_USER="repl_async", MASTER_PASSWORD='Xslave', MASTER_AUTO_POSITION=1; mysql> RESET MASTER; mysql> SET global gtid_purged="VALUE FOUND PREVIOUSLY"; mysql> START SLAVE; Check that you receive the application´s traffic 55 / 152
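A quick sanity check that the new asynchronous slave is really applying the application's writes; a hedged sketch using standard replication commands (field names as in a stock 5.7/8.0 SHOW SLAVE STATUS output):

mysql3> SHOW SLAVE STATUS\G
        (expect Slave_IO_Running: Yes, Slave_SQL_Running: Yes and Seconds_Behind_Master close to 0)
mysql3> SELECT @@global.gtid_executed\G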
  • 56. Administration made easy and more... MySQL-Shell Copyright @ 2017 Oracle and/or its affiliates. All rights reserved. 56 / 152
  • 57. MySQL Shell The MySQL Shell is an interactive Javascript, Python, or SQL interface supporting development and administration for MySQL. MySQL Shell includes the AdminAPI--available in JavaScript and Python--which enables you to set up and manage InnoDB clusters. It provides a modern and fluent API which hides the complexity associated with configuring, provisioning, and managing an InnoDB cluster, without sacrificing power, flexibility, or security. Copyright @ 2017 Oracle and/or its affiliates. All rights reserved. 57 / 152
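For reference, a minimal sketch of moving around in the Shell; the \sql, \js and \py escape commands switch language within the same session, and dba.help() lists the AdminAPI operations:

[mysql3 ~]# mysqlsh root@mysql3:3306
mysql-js> dba.help()
mysql-js> \sql
mysql-sql> select @@hostname;
mysql-sql> \js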
  • 58. MySQL Shell (2) As an example, the same operations as before but using the Shell: Copyright @ 2017 Oracle and/or its affiliates. All rights reserved. 58 / 152
  • 59. MySQL Shell (3) Copyright @ 2017 Oracle and/or its affiliates. All rights reserved. 59 / 152
  • 60. LAB4: MySQL InnoDB Cluster Create a single instance cluster Time to use the new MySQL Shell ! [mysql3 ~]# mysqlsh Let´s verify if our server is ready to become a member of a new cluster: mysql-js> dba.checkInstanceConfiguration('root@mysql3:3306') Copyright @ 2017 Oracle and/or its affiliates. All rights reserved. 60 / 152
  • 61. LAB4: MySQL InnoDB Cluster Create a single instance cluster Time to use the new MySQL Shell ! [mysql3 ~]# mysqlsh Let´s verify if our server is ready to become a member of a new cluster: mysql-js> dba.checkInstanceConfiguration('root@mysql3:3306') Change the configuration ! mysql-js> dba.configureLocalInstance() Copyright @ 2017 Oracle and/or its affiliates. All rights reserved. 61 / 152
  • 62. LAB4: MySQL InnoDB Cluster (2) Restart mysqld to use the new configuration: [mysql3 ~]# systemctl restart mysqld Copyright @ 2017 Oracle and/or its affiliates. All rights reserved. 62 / 152
  • 63. LAB4: MySQL InnoDB Cluster (2) Restart mysqld to use the new configuration: [mysql3 ~]# systemctl restart mysqld Create a single instance cluster [mysql3 ~]# mysqlsh Copyright @ 2017 Oracle and/or its affiliates. All rights reserved. 63 / 152
  • 64. LAB4: MySQL InnoDB Cluster (2) Restart mysqld to use the new configuration: [mysql3 ~]# systemctl restart mysqld Create a single instance cluster [mysql3 ~]# mysqlsh mysql-js> dba.checkInstanceConfiguration('root@mysql3:3306') mysql-js> \c root@mysql3:3306 mysql-js> cluster = dba.createCluster('perconalive') Copyright @ 2017 Oracle and/or its affiliates. All rights reserved. 64 / 152
  • 65. LAB4: Cluster Status mysql-js> cluster.status() { "clusterName": "perconalive", "defaultReplicaSet": { "name": "default", "primary": "mysql3:3306", "ssl": "DISABLED", "status": "OK_NO_TOLERANCE", "statusText": "Cluster is NOT tolerant to any failures.", "topology": { "mysql3:3306": { "address": "mysql3:3306", "mode": "R/W", "readReplicas": {}, "role": "HA", "status": "ONLINE" } } } } Copyright @ 2017 Oracle and/or its affiliates. All rights reserved. 65 / 152
  • 66. Add mysql4 to the Group: restore the backup set the purged GTIDs use MySQL Shell LAB5: add mysql4 to the cluster (1)   Copyright @ 2017 Oracle and/or its affiliates. All rights reserved. 66 / 152
  • 67. [mysql4 ~]# systemctl start mysqld [mysql4 ~]# mysql_upgrade LAB5: add mysql4 to the cluster (2) Copy the backup from mysql1 to mysql4: [mysql1 ~]# scp -r /tmp/backup mysql4:/tmp And restore it: [mysql4 ~]# systemctl stop mysqld [mysql4 ~]# rm -rf /var/lib/mysql/* [mysql4 ~]# xtrabackup --copy-back --target-dir=/tmp/backup [mysql4 ~]# chown -R mysql. /var/lib/mysql Start MySQL on mysql4: 67 / 152
  • 68. LAB5: MySQL Shell to add an instance (3) [mysql4 ~]# mysqlsh Let´s verify the config: mysql-js> dba.checkInstanceConfiguration('root@mysql4:3306') Copyright @ 2017 Oracle and/or its affiliates. All rights reserved. 68 / 152
  • 69. LAB5: MySQL Shell to add an instance (3) [mysql4 ~]# mysqlsh Let´s verify the config: mysql-js> dba.checkInstanceConfiguration('root@mysql4:3306') And change the configuration: mysql-js> dba.configureLocalInstance() Copyright @ 2017 Oracle and/or its affiliates. All rights reserved. 69 / 152
  • 70. LAB5: MySQL Shell to add an instance (3) [mysql4 ~]# mysqlsh Let´s verify the config: mysql-js> dba.checkInstanceConfiguration('root@mysql4:3306') And change the configuration: mysql-js> dba.configureLocalInstance() Restart the service to enable the changes: [mysql4 ~]# systemctl restart mysqld Copyright @ 2017 Oracle and/or its affiliates. All rights reserved. 70 / 152
  • 71. LAB5: MySQL InnoDB Cluster (4) Group of 2 instances Find the latest purged GTIDs: [mysql4 ~]# cat /tmp/backup/xtrabackup_binlog_info mysql-bin.000002 167646328 b346474c-8601-11e6-9b39-08002718d305:1-77177 Connect to mysql4 and set GTID_PURGED [mysql4 ~]# mysqlsh mysql-js> \c root@mysql4:3306 mysql-js> \sql mysql-sql> RESET MASTER; mysql-sql> SET global gtid_purged="VALUE FOUND PREVIOUSLY"; Copyright @ 2017 Oracle and/or its affiliates. All rights reserved. 71 / 152
  • 72. LAB5: MySQL InnoDB Cluster (5) mysql-sql> \js mysql-js> dba.checkInstanceConfiguration('root@mysql4:3306') mysql-js> \c root@mysql3:3306 mysql-js> cluster = dba.getCluster() mysql-js> cluster.checkInstanceState('root@mysql4:3306') mysql-js> cluster.addInstance("root@mysql4:3306") mysql-js> cluster.status() Copyright @ 2017 Oracle and/or its affiliates. All rights reserved. 72 / 152
  • 73. Cluster Status mysql-js> cluster.status() { "clusterName": "perconalive", "defaultReplicaSet": { "name": "default", "primary": "mysql3:3306", "ssl": "DISABLED", "status": "OK_NO_TOLERANCE", "statusText": "Cluster is NOT tolerant to any failures. 1 member is not active" "topology": { "mysql3:3306": { "address": "mysql3:3306", "mode": "R/W", "readReplicas": {}, "role": "HA", "status": "ONLINE" }, "mysql4:3306": { "address": "mysql4:3306", "mode": "R/O", "readReplicas": {}, "role": "HA", "status": "RECOVERING" } } } Copyright @ 2017 Oracle and/or its affiliates. All rights reserved. 73 / 152
  • 74. Recovering progress On standard MySQL, monitor the group_replication_recovery channel to see the progress: mysql4> show slave status for channel 'group_replication_recovery'G *************************** 1. row *************************** Slave_IO_State: Waiting for master to send event Master_Host: mysql3 Master_User: mysql_innodb_cluster_rpl_user ... Slave_IO_Running: Yes Slave_SQL_Running: Yes ... Retrieved_Gtid_Set: 6e7d7848-860f-11e6-92e4-08002718d305:1-6, 7c1f0c2d-860d-11e6-9df7-08002718d305:1-15, b346474c-8601-11e6-9b39-08002718d305:1964-77177, e8c524df-860d-11e6-9df7-08002718d305:1-2 Executed_Gtid_Set: 7c1f0c2d-860d-11e6-9df7-08002718d305:1-7, b346474c-8601-11e6-9b39-08002718d305:1-45408, e8c524df-860d-11e6-9df7-08002718d305:1-2 ... Copyright @ 2017 Oracle and/or its affiliates. All rights reserved. 74 / 152
  • 75. point the application to the cluster Migrate the application Copyright @ 2017 Oracle and/or its affiliates. All rights reserved. 75 / 152
  • 76. LAB6: Migrate the application Make sure the gtid_executed range on mysql2 is lower than or equal to the one on mysql3 mysql[2-3]> show global variables like 'gtid_executed'\G Copyright @ 2017 Oracle and/or its affiliates. All rights reserved. 76 / 152
  • 77. LAB6: Migrate the application Make sure the gtid_executed range on mysql2 is lower than or equal to the one on mysql3 mysql[2-3]> show global variables like 'gtid_executed'\G When they are OK, stop asynchronous replication on mysql2 and mysql3: mysql2> stop slave; mysql3> stop slave; Copyright @ 2017 Oracle and/or its affiliates. All rights reserved. 77 / 152
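Rather than comparing the two GTID sets by eye, you can let MySQL do it with GTID_SUBSET(); a hedged example where the value read on mysql2 is pasted into a query run on mysql3 (a result of 1 means every transaction executed on mysql2 has also been executed on mysql3):

mysql3> SELECT GTID_SUBSET('<gtid_executed value copied from mysql2>', @@global.gtid_executed);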
  • 78. LAB6: Migrate the application Now we need to point the application to mysql3, this is the only downtime ! ... [ 21257s] threads: 4, tps: 12.00, reads: 167.94, writes: 47.98, response time: 18 [ 21258s] threads: 4, tps: 6.00, reads: 83.96, writes: 23.99, response time: 14 [ 21259s] threads: 4, tps: 7.00, reads: 98.05, writes: 28.01, response time: 16 [ 31250s] threads: 4, tps: 8.00, reads: 111.95, writes: 31.99, response time: 30 [ 31251s] threads: 4, tps: 11.00, reads: 154.01, writes: 44.00, response time: 13 [ 31252s] threads: 4, tps: 11.00, reads: 153.94, writes: 43.98, response time: 12 [ 31253s] threads: 4, tps: 10.01, reads: 140.07, writes: 40.02, response time: 17 ^C [mysql1 ~]# run_app.sh mysql3 Copyright @ 2017 Oracle and/or its affiliates. All rights reserved. 78 / 152
  • 79. LAB6: Migrate the application Now we need to point the application to mysql3, this is the only downtime ! ... [ 21257s] threads: 4, tps: 12.00, reads: 167.94, writes: 47.98, response time: 18 [ 21258s] threads: 4, tps: 6.00, reads: 83.96, writes: 23.99, response time: 14 [ 21259s] threads: 4, tps: 7.00, reads: 98.05, writes: 28.01, response time: 16 [ 31250s] threads: 4, tps: 8.00, reads: 111.95, writes: 31.99, response time: 30 [ 31251s] threads: 4, tps: 11.00, reads: 154.01, writes: 44.00, response time: 13 [ 31252s] threads: 4, tps: 11.00, reads: 153.94, writes: 43.98, response time: 12 [ 31253s] threads: 4, tps: 10.01, reads: 140.07, writes: 40.02, response time: 17 ^C [mysql1 ~]# run_app.sh mysql3 Now they can forget about mysql1: mysql[2-3]> reset slave all; Copyright @ 2017 Oracle and/or its affiliates. All rights reserved. 79 / 152
  • 80. previous slave (mysql2) can now be part of the cluster Add a third instance Copyright @ 2017 Oracle and/or its affiliates. All rights reserved. 80 / 152
  • 81. LAB7: Add mysql2 to the group We first upgrade to MySQL 8.0.3 : Copyright @ 2017 Oracle and/or its affiliates. All rights reserved. 81 / 152
  • 82. LAB7: Add mysql2 to the group We first upgrade to MySQL 8.0.3 : [mysql2 ~]# systemctl stop mysqld [mysql2 ~]# rpm -Uvh /root/rpms/mysql*rpm [mysql2 ~]# systemctl start mysqld [mysql2 ~]# mysql_upgrade Copyright @ 2017 Oracle and/or its affiliates. All rights reserved. 82 / 152
  • 83. LAB7: Add mysql2 to the group We first upgrade to MySQL 8.0.3 : [mysql2 ~]# systemctl stop mysqld [mysql2 ~]# rpm -Uvh /root/rpms/mysql*rpm [mysql2 ~]# systemctl start mysqld [mysql2 ~]# mysql_upgrade and then we validate the instance using MySQL Shell and we configure it: [mysql2 ~]# mysqlsh Copyright @ 2017 Oracle and/or its affiliates. All rights reserved. 83 / 152
  • 84. LAB7: Add mysql2 to the group We first upgrade to MySQL 8.0.3 : [mysql2 ~]# systemctl stop mysqld [mysql2 ~]# rpm -Uvh /root/rpms/mysql*rpm [mysql2 ~]# systemctl start mysqld [mysql2 ~]# mysql_upgrade and then we validate the instance using MySQL Shell and we configure it: [mysql2 ~]# mysqlsh mysql-js> dba.checkInstanceConfiguration('root@mysql2:3306') mysql-js> dba.configureLocalInstance() Copyright @ 2017 Oracle and/or its affiliates. All rights reserved. 84 / 152
  • 85. LAB7: Add mysql2 to the group We first upgrade to MySQL 8.0.3 : [mysql2 ~]# systemctl stop mysqld [mysql2 ~]# rpm -Uvh /root/rpms/mysql*rpm [mysql2 ~]# systemctl start mysqld [mysql2 ~]# mysql_upgrade and then we validate the instance using MySQL Shell and we configure it: [mysql2 ~]# mysqlsh mysql-js> dba.checkInstanceConfiguration('root@mysql2:3306') mysql-js> dba.configureLocalInstance() [mysql2 ~]# systemctl restart mysqld Copyright @ 2017 Oracle and/or its affiliates. All rights reserved. 85 / 152
  • 86. LAB7: Add mysql2 to the group (2) Back in MySQL Shell we add the new instance: [mysql2 ~]# mysqlsh Copyright @ 2017 Oracle and/or its affiliates. All rights reserved. 86 / 152
  • 87. LAB7: Add mysql2 to the group (2) Back in MySQL Shell we add the new instance: [mysql2 ~]# mysqlsh mysql-js> dba.checkInstanceConfiguration('root@mysql2:3306') mysql-js> \c root@mysql3:3306 mysql-js> cluster = dba.getCluster() mysql-js> cluster.addInstance("root@mysql2:3306") mysql-js> cluster.status() Copyright @ 2017 Oracle and/or its affiliates. All rights reserved. 87 / 152
  • 88. LAB7: Add mysql2 to the group (3) { "clusterName": "perconalive", "defaultReplicaSet": { "name": "default", "primary": "mysql3:3306", "status": "OK", "statusText": "Cluster is ONLINE and can tolerate up to ONE failure.", "topology": { "mysql2:3306": { "address": "mysql2:3306", "mode": "R/O", "readReplicas": {}, "role": "HA", "status": "ONLINE" }, "mysql3:3306": { "address": "mysql3:3306", "mode": "R/W", "readReplicas": {}, "role": "HA", "status": "ONLINE" }, "mysql4:3306": { "address": "mysql4:3306", "mode": "R/O", "readReplicas": {},Copyright @ 2017 Oracle and/or its affiliates. All rights reserved. 88 / 152
  • 89. writing to a single server Single Primary Mode Copyright @ 2017 Oracle and/or its affiliates. All rights reserved. 89 / 152
  • 90. Default = Single Primary Mode By default, MySQL InnoDB Cluster enables Single Primary Mode. Copyright @ 2017 Oracle and/or its affiliates. All rights reserved. 90 / 152
  • 91. Default = Single Primary Mode By default, MySQL InnoDB Cluster enables Single Primary Mode. mysql> show global variables like 'group_replication_single_primary_mode'; +---------------------------------------+-------+ | Variable_name | Value | +---------------------------------------+-------+ | group_replication_single_primary_mode | ON | +---------------------------------------+-------+ Copyright @ 2017 Oracle and/or its affiliates. All rights reserved. 91 / 152
  • 92. Default = Single Primary Mode By default, MySQL InnoDB Cluster enables Single Primary Mode. mysql> show global variables like 'group_replication_single_primary_mode'; +---------------------------------------+-------+ | Variable_name | Value | +---------------------------------------+-------+ | group_replication_single_primary_mode | ON | +---------------------------------------+-------+ In Single Primary Mode, a single member acts as the writable master (PRIMARY) and the rest of the members act as hot-standbys (SECONDARY). The group itself coordinates and configures itself automatically to determine which member will act as the PRIMARY, through a leader election mechanism. Copyright @ 2017 Oracle and/or its affiliates. All rights reserved. 92 / 152
  • 93. Who´s the Primary Master ? old fashion style As the Primary Master is elected, all the nodes that are part of the group know which one was elected. This value is exposed in status variables: Copyright @ 2017 Oracle and/or its affiliates. All rights reserved. 93 / 152
  • 94. Who´s the Primary Master ? old fashion style As the Primary Master is elected, all the nodes that are part of the group know which one was elected. This value is exposed in status variables: mysql> show status like 'group_replication_primary_member'; +----------------------------------+--------------------------------------+ | Variable_name | Value | +----------------------------------+--------------------------------------+ | group_replication_primary_member | 28a4e51f-860e-11e6-bdc4-08002718d305 | +----------------------------------+--------------------------------------+ Copyright @ 2017 Oracle and/or its affiliates. All rights reserved. 94 / 152
  • 95. Who´s the Primary Master ? old fashion style As the Primary Master is elected, all the nodes that are part of the group know which one was elected. This value is exposed in status variables: mysql> show status like 'group_replication_primary_member'; +----------------------------------+--------------------------------------+ | Variable_name | Value | +----------------------------------+--------------------------------------+ | group_replication_primary_member | 28a4e51f-860e-11e6-bdc4-08002718d305 | +----------------------------------+--------------------------------------+ mysql> select member_host as "primary master" from performance_schema.global_status join performance_schema.replication_group_members where variable_name = 'group_replication_primary_member' and member_id=variable_value; +---------------+ | primary master| +---------------+ | mysql3 | +---------------+ Copyright @ 2017 Oracle and/or its affiliates. All rights reserved. 95 / 152
  • 96. Who´s the Primary Master ? new fashion style mysql> select member_host from performance_schema.replication_group_members where member_role='PRIMARY'; +-------------+ | member_host | +-------------+ | mysql3 | +-------------+ Copyright @ 2017 Oracle and/or its affiliates. All rights reserved. 96 / 152
  • 97. Create a Multi-Primary Cluster: It´s also possible to create a Multi-Primary Cluster using the Shell: Copyright @ 2017 Oracle and/or its affiliates. All rights reserved. 97 / 152
  • 98. Create a Multi-Primary Cluster: It´s also possible to create a Multi-Primary Cluster using the Shell: mysql-js> cluster=dba.createCluster('perconalive',{multiMaster: true}) Copyright @ 2017 Oracle and/or its affiliates. All rights reserved. 98 / 152
  • 99. Create a Multi-Primary Cluster: It´s also possible to create a Multi-Primary Cluster using the Shell: mysql-js> cluster=dba.createCluster('perconalive',{multiMaster: true}) A new InnoDB cluster will be created on instance 'root@mysql3:3306'. The MySQL InnoDB cluster is going to be setup in advanced Multi-Master Mode. Before continuing you have to confirm that you understand the requirements and limitations of Multi-Master Mode. Please read the manual before proceeding. I have read the MySQL InnoDB cluster manual and I understand the requirements and limitations of advanced Multi-Master Mode. Confirm [y|N]: Copyright @ 2017 Oracle and/or its affiliates. All rights reserved. 99 / 152
  • 100. Create a Multi-Primary Cluster: It´s also possible to create a Multi-Primary Cluster using the Shell: mysql-js> cluster=dba.createCluster('perconalive',{multiMaster: true}) A new InnoDB cluster will be created on instance 'root@mysql3:3306'. The MySQL InnoDB cluster is going to be setup in advanced Multi-Master Mode. Before continuing you have to confirm that you understand the requirements and limitations of Multi-Master Mode. Please read the manual before proceeding. I have read the MySQL InnoDB cluster manual and I understand the requirements and limitations of advanced Multi-Master Mode. Confirm [y|N]: Or you can force it to avoid interaction (for automation) : Copyright @ 2017 Oracle and/or its affiliates. All rights reserved. 100 / 152
  • 101. Create a Multi-Primary Cluster: It´s also possible to create a Multi-Primary Cluster using the Shell: mysql-js> cluster=dba.createCluster('perconalive',{multiMaster: true}) A new InnoDB cluster will be created on instance 'root@mysql3:3306'. The MySQL InnoDB cluster is going to be setup in advanced Multi-Master Mode. Before continuing you have to confirm that you understand the requirements and limitations of Multi-Master Mode. Please read the manual before proceeding. I have read the MySQL InnoDB cluster manual and I understand the requirements and limitations of advanced Multi-Master Mode. Confirm [y|N]: Or you can force it to avoid interaction (for automation) : > cluster=dba.createCluster('perconalive',{multiMaster: true, force: true}) Copyright @ 2017 Oracle and/or its affiliates. All rights reserved. 101 / 152
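To double-check which mode the group ended up in after createCluster(), the variable shown earlier gives the answer; a short sketch:

mysql> show global variables like 'group_replication_single_primary_mode';
(expect OFF for a Multi-Primary cluster, ON for the default Single Primary Mode)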
  • 102. get more info Monitoring Copyright @ 2017 Oracle and/or its affiliates. All rights reserved. 102 / 152
  • 103. Performance Schema Group Replication uses Performance_Schema to expose status mysql3> SELECT * FROM performance_schema.replication_group_membersG *************************** 1. row *************************** CHANNEL_NAME: group_replication_applier MEMBER_ID: ade14d5c-9e1e-11e7-b034-08002718d305 MEMBER_HOST: mysql4 MEMBER_PORT: 3306 MEMBER_STATE: ONLINE MEMBER_ROLE: SECONDARY MEMBER_VERSION: 8.0.3 *************************** 2. row *************************** CHANNEL_NAME: group_replication_applier MEMBER_ID: b9d01593-9dfb-11e7-8ca6-08002718d305 MEMBER_HOST: mysql3 MEMBER_PORT: 3306 MEMBER_STATE: ONLINE MEMBER_ROLE: PRIMARY MEMBER_VERSION: 8.0.3 Copyright @ 2017 Oracle and/or its affiliates. All rights reserved. 103 / 152
  • 104. mysql3> SELECT * FROM performance_schema.replication_connection_statusG *************************** 1. row *************************** CHANNEL_NAME: group_replication_applier GROUP_NAME: 8fc848d7-9e1c-11e7-9407... SOURCE_UUID: 8fc848d7-9e1c-11e7-9407... THREAD_ID: NULL SERVICE_STATE: ON COUNT_RECEIVED_HEARTBEATS: 0 LAST_HEARTBEAT_TIMESTAMP: 0000-00-00 00:00:00 RECEIVED_TRANSACTION_SET: 8fc848d7-9e1c-11e7-9407... b9d01593-9dfb-11e7-8ca6-08002718d305:1-21, da2f0910-8767-11e6-b82d-08002718d305:1-164741 LAST_ERROR_NUMBER: 0 LAST_ERROR_MESSAGE: LAST_ERROR_TIMESTAMP: 0000-00-00 00:00:00 LAST_QUEUED_TRANSACTION: 8fc848d7-9e1c-11e7-9407... LAST_QUEUED_TRANSACTION_ORIGINAL_COMMIT_TIMESTAMP: 0000-00-00 00:00:00 LAST_QUEUED_TRANSACTION_IMMEDIATE_COMMIT_TIMESTAMP: 0000-00-00 00:00:00 LAST_QUEUED_TRANSACTION_START_QUEUE_TIMESTAMP: 2017-09-20 16:22:36.486... LAST_QUEUED_TRANSACTION_END_QUEUE_TIMESTAMP: 2017-09-20 16:22:36.486... QUEUEING_TRANSACTION: QUEUEING_TRANSACTION_ORIGINAL_COMMIT_TIMESTAMP: 0000-00-00 00:00:00 QUEUEING_TRANSACTION_IMMEDIATE_COMMIT_TIMESTAMP: 0000-00-00 00:00:00 QUEUEING_TRANSACTION_START_QUEUE_TIMESTAMP: 0000-00-00 00:00:00 Copyright @ 2017 Oracle and/or its affiliates. All rights reserved. 104 / 152
  • 105. Member State These are the different possible states for a node member: ONLINE OFFLINE RECOVERING ERROR: when a node is leaving but the plugin was not instructed to stop UNREACHABLE Copyright @ 2017 Oracle and/or its affiliates. All rights reserved. 105 / 152
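For monitoring or alerting, a small hedged example built on the same table that only returns rows when something is wrong:

mysql> SELECT member_host, member_state FROM performance_schema.replication_group_members WHERE member_state <> 'ONLINE';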
  • 106. Status information & metrics Members mysql> SELECT member_host, member_state, member_role FROM performance_schema.replication_group_members; Copyright @ 2017 Oracle and/or its affiliates. All rights reserved. 106 / 152
  • 107. Status information & metrics Members mysql> SELECT member_host, member_state, member_role FROM performance_schema.replication_group_members; +-------------+--------------+-------------+ | member_host | member_state | member_role | +-------------+--------------+-------------+ | mysql4 | ONLINE | SECONDARY | | mysql3 | ONLINE | PRIMARY | +-------------+--------------+-------------+ 2 rows in set (0.00 sec) Copyright @ 2017 Oracle and/or its affiliates. All rights reserved. 107 / 152
  • 108. Status information & metrics - connections mysql> SELECT * FROM performance_schema.replication_connection_status\G Copyright @ 2017 Oracle and/or its affiliates. All rights reserved. 108 / 152
  • 109. Status information & metrics - connections mysql> SELECT * FROM performance_schema.replication_connection_statusG *************************** 1. row *************************** CHANNEL_NAME: group_replication_applier GROUP_NAME: 8fc848d7-9e1c-11e7-9407-... SOURCE_UUID: 8fc848d7-9e1c-11e7-9407-... THREAD_ID: NULL SERVICE_STATE: ON COUNT_RECEIVED_HEARTBEATS: 0 LAST_HEARTBEAT_TIMESTAMP: 0000-00-00 00:00:00 RECEIVED_TRANSACTION_SET: 8fc848d7-9e1c-11e7-9407-... b9d01593-9dfb-11e7-8ca6-08002718d305:1-21, da2f0910-8767-11e6-b82d-08002718d305:1-164741 LAST_ERROR_NUMBER: 0 LAST_ERROR_MESSAGE: LAST_ERROR_TIMESTAMP: 0000-00-00 00:00:00 LAST_QUEUED_TRANSACTION: 8fc848d7-9e1c-11e7-9407-... LAST_QUEUED_TRANSACTION_ORIGINAL_COMMIT_TIMESTAMP: 0000-00-00 00:00:00 LAST_QUEUED_TRANSACTION_IMMEDIATE_COMMIT_TIMESTAMP: 0000-00-00 00:00:00 LAST_QUEUED_TRANSACTION_START_QUEUE_TIMESTAMP: 2017-09-20 16:22:36.4864... LAST_QUEUED_TRANSACTION_END_QUEUE_TIMESTAMP: 2017-09-20 16:22:36.4865... QUEUEING_TRANSACTION: QUEUEING_TRANSACTION_ORIGINAL_COMMIT_TIMESTAMP: 0000-00-00 00:00:00 QUEUEING_TRANSACTION_IMMEDIATE_COMMIT_TIMESTAMP: 0000-00-00 00:00:00 QUEUEING_TRANSACTION_START_QUEUE_TIMESTAMP: 0000-00-00 00:00:00 Copyright @ 2017 Oracle and/or its affiliates. All rights reserved. 109 / 152
  • 110. Status information & metrics Previously there were only local node statistics; now they are exposed across the whole Group mysql> select * from performance_schema.replication_group_member_stats\G Copyright @ 2017 Oracle and/or its affiliates. All rights reserved. 110 / 152
  • 111. Status information & metrics Previously there were only local node statistics; now they are exposed across the whole Group mysql> select * from performance_schema.replication_group_member_stats\G ************************** 1. row *************************** CHANNEL_NAME: group_replication_applier VIEW_ID: 15059231192196925:2 MEMBER_ID: ade14d5c-9e1e-11e7-b034-08002... COUNT_TRANSACTIONS_IN_QUEUE: 0 COUNT_TRANSACTIONS_CHECKED: 27992 COUNT_CONFLICTS_DETECTED: 0 COUNT_TRANSACTIONS_ROWS_VALIDATING: 0 TRANSACTIONS_COMMITTED_ALL_MEMBERS: 8fc848d7-9e1c-11e7-9407-08002... b9d01593-9dfb-11e7-8ca6-08002718d305:1-21, da2f0910-8767-11e6-b82d-08002718d305:1-164741 LAST_CONFLICT_FREE_TRANSACTION: 8fc848d7-9e1c-11e7-9407-08002... COUNT_TRANSACTIONS_REMOTE_IN_APPLIER_QUEUE: 0 COUNT_TRANSACTIONS_REMOTE_APPLIED: 27992 COUNT_TRANSACTIONS_LOCAL_PROPOSED: 0 COUNT_TRANSACTIONS_LOCAL_ROLLBACK: 0 Copyright @ 2017 Oracle and/or its affiliates. All rights reserved. 111 / 152
  • 112. ************************** 2. row *************************** CHANNEL_NAME: group_replication_applier VIEW_ID: 15059231192196925:2 MEMBER_ID: b9d01593-9dfb-11e7-8ca6-08002... COUNT_TRANSACTIONS_IN_QUEUE: 0 COUNT_TRANSACTIONS_CHECKED: 28000 COUNT_CONFLICTS_DETECTED: 0 COUNT_TRANSACTIONS_ROWS_VALIDATING: 0 TRANSACTIONS_COMMITTED_ALL_MEMBERS: 8fc848d7-9e1c-11e7-9407-08002... b9d01593-9dfb-11e7-8ca6-08002718d305:1-21, da2f0910-8767-11e6-b82d-08002718d305:1-164741 LAST_CONFLICT_FREE_TRANSACTION: 8fc848d7-9e1c-11e7-9407-08002... COUNT_TRANSACTIONS_REMOTE_IN_APPLIER_QUEUE: 0 COUNT_TRANSACTIONS_REMOTE_APPLIED: 1 COUNT_TRANSACTIONS_LOCAL_PROPOSED: 28000 COUNT_TRANSACTIONS_LOCAL_ROLLBACK: 0 Copyright @ 2017 Oracle and/or its affiliates. All rights reserved. 112 / 152
  • 113. Performance_Schema You can find GR information in the following Performance_Schema tables: replication_applier_configuration replication_applier_status replication_applier_status_by_worker replication_connection_configuration replication_connection_status replication_group_member_stats replication_group_members Copyright @ 2017 Oracle and/or its affiliates. All rights reserved. 113 / 152
  • 114. Status during recovery mysql> SHOW SLAVE STATUS FOR CHANNEL 'group_replication_recovery'\G Copyright @ 2017 Oracle and/or its affiliates. All rights reserved. 114 / 152
  • 115. Status during recovery mysql> SHOW SLAVE STATUS FOR CHANNEL 'group_replication_recovery'G *************************** 1. row *************************** Slave_IO_State: Master_Host: <NULL> Master_User: gr_repl Master_Port: 0 ... Relay_Log_File: mysql4-relay-bin-group_replication_recovery.000001 ... Slave_IO_Running: No Slave_SQL_Running: No ... Executed_Gtid_Set: 5de4400b-3dd7-11e6-8a71-08002774c31b:1-814089, afb80f36-2bff-11e6-84e0-0800277dd3bf:1-5718 ... Channel_Name: group_replication_recovery Copyright @ 2017 Oracle and/or its affiliates. All rights reserved. 115 / 152
  • 116. Sys Schema The easiest way to detect if a node is a member of the primary component (when there is partitioning of your nodes due to network issues for example) and therefore a valid candidate for routing queries to it, is to use the sys table. Additional information for sys can be found at https://goo.gl/XFp3bt On the primary node: [mysql3 ~]# mysql < /root/addition_to_sys_mysql8.sql Copyright @ 2017 Oracle and/or its affiliates. All rights reserved. 116 / 152
  • 117. Sys Schema Is this node part of PRIMARY Partition: mysql3> SELECT sys.gr_member_in_primary_partition(); +------------------------------------+ | sys.gr_node_in_primary_partition() | +------------------------------------+ | YES | +------------------------------------+ Copyright @ 2017 Oracle and/or its affiliates. All rights reserved. 117 / 152
  • 118. Sys Schema Is this node part of PRIMARY Partition: mysql3> SELECT sys.gr_member_in_primary_partition(); +------------------------------------+ | sys.gr_node_in_primary_partition() | +------------------------------------+ | YES | +------------------------------------+ To use as healthcheck: mysql3> SELECT * FROM sys.gr_member_routing_candidate_status; +------------------+-----------+---------------------+----------------------+ | viable_candidate | read_only | transactions_behind | transactions_to_cert | +------------------+-----------+---------------------+----------------------+ | YES | YES | 0 | 0 | +------------------+-----------+---------------------+----------------------+ Copyright @ 2017 Oracle and/or its affiliates. All rights reserved. 118 / 152
  • 119. LAB8: Sys Schema - Health Check On one of the non Primary nodes, run the following command: mysql-sql> flush tables with read lock; Copyright @ 2017 Oracle and/or its affiliates. All rights reserved. 119 / 152
  • 120. LAB8: Sys Schema - Health Check On one of the non Primary nodes, run the following command: mysql-sql> flush tables with read lock; Now you can verify what the healthcheck exposes to you: mysql-sql> SELECT * FROM sys.gr_member_routing_candidate_status; +------------------+-----------+---------------------+----------------------+ | viable_candidate | read_only | transactions_behind | transactions_to_cert | +------------------+-----------+---------------------+----------------------+ | YES | YES | 950 | 0 | +------------------+-----------+---------------------+----------------------+ Copyright @ 2017 Oracle and/or its affiliates. All rights reserved. 120 / 152
  • 121. LAB8: Sys Schema - Health Check On one of the non Primary nodes, run the following command: mysql-sql> flush tables with read lock; Now you can verify what the healthcheck exposes to you: mysql-sql> SELECT * FROM sys.gr_member_routing_candidate_status; +------------------+-----------+---------------------+----------------------+ | viable_candidate | read_only | transactions_behind | transactions_to_cert | +------------------+-----------+---------------------+----------------------+ | YES | YES | 950 | 0 | +------------------+-----------+---------------------+----------------------+ mysql-sql> UNLOCK TABLES; Copyright @ 2017 Oracle and/or its affiliates. All rights reserved. 121 / 152
  • 122. application interaction MySQL Router Copyright @ 2017 Oracle and/or its affiliates. All rights reserved. 122 / 152
  • 123. MySQL Router MySQL Router is lightweight middleware that provides transparent routing between your application and backend MySQL Servers. It can be used for a wide variety of use cases, such as providing high availability and scalability by effectively routing database traffic to appropriate backend MySQL Servers. Copyright @ 2017 Oracle and/or its affiliates. All rights reserved. 123 / 152
  • 124. MySQL Router MySQL Router is lightweight middleware that provides transparent routing between your application and backend MySQL Servers. It can be used for a wide variety of use cases, such as providing high availability and scalability by effectively routing database traffic to appropriate backend MySQL Servers. MySQL Router doesn´t require any specific configuration. It configures itself automatically (bootstrap) using MySQL InnoDB Cluster´s metadata. Copyright @ 2017 Oracle and/or its affiliates. All rights reserved. 124 / 152
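In practice the application then connects to the Router ports instead of a MySQL server directly; a hedged example using the default ports reported by the bootstrap in the next lab (6446 for Read/Write, 6447 for Read/Only):

[mysql1 ~]# mysql -h 127.0.0.1 -P 6446 -u root -p    # routed to the PRIMARY
[mysql1 ~]# mysql -h 127.0.0.1 -P 6447 -u root -p    # routed to a SECONDARY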
  • 125. LAB9: MySQL Router We will now use mysqlrouter between our application and the cluster. Copyright @ 2017 Oracle and/or its affiliates. All rights reserved. 125 / 152
  • 126. LAB9: MySQL Router (2) Configure MySQL Router that will run on the app server (mysql1). We bootstrap it using the Primary-Master: [root@mysql1 ~]# mysqlrouter --bootstrap mysql3:3306 --user mysqlrouter Please enter MySQL password for root: WARNING: The MySQL server does not have SSL ... Bootstrapping system MySQL Router instance... MySQL Router has now been configured for the InnoDB cluster 'perconalive'. The following connection information can be used to connect to the cluster. Classic MySQL protocol connections to cluster 'perconalive': - Read/Write Connections: localhost:6446 - Read/Only Connections: localhost:6447 X protocol connections to cluster 'perconalive': - Read/Write Connections: localhost:64460 - Read/Only Connections: localhost:64470 [root@mysql1 ~]# chown -R mysqlrouter. /var/lib/mysqlrouter Copyright @ 2017 Oracle and/or its affiliates. All rights reserved. 126 / 152
  • 127. LAB9: MySQL Router (3) Now let´s modify the configuration file to listen on port 3306: Copyright @ 2017 Oracle and/or its affiliates. All rights reserved. 127 / 152
  • 128. LAB9: MySQL Router (3) Now let´s modify the configuration file to listen on port 3306: in /etc/mysqlrouter/mysqlrouter.conf: [routing:perconalive_default_rw] -bind_port=6446 +bind_port=3306 Copyright @ 2017 Oracle and/or its affiliates. All rights reserved. 128 / 152
  • 129. LAB9: MySQL Router (3) Now let´s modify the configuration file to listen on port 3306: in /etc/mysqlrouter/mysqlrouter.conf: [routing:perconalive_default_rw] -bind_port=6446 +bind_port=3306 We can stop mysqld on mysql1 and start mysqlrouter in a screen session: [mysql1 ~]# systemctl stop mysqld [mysql1 ~]# systemctl start mysqlrouter Copyright @ 2017 Oracle and/or its affiliates. All rights reserved. 129 / 152
  • 130. LAB9: MySQL Router (4) Before killing a member we will change systemd´s default behavior that restarts mysqld immediately: Copyright @ 2017 Oracle and/or its affiliates. All rights reserved. 130 / 152
  • 131. LAB9: MySQL Router (4) Before killing a member we will change systemd´s default behavior that restarts mysqld immediately: in /usr/lib/systemd/system/mysqld.service add the following under [Service] RestartSec=30 [mysql3 ~]# systemctl daemon-reload Copyright @ 2017 Oracle and/or its affiliates. All rights reserved. 131 / 152
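The same delay can also be configured without editing the packaged unit file, via a systemd drop-in; a hedged sketch (systemctl edit creates /etc/systemd/system/mysqld.service.d/override.conf for you):

[mysql3 ~]# systemctl edit mysqld
    (then add in the editor that opens:)
    [Service]
    RestartSec=30
[mysql3 ~]# systemctl daemon-reload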
  • 132. LAB9: MySQL Router (5) Now we can point the application to the router (back to mysql1): [mysql1 ~]# run_app.sh Copyright @ 2017 Oracle and/or its affiliates. All rights reserved. 132 / 152
  • 133. LAB9: MySQL Router (5) Now we can point the application to the router (back to mysql1): [mysql1 ~]# run_app.sh Check app and kill mysqld on mysql3 (the Primary Master R/W node) ! [mysql3 ~]# kill -9 $(pidof mysqld) Copyright @ 2017 Oracle and/or its affiliates. All rights reserved. 133 / 152
  • 134. LAB9: MySQL Router (5) Now we can point the application to the router (back to mysql1): [mysql1 ~]# run_app.sh Check app and kill mysqld on mysql3 (the Primary Master R/W node) ! [mysql3 ~]# kill -9 $(pidof mysqld) mysql2> select member_host from performance_schema.replication_group_members where member_role='PRIMARY'; +-------------+ | member_host | +-------------+ | mysql4 | +-------------+ Copyright @ 2017 Oracle and/or its affiliates. All rights reserved. 134 / 152
  • 135. ProxySQL / HA Proxy / F5 / ... 3rd party router/proxy Copyright @ 2017 Oracle and/or its affiliates. All rights reserved. 135 / 152
  • 136. 3rd party router/proxy MySQL InnoDB Cluster can also work with third party router / proxy. Copyright @ 2017 Oracle and/or its affiliates. All rights reserved. 136 / 152
  • 137. 3rd party router/proxy MySQL InnoDB Cluster can also work with third party router / proxy. If you need some specific features that are not yet available in MySQL Router, like transparent R/W splitting, then you can use your software of choice. Copyright @ 2017 Oracle and/or its affiliates. All rights reserved. 137 / 152
  • 138. 3rd party router/proxy MySQL InnoDB Cluster can also work with third party router / proxy. If you need some specific features that are not yet available in MySQL Router, like transparent R/W splitting, then you can use your software of choice. The important part of such an implementation is to use a good health check to verify if the MySQL server you plan to route the traffic to is in a valid state. Copyright @ 2017 Oracle and/or its affiliates. All rights reserved. 138 / 152
  • 139. 3rd party router/proxy MySQL InnoDB Cluster can also work with third party router / proxy. If you need some specific features that are not yet available in MySQL Router, like transparent R/W splitting, then you can use your software of choice. The important part of such an implementation is to use a good health check to verify if the MySQL server you plan to route the traffic to is in a valid state. MySQL Router implements that natively, and it´s very easy to deploy. Copyright @ 2017 Oracle and/or its affiliates. All rights reserved. 139 / 152
  • 140. ProxySQL also has native support for Group Replication which makes it maybe the best choice for advanced users. 3rd party router/proxy MySQL InnoDB Cluster can also work with third party router / proxy. If you need some specific features that are not yet available in MySQL Router, like transparent R/W splitting, then you can use your software of choice. The important part of such an implementation is to use a good health check to verify if the MySQL server you plan to route the traffic to is in a valid state. MySQL Router implements that natively, and it´s very easy to deploy. Copyright @ 2017 Oracle and/or its affiliates. All rights reserved. 140 / 152
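As an illustration of such a health check, an external proxy can simply poll the sys view introduced in LAB8 and only route writes to a member that reports itself as a viable, writable candidate; a hedged sketch of the query (the pooling/routing logic itself is left to the proxy of your choice):

mysql> SELECT viable_candidate, read_only, transactions_behind FROM sys.gr_member_routing_candidate_status;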
  • 141. operational tasks Recovering Node Copyright @ 2017 Oracle and/or its affiliates. All rights reserved. 141 / 152
  • 142. Recovering Nodes/Members The old master (mysql3) got killed. Copyright @ 2017 Oracle and/or its affiliates. All rights reserved. 142 / 152
  • 143. Recovering Nodes/Members The old master (mysql3) got killed. MySQL got restarted automatically by systemd Copyright @ 2017 Oracle and/or its affiliates. All rights reserved. 143 / 152
  • 144. Recovering Nodes/Members The old master (mysql3) got killed. MySQL got restarted automatically by systemd Let´s add mysql3 back to the cluster Copyright @ 2017 Oracle and/or its affiliates. All rights reserved. 144 / 152
  • 145. LAB10: Recovering Nodes/Members [mysql3 ~]# mysqlsh mysql-js> \c root@mysql4:3306 # The current master mysql-js> cluster = dba.getCluster() mysql-js> cluster.status() mysql-js> cluster.rejoinInstance("root@mysql3:3306") Rejoining the instance to the InnoDB cluster. Depending on the original problem that made the instance unavailable, the rejoin operation might not be successful and further manual steps will be needed to fix the underlying problem. Please monitor the output of the rejoin operation and take necessary action if the instance cannot rejoin. Please provide the password for 'root@mysql3:3306': Rejoining instance to the cluster ... The instance 'root@mysql3:3306' was successfully rejoined on the cluster. The instance 'mysql3:3306' was successfully added to the MySQL Cluster. Copyright @ 2017 Oracle and/or its affiliates. All rights reserved. 145 / 152
  • 146. mysql-js> cluster.status() { "clusterName": "perconalive", "defaultReplicaSet": { "name": "default", "primary": "mysql4:3306", "status": "OK", "statusText": "Cluster is ONLINE and can tolerate up to ONE failure.", "topology": { "mysql2:3306": { "address": "mysql2:3306", "mode": "R/O", "readReplicas": {}, "role": "HA", "status": "ONLINE" }, "mysql3:3306": { "address": "mysql3:3306", "mode": "R/O", "readReplicas": {}, "role": "HA", "status": "ONLINE" }, "mysql4:3306": { "address": "mysql4:3306", "mode": "R/W", "readReplicas": {}, "role": "HA", "status": "ONLINE" } } } } Copyright @ 2017 Oracle and/or its affiliates. All rights reserved. 146 / 152
  • 147. Recovering Nodes/Members (automatically) This time before killing a member of the group, we will persist the configuration on disk in my.cnf. Copyright @ 2017 Oracle and/or its affiliates. All rights reserved. 147 / 152
  • 148. Recovering Nodes/Members (automatically) This time before killing a member of the group, we will persist the configuration on disk in my.cnf. We will again use the same MySQL Shell command as before, dba.configureLocalInstance(), but this time when all nodes are already part of the Group. Copyright @ 2017 Oracle and/or its affiliates. All rights reserved. 148 / 152
  • 149. LAB10: Recovering Nodes/Members (2) Verify that all nodes are ONLINE. ... mysql-js> cluster.status() Copyright @ 2017 Oracle and/or its affiliates. All rights reserved. 149 / 152
  • 150. LAB10: Recovering Nodes/Members (2) Verify that all nodes are ONLINE. ... mysql-js> cluster.status() Then on all nodes run: mysql-js> dba.configureLocalInstance() Copyright @ 2017 Oracle and/or its affiliates. All rights reserved. 150 / 152
  • 151. LAB10: Recovering Nodes/Members (3) Kill one node again: [mysql3 ~]# kill -9 $(pidof mysqld) systemd will restart mysqld; verify that the node rejoined the group. Copyright @ 2017 Oracle and/or its affiliates. All rights reserved. 151 / 152
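To confirm that the automatic rejoin worked, either cluster.status() in the Shell or a quick look at Performance_Schema will do; a short sketch:

mysql-js> cluster.status()
mysql4> SELECT member_host, member_state, member_role FROM performance_schema.replication_group_members;
(mysql3 should come back as ONLINE / SECONDARY once recovery completes)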
  • 152. Thank you ! Any Questions ? Copyright @ 2017 Oracle and/or its affiliates. All rights reserved. 152 / 152