Tuesday, 1 April 2014

MySQL-5.7.4- Change master without stopping slave altogether

At MySQL, we have been working on simplifying the failover process, making it faster, more flexible and easier to use. MySQL-5.6 introduced Global Transaction Identifiers (GTIDs), which were a huge leap towards easing failover by hiding the details about replication logs and positions. With MySQL-5.7.4, we are introducing a new feature that adds further flexibility and online operation: the user only needs to shut down the components that actually have to be re-configured.

This new feature allows you to execute the CHANGE MASTER TO command without stopping the slave altogether. We realized that stopping the slave entirely is not mandatory in all cases; doing so was a cautious approach to switching masters that, at times, restricted more than was required.

Let's dive deeper into what can be relaxed here. For this, let's break the replication process into two modules:
M1) Receiver module (concerned with the IO thread) and
M2) Applier module (concerned with the SQL thread, or the coordinator and worker threads, whichever is the case)

We can now divide options under the command CHANGE MASTER TO into three groups based on the above classification: 

G1) Options that change a receiver configuration. 
G2) Options that change an applier configuration. 
G3) Options that relate to both (1) and (2). 

For the precise division, look at the picture below. Note that the illustration takes into account all the CHANGE MASTER TO options present currently (MySQL-5.7.4). 



Note that, given its current usage, we could put the MASTER_AUTO_POSITION option under group G1 (i.e., the receive side). Currently only the receiver module uses GTID positioning, but we foresee that in the future it will be good to allow the applier module to use GTID positioning as well. We therefore keep the MASTER_AUTO_POSITION option under group G3 to keep things future-proof. Worried that this obstructs the failover process again, like before? Well, that's not a problem, because MASTER_AUTO_POSITION is a slave configuration that you set only once. It then applies to all future occasions when you redirect the slave to a new immediate master, so you don't specify it on failover.
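To make this concrete, here is a minimal sketch (the host and port values are placeholders, and it assumes gtid_mode is already enabled on both servers): MASTER_AUTO_POSITION is a G3 option, so it is enabled once with a full STOP SLAVE; from then on a failover only touches receive-side (G1) options, so only the receiver module needs to stop.

-- Set GTID auto-positioning once; this is a G3 option, so both modules must stop.
STOP SLAVE;
CHANGE MASTER TO MASTER_AUTO_POSITION= 1;
START SLAVE;

-- Later, on failover, only receive-side (G1) options change, so only the
-- receiver module is stopped; the applier keeps draining the relay log.
STOP SLAVE IO_THREAD;
CHANGE MASTER TO MASTER_HOST= 'new_master_host', MASTER_PORT= 3306;
START SLAVE IO_THREAD;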

With the classifications stated above, we propose a 3-point rule stated as:
R1) For CHANGE MASTER TO options under group G1, stop only the receiver module (M1) using STOP SLAVE IO_THREAD.
R2) For CHANGE MASTER TO options under group G2, stop only the applier module (M2) using STOP SLAVE SQL_THREAD.
R3) For CHANGE MASTER TO options under group G3, stop both the receiver (M1) and applier (M2) modules using STOP SLAVE.

HOW DOES THIS RULE WORK?
Let's explore our 3-point rule (R1-R3) in more detail:

  • Starting with rule R1, we stated that we only need to stop the receiver thread to change receive options. What happens to the applier module? The applier module keeps applying pending transactions, if any. If the slave was lagging behind with a lot of transactions queued in the slave's logs, you can let the applier module catch up while you switch masters or change a receive-side configuration while keeping the same master.
  • Under rule R2, we stated that we can change the configuration of the applier module after stopping ONLY the applier threads; the receiver module can keep running while you do this. So while you fine-tune your slave's applier module, the receiver module keeps reading the master's log and copying transactions into the slave's log. These can then be applied in parallel by the slave once the applier module is up and running again.
  • Under rule R3, you stop both receiver and applier modules. So, this is analogous to
    STOP SLAVE;
    CHANGE MASTER TO <master_def>;
    used before this feature was available.

Worried about how relay log purging is handled now? Well, it's pretty simple: under rules R1 and R2, we do not purge logs implicitly on executing the CHANGE MASTER TO command, so the receiver or applier, whichever is running, just keeps processing or adding relay logs as it would if no replication thread had been stopped.

Finally, note that you need not always refer to the figure above to find which options are allowed with which thread stopped. You just need to ask yourself whether the parameter relates to the receiver or the applier and stop the corresponding thread. And if you go wrong, there are error messages to guide you back onto the right path. See the next section for usage and errors.

EXAMPLES OF USAGE
example 1:
Previously, to change the master heartbeat period, you would do:

STOP SLAVE;
CHANGE MASTER TO MASTER_HEARTBEAT_PERIOD= <value>;
START SLAVE;

Now, with this feature, you only have to stop the receiver (IO) thread, since the heartbeat has nothing to do with the applier thread(s).
 
STOP SLAVE IO_THREAD;
CHANGE MASTER TO MASTER_HEARTBEAT_PERIOD= <value>;
START SLAVE IO_THREAD;

Note that the applier thread keeps executing the transactions in the relay log while you change the heartbeat period for the master-slave connection. Likewise, you can do this with all the attributes listed under group G1 in the figure above.

example 2:
Similarly, to change applier thread attributes, you just have to stop the applier threads.
So instead of

STOP SLAVE;
CHANGE MASTER TO MASTER_DELAY=<value>;
START SLAVE;

it is enough to do the following with this feature.

STOP SLAVE SQL_THREAD;
CHANGE MASTER TO MASTER_DELAY=<value>;
START SLAVE SQL_THREAD;

Lastly, if you go wrong, there are nicely worded error messages to guide you. So in the first case, if your receiver module is active and you execute a

CHANGE MASTER TO MASTER_HEARTBEAT_PERIOD= <value>;

 you get an error saying:
This operation cannot be performed with a running slave io thread; run STOP SLAVE IO_THREAD first.


and if you forget to stop the applier module when it is required, the server will say:

This operation cannot be performed with a running slave sql thread; run STOP SLAVE SQL_THREAD first.


Lastly, the error message saying that both threads must be stopped now appears only for the MASTER_AUTO_POSITION option:
This operation cannot be performed with a running slave; run STOP SLAVE first.

Let's see some more examples:

example 3:
slave>START SLAVE;
slave>CHANGE MASTER TO MASTER_DELAY= 10;
ERROR 1900 (HY000): This operation cannot be performed with a running slave sql thread; run STOP SLAVE SQL_THREAD first

example 4:
mysql> CHANGE MASTER TO MASTER_HEARTBEAT_PERIOD= 10;
ERROR 1904 (HY000): This operation cannot be performed with a running slave io thread; run STOP SLAVE IO_THREAD first.

example 5:
mysql> CHANGE MASTER TO MASTER_AUTO_POSITION= 0;
ERROR 1198 (HY000): This operation cannot be performed with a running slave; run STOP SLAVE first

SIDE-EFFECTS?

While implementing this, we have taken special care to make sure we don't break anything for a user switching masters the usual way:

STOP SLAVE;
CHANGE MASTER to <master_def>;
START SLAVE;

There are absolutely NO side effects to worry about. Note that, as stated before, CHANGE MASTER TO will not delete relay logs if either the receiver or the applier thread is running.

Try it out and give us your feedback. As always, we look forward to hearing from you to improve this feature. Enjoy :)

Sunday, 22 September 2013

MySQL 5.7: Monitoring Replication with the Performance Schema tables

In the previous post, we had a look at the Performance Schema tables to monitor MySQL replication. We discussed why these were needed, what advantages they have to offer and what each table stands for. We also did a small exercise to see how one can select a particular table given the data that is needed. We will now dive deeper and look at the individual tables to understand what fields they are made of. We will also go through different scenarios to see how the different pieces of data can be used.

Note that these tables report the status/configuration of a replication slave, so they are empty on a master server or on a standalone server. All the SELECTs we will talk about are done on the slave. Let's take the tables one by one and go through them.
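As a quick sanity check (illustrative, not part of the original session), you can confirm this on a master or standalone server that has never been configured as a slave: the count below would simply be zero there.

mysql> select count(*) from performance_schema.replication_connection_configuration;
-- Expect a count of 0 on a server that is not acting as a replication slave.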

REPLICATION_CONNECTION_CONFIGURATION:

This table shows the configuration parameters used by the slave server for connecting to the master server. Let's set up replication with the following CHANGE MASTER TO command executed on a slave server. The CHANGE MASTER TO command adds/changes the parameters that the slave server uses for connecting to the master server. To see the definition of all the fields in this table, please visit our official documentation.

mysql> change master to master_host='127.0.0.1', master_port=13000,
       master_user='rpluser', master_password='secret', master_connect_retry=40,
       master_retry_count=10, master_ssl=1, master_ssl_verify_server_cert=1,
       master_auto_position=1;
Query OK, 0 rows affected, 2 warnings (0.51 sec)

We can now look up the connection parameters set in our table for replication connection configuration. Let's do a SELECT * on this table to see what we get:

mysql> select * from performance_schema.replication_connection_configuration\G
*************************** 1. row ***************************
                         HOST: 127.0.0.1
                         PORT: 13000
                         USER: rpluser
            NETWORK_INTERFACE:
                AUTO_POSITION: 1
                  SSL_ALLOWED: YES
                  SSL_CA_FILE:
                  SSL_CA_PATH:
              SSL_CERTIFICATE:
                   SSL_CIPHER:
                      SSL_KEY:
SSL_VERIFY_SERVER_CERTIFICATE: YES
                 SSL_CRL_FILE:
                 SSL_CRL_PATH:
    CONNECTION_RETRY_INTERVAL: 40
       CONNECTION_RETRY_COUNT: 10
1 row in set (0.00 sec)

You can see that the changes made by the CHANGE MASTER TO command are visible in this table. These values are preserved here until the next CHANGE MASTER TO or RESET SLAVE ALL command. A row in this table represents a connection between the slave and its master. Since we have only one master here, we see only one row. If MySQL supported multi-source replication, this table would have information about multiple sources: one row per source.

Note that a START SLAVE has not been done yet. The CHANGE MASTER TO command changes the internal data structures holding the connection parameters (if you know the internals, the "active_mi" variable is what I am talking about), and this table just picks up values from this data structure. So, the values shown in the table reflect the state of this data structure at the moment the SELECT was done.

Let's now move to the next table, "replication_connection_status".

REPLICATION_CONNECTION_STATUS:

This table shows the current status of the connection between the slave and its master. Technically, it talks about the IO thread's status. To view the definition of all the fields in this table, please visit our official documentation. Now, we need a START SLAVE because this command establishes the connection between the slave and its master. So, let's execute a START SLAVE (or START SLAVE IO_THREAD) command followed by a SELECT * from the "replication_connection_status" table.

mysql> start slave; Query OK, 0 rows affected (0.05 sec)

mysql> select * from performance_schema.replication_connection_status\G
*************************** 1. row ***************************
             SOURCE_UUID: 1766a057-10cd-11e3-b06a-0021cc72bfc3
               THREAD_ID: 2
           SERVICE_STATE: ON
RECEIVED_TRANSACTION_SET:
       LAST_ERROR_NUMBER: 0
      LAST_ERROR_MESSAGE:
    LAST_ERROR_TIMESTAMP: 0000-00-00 00:00:00

So, we can see from here that, at the moment the SELECT was done:

  1. The slave was reading from a master with UUID= 1766a057-10cd-11e3-b06a-0021cc72bfc3,
  2. The thread that handles this connection (the IO thread) has id=2. The thread_id field is very useful if you want to join this table with other Performance Schema/Information Schema tables. As an example, you can join this table with the table performance_schema.threads to extract a lot more information about this connection (the IO thread); see the join sketch after this list. Likewise, there are a lot more thread-wise statistics you can collect with the other existing Performance Schema tables. Note that there is a bug here, but thankfully there is an easy workaround.
  3. Further, we see that our connection (the IO thread) is ready for service (SERVICE_STATE: ON). 
Note that the service state shows one of ON/OFF/CONNECTING:
  • ON: ready to serve, i.e. if an event is executed on the master, you can assume that it will be read by the slave via this connection.
  • OFF: the connection (IO thread) is not serving, i.e. it is not ready to read from the master, or it has been killed.
  • CONNECTING: the connection (IO thread) is not established yet, but the slave is trying to connect.
It is important to note here that the names and semantics of thread_id and service_state extend to all the status tables and mean exactly the same thing everywhere. Hence, going forward, let's skip explaining these two columns for the tables that follow.
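For instance, here is a hedged sketch of the join mentioned in point 2 above; the columns pulled from performance_schema.threads are just one example of what you might want to see about the receiver (IO) thread.

mysql> select rcs.thread_id, t.name, t.processlist_state, t.processlist_time
         from performance_schema.replication_connection_status rcs
         join performance_schema.threads t using (thread_id)\G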

Let's now execute a couple of queries on the master server and see if we can get the set of received transactions on our slave server.

mysql>create table test.t(a int primary key); Query OK, 0 rows affected (0.61 sec)
mysql>insert into test.t values(1); Query OK, 1 row affected (0.06 sec)

Let's turn back to our slave server and see what the field "received_transaction_set" contains.

mysql> select received_transaction_set from performance_schema.replication_connection_status\G
*************************** 1. row ***************************
RECEIVED_TRANSACTION_SET: 1766a057-10cd-11e3-b06a-0021cc72bfc3:1-4
1 row in set (0.00 sec)

Let's now cause an error in the connection (IO thread) and look at the error-related fields in this table. To cause an error in the connection, we will provide a wrong password in our CHANGE MASTER TO command.


mysql>stop slave; Query OK, 0 rows affected (0.10 sec)
mysql>change master to master_password='abcd'; Query OK, 0 rows affected, 2 warnings (0.29 sec)

mysql> start slave;Query OK, 0 rows affected (0.05 sec)


mysql> select last_error_number, last_error_message, last_error_timestamp from 
performance_schema.replication_connection_status\G
*************************** 1. row ***************************                   
  LAST_ERROR_NUMBER: 1045                                                         
  LAST_ERROR_MESSAGE: error connecting to master 'rpluser@127.0.0.1:13000' - retry-time: 40 retries: 2                                                           
LAST_ERROR_TIMESTAMP: 0000-00-00  
1 row in set (0.00 sec)


The names and semantics of the error-related fields (number, message and timestamp) are again exactly the same across all the tables, wherever applicable. Hence, going ahead, let's skip monitoring the error-related fields in the tables that follow.

Let's now clean up the error we caused, so that we can move ahead and work with the other tables.

mysql> stop slave; Query OK, 0 rows affected (0.06 sec)
mysql> change master to master_password='secret'; Query OK, 0 rows affected, 2 warnings (0.25 sec)
mysql> start slave; Query OK, 0 rows affected (0.08 sec)

mysql> select last_error_number, last_error_message, last_error_timestamp from 
performance_schema.replication_connection_status\G
*************************** 1. row ***************************                   
  LAST_ERROR_NUMBER: 0                                                         
  LAST_ERROR_MESSAGE:                                                           
LAST_ERROR_TIMESTAMP: 0000-00-00  
1 row in set (0.00 sec) 

Great! We are done with monitoring the connection-specific parameters between a slave and its master. Let's now move to the other half of the replication module: the execution, on the slave server, of events received from the master.

REPLICATION_EXECUTE_CONFIGURATION/STATUS:

This table shows the configuration parameters that affect the execution of transactions by the slave server. Parameters stored in the table can be changed with the CHANGE MASTER TO command. Let's set up delayed replication now and see how it can be read from the replication_execute_configuration table.


mysql> stop slave; Query OK, 0 rows affected (0.11 sec)
mysql> change master to master_delay=150; Query OK, 0 rows affected (0.29 sec)
mysql> start slave; Query OK, 0 rows affected (0.06 sec)

mysql> select desired_delay from performance_schema.replication_execute_configuration\G
*************************** 1. row ***************************
DESIRED_DELAY: 150
1 row in set (0.00 sec)

Let's now execute an event on our master and see whether it is executed on the slave or not.

mysql> insert into test.t values(2); Query OK, 1 row affected (0.06 sec)

Let's move to the slave and make sure the 150-second delay has not elapsed yet.

mysql> select remaining_delay from performance_schema.replication_execute_status\G
*************************** 1. row ***************************
REMAINING_DELAY: 147
1 row in set (0.00 sec)

Now we should be able to see that although the insert(2) has been received at the slave, it has not been executed yet. The last time we looked at our received_transaction_set (before insert(2)), it was 1766a057-10cd-11e3-b06a-0021cc72bfc3:1-4, so insert(2) should be assigned the GTID 1766a057-10cd-11e3-b06a-0021cc72bfc3:5.

mysql> select received_transaction_set from performance_schema.replication_connection_status\G
*************************** 1. row ***************************
RECEIVED_TRANSACTION_SET: 1766a057-10cd-11e3-b06a-0021cc72bfc3:5
1 row in set (0.00 sec)

OK, we have received the event. Now let's see whether this insert(2) with GTID 1766a057-10cd-11e3-b06a-0021cc72bfc3:5 has been executed yet.

mysql> select @@global.gtid_executed\G
*************************** 1. row ***************************
@@global.gtid_executed: 1766a057-10cd-11e3-b06a-0021cc72bfc3:1-4
1 row in set (0.00 sec)

Let's now wait until the delay has elapsed.

mysql> select remaining_delay from performance_schema.replication_execute_status\G
*************************** 1. row ***************************
REMAINING_DELAY: 0
1 row in set (0.00 sec)

And check if the insert(2) has been executed.

mysql> select @@global.gtid_executed\G
*************************** 1. row ***************************
@@global.gtid_executed: 1766a057-10cd-11e3-b06a-0021cc72bfc3:1-5
1 row in set (0.00 sec)

Executed. Great!

Before going ahead to have a look at the other goodies in store, let's review a couple of things:
  • SQL THREAD: executes, on the slave, the events received from the master.
  • MULTI-THREADED SLAVE (MTS): with only one thread (the SQL thread) reading and applying events that were applied by multiple clients concurrently on the master, the slave starts lagging behind the master. In this case, one can choose to opt for MTS. Simply put, we have a buffer (the relay log) on the slave server that stores the events executed on the master server. The coordinator thread reads the relay log and assigns events to worker threads, which execute the events assigned to them in parallel on the slave. The coordinator thread is the scheduler, and the worker threads are responsible for executing the events.

REPLICATION_EXECUTE_STATUS_BY_COORDINATOR:

This table is used in two ways depending on whether the slave is operating with one applier (MTS disabled) or with multiple appliers executing events in parallel (MTS enabled). When operating with one applier, this table reports the status of that applier. With multiple appliers, the same table reports the status of the scheduler (coordinator thread). Let's explore the single-applier case.

mysql> select * from performance_schema.replication_execute_status_by_coordinator\G
*************************** 1. row ***************************
           THREAD_ID: 13
       SERVICE_STATE: ON
   LAST_ERROR_NUMBER: 0
  LAST_ERROR_MESSAGE:
LAST_ERROR_TIMESTAMP: 0000-00-00 00:00:00
1 row in set (0.00 sec)

We see that the applier's (SQL thread's) id is 13 and that it is alive (SERVICE_STATE: ON). For now there are only two states that we show: ON/OFF. The semantics of the error fields are exactly the same as explained earlier, hence we skip them here.

The multiple-applier case (MTS enabled) is similar, except that this table then shows the scheduler's (coordinator's) thread id. Hence we will skip the explanations and move to the next table. To read more about this table, please visit our official documentation.

REPLICATION_EXECUTE_STATUS_BY_WORKER (Single Applier Case):

Let's see what the table replication_execute_status_by_worker shows when the slave server is operating in single-applier mode. Conceptually, we don't have a scheduler and multiple appliers, so the table should be empty.

mysql> select * from performance_schema.replication_execute_status_by_worker\G
Empty set (0.00 sec)

And it is empty :)

REPLICATION_EXECUTE_STATUS_BY_WORKER (Multiple Appliers Case):

Let's now set up our slave to use multiple appliers (MTS enabled) and then see how we can monitor the replication status of our scheduler and appliers (coordinator and worker threads). Let's call our multiple appliers "workers".

mysql> stop slave; Query OK, 0 rows affected (0.10 sec)
mysql> SET GLOBAL slave_parallel_workers=2; Query OK, 0 rows affected (0.00 sec)

Note that a START SLAVE is necessary to have the workers ready for action. Just setting slave_parallel_workers=2 doesn't spawn the 2 workers.

mysql> start slave; Query OK, 0 rows affected, 1 warning (0.14 sec)

mysql> select * from performance_schema.replication_execute_status_by_worker\G
*************************** 1. row ***************************
            WORKER_ID: 0
            THREAD_ID: 16
        SERVICE_STATE: ON
LAST_SEEN_TRANSACTION:
    LAST_ERROR_NUMBER: 0
   LAST_ERROR_MESSAGE:
 LAST_ERROR_TIMESTAMP: 0000-00-00 00:00:00
*************************** 2. row ***************************
            WORKER_ID: 1
            THREAD_ID: 17
        SERVICE_STATE: ON
LAST_SEEN_TRANSACTION:
    LAST_ERROR_NUMBER: 0
   LAST_ERROR_MESSAGE:
 LAST_ERROR_TIMESTAMP: 0000-00-00 00:00:00
2 rows in set (0.00 sec)

So, we see that we have two rows representing the two workers. The worker_id field here is the same as the id field in the mysql.slave_worker_info table, so you can again join the two tables to know more about a worker thread (see the join sketch after this list). The service_state (ON/OFF) has nothing new to add.
The interesting parts here are:
  1. You get per-worker errors. So, if two of your workers have errored out in a group, you can identify both errors, as opposed to only one through SHOW SLAVE STATUS.
  2. last_seen_transaction is a new field we have added to help diagnose MTS failures better. It lets you find the GTID of the transaction being executed by each worker at the moment an error happened. These values are preserved in case of an error, and also while a worker is idle after executing its transactions.
To read more about this table and field-wise descriptions, please visit our official documentation for this table.
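As an aside, here is a hedged sketch of the join mentioned above. It assumes, as described in this post, that WORKER_ID matches the Id column of mysql.slave_worker_info, and pulls the relay log position each worker has recorded.

mysql> select w.worker_id, w.thread_id, w.service_state, i.Relay_log_name, i.Relay_log_pos
         from performance_schema.replication_execute_status_by_worker w
         join mysql.slave_worker_info i on w.worker_id = i.Id\G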

CONCLUDING REMARKS:

The goal of this post was to help you get familiar with the Performance Schema tables relating to replication. Please try out this new interface for monitoring MySQL replication and let us know your feedback. To read more on these Performance Schema tables, you can visit the blog on multi-source replication by Rith; these Performance Schema tables look really powerful with multi-source replication. The motive behind this feature was to make it easier to monitor the health and performance of MySQL replication. We hope this new interface to monitor MySQL replication makes our users' lives easier.

Saturday, 21 September 2013

MySQL 5.7: Introducing the Performance Schema tables to monitor Replication


MySQL-5.6 was our best release ever and we are happy to see people praising it. This motivates us to work even harder for our next release. The MySQL-5.7.2 DMR is out and already contains a number of MySQL replication features, listed below:

    - mysqlbinlog idempotent mode
    - mysqlbinlog --rewrite-db option
    - Dump thread does not take binary log lock
    - Lossless semisync
    - intra-schema multi-threaded slave
    - Performance Schema tables for replication monitoring (this post :) )

We want to make MySQL-5.7 huge and these are tiny steps towards the same.

This post introduces the Performance Schema tables to monitor MySQL replication. You can find the official documentation for the same here. This post gives a brief overview of these Performance Schema tables in an attempt to prepare you to dive deeper into the details. Please follow my second blog post on this topic to know the details on the individual tables.

Why Performance Schema Tables:

Replication is a delicate module of MySQL and people want to check its status frequently. To monitor MySQL replication health, we have long used the SHOW SLAVE STATUS command. But replication is growing fast and this static command can no longer match people's expectations. SHOW SLAVE STATUS does not scale, in the sense that there are multiple parts of the slave server: a receiver part, an applier part, intermediate queues, etc., and in some cases multiple instances of each part. It has 54 fields, as of now, interleaving different pieces of information. It has now reached a point where we foresee that not having an SQL interface to query exactly what is required from the replication status would make monitoring tedious. So, we decided to have tables to monitor replication, and we have put all of them in the performance_schema database.
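To see the contrast in a nutshell (a hypothetical pair of commands just to illustrate the point), compare scanning the full SHOW SLAVE STATUS output for one value against asking for exactly that value:

-- The old way: scan 50+ interleaved fields for the one you need.
mysql> show slave status\G

-- The new way: ask for exactly what you need through SQL.
mysql> select service_state from performance_schema.replication_connection_status;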

The motivation behind having tables is:

  • To make it easier to access exactly what is required, through an SQL interface,
  • Pieces of data can be assigned to variables and thus used in stored procedures etc. (see the sketch after this list),
  • Easier testing with SELECT item from Performance Schema tables,
  • Split the logically unrelated information into different tables,
  • Cross-reference monitoring data seamlessly by joining with other Performance Schema tables, Information_Schema tables etc,
  • Easier to extend and
  • More flexibility to accommodate a lot of replication information but still be organized and easy to use.
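For example, here is a minimal sketch of the second point above (the variable name is just an illustration): a single field from a status table is read into a user variable, which could then drive a stored procedure or an alerting routine.

mysql> select service_state into @receiver_state
         from performance_schema.replication_connection_status;
mysql> select @receiver_state;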

Additionally, we noted that SHOW SLAVE STATUS used a lot of technical jargon (words like IO, SQL, Relay_log, etc.). In order to make it easier to monitor replication, we decided to hide these implementation-specific details so that the names are easier to understand. We have tried our best to make the names of the tables and fields convenient for everyone to understand.

Why six tables? What does each stand for?

The idea is to come up with a better organized and easily extensible interface. To start with, we have split the information under SHOW SLAVE STATUS into different parts based on:

  1. Connection information or execute information
  2. Within each part of (1), configuration-related and status-related fields are put into different tables.

Based on the above classifications, we have got four tables:

    a)    replication_connection_configuration,
    b)    replication_connection_status,
    c)    replication_execute_configuration and
    d)    replication_execute_status

Note that keeping all the replication applier module status information under (d) would become confusing again, so it makes more sense to split it based on whether it is overall status, the coordinator's status or a worker's status. So, we have two more tables, namely:

    e)    replication_execute_status_by_coordinator and
    f)     replication_execute_status_by_worker.

Note that (d) is responsible for showing the overall status of the applier module in replication, (e) relates to the coordinator thread and (f) refers to the worker threads.

See the tree diagram below for the basis of division of data into different tables as discussed above:


To make it easier to select a table when you have a configuration/status parameter in mind, just ask yourself the following questions:

  • Is it related to the connection between the slave and its master?
  • If yes, you have narrowed your scope down to only (a) and (b).
  • If not, the others apply (c, d, e or f).
  • Suppose the data you are looking for relates to the connection. Now just ask yourself whether it is a configuration parameter or relates to the status of the connection between the slave and its master, and there you are: you know which table to look at.
  • If the data you are looking for relates to the execution of events received at your slave, (c, d, e, f) are the tables you have. Now ask yourself the same question: is the data you need a configuration parameter (c), or does it relate to the status of execution of events at the slave (d, e or f)?

For those who know the internals and have been using SHOW SLAVE STATUS, the names might look new and it could take a little time to get used to them. So, let's now see how these tables relate to the different replication threads. If you are not familiar with the threads and buffers, you don't really need to understand them; this new replication monitoring interface is designed to make sure you don't have to, and hence you can skip this section. If you are familiar with the replication threads and want to understand the tables in terms of threads, you can use the table below to map these tables to the replication thread you want to monitor. But before we move ahead, let's review a couple of things:

  • IO THREAD: responsible for the connection between the master and the slave; it also copies the queries executed on the master onto the slave.
  • SQL THREAD: executes, on the slave, the events received from the master.
  • MULTI-THREADED SLAVE (MTS): with only one thread (the SQL thread) reading and applying events that were applied by multiple clients concurrently on the master, the slave starts lagging behind the master. In this case, one can choose to opt for MTS. Simply put, we have a buffer (the relay log) on the slave server that stores the events executed on the master server. The coordinator thread reads the relay log and assigns events to worker threads, which execute the events assigned to them in parallel on the slave. The coordinator thread is the scheduler, and the worker threads are responsible for executing the events.

With the background set, let's now look at the table below to understand the mapping between the Performance Schema tables and the replication threads they monitor.









To end with, it may interest you to note that these Performance Schema tables not only give you an easier way to monitor replication but also allow you to easily obtain all the information available about replication threads and coordinates through a join with the other Performance Schema tables, Information Schema tables, mysql.slave_worker_info etc.

Go check it out and let us know if it allows you to know more about the replication status. We want to have your valuable feedback to improve this and add a lot more information to this interface.

Want to read more and explore the fields, the how-tos and so on? Here is the deep dive in the next post :) Enjoy!

Tuesday, 10 September 2013

Hello World...

Hello,

I am Shivji Kumar Jha (shiv), a graduate from BIT Mesra who fell in love with MySQL some time ago. I am one of those lucky ones who got to work at MySQL as well, and I thoroughly enjoy the experience every single day. I believe this is the golden age of technology and I aim to contribute my bit to it, affecting millions of people through an open source software, the most popular open source database on the web: MySQL! At MySQL I am currently working as a software developer with the replication team, which provides the high-availability solution built into the MySQL server. My area of expertise revolves around analyzing and enhancing the performance of database replicas, and enhancing the replication monitoring tools. This blog is an attempt to reach out to people and share the tiny details I know about MySQL Replication. The views expressed on this blog are my own and do NOT necessarily reflect the views of Oracle Corporation.


I do not claim that all information available on this blog is perfectly correct. So, if you find something that is wrongly published, please let me know. Also, in case of doubts, please use the comments section of this blog. Together, let's talk MySQL Replication :)