
21.2.4 Production Deployment of InnoDB Cluster

When working in a production environment, the MySQL server instances which make up an InnoDB cluster run on multiple host machines as part of a network, rather than on a single machine as described in Section 21.2.5, “Sandbox Deployment of InnoDB Cluster”. Before proceeding with these instructions, you must install the required software on each machine that you intend to add as a server instance to your cluster; see Section 21.2.3, “Methods of Installing”.

The following diagram illustrates the scenario you work with in this section:

Figure 21.2 Production Deployment

Three MySQL servers are grouped together as a production InnoDB cluster. One of the servers is the primary instance, and the other two are secondary instances. The IP address for the primary server is 139.59.177.10, and the IP addresses for the two secondary instances are 139.59.177.11 and 139.59.177.12. MySQL Router connects a client application to the primary instance. The admin capability in MySQL Shell interacts directly with the production InnoDB cluster.

Important

Unlike a sandbox deployment, where all instances are deployed locally to one machine to which AdminAPI has local file access and can persist configuration changes, in a production deployment you must persist any configuration changes on each instance. How you do this depends on the version of MySQL running on the instance; see Persisting Settings.

To pass a server's connection information to AdminAPI, use URI-like connection strings or a data dictionary; see Section 4.2.5, “Connecting to the Server Using URI-Like Strings or Key-Value Pairs”. In this documentation, URI-like strings are shown.
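For example, the URI-like string ic@ic-1:3306 connects as the user ic to the host ic-1 on port 3306 (ic and ic-1 are the example user and host names used throughout this section). You can test such a string by opening a global session with it:

mysql-js> shell.connect('ic@ic-1:3306')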

User Privileges

The user account used to administer an instance does not have to be the root account; however, the user needs to be assigned full read and write privileges on the InnoDB cluster metadata tables in addition to full MySQL administrator privileges (SUPER, GRANT OPTION, CREATE, DROP, and so on). The preferred method to create users to administer the cluster is using the clusterAdmin option with the dba.configureInstance() and Cluster.addInstance() operations. In this procedure the user ic is shown in examples.

If only read operations are needed (such as for monitoring purposes), an account with more restricted privileges can be used. See Configuring Users for InnoDB Cluster.
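As an illustration, a restricted monitoring account might be created along the following lines. This is only a sketch: the account name icmonitor is illustrative, and the exact privileges required depend on your MySQL version, so consult Configuring Users for InnoDB Cluster for the definitive list.

CREATE USER 'icmonitor'@'%' IDENTIFIED BY 'password';
-- Read access to the InnoDB cluster metadata schema
GRANT SELECT ON mysql_innodb_cluster_metadata.* TO 'icmonitor'@'%';
-- Read access to replication and group membership status
GRANT SELECT ON performance_schema.* TO 'icmonitor'@'%';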

User Accounts Created by InnoDB Cluster

As part of using Group Replication, InnoDB cluster creates internal users which enable replication between the servers in the cluster. These users are internal to the cluster, and the generated user names follow the naming scheme mysql_innodb_cluster_r[10_numbers]. The hostname used for the internal users depends on whether the ipWhitelist option has been configured. If ipWhitelist is not configured, it defaults to AUTOMATIC and the internal users are created using both the wildcard % character and localhost for the hostname value. When ipWhitelist has been configured, an internal user is created for each address in the list. For more information, see Creating a Whitelist of Servers.

Each internal user has a randomly generated password. The randomly generated users are given the following grants:

GRANT REPLICATION SLAVE ON *.* TO internal_user;

The internal user accounts are created on the seed instance and then replicated to the other instances in the cluster. The internal users are:

  • generated when creating a new cluster by issuing dba.createCluster()

  • generated when adding a new instance to the cluster by issuing Cluster.addInstance().

In addition, the Cluster.rejoinInstance() operation can also result in new internal users being generated when the ipWhitelist option is used to specify a hostname. For example, by issuing:

Cluster.rejoinInstance({ipWhitelist: "192.168.1.1/22"});

all previously existing internal users are removed and new internal users are created, taking the ipWhitelist value used into account.
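To see which internal accounts have been generated on an instance, you can query the mysql.user table for the naming scheme described above (a simple illustrative query):

SELECT user, host FROM mysql.user WHERE user LIKE 'mysql_innodb_cluster_r%';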

For more information on the internal users required by Group Replication, see Section 18.2.1.3, “User Credentials”.

Configuring Hostname

The production instances which make up a cluster run on separate machines; therefore, each machine must have a unique host name and be able to resolve the host names of the other machines which run server instances in the cluster. If this is not the case, you can:

  • configure each machine to map the IP of each other machine to a hostname. See your operating system documentation for details. This is the recommended solution.

  • set up a DNS service

  • configure the report_host variable in the MySQL configuration of each instance to a suitable externally reachable address

In this procedure the host name ic-number is used in examples.

To verify whether the hostname of a MySQL server is correctly configured, execute the following query to see how the instance reports its own address to other servers and try to connect to that MySQL server from other hosts using the returned address:

SELECT coalesce(@@report_host, @@hostname);
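If the returned address cannot be resolved by the other machines, you can set report_host in the instance's option file to an externally reachable address, for example (using the example host name ic-1; adapt to your environment):

[mysqld]
report_host=ic-1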

Persisting Settings

The AdminAPI commands you use to work with a cluster and its server instances modify the configuration of the instance. Depending on the way MySQL Shell is connected to the instance and the version of MySQL installed on the instance, these configuration changes can be persisted to the instance automatically. Persisting settings to the instance ensures that configuration changes are retained after the instance restarts; for background information, see SET PERSIST. This is essential for reliable cluster usage: for example, if settings are not persisted, an instance which has been added to a cluster does not rejoin the cluster after a restart, because the configuration changes are lost. Persisting changes is required after the following operations:

  • dba.configureInstance()

  • dba.createCluster()

  • Cluster.addInstance()

  • Cluster.removeInstance()

  • Cluster.rejoinInstance()

Instances which meet the following requirements support persisting configuration changes automatically:

  • the instance is running MySQL version 8.0.11 or later

  • persisted_globals_load is set to ON (the default)

  • the instance was not started with the --no-defaults option

Instances which do not meet these requirements do not support persisting configuration changes automatically. When AdminAPI operations result in changes to the instance's settings that need to be persisted, you receive warnings such as:

WARNING: On instance 'localhost:3320' membership change cannot be persisted since MySQL version 5.7.21 
does not support the SET PERSIST command (MySQL version >= 8.0.5 required). Please use the 
<Dba>.configureLocalInstance command locally to persist the changes.

When AdminAPI commands are issued against a MySQL instance running on the same machine as MySQL Shell, in other words the local instance, MySQL Shell persists configuration changes directly to the instance. On local instances which support persisting configuration changes automatically, configuration changes are persisted to the instance's mysqld-auto.cnf file and the configuration change does not require any further steps. On local instances which do not support persisting configuration changes automatically, you need to make the changes locally; see Configuring Instances with dba.configureLocalInstance().

When run against a remote instance, in other words an instance other than the one which MySQL Shell is currently running on, if the instance supports persisting configuration changes automatically, the AdminAPI commands persist configuration changes to the instance's mysqld-auto.cnf option file. If a remote instance does not support persisting configuration changes automatically, the AdminAPI commands cannot automatically configure the instance's option file. This means that AdminAPI commands can read information from the instance, for example to display the current configuration, but changes to the configuration cannot be persisted to the instance's option file. In this case, you need to persist the changes locally; see Configuring Instances with dba.configureLocalInstance().
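For example, to persist the changes on a remote instance which does not support doing so automatically, you would log in to that machine and run MySQL Shell locally against the instance (a sketch; dba.configureLocalInstance() prompts for the path to the option file):

shell> mysqlsh
mysql-js> dba.configureLocalInstance('ic@localhost:3306')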

Verbose Logging

When working with a production deployment, it can be useful to configure verbose logging for MySQL Shell; the information in the log can help you to find and resolve any issues that might occur when you are preparing server instances to work as part of an InnoDB cluster. To start MySQL Shell with a verbose logging level, use the --log-level option:

shell> mysqlsh --log-level=DEBUG3

The DEBUG3 level is recommended; see --log-level for more information. When DEBUG3 is set, the MySQL Shell log file contains lines such as Debug: execute_sql( ... ), which contain the SQL queries that are executed as part of each AdminAPI call. The log file generated by MySQL Shell is located at ~/.mysqlsh/mysqlsh.log on Unix-based systems; on Microsoft Windows systems it is located at %APPDATA%\MySQL\mysqlsh\mysqlsh.log. See MySQL Shell Application Log for more information.

In addition to enabling the MySQL Shell log level, you can configure the amount of output AdminAPI provides in MySQL Shell after issuing each command. To increase the amount of AdminAPI output, in MySQL Shell issue:

mysql-js> dba.verbose=2

This enables the maximum output from AdminAPI calls. The available levels of output are:

  • 0 or OFF is the default. This provides minimal output and is the recommended level when not troubleshooting.

  • 1 or ON adds verbose output from each call to the AdminAPI.

  • 2 adds debug output to the verbose output providing full information about what each call to AdminAPI executes.

Configuring Production Instances

AdminAPI provides the dba.configureInstance() function, which checks whether an instance is suitably configured for InnoDB cluster usage and configures the instance if it finds any settings which are not compatible with InnoDB cluster. You run the dba.configureInstance() command against an instance and it checks all of the settings required for InnoDB cluster usage. If the instance does not require configuration changes, there is no need to modify its configuration, and the dba.configureInstance() command output confirms that the instance is ready for InnoDB cluster usage. If any changes are required to make the instance compatible with InnoDB cluster, a report of the incompatible settings is displayed, and you can choose to let the command make the changes to the instance's option file. Depending on the way MySQL Shell is connected to the instance, and the version of MySQL running on the instance, you can make these changes permanent by persisting them to a remote instance's option file; see Persisting Settings. Instances which do not support persisting configuration changes automatically require that you configure the instance locally; see Configuring Instances with dba.configureLocalInstance(). Alternatively, you can make the changes to the instance's option file manually; see Section 4.2.2.2, “Using Option Files” for more information. Regardless of the way you make the configuration changes, you might have to restart MySQL to ensure the configuration changes are detected.

The syntax of the dba.configureInstance() command is:

dba.configureInstance([instance][, options])

where instance is an instance definition, and options is a data dictionary with additional options to configure the operation. The command returns a descriptive text message about the operation's result.

The instance definition is the connection data for the instance. If the target instance already belongs to an InnoDB cluster an error is generated and the process fails.

The options dictionary can contain the following:

  • mycnfPath - the path to the MySQL option file of the instance.

  • outputMycnfPath - alternative output path to write the MySQL option file of the instance.

  • password - the password to be used by the connection.

  • clusterAdmin - the name of an InnoDB cluster administrator user to be created. The supported format is the standard MySQL account name format. Both identifiers and strings are supported for the user name and host name; unquoted input is treated as a string by default.

  • clusterAdminPassword - the password for the InnoDB cluster administrator account being created using clusterAdmin.

  • clearReadOnly - a boolean value used to confirm that super_read_only should be set to off, see Super Read-only and Instances.

  • interactive - a boolean value used to disable the interactive wizards in the command execution, so that prompts are not provided to the user and confirmation prompts are not shown.

  • restart - a boolean value used to indicate that a remote restart of the target instance should be performed to finalize the operation.

The connection password can be contained in the instance definition. Alternatively, it can be overwritten by specifying it using the password option.
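As an illustration, several of these options can be combined in a single call; the option file path and account names shown here are hypothetical and should be adapted to your environment:

mysql-js> dba.configureInstance('ic@ic-1:3306', {mycnfPath: '/etc/my.cnf', clusterAdmin: "'icadmin'@'ic-1%'", clusterAdminPassword: 'password', interactive: false})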

Once dba.configureInstance() is issued against an instance, the command checks if the instance's settings are suitable for InnoDB cluster usage. A report is displayed which shows the settings required by InnoDB cluster. If the instance does not require any changes to its settings, you can use it in an InnoDB cluster and can proceed to Creating the Cluster. If the instance's settings are not valid for InnoDB cluster usage, the dba.configureInstance() command displays the settings which require modification. Before configuring the instance, you are prompted to confirm the changes, which are shown in a table with the following information:

  • Variable - the invalid configuration variable.

  • Current Value - the current value for the invalid configuration variable.

  • Required Value - the required value for the configuration variable.

How you proceed depends on whether the instance supports persisting settings; see Persisting Settings. When dba.configureInstance() is issued against a MySQL instance running on the same machine as MySQL Shell, in other words the local instance, it attempts to configure the instance automatically. When dba.configureInstance() is issued against a remote instance which supports persisting configuration changes automatically, you can choose to persist the changes as part of the operation. If a remote instance does not support persisting the changes to configure it for InnoDB cluster usage, you have to configure the instance locally. See Configuring Instances with dba.configureLocalInstance().

In general, a restart of the instance is not required after dba.configureInstance() configures the option file, but for some specific settings a restart might be required. This information is shown in the report generated after issuing dba.configureInstance(). If the instance supports the RESTART statement, MySQL Shell can shut down and then start the instance. This ensures that the changes made to the instance's option file are detected by mysqld. For more information, see RESTART.
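If you prefer to restart the instance yourself, on versions of MySQL which support it the RESTART statement can be issued from any client session:

RESTART;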

Note

After executing a RESTART statement, the current connection to the instance is lost. If auto-reconnect is enabled, the connection is reestablished after the server restarts. Otherwise, the connection must be reestablished manually.

The dba.configureInstance() method verifies that a suitable user is available for cluster usage, which is used for connections between members of the cluster; see User Privileges. The recommended way to add a suitable user is to use the clusterAdmin and clusterAdminPassword options, which enable you to configure the cluster user and password when calling the function. For example:

mysql-js> dba.configureInstance('ic@ic-1:3306', {clusterAdmin: "'icadmin'@'ic-1%'", clusterAdminPassword: 'password'});

This user is granted the privileges to be able to administer the cluster. The format of the user names accepted follows the standard MySQL account name format, see Section 6.2.4, “Specifying Account Names”.

If you do not specify a user to administer the cluster, in interactive mode a wizard enables you to choose one of the following options:

  • enable remote connections for the root user

  • create a new user, the equivalent of specifying the clusterAdmin and clusterAdminPassword options

  • no automatic configuration, in which case you need to manually create the user

The following example demonstrates issuing dba.configureLocalInstance() against a local instance and the resulting report of settings which require modification.

mysql-js> dba.configureLocalInstance('root@localhost:3306')

Please provide the password for 'root@localhost:3306':

Please specify the path to the MySQL configuration file: /etc/mysql/mysql.conf.d/mysqld.cnf
Validating instance...

The configuration has been updated but it is required to restart the server.
{
  "config_errors": [
    {
      "action": "restart",
      "current": "OFF",
      "option": "enforce_gtid_consistency",
      "required": "ON"
    },
    {
      "action": "restart",
      "current": "OFF",
      "option": "gtid_mode",
      "required": "ON"
    },
    {
      "action": "restart",
      "current": "0",
      "option": "log_bin",
      "required": "1"
    },
    {
      "action": "restart",
      "current": "0",
      "option": "log_slave_updates",
      "required": "ON"
    },
    {
      "action": "restart",
      "current": "FILE",
      "option": "master_info_repository",
      "required": "TABLE"
    },
    {
      "action": "restart",
      "current": "FILE",
      "option": "relay_log_info_repository",
      "required": "TABLE"
    },
    {
      "action": "restart",
      "current": "OFF",
      "option": "transaction_write_set_extraction",
      "required": "XXHASH64"
    }
  ],
  "errors": [],
  "restart_required": true,
  "status": "error"
}
mysql-js>
Tip

If the instance has super_read_only=ON then you might need to confirm that AdminAPI can set super_read_only=OFF. See Super Read-only and Instances for more information.
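If you have verified that it is safe to disable super_read_only, you can confirm this up front by passing the clearReadOnly option described above, for example (using the example host ic-1):

mysql-js> dba.configureInstance('ic@ic-1:3306', {clearReadOnly: true})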

Creating the Cluster

Once you have prepared your instances, use the dba.createCluster() function to create the cluster. The instance which MySQL Shell is currently connected to is used as the seed instance for the cluster. The seed instance is replicated to the other instances which you add to the cluster, making them replicas of the seed instance.

MySQL Shell must be connected to an instance before you can create a cluster because when you issue dba.createCluster(name) MySQL Shell creates a MySQL protocol session to the server instance connected to the MySQL Shell's current global session. Use the dba.createCluster(name) function to create the cluster and assign the returned cluster to a variable called cluster:

mysql-js> var cluster = dba.createCluster('testCluster')

Validating instance at ic@ic-1:3306...

This instance reports its own address as ic-1

Instance configuration is suitable.
Creating InnoDB cluster 'testCluster' on 'ic@ic-1:3306'...

Adding Seed Instance...
Cluster successfully created. Use Cluster.addInstance() to add MySQL instances.
At least 3 instances are needed for the cluster to be able to withstand up to
one server failure.

The returned Cluster object uses a new session, independent from the MySQL Shell's main session. This ensures that if you change the MySQL Shell global session, the Cluster object maintains its session to the instance.
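If you later start a new MySQL Shell session, you do not create the cluster again; instead, retrieve the existing Cluster object with dba.getCluster() (shown with the example cluster name used in this section):

mysql-js> var cluster = dba.getCluster('testCluster')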

Tip

If the instance has super_read_only=ON then you might need to confirm that AdminAPI can set super_read_only=OFF. See Super Read-only and Instances for more information.

Note

If you encounter an error related to metadata being inaccessible, you might have the loopback network interface configured. For correct InnoDB cluster usage, disable the loopback interface.

To check the cluster has been created, use the cluster instance's status() function. See Checking a cluster's Status with Cluster.status().

Tip

Once server instances belong to a cluster, it is important to only administer them using MySQL Shell and AdminAPI. Attempting to manually change the configuration of Group Replication on an instance once it has been added to a cluster is not supported. Similarly, modifying server variables critical to InnoDB cluster, such as server_uuid, after an instance is configured using AdminAPI is not supported.

When you create a cluster using MySQL Shell 8.0.14 and later, you can set the timeout before instances are expelled from the cluster, for example when they become unreachable. Pass the expelTimeout option to the dba.createCluster() operation, which configures the group_replication_member_expel_timeout variable on the seed instance. The expelTimeout option can take an integer value in the range of 0 to 3600. All instances running MySQL server 8.0.13 and later which are added to a cluster with expelTimeout configured are automatically configured to have the same expelTimeout value as configured on the seed instance.
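For example, to create a cluster where instances which are unreachable for 60 seconds are expelled (the expelTimeout option is described above; the cluster name is the example used in this section):

mysql-js> var cluster = dba.createCluster('testCluster', {expelTimeout: 60})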

For information on the other options which you can pass to dba.createCluster(), see Section 21.4, “Working with InnoDB Cluster”.

Adding Instances to a Cluster

Use the cluster.addInstance(instance) function to add more instances to the cluster, where instance is the connection information for a configured instance; see Configuring Production Instances. You need a minimum of three instances in the cluster to make it tolerant to the failure of one instance. Adding further instances increases the tolerance to failure of an instance. To add an instance to the cluster, issue:

mysql-js> cluster.addInstance('ic@ic-2:3306')
A new instance will be added to the InnoDB cluster. Depending on the amount of
data on the cluster this might take from a few seconds to several hours.

Please provide the password for 'ic@ic-2:3306': ********
Adding instance to the cluster ...

Validating instance at ic-2:3306...

This instance reports its own address as ic-2

Instance configuration is suitable.

The instance 'ic@ic-2:3306' was successfully added to the cluster.

To verify the instance has been added, use the cluster instance's status() function. For example, this is the status output of a sandbox cluster after adding a second instance:

mysql-js> cluster.status()
{
    "clusterName": "testCluster", 
    "defaultReplicaSet": {
        "name": "default", 
        "primary": "localhost:3310", 
        "ssl": "REQUIRED", 
        "status": "OK_NO_TOLERANCE", 
        "statusText": "Cluster is NOT tolerant to any failures.", 
        "topology": {
            "localhost:3310": {
                "address": "localhost:3310", 
                "mode": "R/W", 
                "readReplicas": {}, 
                "role": "HA", 
                "status": "ONLINE"
            }, 
            "localhost:3320": {
                "address": "localhost:3320", 
                "mode": "R/O", 
                "readReplicas": {}, 
                "role": "HA", 
                "status": "ONLINE"
            }
        }
    }, 
    "groupInformationSourceMember": "mysql://root@localhost:3310"
}

How you proceed depends on whether the instance is local or remote to the machine MySQL Shell is running on, and whether the instance supports persisting configuration changes automatically; see Persisting Settings. If the instance supports persisting configuration changes automatically, you do not need to persist the settings manually and can either add more instances or continue to the next step. If the instance does not support persisting configuration changes automatically, you have to configure the instance locally; see Configuring Instances with dba.configureLocalInstance(). This is essential to ensure that instances rejoin the cluster in the event that they leave it.

Tip

If the instance has super_read_only=ON then you might need to confirm that AdminAPI can set super_read_only=OFF. See Super Read-only and Instances for more information.

Once you have your cluster deployed, you can configure MySQL Router to provide high availability; see Section 21.3, “Using MySQL Router with InnoDB Cluster”.