OpenShift Container Platform provides a container image for running MariaDB. This image can provide database services based on username, password, and database name settings provided through environment variables.
These images come in two flavors, depending on your needs:
RHEL 7
CentOS 7
RHEL 7 Based Images
The RHEL 7 images are available through the Red Hat Registry:
$ docker pull registry.access.redhat.com/rhscl/mariadb-100-rhel7
$ docker pull registry.access.redhat.com/rhscl/mariadb-101-rhel7
CentOS 7 Based Images
These images are available on Docker Hub:
$ docker pull openshift/mariadb-100-centos7
$ docker pull centos/mariadb-101-centos7
To use these images, you can either access them directly from these registries or push them into your OpenShift Container Platform Docker registry. Additionally, you can create an ImageStream that points to the image, either in your Docker registry or at the external location. Your OpenShift Container Platform resources can then reference the ImageStream. You can find example ImageStream definitions for all the provided OpenShift Container Platform images.
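As a sketch of the image stream approach, an image stream tag can be created directly from the external registry with oc import-image. The stream name mariadb and tag 10.1 below are illustrative choices, not names mandated by the platform:

```shell
# Create an image stream tag in the current project that tracks the
# external image. The names "mariadb" and "10.1" are illustrative.
$ oc import-image mariadb:10.1 \
    --from=registry.access.redhat.com/rhscl/mariadb-101-rhel7 \
    --confirm
```

Deployment configurations in the project can then reference the image through the mariadb image stream instead of the external registry location.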
The first time you use the shared volume, the database is created along with the database administrator user and the MariaDB root user (if you specify the MYSQL_ROOT_PASSWORD environment variable). Afterwards, the MariaDB daemon starts up. If you are re-attaching the volume to another container, then the database, database user, and administrator user are not created, and the MariaDB daemon starts.
The following command creates a new database pod with MariaDB running in a container:
$ oc new-app \
    -e MYSQL_USER=<username> \
    -e MYSQL_PASSWORD=<password> \
    -e MYSQL_DATABASE=<database_name> \
    registry.access.redhat.com/rhscl/mariadb-101-rhel7
OpenShift Container Platform uses Software Collections (SCLs) to install and launch MariaDB. If you want to execute a MariaDB command inside of a running container (for debugging), you must invoke it using bash.
To do so, first identify the name of the running MariaDB pod. For example, you can view the list of pods in your current project:
$ oc get pods
Then, open a remote shell session to the pod:
$ oc rsh <pod>
When you enter the container, the required SCL is automatically enabled.
You can now run mysql commands from the bash shell to start a MariaDB interactive session and perform normal MariaDB operations. For example, to authenticate as the database user:
bash-4.2$ mysql -u $MYSQL_USER -p$MYSQL_PASSWORD -h $HOSTNAME $MYSQL_DATABASE
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 4
Server version: 5.5.37 MySQL Community Server (GPL)
...
mysql>
When you are finished, enter quit or exit to leave the MySQL session.
The MariaDB user name, password, and database name must be configured with the following environment variables:
Variable Name | Description
---|---
MYSQL_USER | User name for the MySQL account to be created.
MYSQL_PASSWORD | Password for the user account.
MYSQL_DATABASE | Database name.
MYSQL_ROOT_PASSWORD | Password for the root user (optional).

You must specify the user name, password, and database name. If you do not specify all three, the pod will fail to start and OpenShift Container Platform will continuously try to restart it.
MariaDB settings can be configured with the following environment variables:
Variable Name | Description | Default
---|---|---
MYSQL_LOWER_CASE_TABLE_NAMES | Sets how the table names are stored and compared. | 0
MYSQL_MAX_CONNECTIONS | The maximum permitted number of simultaneous client connections. | 151
MYSQL_MAX_ALLOWED_PACKET | The maximum size of one packet or any generated/intermediate string. | 200M
MYSQL_FT_MIN_WORD_LEN | The minimum length of the word to be included in a FULLTEXT index. | 4
MYSQL_FT_MAX_WORD_LEN | The maximum length of the word to be included in a FULLTEXT index. | 20
MYSQL_AIO | Controls the innodb_use_native_aio setting value if the native AIO is broken. | 1
MYSQL_TABLE_OPEN_CACHE | The number of open tables for all threads. | 400
MYSQL_KEY_BUFFER_SIZE | The size of the buffer used for index blocks. | 32M (or 10% of available memory)
MYSQL_SORT_BUFFER_SIZE | The size of the buffer used for sorting. | 256K
MYSQL_READ_BUFFER_SIZE | The size of the buffer used for a sequential scan. | 8M (or 5% of available memory)
MYSQL_INNODB_BUFFER_POOL_SIZE | The size of the buffer pool where InnoDB caches table and index data. | 32M (or 50% of available memory)
MYSQL_INNODB_LOG_FILE_SIZE | The size of each log file in a log group. | 8M (or 15% of available memory)
MYSQL_INNODB_LOG_BUFFER_SIZE | The size of the buffer that InnoDB uses to write to the log files on disk. | 8M (or 15% of available memory)
MYSQL_DEFAULTS_FILE | Points to an alternative configuration file. | /etc/my.cnf
MYSQL_BINLOG_FORMAT | Sets the binlog format; supported values are row and statement. | statement
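As a hedged example, these settings can be supplied at pod creation the same way as the credentials. The MYSQL_MAX_CONNECTIONS variable name and the value 200 below are illustrative assumptions about this image's tuning knobs:

```shell
$ oc new-app \
    -e MYSQL_USER=<username> \
    -e MYSQL_PASSWORD=<password> \
    -e MYSQL_DATABASE=<database_name> \
    -e MYSQL_MAX_CONNECTIONS=200 \
    registry.access.redhat.com/rhscl/mariadb-101-rhel7
```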
The MariaDB image can be run with mounted volumes to enable persistent storage for the database:
/var/lib/mysql/data - The MySQL data directory is where MariaDB stores database files.
When mounting a directory from the host into the container, ensure that the mounted directory has the appropriate permissions, and that the owner and group of the directory match the user running inside the container.
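For example, when running the image directly with Docker and a host directory as the data volume (a sketch; the host path /opt/mariadb/data is a made-up example, and the :Z suffix applies only on SELinux-enabled hosts):

```shell
# Run the image with a host directory mounted at the MariaDB data directory.
$ docker run -d --name mariadb \
    -e MYSQL_USER=user -e MYSQL_PASSWORD=pass -e MYSQL_DATABASE=db \
    -v /opt/mariadb/data:/var/lib/mysql/data:Z \
    registry.access.redhat.com/rhscl/mariadb-101-rhel7
```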
Passwords are part of the image configuration; therefore, the only supported method to change passwords for the database user (MYSQL_USER) and admin user is by changing the environment variables MYSQL_PASSWORD and MYSQL_ROOT_PASSWORD, respectively.
You can view the current passwords by viewing the pod or deployment configuration in the web console or by listing the environment variables with the CLI:
$ oc set env pod <pod_name> --list
Changing database passwords through SQL statements, or in any way other than through the environment variables described above, causes a mismatch between the values stored in the variables and the actual passwords. Whenever a database container starts, it resets the passwords to the values stored in the environment variables.
To change these passwords, update one or both of the desired environment
variables for the related deployment configuration(s) using the oc set env
command. If multiple deployment configurations utilize these environment
variables, for example in the case of an application created from a template,
you must update the variables on each deployment configuration so that the
passwords are in sync everywhere. This can be done all in the same command:
$ oc set env dc <dc_name> [<dc_name_2> ...] \
    MYSQL_PASSWORD=<new_password> \
    MYSQL_ROOT_PASSWORD=<new_root_password>
Depending on your application, there may be other environment variables for passwords in other parts of the application that should also be updated to match. For example, a front-end component might consume the same database password through its own environment variable; update it as well so the values stay in sync.
Updating the environment variables triggers the redeployment of the database server if you have a configuration change trigger. Otherwise, you must manually start a new deployment in order to apply the password changes.
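A minimal sketch of starting that deployment manually, where the deployment configuration name mariadb is a placeholder:

```shell
# Newer oc clients; older clients use: oc deploy <dc_name> --latest
$ oc rollout latest dc/mariadb
```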
To verify that new passwords are in effect, first open a remote shell session to the running MariaDB pod:
$ oc rsh <pod>
From the bash shell, verify the database user’s new password:
bash-4.2$ mysql -u $MYSQL_USER -p<new_password> -h $HOSTNAME $MYSQL_DATABASE -te "SELECT * FROM (SELECT database()) db CROSS JOIN (SELECT user()) u"
If the password was changed correctly, you should see a table like this:
+------------+---------------------+
| database() | user()              |
+------------+---------------------+
| sampledb   | user0PG@172.17.42.1 |
+------------+---------------------+
To verify the root user’s new password:
bash-4.2$ mysql -u root -p<new_root_password> -h $HOSTNAME $MYSQL_DATABASE -te "SELECT * FROM (SELECT database()) db CROSS JOIN (SELECT user()) u"
If the password was changed correctly, you should see a table like this:
+------------+------------------+
| database() | user()           |
+------------+------------------+
| sampledb   | root@172.17.42.1 |
+------------+------------------+
OpenShift Container Platform provides a template to make creating a new database service easy. The template provides parameter fields to define all the mandatory environment variables (user, password, database name, etc.) with predefined defaults, including auto-generation of password values. It also defines both a deployment configuration and a service.
The MariaDB templates should have been registered in the default openshift project by your cluster administrator during the initial cluster setup. See Loading the Default Image Streams and Templates for more details, if required.
There are two templates available:
mariadb-ephemeral is for development or testing purposes only, because it uses ephemeral storage for the database content. This means that if the database pod is restarted for any reason, such as the pod being moved to another node or the deployment configuration being updated and triggering a redeploy, all data will be lost.

mariadb-persistent uses a persistent volume store for the database data, which means the data will survive a pod restart.
Using persistent volumes requires a persistent volume pool be defined in the OpenShift Container Platform deployment. Cluster administrator instructions for setting up the pool are located here.
To instantiate a template, follow these instructions.
Once you have instantiated the service, you can copy the user name, password, and database name environment variables into a deployment configuration for another component that intends to access the database. That component can then access the database through the service that was defined.
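As a sketch, copying the variables into another component's deployment configuration could look like this, where frontend is a hypothetical deployment configuration name:

```shell
$ oc set env dc/frontend \
    MYSQL_USER=<username> \
    MYSQL_PASSWORD=<password> \
    MYSQL_DATABASE=<database_name>
```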
Enabling clustering for database images is currently in Technology Preview and not intended for production use.
Red Hat provides a proof-of-concept template for MariaDB replication (clustering); you can obtain the example template from GitHub.
To upload the example template into the current project’s template library:
$ oc create -f \
    https://github.com/openshift/origin/blob/master/examples/db-templates/mariadb-ephemeral-template.json
The example template does not use persistent storage. When you lose all members of the replication set, your data will be lost.
The following sections detail the objects defined in the example template and describe how they work together to start a cluster of MariaDB servers implementing master-slave replication and automated failover. This is the recommended replication strategy for MariaDB.
To set up MariaDB replication, the example template defines a deployment configuration that creates a replication controller. The replication controller manages the members of the MariaDB cluster.
To tell a MariaDB server to act as the master, the command field in the container's definition in the deployment configuration must be set to run-mysqld-master. This script acts as an alternative entrypoint for the MariaDB image and configures the MariaDB server to run as the master in replication.
MariaDB replication requires a special user that relays data between the master and slaves. The following environment variables are defined in the template for this purpose:
Variable Name | Description | Default
---|---|---
MYSQL_MASTER_USER | The user name of the replication user. | master
MYSQL_MASTER_PASSWORD | The password for the replication user. | generated
kind: "DeploymentConfig"
apiVersion: "v1"
metadata:
name: "mysql-master"
spec:
strategy:
type: "Recreate"
triggers:
- type: "ConfigChange"
replicas: 1
selector:
name: "mysql-master"
template:
metadata:
labels:
name: "mysql-master"
spec:
volumes:
- name: "mysql-master-data"
persistentVolumeClaim:
claimName: "mysql-master"
containers:
- name: "server"
image: "openshift/mariadb-100-centos7"
command:
- "run-mysqld-master"
ports:
- containerPort: 3306
protocol: "TCP"
env:
- name: "MYSQL_MASTER_USER"
value: "${MYSQL_MASTER_USER}"
- name: "MYSQL_MASTER_PASSWORD"
value: "${MYSQL_MASTER_PASSWORD}"
- name: "MYSQL_USER"
value: "${MYSQL_USER}"
- name: "MYSQL_PASSWORD"
value: "${MYSQL_PASSWORD}"
- name: "MYSQL_DATABASE"
value: "${MYSQL_DATABASE}"
- name: "MYSQL_ROOT_PASSWORD"
value: "${MYSQL_ROOT_PASSWORD}"
volumeMounts:
- name: "mysql-master-data"
mountPath: "/var/lib/mysql/data"
resources: {}
terminationMessagePath: "/dev/termination-log"
imagePullPolicy: "IfNotPresent"
securityContext:
capabilities: {}
privileged: false
restartPolicy: "Always"
dnsPolicy: "ClusterFirst"
Because this deployment configuration claims a persistent volume to persist the data of the MySQL master server, ask your cluster administrator to create a persistent volume that the claim can bind to.
After the deployment configuration is created and the pod with the MariaDB master server is started, it creates the database defined by MYSQL_DATABASE and configures the server to replicate this database to slaves.
The example provided defines only one replica of the MariaDB master server. This causes OpenShift Container Platform to start only one instance of the server. Multiple instances (multi-master) are not supported, and therefore you cannot scale this replication controller.
To replicate the database created by the MySQL master, a deployment configuration is defined in the template. This deployment configuration creates a replication controller that launches the MySQL image with the command field set to run-mysqld-slave. This alternative entrypoint skips the initialization of the database and configures the MySQL server to connect to the mysql-master service, which is also defined in the example template.
kind: "DeploymentConfig"
apiVersion: "v1"
metadata:
name: "mysql-slave"
spec:
strategy:
type: "Recreate"
triggers:
- type: "ConfigChange"
replicas: 1
selector:
name: "mysql-slave"
template:
metadata:
labels:
name: "mysql-slave"
spec:
containers:
- name: "server"
image: "openshift/mariadb-100-centos7"
command:
- "run-mysqld-slave"
ports:
- containerPort: 3306
protocol: "TCP"
env:
- name: "MYSQL_MASTER_USER"
value: "${MYSQL_MASTER_USER}"
- name: "MYSQL_MASTER_PASSWORD"
value: "${MYSQL_MASTER_PASSWORD}"
- name: "MYSQL_DATABASE"
value: "${MYSQL_DATABASE}"
resources: {}
terminationMessagePath: "/dev/termination-log"
imagePullPolicy: "IfNotPresent"
securityContext:
capabilities: {}
privileged: false
restartPolicy: "Always"
dnsPolicy: "ClusterFirst"
This example deployment configuration starts the replication controller with the initial number of replicas set to 1. You can scale this replication controller in both directions, up to the resource capacity of your account.
The initiate argument in the container specification instructs the container to first discover all running member pods within the MariaDB cluster. To achieve this, a headless service named mysql-master is defined in the example template.
To make the service headless, the portalIP parameter in the service definition is set to None. You can then use a DNS query to get a list of the pod IP addresses that represent the current endpoints for this service.
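For example, from a shell inside any pod in the same project, a plain lookup of the headless service name (mysql-master, per the example template) should return one address per member pod. This is a sketch, assuming a lookup utility such as nslookup is available in the image:

```shell
$ nslookup mysql-master
```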
kind: "Service"
apiVersion: "v1"
metadata:
name: "mysql-master"
labels:
name: "mysql-master"
spec:
ports:
- protocol: "TCP"
port: 3306
targetPort: 3306
nodePort: 0
selector:
name: "mysql-master"
portalIP: "None"
type: "ClusterIP"
sessionAffinity: "None"
status:
loadBalancer: {}
To increase the number of members in the cluster:
$ oc scale rc mysql-slave-1 --replicas=<number>
This tells the replication controller to create a new MySQL slave pod. When a new slave is created, the slave entrypoint first attempts to contact the mysql-master service and register itself to the replication set. Once that is done, the MySQL master server sends the slave the replicated database.
When scaling down, the MySQL slave is shut down and, because the slave does not have any persistent storage defined, all data on the slave is lost. The MySQL master server then discovers that the slave is not reachable anymore, and it automatically removes it from the replication.
This section describes some problems you might encounter and presents possible resolutions.
The MySQL container fails to start and the logs show something like:
151113 5:06:56 InnoDB: Using Linux native AIO
151113 5:06:56 InnoDB: Warning: io_setup() failed with EAGAIN. Will make 5 attempts before giving up.
InnoDB: Warning: io_setup() attempt 1 failed.
InnoDB: Warning: io_setup() attempt 2 failed.
Waiting for MySQL to start ...
InnoDB: Warning: io_setup() attempt 3 failed.
InnoDB: Warning: io_setup() attempt 4 failed.
Waiting for MySQL to start ...
InnoDB: Warning: io_setup() attempt 5 failed.
151113 5:06:59 InnoDB: Error: io_setup() failed with EAGAIN after 5 attempts.
InnoDB: You can disable Linux Native AIO by setting innodb_use_native_aio = 0 in my.cnf
151113 5:06:59 InnoDB: Fatal error: cannot initialize AIO sub-system
151113 5:06:59 [ERROR] Plugin 'InnoDB' init function returned error.
151113 5:06:59 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed.
151113 5:06:59 [ERROR] Unknown/unsupported storage engine: InnoDB
151113 5:06:59 [ERROR] Aborting
MariaDB’s storage engine was unable to use the kernel’s AIO (Asynchronous I/O) facilities due to resource limits.
Turn off AIO usage entirely by setting the MYSQL_AIO environment variable to 0. On subsequent deployments, this arranges for the MySQL configuration variable innodb_use_native_aio to have the value 0.
Alternatively, increase the aio-max-nr kernel resource. The following example examines the current value of aio-max-nr and doubles it.
$ sysctl fs.aio-max-nr
fs.aio-max-nr = 1048576
# sysctl -w fs.aio-max-nr=2097152
This is a per-node resolution and lasts until the next node reboot.
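The doubling step can also be computed with shell arithmetic; a minimal sketch where the starting value is hard-coded to mirror the transcript above (on a real node it would come from sysctl -n fs.aio-max-nr):

```shell
# Stand-in for: current=$(sysctl -n fs.aio-max-nr)
current=1048576
new=$((current * 2))
echo "fs.aio-max-nr would be raised to $new"
# prints: fs.aio-max-nr would be raised to 2097152
```

The computed value would then be applied with sysctl -w as shown in the transcript.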