Creating an environment-wide backup

Creating an environment-wide backup involves copying important data to assist with restoration in the case of crashed instances or corrupt data. After backups have been created, they can be restored onto a newly installed version of the relevant component.

In OpenShift Container Platform, you can back up at the cluster level, saving state to separate storage. The full state of an environment backup includes:

  • Cluster data files

  • etcd data on each master

  • API objects

  • Registry storage

  • Volume storage

Perform backups on a regular basis to prevent data loss.

The following process describes a generic way of backing up applications and the OpenShift Container Platform cluster. It cannot take into account custom requirements. Use these steps as a foundation for a full backup and restoration procedure for your cluster. You must take all necessary precautions to prevent data loss.

Backup and restore is not guaranteed. You are responsible for backing up your own data.

Creating a master host backup

Perform this backup process before any change to the OpenShift Container Platform infrastructure, such as a system update, upgrade, or any other significant modification. Back up data regularly to ensure that recent data is available if a failure occurs.

OpenShift Container Platform files

The master instances run important services, such as the API server and controllers. The /etc/origin/master directory stores many important files:

  • Configuration files for the API, controllers, services, and more

  • Certificates generated by the installation

  • All cloud provider-related configuration

  • Keys and other authentication files, such as the htpasswd file if you use htpasswd authentication

  • And more

You can customize OpenShift Container Platform services, for example to increase the log level or to use proxies. The configuration files are stored in the /etc/sysconfig directory.

Because the masters are also unschedulable nodes, back up the entire /etc/origin directory.

Procedure

You must perform the following steps on each master node.

  1. Create a backup of the master host configuration files:

    $ MYBACKUPDIR=/backup/$(hostname)/$(date +%Y%m%d)
    $ sudo mkdir -p ${MYBACKUPDIR}/etc/sysconfig
    $ sudo cp -aR /etc/origin ${MYBACKUPDIR}/etc
    $ sudo cp -aR /etc/sysconfig/atomic-* ${MYBACKUPDIR}/etc/sysconfig/

    The master configuration is stored in the /etc/sysconfig/atomic-openshift-master-api and /etc/sysconfig/atomic-openshift-master-controllers files.

    The /etc/origin/master/ca.serial.txt file is generated on only the first master listed in the Ansible host inventory. If you deprecate the first master host, copy the /etc/origin/master/ca.serial.txt file to the rest of the master hosts before you do so.
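
    If the first master must be replaced, a minimal sketch of propagating the file, assuming SSH and sudo access between masters and an illustrative hostname:

    $ sudo cat /etc/origin/master/ca.serial.txt | \
        ssh master-1.example.com 'sudo tee /etc/origin/master/ca.serial.txt'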

  2. Other important files that need to be considered when planning a backup include:

    File                                   Description

    /etc/cni/*                             Container Network Interface configuration (if used)
    /etc/sysconfig/iptables                Where the iptables rules are stored
    /etc/sysconfig/docker-storage-setup    The input file for the container-storage-setup command
    /etc/sysconfig/docker                  The docker configuration file
    /etc/sysconfig/docker-network          docker networking configuration (for example, MTU)
    /etc/sysconfig/docker-storage          docker storage configuration (generated by container-storage-setup)
    /etc/dnsmasq.conf                      Main configuration file for dnsmasq
    /etc/dnsmasq.d/*                       Different dnsmasq configuration files
    /etc/sysconfig/flanneld                flannel configuration file (if used)
    /etc/pki/ca-trust/source/anchors/      Certificates added to the system (for example, for external registries)

    Create a backup of those files:

    $ MYBACKUPDIR=/backup/$(hostname)/$(date +%Y%m%d)
    $ sudo mkdir -p ${MYBACKUPDIR}/etc/sysconfig
    $ sudo mkdir -p ${MYBACKUPDIR}/etc/pki/ca-trust/source/anchors
    $ sudo cp -aR /etc/sysconfig/{iptables,docker-*,flanneld} \
        ${MYBACKUPDIR}/etc/sysconfig/
    $ sudo cp -aR /etc/dnsmasq* /etc/cni ${MYBACKUPDIR}/etc/
    $ sudo cp -aR /etc/pki/ca-trust/source/anchors/* \
        ${MYBACKUPDIR}/etc/pki/ca-trust/source/anchors/
  3. If a package is accidentally removed or you need to restore a file that is included in an RPM package, having a list of the RHEL packages installed on the system can be useful.

    If you use Red Hat Satellite features, such as content views or the facts store, they can provide a mechanism to reinstall missing packages and a historical record of the packages installed on your systems.

    To create a list of the RHEL packages currently installed on the system:

    $ MYBACKUPDIR=/backup/$(hostname)/$(date +%Y%m%d)
    $ sudo mkdir -p ${MYBACKUPDIR}
    $ rpm -qa | sort | sudo tee $MYBACKUPDIR/packages.txt
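
    Later, a minimal sketch of reinstalling packages that are missing relative to the saved list (comm requires sorted input, which both lists are; gpg-pubkey pseudo-packages are filtered out):

    $ rpm -qa | sort > /tmp/packages-now.txt
    $ comm -13 /tmp/packages-now.txt ${MYBACKUPDIR}/packages.txt | \
        grep -v '^gpg-pubkey' | xargs -r sudo yum -y install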
  4. If you performed the previous steps, the following files are present in the backup directory:

    $ MYBACKUPDIR=/backup/$(hostname)/$(date +%Y%m%d)
    $ sudo find ${MYBACKUPDIR} -mindepth 1 -type f -printf '%P\n'
    etc/sysconfig/atomic-openshift-master
    etc/sysconfig/atomic-openshift-master-api
    etc/sysconfig/atomic-openshift-master-controllers
    etc/sysconfig/atomic-openshift-node
    etc/sysconfig/flanneld
    etc/sysconfig/iptables
    etc/sysconfig/docker-network
    etc/sysconfig/docker-storage
    etc/sysconfig/docker-storage-setup
    etc/sysconfig/docker-storage-setup.rpmnew
    etc/origin/master/ca.crt
    etc/origin/master/ca.key
    etc/origin/master/ca.serial.txt
    etc/origin/master/ca-bundle.crt
    etc/origin/master/master.proxy-client.crt
    etc/origin/master/master.proxy-client.key
    etc/origin/master/service-signer.crt
    etc/origin/master/service-signer.key
    etc/origin/master/serviceaccounts.private.key
    etc/origin/master/serviceaccounts.public.key
    etc/origin/master/openshift-master.crt
    etc/origin/master/openshift-master.key
    etc/origin/master/openshift-master.kubeconfig
    etc/origin/master/master.server.crt
    etc/origin/master/master.server.key
    etc/origin/master/master.kubelet-client.crt
    etc/origin/master/master.kubelet-client.key
    etc/origin/master/admin.crt
    etc/origin/master/admin.key
    etc/origin/master/admin.kubeconfig
    etc/origin/master/etcd.server.crt
    etc/origin/master/etcd.server.key
    etc/origin/master/master.etcd-client.key
    etc/origin/master/master.etcd-client.csr
    etc/origin/master/master.etcd-client.crt
    etc/origin/master/master.etcd-ca.crt
    etc/origin/master/policy.json
    etc/origin/master/scheduler.json
    etc/origin/master/htpasswd
    etc/origin/master/session-secrets.yaml
    etc/origin/master/openshift-router.crt
    etc/origin/master/openshift-router.key
    etc/origin/master/registry.crt
    etc/origin/master/registry.key
    etc/origin/master/master-config.yaml
    etc/origin/generated-configs/master-master-1.example.com/master.server.crt
    ...[OUTPUT OMITTED]...
    etc/origin/cloudprovider/openstack.conf
    etc/origin/node/system:node:master-0.example.com.crt
    etc/origin/node/system:node:master-0.example.com.key
    etc/origin/node/ca.crt
    etc/origin/node/system:node:master-0.example.com.kubeconfig
    etc/origin/node/server.crt
    etc/origin/node/server.key
    etc/origin/node/node-dnsmasq.conf
    etc/origin/node/resolv.conf
    etc/origin/node/node-config.yaml
    etc/origin/node/flannel.etcd-client.key
    etc/origin/node/flannel.etcd-client.csr
    etc/origin/node/flannel.etcd-client.crt
    etc/origin/node/flannel.etcd-ca.crt
    etc/pki/ca-trust/source/anchors/openshift-ca.crt
    etc/pki/ca-trust/source/anchors/registry-ca.crt
    etc/dnsmasq.conf
    etc/dnsmasq.d/origin-dns.conf
    etc/dnsmasq.d/origin-upstream-dns.conf
    etc/dnsmasq.d/node-dnsmasq.conf
    packages.txt

    If needed, you can compress the files to save space:

    $ MYBACKUPDIR=/backup/$(hostname)/$(date +%Y%m%d)
    $ sudo tar -zcvf /backup/$(hostname)-$(date +%Y%m%d).tar.gz $MYBACKUPDIR
    $ sudo rm -Rf ${MYBACKUPDIR}
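
    Copy the archive to storage outside the host; a minimal sketch using scp to an illustrative backup server:

    $ scp /backup/$(hostname)-$(date +%Y%m%d).tar.gz \
        backup.example.com:/srv/backups/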

To automate these steps, the openshift-ansible-contrib repository contains the backup_master_node.sh script, which performs the previous steps. The script creates a directory on the host where you run the script and copies all the files previously mentioned.

The openshift-ansible-contrib script is not supported by Red Hat, but the reference architecture team performs testing to ensure the code operates as defined and is secure.

You can run the script on every master host with:

$ mkdir ~/git
$ cd ~/git
$ git clone https://github.com/openshift/openshift-ansible-contrib.git
$ cd openshift-ansible-contrib/reference-architecture/day2ops/scripts
$ ./backup_master_node.sh -h

Creating a node host backup

Creating a backup of a node host is a different use case from backing up a master host. Because master hosts contain many important files, creating a backup is highly recommended. However, because anything special on a node is replicated across the nodes in case of failover, nodes typically do not contain data that is necessary to run an environment. If a backup of a node does contain something necessary to run an environment, then creating a backup is recommended.

Perform the backup process before any change to the infrastructure, such as a system update, upgrade, or any other significant modification. Perform backups on a regular basis to ensure the most recent data is available if a failure occurs.

OpenShift Container Platform files

Node instances run applications in the form of pods, which are based on containers. The /etc/origin/ and /etc/origin/node directories house important files, such as:

  • The configuration of the node services

  • Certificates generated by the installation

  • Cloud provider-related configuration

  • Keys and other authentication files, such as the dnsmasq configuration

The OpenShift Container Platform services can be customized to increase the log level, use proxies, and more, and the configuration files are stored in the /etc/sysconfig directory.

Procedure

  1. Create a backup of the node configuration files:

    $ MYBACKUPDIR=/backup/$(hostname)/$(date +%Y%m%d)
    $ sudo mkdir -p ${MYBACKUPDIR}/etc/sysconfig
    $ sudo cp -aR /etc/origin ${MYBACKUPDIR}/etc
    $ sudo cp -aR /etc/sysconfig/atomic-openshift-node ${MYBACKUPDIR}/etc/sysconfig/
  2. OpenShift Container Platform uses specific files that must be taken into account when planning the backup policy, including:

    File                                   Description

    /etc/cni/*                             Container Network Interface configuration (if used)
    /etc/sysconfig/iptables                Where the iptables rules are stored
    /etc/sysconfig/docker-storage-setup    The input file for the container-storage-setup command
    /etc/sysconfig/docker                  The docker configuration file
    /etc/sysconfig/docker-network          docker networking configuration (for example, MTU)
    /etc/sysconfig/docker-storage          docker storage configuration (generated by container-storage-setup)
    /etc/dnsmasq.conf                      Main configuration file for dnsmasq
    /etc/dnsmasq.d/*                       Different dnsmasq configuration files
    /etc/sysconfig/flanneld                flannel configuration file (if used)
    /etc/pki/ca-trust/source/anchors/      Certificates added to the system (for example, for external registries)

    To create a backup of those files:

    $ MYBACKUPDIR=/backup/$(hostname)/$(date +%Y%m%d)
    $ sudo mkdir -p ${MYBACKUPDIR}/etc/sysconfig
    $ sudo mkdir -p ${MYBACKUPDIR}/etc/pki/ca-trust/source/anchors
    $ sudo cp -aR /etc/sysconfig/{iptables,docker-*,flanneld} \
        ${MYBACKUPDIR}/etc/sysconfig/
    $ sudo cp -aR /etc/dnsmasq* /etc/cni ${MYBACKUPDIR}/etc/
    $ sudo cp -aR /etc/pki/ca-trust/source/anchors/* \
        ${MYBACKUPDIR}/etc/pki/ca-trust/source/anchors/
  3. If a package is accidentally removed or a file included in an RPM package must be restored, having a list of the RHEL packages installed on the system can be useful.

    If you use Red Hat Satellite features, such as content views or the facts store, they can provide a mechanism to reinstall missing packages and a historical record of the packages installed on your systems.

    To create a list of the RHEL packages currently installed on the system:

    $ MYBACKUPDIR=/backup/$(hostname)/$(date +%Y%m%d)
    $ sudo mkdir -p ${MYBACKUPDIR}
    $ rpm -qa | sort | sudo tee $MYBACKUPDIR/packages.txt
  4. The following files should now be present in the backup directory:

    $ MYBACKUPDIR=/backup/$(hostname)/$(date +%Y%m%d)
    $ sudo find ${MYBACKUPDIR} -mindepth 1 -type f -printf '%P\n'
    etc/sysconfig/atomic-openshift-node
    etc/sysconfig/flanneld
    etc/sysconfig/iptables
    etc/sysconfig/docker-network
    etc/sysconfig/docker-storage
    etc/sysconfig/docker-storage-setup
    etc/sysconfig/docker-storage-setup.rpmnew
    etc/origin/node/system:node:app-node-0.example.com.crt
    etc/origin/node/system:node:app-node-0.example.com.key
    etc/origin/node/ca.crt
    etc/origin/node/system:node:app-node-0.example.com.kubeconfig
    etc/origin/node/server.crt
    etc/origin/node/server.key
    etc/origin/node/node-dnsmasq.conf
    etc/origin/node/resolv.conf
    etc/origin/node/node-config.yaml
    etc/origin/node/flannel.etcd-client.key
    etc/origin/node/flannel.etcd-client.csr
    etc/origin/node/flannel.etcd-client.crt
    etc/origin/node/flannel.etcd-ca.crt
    etc/origin/cloudprovider/openstack.conf
    etc/pki/ca-trust/source/anchors/openshift-ca.crt
    etc/pki/ca-trust/source/anchors/registry-ca.crt
    etc/dnsmasq.conf
    etc/dnsmasq.d/origin-dns.conf
    etc/dnsmasq.d/origin-upstream-dns.conf
    etc/dnsmasq.d/node-dnsmasq.conf
    packages.txt

    If needed, the files can be compressed to save space:

    $ MYBACKUPDIR=/backup/$(hostname)/$(date +%Y%m%d)
    $ sudo tar -zcvf /backup/$(hostname)-$(date +%Y%m%d).tar.gz $MYBACKUPDIR
    $ sudo rm -Rf ${MYBACKUPDIR}

To automate these steps, the openshift-ansible-contrib repository contains the backup_master_node.sh script, which performs the previous steps. The script creates a directory on the host running the script and copies all the files previously mentioned.

The openshift-ansible-contrib script is not supported by Red Hat, but the reference architecture team performs testing to ensure the code operates as defined and is secure.

You can run the script on every master host with:

$ mkdir ~/git
$ cd ~/git
$ git clone https://github.com/openshift/openshift-ansible-contrib.git
$ cd openshift-ansible-contrib/reference-architecture/day2ops/scripts
$ ./backup_master_node.sh -h

Backing up registry certificates

If you use an external secured registry, you must save all the registry certificates. The registry is secured by default.

You must perform the following steps on each cluster node.

Procedure

  1. Back up the registry certificates:

    # cd /etc/docker/certs.d/
    # tar cf /tmp/docker-registry-certs-$(hostname).tar *
  2. Move the backup to an external location.

When working with one or more external secured registries, any host that pulls or pushes images must trust the registry certificates to run pods.
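
For a rebuilt host to trust a saved registry certificate again, place it in the registry's certificate directory; a minimal sketch with an illustrative registry hostname and port:

    # mkdir -p /etc/docker/certs.d/registry.example.com:5000
    # cp registry-ca.crt /etc/docker/certs.d/registry.example.com:5000/ca.crt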

Backing up other installation files

Back up the files that you used to install OpenShift Container Platform.

Procedure

  1. Because the restoration procedure involves a complete reinstallation, save all the files used in the initial installation. These files might include the ~/.config/openshift/installer.cfg.yml file if you used the quick installation method, your Ansible playbooks and inventory files if you used the advanced installation method, and any custom files in /etc/yum.repos.d; a sketch for gathering them follows this list.

  2. Back up the procedures for post-installation steps. Some installations might involve steps that are not included in the installer. These steps might include changes to services outside the control of OpenShift Container Platform or the installation of extra services, such as monitoring agents. Additional configuration that is not yet supported by the advanced installer might also be affected, such as using multiple authentication providers.
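
A minimal sketch of gathering common installation files; the paths assume the default Ansible inventory location and the advanced installation method, so adjust them to match your environment:

$ MYBACKUPDIR=/backup/$(hostname)/$(date +%Y%m%d)
$ sudo mkdir -p ${MYBACKUPDIR}/install
$ sudo cp -aR /etc/ansible/hosts ${MYBACKUPDIR}/install/
$ sudo cp -aR /etc/yum.repos.d ${MYBACKUPDIR}/install/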

Backing up application data

In many cases, you can back up application data by using the oc rsync command, assuming rsync is installed within the container image. The Red Hat rhel7 base image contains rsync. Therefore, all images that are based on rhel7 contain it as well. See Troubleshooting and Debugging CLI Operations - rsync.

This is a generic backup of application data and does not take into account application-specific backup procedures, for example, special export and import procedures for database systems.

Other means of backup might exist depending on the type of the persistent volume you use, for example, Cinder, NFS, or Gluster.

The paths to back up are also application specific. You can determine what path to back up by looking at the mountPath for volumes in the deploymentconfig.

You can perform this type of application data backup only while the application pod is running.

Procedure

Example of backing up a Jenkins deployment’s application data
  1. Get the application data mountPath from the deploymentconfig:

    $ oc get dc/jenkins -o jsonpath='{ .spec.template.spec.containers[?(@.name=="jenkins")].volumeMounts[?(@.name=="jenkins-data")].mountPath }'
    /var/lib/jenkins
  2. Get the name of the pod that is currently running:

    $ oc get pod --selector=deploymentconfig=jenkins -o jsonpath='{ .metadata.name }'
    jenkins-1-37nux
  3. Use the oc rsync command to copy application data:

    $ oc rsync jenkins-1-37nux:/var/lib/jenkins /tmp/
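
    To keep this copy alongside the host backups, a minimal sketch of archiving it with a date stamp (the paths follow the example above):

    $ tar -zcf /backup/jenkins-data-$(date +%Y%m%d).tar.gz -C /tmp jenkins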

etcd backup

etcd is the key value store for all object definitions, as well as the persistent master state. Other components watch for changes, then bring themselves into the desired state.

OpenShift Container Platform versions prior to 3.5 use etcd version 2 (v2), while 3.5 and later use version 3 (v3). The data model between the two versions of etcd is different. etcd v3 can use both the v2 and v3 data models, whereas etcd v2 can only use the v2 data model. In an etcd v3 server, the v2 and v3 data stores exist in parallel and are independent.

For both v2 and v3 operations, you can use the ETCDCTL_API environment variable to select the proper API:

$ etcdctl -v
etcdctl version: 3.2.5
API version: 2
$ ETCDCTL_API=3 etcdctl version
etcdctl version: 3.2.5
API version: 3.2

See the Migrating etcd Data (v2 to v3) section for information about how to migrate to v3.

The etcd backup process is composed of two different procedures:

  • Configuration backup: Including the required etcd configuration and certificates

  • Data backup: Including both the v2 and v3 data models

You can perform the data backup process on any host that has connectivity to the etcd cluster, where the proper certificates are provided, and where the etcdctl tool is installed.

The backup files must be copied to an external system, ideally outside the OpenShift Container Platform environment, and then encrypted.

Note that the etcd backup still has all the references to current storage volumes. When you restore etcd, OpenShift Container Platform starts launching the previous pods on nodes and reattaching the same storage. This process is no different than the process of when you remove a node from the cluster and add a new one back in its place. Anything attached to that node is reattached to the pods on whatever nodes they are rescheduled to.

Backing up etcd

When you back up etcd, you must back up both the etcd configuration files and the etcd data.

Backing up etcd configuration files

The etcd configuration files to be preserved are all stored in the /etc/etcd directory of the instances where etcd is running. This includes the etcd configuration file (/etc/etcd/etcd.conf) and the required certificates for cluster communication. All those files are generated at installation time by the Ansible installer.

Procedure

For each etcd member of the cluster, back up the etcd configuration.

$ ssh master-0
# mkdir -p /backup/etcd-config-$(date +%Y%m%d)/
# cp -R /etc/etcd/ /backup/etcd-config-$(date +%Y%m%d)/

The certificates and configuration files on each etcd cluster member are unique.
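
Because the files differ on each member, repeat the backup on every etcd host; a minimal sketch looping over illustrative hostnames from a host with SSH access:

$ for master in master-0.example.com master-1.example.com master-2.example.com
do
  ssh $master 'sudo mkdir -p /backup/etcd-config-$(date +%Y%m%d)/ && sudo cp -R /etc/etcd/ /backup/etcd-config-$(date +%Y%m%d)/'
done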

Backing up etcd data

Prerequisites

The OpenShift Container Platform installer creates aliases, etcdctl2 for etcd v2 tasks and etcdctl3 for etcd v3 tasks, to avoid typing all of the required flags.

However, the etcdctl3 alias does not provide the full endpoint list to the etcdctl command, so you must provide the --endpoints option with all of the endpoints.

Before backing up etcd:

  • etcdctl binaries should be available or, in containerized installations, the rhel7/etcd container should be available

  • Ensure connectivity with the etcd cluster (port 2379/tcp)

  • Ensure the proper certificates to connect to the etcd cluster

    1. To ensure the etcd cluster is working, check its health.

      • If you use the etcd v2 API, run the following command:

        # etcdctl --cert-file=/etc/etcd/peer.crt \
                  --key-file=/etc/etcd/peer.key \
                  --ca-file=/etc/etcd/ca.crt \
                  --peers="https://*master-0.example.com*:2379,\
                  https://*master-1.example.com*:2379,\
                  https://*master-2.example.com*:2379"\
                  cluster-health
        member 5ee217d19001 is healthy: got healthy result from https://192.168.55.12:2379
        member 2a529ba1840722c0 is healthy: got healthy result from https://192.168.55.8:2379
        member ed4f0efd277d7599 is healthy: got healthy result from https://192.168.55.13:2379
        cluster is healthy
      • If you use the etcd v3 API, run the following command:

        # ETCDCTL_API=3 etcdctl --cert="/etc/etcd/peer.crt" \
                  --key=/etc/etcd/peer.key \
                  --cacert="/etc/etcd/ca.crt" \
                  --endpoints="https://master-0.example.com:2379,\
                    https://master-1.example.com:2379,\
                    https://master-2.example.com:2379" \
                    endpoint health
        https://master-0.example.com:2379 is healthy: successfully committed proposal: took = 5.011358ms
        https://master-1.example.com:2379 is healthy: successfully committed proposal: took = 1.305173ms
        https://master-2.example.com:2379 is healthy: successfully committed proposal: took = 1.388772ms
    2. Check the member list.

      • If you use the etcd v2 API, run the following command:

        # etcdctl2 member list
        2a371dd20f21ca8d: name=master-1.example.com peerURLs=https://192.168.55.12:2380 clientURLs=https://192.168.55.12:2379 isLeader=false
        40bef1f6c79b3163: name=master-0.example.com peerURLs=https://192.168.55.8:2380 clientURLs=https://192.168.55.8:2379 isLeader=false
        95dc17ffcce8ee29: name=master-2.example.com peerURLs=https://192.168.55.13:2380 clientURLs=https://192.168.55.13:2379 isLeader=true
      • If you use the etcd v3 API, run the following command:

        # etcdctl3 member list
        2a371dd20f21ca8d, started, master-1.example.com, https://192.168.55.12:2380, https://192.168.55.12:2379
        40bef1f6c79b3163, started, master-0.example.com, https://192.168.55.8:2380, https://192.168.55.8:2379
        95dc17ffcce8ee29, started, master-2.example.com, https://192.168.55.13:2380, https://192.168.55.13:2379
Procedure

While the etcdctl backup command is used to perform the backup, etcd v3 has no concept of a backup. Instead, you either take a snapshot from a live member with the etcdctl snapshot save command or copy the member/snap/db file from an etcd data directory.

The etcdctl backup command rewrites some of the metadata contained in the backup, specifically, the node ID and cluster ID, which means that in the backup, the node loses its former identity. To recreate a cluster from the backup, you create a new, single-node cluster, then add the rest of the nodes to the cluster. The metadata is rewritten to prevent the new node from joining an existing cluster.

Back up the etcd data:

  • If you use the v2 API, take the following actions:

    1. Stop all etcd services:

      # systemctl stop etcd.service
    2. Create the etcd data backup and copy the etcd db file:

      # mkdir -p /backup/etcd-$(date +%Y%m%d)
      # etcdctl2 backup \
          --data-dir /var/lib/etcd \
          --backup-dir /backup/etcd-$(date +%Y%m%d)
      # cp /var/lib/etcd/member/snap/db /backup/etcd-$(date +%Y%m%d)
    3. Start all etcd services:

      # systemctl start etcd.service
  • If you use the v3 API, run the following commands:

    Because clusters upgraded from previous versions of OpenShift Container Platform might contain v2 data stores, back up both v2 and v3 datastores.

    1. Back up etcd v3 data:

      # systemctl show etcd --property=ActiveState,SubState
      # mkdir -p /backup/etcd-$(date +%Y%m%d)
      # etcdctl3 snapshot save /backup/etcd-$(date +%Y%m%d)/db
      Snapshot saved at /backup/etcd-<date>/db
    2. Back up etcd v2 data:

      # systemctl stop etcd.service
      # etcdctl2 backup \
          --data-dir /var/lib/etcd \
          --backup-dir /backup/etcd-$(date +%Y%m%d)
      # cp /var/lib/etcd/member/snap/db /backup/etcd-$(date +%Y%m%d)
      # systemctl start etcd.service

      The etcdctl snapshot save command requires the etcd service to be running.

      In these commands, a /backup/etcd-<date>/ directory is created, where <date> represents the current date. Copy this directory to an external NFS share, S3 bucket, or other external storage location.

      In the case of an all-in-one cluster, the etcd data directory is located in the /var/lib/origin/openshift.local.etcd directory.
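
      After taking a v3 snapshot, you can verify it before copying it to external storage; a minimal sketch using the etcdctl3 alias described above:

      # etcdctl3 snapshot status /backup/etcd-$(date +%Y%m%d)/db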

Backing up a project

Creating a backup of all relevant data involves exporting all important information, then restoring into a new project.

Currently, an OpenShift Container Platform project backup and restore tool is being developed by Red Hat; see the associated Red Hat bug for more information.

Procedure

  1. List all the relevant data to back up:

    $ oc get all
    NAME         TYPE      FROM      LATEST
    bc/ruby-ex   Source    Git       1
    
    NAME               TYPE      FROM          STATUS     STARTED         DURATION
    builds/ruby-ex-1   Source    Git@c457001   Complete   2 minutes ago   35s
    
    NAME                 DOCKER REPO                                     TAGS      UPDATED
    is/guestbook         10.111.255.221:5000/myproject/guestbook         latest    2 minutes ago
    is/hello-openshift   10.111.255.221:5000/myproject/hello-openshift   latest    2 minutes ago
    is/ruby-22-centos7   10.111.255.221:5000/myproject/ruby-22-centos7   latest    2 minutes ago
    is/ruby-ex           10.111.255.221:5000/myproject/ruby-ex           latest    2 minutes ago
    
    NAME                 REVISION   DESIRED   CURRENT   TRIGGERED BY
    dc/guestbook         1          1         1         config,image(guestbook:latest)
    dc/hello-openshift   1          1         1         config,image(hello-openshift:latest)
    dc/ruby-ex           1          1         1         config,image(ruby-ex:latest)
    
    NAME                   DESIRED   CURRENT   READY     AGE
    rc/guestbook-1         1         1         1         2m
    rc/hello-openshift-1   1         1         1         2m
    rc/ruby-ex-1           1         1         1         2m
    
    NAME                  CLUSTER-IP       EXTERNAL-IP   PORT(S)             AGE
    svc/guestbook         10.111.105.84    <none>        3000/TCP            2m
    svc/hello-openshift   10.111.230.24    <none>        8080/TCP,8888/TCP   2m
    svc/ruby-ex           10.111.232.117   <none>        8080/TCP            2m
    
    NAME                         READY     STATUS      RESTARTS   AGE
    po/guestbook-1-c010g         1/1       Running     0          2m
    po/hello-openshift-1-4zw2q   1/1       Running     0          2m
    po/ruby-ex-1-build           0/1       Completed   0          2m
    po/ruby-ex-1-rxc74           1/1       Running     0          2m
  2. Export the project objects to a .yaml or .json file.

    • To export the project objects into a project.yaml file:

      $ oc export all -o yaml > project.yaml
    • To export the project objects into a project.json file:

      $ oc export all -o json > project.json
  3. Export the project’s role bindings, service accounts, secrets, persistent volume claims, and other important objects:

    $ for object in rolebindings serviceaccounts secrets imagestreamtags podpreset cms egressnetworkpolicies rolebindingrestrictions limitranges resourcequotas pvcs templates cronjobs statefulsets hpas deployments replicasets poddisruptionbudget endpoints
    do
      oc export $object -o yaml > $object.yaml
    done
  4. To list all the namespaced objects:

    $ oc api-resources --namespaced=true -o name
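
    Combining this list with an export loop gives a fuller backup; a minimal sketch (types that do not support export are skipped by discarding errors):

    $ for resource in $(oc api-resources --namespaced=true -o name)
    do
      oc export "$resource" -o yaml > "$resource.yaml" 2>/dev/null
    done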
  5. Some exported objects can rely on specific metadata or references to unique IDs in the project. This is a limitation on the usability of the recreated objects.

    When using imagestreams, the image parameter of a deploymentconfig can point to a specific sha checksum of an image in the internal registry that would not exist in a restored environment. For instance, running the sample "ruby-ex" as oc new-app centos/ruby-22-centos7~https://github.com/sclorg/ruby-ex.git creates an imagestream ruby-ex using the internal registry to host the image:

    $ oc get dc ruby-ex -o jsonpath="{.spec.template.spec.containers[].image}"
    10.111.255.221:5000/myproject/ruby-ex@sha256:880c720b23c8d15a53b01db52f7abdcbb2280e03f686a5c8edfef1a2a7b21cee

    If you import the deploymentconfig exactly as it was exported with oc export, the import fails if the image does not exist.
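
    When restoring, a minimal sketch of recreating the exported objects in a new project (the project name is illustrative); objects affected by the limitation above need their image references fixed or their builds re-run first:

    $ oc new-project myproject-restore
    $ oc create -f project.yaml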

Backing up persistent volume claims

You can synchronize persistent data from inside a container to a server.

Depending on the provider that is hosting the OpenShift Container Platform environment, the ability to launch third party snapshot services for backup and restore purposes also exists. As OpenShift Container Platform does not have the ability to launch these services, this guide does not describe these steps.

Consult any product documentation for the correct backup procedures of specific applications. For example, copying the mysql data directory itself does not create a usable backup. Instead, run the specific backup procedures of the associated application and then synchronize any data. This includes using snapshot solutions provided by the OpenShift Container Platform hosting platform.

Procedure

  1. View the project and pods:

    $ oc get pods
    NAME           READY     STATUS      RESTARTS   AGE
    demo-1-build   0/1       Completed   0          2h
    demo-2-fxx6d   1/1       Running     0          1h
  2. Describe the desired pod to find the volumes that are currently used by a persistent volume:

    $ oc describe pod demo-2-fxx6d
    Name:			demo-2-fxx6d
    Namespace:		test
    Security Policy:	restricted
    Node:			ip-10-20-6-20.ec2.internal/10.20.6.20
    Start Time:		Tue, 05 Dec 2017 12:54:34 -0500
    Labels:			app=demo
    			deployment=demo-2
    			deploymentconfig=demo
    Status:			Running
    IP:			172.16.12.5
    Controllers:		ReplicationController/demo-2
    Containers:
      demo:
        Container ID:	docker://201f3e55b373641eb36945d723e1e212ecab847311109b5cee1fd0109424217a
        Image:		docker-registry.default.svc:5000/test/demo@sha256:0a9f2487a0d95d51511e49d20dc9ff6f350436f935968b0c83fcb98a7a8c381a
        Image ID:		docker-pullable://docker-registry.default.svc:5000/test/demo@sha256:0a9f2487a0d95d51511e49d20dc9ff6f350436f935968b0c83fcb98a7a8c381a
        Port:		8080/TCP
        State:		Running
          Started:		Tue, 05 Dec 2017 12:54:52 -0500
        Ready:		True
        Restart Count:	0
        Volume Mounts:
          /opt/app-root/src/uploaded from persistent-volume (rw)
          /var/run/secrets/kubernetes.io/serviceaccount from default-token-8mmrk (ro)
        Environment Variables:	<none>
    ...omitted...

    This output shows that the persistent data is in the /opt/app-root/src/uploaded directory.

  3. Copy the data locally:

    $ oc rsync demo-2-fxx6d:/opt/app-root/src/uploaded ./demo-app
    receiving incremental file list
    uploaded/
    uploaded/ocp_sop.txt
    uploaded/lost+found/
    
    sent 38 bytes  received 190 bytes  152.00 bytes/sec
    total size is 32  speedup is 0.14

    The ocp_sop.txt file is downloaded to the local system to be backed up by backup software or another backup mechanism.

    You can also use the previous steps if a pod was started without a PVC, but you later decide that a PVC is necessary. You can preserve the data and then use the restore process to populate the new storage.
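
    To restore, a minimal sketch that reverses the direction of the earlier oc rsync command, copying the saved data back into a running pod (the pod name follows the example above):

    $ oc rsync ./demo-app/uploaded demo-2-fxx6d:/opt/app-root/src/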