Docker tasks

Increasing container storage

Increasing the amount of storage available to containers ensures continued deployments without outages. To do so, a free partition with an appropriate amount of free capacity must be made available.
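
Before you begin, it can help to check the current disk layout and how much free capacity is available under the container storage mount; a quick check, assuming the default /var/lib/docker location:

    # lsblk
    # df -h /var/lib/docker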

Evacuating the node

Procedure

  1. From a master instance, or as a cluster administrator, allow the evacuation of any pod from the node and disable scheduling of other pods on that node:

    $ NODE=ose-app-node01.example.com
    $ oc adm manage-node ${NODE} --schedulable=false
    NAME                          STATUS                     AGE       VERSION
    ose-app-node01.example.com   Ready,SchedulingDisabled   20m       v1.6.1+5115d708d7
    
    $ oc adm drain ${NODE} --ignore-daemonsets
    node "ose-app-node01.example.com" already cordoned
    pod "perl-1-build" evicted
    pod "perl-1-3lnsh" evicted
    pod "perl-1-9jzd8" evicted
    node "ose-app-node01.example.com" drained

    If there are containers running with local volumes that will not migrate, run the following command: oc adm drain ${NODE} --ignore-daemonsets --delete-local-data.

  2. List the pods on the node to verify that they have been removed:

    $ oc adm manage-node ${NODE} --list-pods
    
    Listing matched pods on node: ose-app-node01.example.com
    
    NAME      READY     STATUS    RESTARTS   AGE
  3. Repeat the previous two steps for each node.
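
    To confirm that the nodes have been cordoned, list the nodes and check for the SchedulingDisabled status; for example:

    $ oc get nodes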

For more information on evacuating and draining pods or nodes, see Node maintenance.

Increasing storage

You can increase Docker storage in two ways: attaching a new disk, or extending the existing disk.
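
Before choosing an approach, you can review the volume group and logical volumes that back the existing container storage; a quick check, assuming the LVM-based layout created by docker-storage-setup:

    # vgs
    # lvs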

Increasing storage with a new disk

Prerequisites

  • A new disk must be available to the existing instance that requires more storage. In the following steps, the original disk is labeled /dev/xvdb, and the new disk is labeled /dev/xvdd, as shown in the /etc/sysconfig/docker-storage-setup file:

    # vi /etc/sysconfig/docker-storage-setup
    DEVS="/dev/xvdb /dev/xvdd"

    The process may differ depending on the underlying OKD infrastructure.
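
    Before running docker-storage-setup, you can confirm that the host sees the new device; a quick check using the example device name from above:

    # fdisk -l /dev/xvdd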

Procedure

  1. Stop the docker service:

    # systemctl stop docker
  2. Stop the node service by removing the pod definition and rebooting the host:

    # mkdir -p /etc/origin/node/pods-stopped
    # mv /etc/origin/node/pods/* /etc/origin/node/pods-stopped/
  3. Run the docker-storage-setup command to extend the volume groups and logical volumes associated with container storage:

    # docker-storage-setup
    1. For thin pool setups, you should see the following output and can proceed to the next step:

      INFO: Volume group backing root filesystem could not be determined
      INFO: Device /dev/xvdb is already partitioned and is part of volume group docker_vol
      INFO: Device node /dev/xvdd1 exists.
        Physical volume "/dev/xvdd1" successfully created.
        Volume group "docker_vol" successfully extended
    2. For setups that use the overlay2 driver on an XFS file system, the increase shown in the previous output is not visible.

      You must perform the following steps to extend and grow the XFS storage:

      1. Run the lvextend command to grow the logical volume to use all of the available space in the volume group:

        # lvextend -r -l +100%FREE /dev/mapper/docker_vol-dockerlv

        If you want to use less than all of the available free space, choose a lower FREE percentage accordingly.

      2. Run the xfs_growfs command to grow the file system to use the available space:

        # xfs_growfs /dev/mapper/docker_vol-dockerlv

        XFS file systems cannot be shrunk.

      3. Run the docker-storage-setup command again:

        # docker-storage-setup

        You should now see the extended volume groups and logical volumes in the output.

        INFO: Device /dev/vdb is already partitioned and is part of volume group docker_vg
        INFO: Found an already configured thin pool /dev/mapper/docker_vg-docker--pool in /etc/sysconfig/docker-storage
        INFO: Device node /dev/mapper/docker_vg-docker--pool exists.
          Logical volume docker_vg/docker-pool changed.
  4. Start the docker service and verify the volume group:

    # systemctl start docker
    # vgs
      VG         #PV #LV #SN Attr   VSize  VFree
      docker_vol   2   1   0 wz--n- 64.99g <55.00g
  5. Restore the pod definition:

    # mv /etc/origin/node/pods-stopped/* /etc/origin/node/pods/
  6. Restart the node service by rebooting the host:

    # systemctl restart atomic-openshift-node.service
  7. A benefit of adding a disk, compared to creating a new volume group and re-running docker-storage-setup, is that the images that were used on the system still exist after the new storage has been added:

    # docker images
    REPOSITORY                                              TAG                 IMAGE ID            CREATED             SIZE
    docker-registry.default.svc:5000/tet/perl               latest              8b0b0106fb5e        13 minutes ago      627.4 MB
    registry.redhat.io/rhscl/perl-524-rhel7         <none>              912b01ac7570        6 days ago          559.5 MB
    registry.redhat.io/openshift3/ose-deployer      v3.6.173.0.21       89fd398a337d        5 weeks ago         970.2 MB
    registry.redhat.io/openshift3/ose-pod           v3.6.173.0.21       63accd48a0d7        5 weeks ago         208.6 MB
  8. With the increased storage capacity, mark the node as schedulable so that it can accept new incoming pods.

    As a cluster administrator, run the following from a master instance:

    $ oc adm manage-node ${NODE} --schedulable=true
    
    ose-master01.example.com   Ready,SchedulingDisabled   24m       v1.6.1+5115d708d7
    ose-master02.example.com   Ready,SchedulingDisabled   24m       v1.6.1+5115d708d7
    ose-master03.example.com   Ready,SchedulingDisabled   24m       v1.6.1+5115d708d7
    ose-infra-node01.example.com   Ready                      24m       v1.6.1+5115d708d7
    ose-infra-node02.example.com   Ready                      24m       v1.6.1+5115d708d7
    ose-infra-node03.example.com   Ready                      24m       v1.6.1+5115d708d7
    ose-app-node01.example.com   Ready                      24m       v1.6.1+5115d708d7
    ose-app-node02.example.com   Ready                      24m       v1.6.1+5115d708d7

Extending storage for an existing disk

Procedure

  1. Evacuate the node following the previous steps.

  2. Stop the docker service:

    # systemctl stop docker
  3. Stop the node service by removing the pod definition:

    # mkdir -p /etc/origin/node/pods-stopped
    # mv /etc/origin/node/pods/* /etc/origin/node/pods-stopped/
  4. Resize the existing disk as desired. The procedure depends on your environment and the type of disk; one common case is sketched after this procedure.

  5. Verify that the /etc/sysconfig/docker-storage-setup file is correctly configured for the new disk by checking the device name, size, etc.

  6. Run docker-storage-setup to reconfigure the new disk:

    # docker-storage-setup
    INFO: Volume group backing root filesystem could not be determined
    INFO: Device /dev/xvdb is already partitioned and is part of volume group docker_vol
    INFO: Device node /dev/xvdd1 exists.
      Physical volume "/dev/xvdd1" successfully created.
      Volume group "docker_vol" successfully extended
  7. Start the docker service and verify the volume group:

    # systemctl start docker
    # vgs
      VG         #PV #LV #SN Attr   VSize  VFree
      docker_vol   2   1   0 wz--n- 64.99g <55.00g
  8. Restore the pod definition:

    # mv /etc/origin/node/pods-stopped/* /etc/origin/node/pods/
  9. Restart the node service by rebooting the host:

    # systemctl restart atomic-openshift-node.service
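
As an illustration of the resize in step 4, the following sketch covers one common case: the underlying virtual disk has already been enlarged by the infrastructure provider, and the partition and LVM physical volume on it must grow to match. The device names are the examples used earlier, and growpart comes from the cloud-utils-growpart package; adapt the commands to your environment:

    # growpart /dev/xvdb 1
    # pvresize /dev/xvdb1
    # vgs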

Changing the storage backend

With the advancements of services and file systems, changes in a storage backend may be necessary to take advantage of new features. The following steps provide an example of changing a device mapper backend to an overlay2 storage backend. overlay2 offers increased speed and density over traditional device mapper.
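
Before starting, you can confirm which storage backend docker is currently using; after the change, the same check should report overlay2:

    # docker info | grep -i 'storage driver'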

Evacuating the node

  1. From a master instance, or as a cluster administrator, allow the evacuation of any pod from the node and disable scheduling of other pods on that node:

    $ NODE=ose-app-node01.example.com
    $ oc adm manage-node ${NODE} --schedulable=false
    NAME                          STATUS                     AGE       VERSION
    ose-app-node01.example.com   Ready,SchedulingDisabled   20m       v1.6.1+5115d708d7
    
    $ oc adm drain ${NODE} --ignore-daemonsets
    node "ose-app-node01.example.com" already cordoned
    pod "perl-1-build" evicted
    pod "perl-1-3lnsh" evicted
    pod "perl-1-9jzd8" evicted
    node "ose-app-node01.example.com" drained

    If there are containers running with local volumes that will not migrate, run the following command: oc adm drain ${NODE} --ignore-daemonsets --delete-local-data

  2. List the pods on the node to verify that they have been removed:

    $ oc adm manage-node ${NODE} --list-pods
    
    Listing matched pods on node: ose-app-node01.example.com
    
    NAME      READY     STATUS    RESTARTS   AGE

    For more information on evacuating and draining pods or nodes, see Node maintenance.

  3. With no containers currently running on the instance, stop the docker service:

    # systemctl stop docker
  4. Stop the atomic-openshift-node service:

    # systemctl stop atomic-openshift-node
  5. Verify the volume group, logical volume, and physical volume names, then remove them along with the existing Docker storage configuration:

    # vgs
      VG         #PV #LV #SN Attr   VSize   VFree
      docker_vol   1   1   0 wz--n- <25.00g 15.00g
    
    # lvs
    LV       VG         Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
     dockerlv docker_vol -wi-ao---- <10.00g
    
    # lvremove /dev/docker_vol/docker-pool  -y
    # vgremove docker_vol -y
    # pvs
      PV         VG         Fmt  Attr PSize   PFree
      /dev/xvdb1 docker_vol lvm2 a--  <25.00g 15.00g
    
    # pvremove /dev/xvdb1 -y
    # rm -Rf /var/lib/docker/*
    # rm -f /etc/sysconfig/docker-storage
  6. Modify the /etc/sysconfig/docker-storage-setup file to specify the STORAGE_DRIVER, as in the following example:

    When a system is upgraded from Red Hat Enterprise Linux version 7.3 to 7.4, the docker service attempts to use /var with the STORAGE_DRIVER of extfs. Using extfs as the STORAGE_DRIVER causes errors; see the associated bug report for more information about the error.

    DEVS=/dev/xvdb
    VG=docker_vol
    DATA_SIZE=95%VG
    STORAGE_DRIVER=overlay2
    CONTAINER_ROOT_LV_NAME=dockerlv
    CONTAINER_ROOT_LV_MOUNT_PATH=/var/lib/docker
    CONTAINER_ROOT_LV_SIZE=100%FREE
  7. Set up the storage:

    # docker-storage-setup
  8. Start the docker service:

    # systemctl start docker
  9. Restart the atomic-openshift-node service:

    # systemctl restart atomic-openshift-node.service
  10. With the storage modified to use overlay2, mark the node as schedulable so that it can accept new incoming pods.

    From a master instance, or as a cluster administrator:

    $ oc adm manage-node ${NODE} --schedulable=true
    
    ose-master01.example.com   Ready,SchedulingDisabled   24m       v1.6.1+5115d708d7
    ose-master02.example.com   Ready,SchedulingDisabled   24m       v1.6.1+5115d708d7
    ose-master03.example.com   Ready,SchedulingDisabled   24m       v1.6.1+5115d708d7
    ose-infra-node01.example.com   Ready                      24m       v1.6.1+5115d708d7
    ose-infra-node02.example.com   Ready                      24m       v1.6.1+5115d708d7
    ose-infra-node03.example.com   Ready                      24m       v1.6.1+5115d708d7
    ose-app-node01.example.com   Ready                      24m       v1.6.1+5115d708d7
    ose-app-node02.example.com   Ready                      24m       v1.6.1+5115d708d7

Managing container registry certificates

An OKD internal registry is created as a pod. However, containers may be pulled from external registries if desired. By default, registries listen on TCP port 5000. Registries provide the option of securing exposed images via TLS or running a registry without encrypting traffic.
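
To check whether a given registry endpoint serves TLS, you can probe it directly; a quick test against an example registry host on the default port:

    $ curl -v https://myregistry.example.com:5000/v2/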

Docker interprets .crt files as CA certificates and .cert files as client certificates. Any CA extensions must be .crt.

Installing a certificate authority certificate for external registries

In order to use OKD with an external registry, the registry certificate authority (CA) certificate must be trusted for all the nodes that can pull images from the registry.

Depending on the Docker version, the process to trust a container image registry varies. In the latest versions of Docker, root certificate authorities are merged with the system defaults. Prior to docker version 1.13, the system default certificates are used only when no other custom root certificates exist.
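
To determine which behavior applies on a host, check the installed docker version first; for example:

    $ docker version --format '{{.Server.Version}}'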

Procedure

  1. Copy the CA certificate to /etc/pki/ca-trust/source/anchors/:

    $ sudo cp myregistry.example.com.crt /etc/pki/ca-trust/source/anchors/
  2. Extract and add the CA certificate to the list of trusted certificate authorities:

    $ sudo update-ca-trust extract
  3. Verify the SSL certificate using the openssl command:

    $ openssl verify myregistry.example.com.crt
    myregistry.example.com.crt: OK
  4. Once the certificate is in place and the trust is updated, restart the docker service to ensure the new certificates are properly set:

    $ sudo systemctl restart docker.service

For Docker versions prior to 1.13, perform the following additional steps to trust the certificate authorities:

  1. On every node create a new directory in /etc/docker/certs.d where the name of the directory is the host name of the container image registry:

    $ sudo mkdir -p /etc/docker/certs.d/myregistry.example.com

    The port number is not required unless the container image registry cannot be accessed without it. If a port is required, include it in the directory name, for example: myregistry.example.com:port

  2. If accessing the container image registry via IP address, create a new directory within /etc/docker/certs.d on every node, where the name of the directory is the IP address of the container image registry:

    $ sudo mkdir -p /etc/docker/certs.d/10.10.10.10
  3. Copy the CA certificate to the newly created Docker directories from the previous steps:

    $ sudo cp myregistry.example.com.crt \
      /etc/docker/certs.d/myregistry.example.com/ca.crt
    
    $ sudo cp myregistry.example.com.crt /etc/docker/certs.d/10.10.10.10/ca.crt
  4. Once the certificates have been copied, restart the docker service to ensure the new certificates are used:

    $ sudo systemctl restart docker.service

Docker certificates backup

When performing a node host backup, ensure that you include the certificates for external registries.

Procedure

  1. If using /etc/docker/certs.d, copy all the certificates included in the directory and store the files:

    $ sudo tar -czvf docker-registry-certs-$(hostname)-$(date +%Y%m%d).tar.gz /etc/docker/certs.d/
  2. If using the system trust, store the certificates prior to adding them to the system trust. Once they have been added, you can extract them for restoration using the trust command. Identify the system trust CAs and note the pkcs11 ID:

    $ trust list
    ...[OUTPUT OMITTED]...
    pkcs11:id=%a5%b3%e1%2b%2b%49%b6%d7%73%a1%aa%94%f5%01%e7%73%65%4c%ac%50;type=cert
        type: certificate
        label: MyCA
        trust: anchor
        category: authority
    ...[OUTPUT OMITTED]...
  3. Extract the certificate in PEM format and give it a name, for example, myca.crt:

    $ trust extract --format=pem-bundle \
     --filter="%a5%b3%e1%2b%2b%49%b6%d7%73%a1%aa%94%f5%01%e7%73%65%4c%ac%50;type=cert" myca.crt
  4. Verify the certificate has been properly extracted via openssl:

    $ openssl verify myca.crt
  5. Repeat the procedure for all the required certificates and store the files in a remote location.

Docker certificates restore

If the Docker certificates for the external registries are deleted or corrupted, restore them by repeating the installation steps, using the files from the backups performed previously.
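
For example, if the certificates were archived with the tar command shown in the backup procedure, a minimal restore on the affected node could look like the following; the archive name mirrors the example above and must be adjusted to the actual file:

    $ sudo tar -xzvf docker-registry-certs-<hostname>-<date>.tar.gz -C /
    $ sudo systemctl restart docker.service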

Managing container registries

You can configure OKD to pull images from external docker registries, and you can use configuration files to allow or deny certain images or registries.

If the external registry is exposed using certificates to encrypt the network traffic, it is considered a secure registry. Otherwise, traffic between the registry and the host is plain text and not encrypted, meaning it is an insecure registry.

Docker search external registries

By default, the docker daemon can pull images from any registry, but search operations are performed against docker.io and registry.redhat.io. The daemon can be configured to search images from other registries using the --add-registry option.

The ability to search images from the Red Hat Registry registry.redhat.io exists by default in the Red Hat Enterprise Linux docker package.

Procedure

  1. To allow users to search for images using docker search with other registries, add those registries to the /etc/containers/registries.conf file under the registries parameter:

    registries:
      - registry.redhat.io
      - my.registry.example.com
  2. Restart the docker daemon to allow for my.registry.example.com to be used:

    $ sudo systemctl restart docker.service

    Restarting the docker daemon causes the docker containers to restart.

  3. Using the Ansible installer, this can be configured using the openshift_docker_additional_registries variable in the Ansible hosts file:

    openshift_docker_additional_registries=registry.redhat.io,my.registry.example.com
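
After the additional registries are configured and the daemon has been restarted, docker search queries all of the configured search registries; whether results come back from my.registry.example.com depends on that registry supporting the search API:

    $ sudo docker search perl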

Docker external registries whitelist and blacklist

Docker can be configured to block operations from external registries by configuring the registries and block_registries flags for the docker daemon.

Procedure

  1. Add the allowed registries to the /etc/containers/registries.conf file with the registries flag:

    registries:
      - registry.redhat.io
      - my.registry.example.com

    The docker.io registry can be added using the same method.

  2. Block the rest of the registries:

    block_registries:
       - all
  3. Block the rest of the registries in older versions:

    BLOCK_REGISTRY='--block-registry=all'
  4. Restart the docker daemon:

    $ sudo systemctl restart docker.service

    Restarting the docker daemon causes the docker containers to restart.

  5. In this example, the docker.io registry has been blacklisted, so any operation regarding that registry fails:

    $ sudo docker pull hello-world
    Using default tag: latest
    Trying to pull repository registry.redhat.io/hello-world ...
    Trying to pull repository my.registry.example.com/hello-world ...
    Trying to pull repository registry.redhat.io/hello-world ...
    unknown: Not Found
    $ sudo docker pull docker.io/hello-world
    Using default tag: latest
    Trying to pull repository docker.io/library/hello-world ...
    All endpoints blocked.

    Add docker.io back to the registries variable by modifying the file again and restarting the service.

    registries:
      - registry.redhat.io
      - my.registry.example.com
      - docker.io
    block_registries:
      - all

    or

    ADD_REGISTRY="--add-registry=registry.redhat.io --add-registry=my.registry.example.com --add-registry=docker.io"
    BLOCK_REGISTRY='--block-registry=all'
  6. Restart the Docker service:

    $ sudo systemctl restart docker
  7. To verify that the image is now available to be pulled:

    $ sudo docker pull docker.io/hello-world
    Using default tag: latest
    Trying to pull repository docker.io/library/hello-world ...
    latest: Pulling from docker.io/library/hello-world
    
    9a0669468bf7: Pull complete
    Digest: sha256:0e06ef5e1945a718b02a8c319e15bae44f47039005530bc617a5d071190ed3fc
  8. If an external registry is required, modify the docker daemon configuration file on all the node hosts that need to use that registry, and create a blacklist on those nodes to prevent malicious containers from being executed.

    Using the Ansible installer, this can be configured using the openshift_docker_additional_registries and openshift_docker_blocked_registries variables in the Ansible hosts file:

    openshift_docker_additional_registries=registry.redhat.io,my.registry.example.com
    openshift_docker_blocked_registries=all

Secure registries

In order to pull images from an external registry, the registry certificates must be trusted; otherwise, the pull image operation fails.

If using a whitelist, the external registries should be added to the registries variable, as explained above.
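
Putting the two requirements together, a minimal sketch for enabling a secure external registry is to whitelist it in the registries configuration and trust its CA certificate; the hostname and certificate file name are examples, and the exact registries.conf format depends on your system:

    $ sudo vi /etc/containers/registries.conf    # add my.registry.example.com to the search registries
    $ sudo cp myregistry.example.com.crt /etc/pki/ca-trust/source/anchors/
    $ sudo update-ca-trust extract
    $ sudo systemctl restart docker.service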

Insecure registries

External registries that use non-trusted certificates, or without certificates at all, should be avoided.

However, if an insecure registry must be used, add it using the --insecure-registry option so that the docker daemon can pull images from the repository. This option is used in the same way as the --add-registry option, but the docker operation is not verified.

The registry should be added using both options: to enable search and, if there is a blacklist, to allow other operations, such as pulling images.

For testing purposes, the following example shows how to add a localhost insecure registry.

Procedure

  1. Modify /etc/containers/registries.conf configuration file to add the localhost insecure registry:

    [registries.search]
    registries = ['registry.redhat.io', 'my.registry.example.com', 'docker.io', 'localhost:5000' ]
    
    [registries.insecure]
    registries = ['localhost:5000']
    
    [registries.block]
    registries = ['all']

    Add insecure registries to both the registries.search section and the registries.insecure section to ensure they are marked as insecure and whitelisted. Any registry added to the registries.block section is blocked unless it is also whitelisted by being added to the registries.search section.

  2. Restart the docker daemon to use the registry:

    $ sudo systemctl restart docker.service

    Restarting the docker daemon causes the docker containers to be restarted.

  3. Run a container image registry pod at localhost:

    $ sudo docker run -p 5000:5000 registry:2
  4. Pull an image:

    $ sudo docker pull openshift/hello-openshift
  5. Tag the image:

    $ sudo docker tag docker.io/openshift/hello-openshift:latest localhost:5000/hello-openshift-local:latest
  6. Push the image to the local registry:

    $ sudo docker push localhost:5000/hello-openshift-local:latest
  7. Using the Ansible installer, this can be configured using the openshift_docker_additional_registries, openshift_docker_blocked_registries, and openshift_docker_insecure_registries variables in the Ansible hosts file:

    openshift_docker_additional_registries=registry.redhat.io,my.registry.example.com,localhost:5000
    openshift_docker_insecure_registries=localhost:5000
    openshift_docker_blocked_registries=all

    You can also set the openshift_docker_insecure_registries variable to the IP address of the host. 0.0.0.0/0 is not a valid setting.

Authenticated registries

Using authenticated registries with docker requires the docker daemon to log in to the registry. With OKD, a different set of steps must be performed, because users cannot run docker login commands on the host. Authenticated registries can be used to limit the images users can pull or who can access the external registries.

If an external docker registry requires authentication, create a special secret in the project that uses that registry and then use that secret to perform the docker operations.

Procedure

  1. Create a dockercfg secret in the project where the user is going to log in to the docker registry:

    $ oc project <my_project>
    $ oc create secret docker-registry <my_registry> --docker-server=<my.registry.example.com> --docker-username=<username> --docker-password=<my_password> --docker-email=<me@example.com>
  2. If a .dockercfg file exists, create the secret using the oc command:

    $ oc create secret generic <my_registry> --from-file=.dockercfg=<path/to/.dockercfg> --type=kubernetes.io/dockercfg
  3. If a $HOME/.docker/config.json file exists, create the secret using the oc command:

    $ oc create secret generic <my_registry> --from-file=.dockerconfigjson=<path/to/config.json> --type=kubernetes.io/dockerconfigjson
  4. Use the dockercfg secret to pull images from the authenticated registry by linking the secret to the service account performing the pull operations. The default service account to pull images is named default:

    $ oc secrets link default <my_registry> --for=pull
  5. For pushing images using the s2i feature, the dockercfg secret is mounted in the s2i pod, so it needs to be linked to the proper service account that performs the build. The default service account used to build images is named builder.

    $ oc secrets link builder <my_registry>
  6. In the buildconfig, the secret should be specified for push or pull operations:

    "type": "Source",
    "sourceStrategy": {
        "from": {
            "kind": "DockerImage",
            "name": "*my.registry.example.com*/myproject/myimage:stable"
        },
        "pullSecret": {
            "name": "*mydockerregistry*"
        },
    ...[OUTPUT ABBREVIATED]...
    "output": {
        "to": {
            "kind": "DockerImage",
            "name": "*my.registry.example.com*/myproject/myimage:latest"
        },
        "pushSecret": {
            "name": "*mydockerregistry*"
        },
    ...[OUTPUT ABBREVIATED]...
  7. If the external registry delegates authentication to an external service, create two dockercfg secrets: one for the registry using the registry URL, and one for the external authentication system using its own URL. Add both secrets to the service accounts:

    $ oc project <my_project>
    $ oc create secret docker-registry <my_registry> --docker-server=<my.registry.example.com> --docker-username=<username> --docker-password=<my_password> --docker-email=<me@example.com>
    $ oc create secret docker-registry <my_docker_registry_ext_auth> --docker-server=<my.authsystem.example.com> --docker-username=<username> --docker-password=<my_password> --docker-email=<me@example.com>
    $ oc secrets link default <my_registry> --for=pull
    $ oc secrets link default <my_docker_registry_ext_auth> --for=pull
    $ oc secrets link builder <my_registry>
    $ oc secrets link builder <my_docker_registry_ext_auth>

ImagePolicy admission plug-in

An admission control plug-in intercepts requests to the API, performs checks against the configured rules, and allows or denies certain actions based on those rules. OKD can limit the images allowed to run in the environment using the ImagePolicy admission plug-in, which can control:

  • The source of images: which registries can be used to pull images

  • Image resolution: force pods to run with immutable digests to ensure the image does not change due to a re-tag

  • Container image label restrictions: force an image to have or not have particular labels

  • Image annotation restrictions: force an image in the integrated container registry to have or not have particular annotations

The ImagePolicy admission plug-in is currently considered beta.

Procedure

  1. If the ImagePolicy plug-in is enabled, allow the external registries to be used by modifying the /etc/origin/master/master-config.yaml file on every master node:

    admissionConfig:
      pluginConfig:
        openshift.io/ImagePolicy:
          configuration:
            kind: ImagePolicyConfig
            apiVersion: v1
            executionRules:
            - name: allow-images-from-other-registries
              onResources:
              - resource: pods
              - resource: builds
              matchRegistries:
              - docker.io
              - <my.registry.example.com>
              - registry.redhat.io

    Enabling ImagePolicy requires users to specify the registry when deploying an application, for example oc new-app docker.io/kubernetes/guestbook instead of oc new-app kubernetes/guestbook; otherwise, it fails.

  2. To enable the admission plug-ins at installation time, the openshift_master_admission_plugin_config variable can be used with a JSON-formatted string including all the pluginConfig configuration:

    openshift_master_admission_plugin_config={"openshift.io/ImagePolicy":{"configuration":{"kind":"ImagePolicyConfig","apiVersion":"v1","executionRules":[{"name":"allow-images-from-other-registries","onResources":[{"resource":"pods"},{"resource":"builds"}],"matchRegistries":["docker.io","my.registry.example.com","registry.redhat.io"]}]}}}

Import images from external registries

Application developers can import images to create imagestreams using the oc import-image command, and OKD can be configured to allow or deny image imports from external registries.
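
For reference, a typical import from an external registry into an image stream could look like the following; the project, image stream, and registry names are examples:

    $ oc import-image myimage --from=my.registry.example.com/myproject/myimage:latest --confirm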

Procedure

  1. To configure the allowed registries where users can import images, add the following to the /etc/origin/master/master-config.yaml file:

    imagePolicyConfig:
      allowedRegistriesForImport:
      - domainName: docker.io
      - domainName: '*.docker.io'
      - domainName: '*.redhat.com'
      - domainName: 'my.registry.example.com'
  2. To import images from an external authenticated registry, create a secret within the desired project.

  3. Although it is not recommended, if the external authenticated registry is insecure or its certificates cannot be trusted, the oc import-image command can be used with the --insecure=true option.

    If the external authenticated registry is secure, the registry certificate must be trusted on the master hosts, because they run the registry import controller. Copy the certificate to /etc/pki/ca-trust/source/anchors/:

    $ sudo cp <my.registry.example.com.crt> /etc/pki/ca-trust/source/anchors/<my.registry.example.com.crt>
  4. Run the update-ca-trust command:

    $ sudo update-ca-trust
  5. Restart the master services on all the master hosts:

    $ sudo master-restart api
    $ sudo master-restart controllers
  6. The certificate for the external registry should be trusted in the OKD registry:

    $ for i in pem openssl java; do
      oc create configmap ca-trust-extracted-${i} --from-file /etc/pki/ca-trust/extracted/${i}
      oc set volume dc/docker-registry --add -m /etc/pki/ca-trust/extracted/${i} --configmap-name=ca-trust-extracted-${i} --name ca-trust-extracted-${i}
    done

    There is no official procedure currently for adding the certificate to the registry pod, but the above workaround can be used.

    This workaround creates configmaps with all the trusted certificates from the system running those commands, so the recommendation is to run it from a clean system where just the required certificates are trusted.

  7. Alternatively, modify the registry image to trust the proper certificates by rebuilding the image using a Dockerfile such as:

    FROM registry.redhat.io/openshift3/ose-docker-registry:v3.6
    ADD <my.registry.example.com.crt> /etc/pki/ca-trust/source/anchors/
    USER 0
    RUN update-ca-trust extract
    USER 1001
  8. Rebuild the image, push it to a docker registry, and use that image as spec.template.spec.containers["name":"registry"].image in the registry deploymentconfig:

    $ oc patch dc docker-registry -p '{"spec":{"template":{"spec":{"containers":[{"name":"registry","image":"myregistry.example.com/openshift3/ose-docker-registry:latest"}]}}}}'

To add the imagePolicyConfig configuration at installation, the openshift_master_image_policy_config variable can be used with a JSON-formatted string including all the imagePolicyConfig configuration, for example:

openshift_master_image_policy_config={"imagePolicyConfig":{"allowedRegistriesForImport":[{"domainName":"docker.io"},{"domainName":"*.docker.io"},{"domainName":"*.redhat.com"},{"domainName":"my.registry.example.com"}]}}

For more information about the ImagePolicy, see the ImagePolicy admission plug-in section.

OKD registry integration

You can install OKD as a stand-alone container image registry to provide only the registry capabilities, but with the advantages of running in an OKD platform.

For more information about the OKD registry, see Installing a Stand-alone Deployment of OpenShift Container Registry.

To integrate the OKD registry, all the previous sections apply. From the OKD point of view, it is treated as an external registry, but some extra tasks need to be performed, because it is a multi-tenant registry and the OKD authorization model applies: when a new project is created in the cluster, the registry does not create a corresponding project within its own environment, because it is independent.

Connect the registry project with the cluster

As the registry is a full OKD environment with a registry pod and a web interface, the process to create a new project in the registry is performed using the oc new-project or oc create command line or via the web interface.

Once the project has been created, the usual service accounts (builder, default, and deployer) are created automatically, and the project administrator user is granted permissions. Different users, as well as "anonymous" users, can be authorized to push or pull images.

There can be several use cases, such as allowing all the users to pull images from this new project within the registry, but if you want to have a 1:1 project relationship between OKD and the registry, where the users can push and pull images from that specific project, some steps are required.

The registry web console shows a token to use for pull/push operations, but that token is a session token, so it expires. Creating a service account with specific permissions allows the administrator to limit the permissions of the service account, so that, for example, different service accounts can be used for pushing and pulling images. A user then does not have to deal with token expiration, secret recreation, and other such tasks, because service account tokens do not expire.

Procedure

  1. Create a new project:

    $ oc new-project <my_project>
  2. Create a registry project:

    $ oc new-project <registry_project>
  3. Create a service account in the registry project:

    $ oc create serviceaccount <my_serviceaccount> -n <registry_project>
  4. Give permissions to push and pull images using the registry-editor role:

    $ oc adm policy add-role-to-user registry-editor -z <my_serviceaccount> -n <registry_project>

    If only pull permissions are required, the registry-viewer role can be used.

  5. Get the service account token:

    $ TOKEN=$(oc sa get-token <my_serviceaccount> -n <registry_project>)
  6. Use the token as the password to create a dockercfg secret:

    $ oc create secret docker-registry <my_registry> \
      --docker-server=<myregistry.example.com> --docker-username=<notused> --docker-password=${TOKEN} --docker-email=<me@example.com>
  7. Use the dockercfg secret to pull images from the registry by linking the secret to the service account performing the pull operations. The default service account to pull images is named default:

    $ oc secrets link default <my_registry> --for=pull
  8. For pushing images using the s2i feature, the dockercfg secret is mounted in the s2i pod, so it needs to be linked to the proper service account that performs the build. The default service account used to build images is named builder:

    $ oc secrets link builder <my_registry>
  9. In the buildconfig, the secret should be specified for push or pull operations:

    "type": "Source",
    "sourceStrategy": {
        "from": {
            "kind": "DockerImage",
            "name": "<myregistry.example.com/registry_project/my_image:stable>"
        },
        "pullSecret": {
            "name": "<my_registry>"
        },
    ...[OUTPUT ABBREVIATED]...
    "output": {
        "to": {
            "kind": "DockerImage",
            "name": "<myregistry.example.com/registry_project/my_image:latest>"
        },
        "pushSecret": {
            "name": "<my_registry>"
        },
    ...[OUTPUT ABBREVIATED]...