OpenShift Enterprise by Red Hat is a Platform as a Service (PaaS) that provides developers and IT organizations with a cloud application platform for deploying new applications on secure, scalable resources with minimal configuration and management overhead. OpenShift Enterprise supports a wide selection of programming languages and frameworks, such as Java, Ruby, and PHP.
Built on Red Hat Enterprise Linux and Kubernetes, OpenShift Enterprise provides a secure and scalable multi-tenant operating system for today’s enterprise-class applications, while providing integrated application runtimes and libraries. OpenShift Enterprise brings the OpenShift PaaS platform to customer data centers, enabling organizations to implement a private PaaS that meets security, privacy, compliance, and governance requirements.
OpenShift Enterprise version 3.1 is now available. Ensure that you follow the instructions on upgrading your OpenShift cluster properly, including steps specific to this release.
For any release, always review Installation and Configuration for instructions on upgrading your OpenShift cluster properly, including any additional steps that may be required for a specific release.
Previous Name | New Name |
---|---|
openshift-master | atomic-openshift-master |
openshift-node | atomic-openshift-node |
/etc/openshift/ | /etc/origin/ |
/var/lib/openshift/ | /var/lib/origin/ |
/etc/sysconfig/openshift-master | /etc/sysconfig/atomic-openshift-master |
/etc/sysconfig/openshift-node | /etc/sysconfig/atomic-openshift-node |
Docker version 1.8.2 is required. This version contains the fix that allows supplementary groups defined in the /etc/group file to be used.
You can now sync LDAP records with OpenShift, making it easier to manage groups.
You can now configure an F5 load-balancer for use with your OpenShift environment.
Several persistent storage options are now available, such as Red Hat’s GlusterFS and Ceph RBD, AWS EBS, and Google Compute Engine persistent disks. Also, NFS storage is now supplemented by iSCSI- and Fibre Channel-based volumes.
Several middleware services are now available, such as JBoss DataGrid and JBoss BRMS, as well as a supported JBoss Developer Studio and Eclipse plug-in.
The job object type is now available, meaning that finite jobs can now be executed on the cluster.
Multiple enhancements have been made to the Ansible-based installer. The installer can now:
Perform container-based installations. (Fully supported starting in OpenShift Enterprise 3.1.1)
Install active-active, highly-available clusters.
Uninstall existing OpenShift clusters.
You can now specify your own CA certificate during the install, so that application developers do not have to specify the OpenShift-generated CA to obtain secure connections.
The DNS name used for service SRV discovery has changed. Previously, services without search paths resulted in long DNS resolution times; the change reduces these load times.
Excessive amounts of events being stored in etcd can lead to excessive memory growth. You can now set the event-ttl parameter in the master configuration file to a lower value (for example, 15m) to prevent memory growth.
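For example, a master-config.yaml excerpt along these lines (a sketch; the kubernetesMasterConfig stanza is assumed to already exist in your configuration) sets the event TTL to 15 minutes:

kubernetesMasterConfig:
  apiServerArguments:
    event-ttl:
    - "15m"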
You can now specify the port to send routes to. Any services that point to multiple ports should have the spec.port.targetPort parameter on the route set to the desired port.
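As an illustration, a route definition along these lines (all names here are hypothetical) directs traffic to a specific target port:

apiVersion: v1
kind: Route
metadata:
  name: example-route
spec:
  host: www.example.com
  to:
    kind: Service
    name: example-service
  port:
    targetPort: 8080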
The oc rsync command is now available, which can copy local directories into a remote pod.
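For example, assuming a pod name of <pod_name>, the following copies a local directory into the pod:

$ oc rsync ./local/dir <pod_name>:/remote/dir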
Isolated projects can now be bound together using oadm pod-network join-projects.
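A usage sketch, with hypothetical project names:

$ oadm pod-network join-projects --to=<project1> <project2> <project3>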
New commands exist to validate master and node configuration files: openshift ex validate master-config and openshift ex validate node-config, respectively.
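For example, assuming the default configuration file locations:

$ openshift ex validate master-config /etc/origin/master/master-config.yaml
$ openshift ex validate node-config /etc/origin/node/node-config.yaml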
You can now delete tags from an image stream using the oc tag <tag_name> -d command.
You can now create containers that can specify compute resource requests and limits. Requests are used for scheduling your container and provide a minimum service guarantee. Limits constrain the amount of compute resource that may be consumed on your node.
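A minimal pod definition along these lines (names and values are illustrative) sets both requests and limits on a container:

apiVersion: v1
kind: Pod
metadata:
  name: example-pod
spec:
  containers:
  - name: example
    image: example/image
    resources:
      requests:
        cpu: 100m        # minimum guarantee used for scheduling
        memory: 256Mi
      limits:
        cpu: 500m        # maximum the container may consume on the node
        memory: 512Mi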
If you wish to disable CFS quota enforcement, you may disable it by modifying your node-config.yaml file to specify a kubeletArguments stanza where cpu-cfs-quota is set to false.
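For example, a node-config.yaml excerpt (a sketch; note that kubeletArguments values are lists of strings):

kubeletArguments:
  cpu-cfs-quota:
    - "false"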
v1beta3 No Longer Supported

Using v1beta3 in configuration files is no longer supported:

The etcdStorageConfig.kubernetesStorageVersion and etcdStorageConfig.openShiftStorageVersion values in the master configuration file must be v1.

You may also need to change the apiLevels field and remove v1beta3.

v1beta3 is no longer supported as an endpoint. /api/v1beta3 and /osapi/v1beta3 are now disabled.
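A master-config.yaml excerpt reflecting these requirements might look like the following (a sketch showing only the relevant fields):

etcdStorageConfig:
  kubernetesStorageVersion: v1
  openShiftStorageVersion: v1
apiLevels:
- v1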
Multiple web console enhancements:
Extended resource information is now available on the web console.
The ability to trigger a deployment and rollback from the console has been added.
Logs for builds and pods are now displayed on the web console in real time.
When enabled, the web console will now display pod metrics.
You can now connect to a container using a remote shell connection when in the Builds tab.
Elasticsearch, Fluentd, and Kibana (together, known as the EFK stack) are now available for logging consumption.
The Heapster interface and metric datamodel can now be used with OpenShift.
A Jenkins image is now available for deployment on OpenShift.
Integration between Jenkins masters and Jenkins slaves running on OpenShift has improved.
oc build-logs Is Now Deprecated

The oc build-logs <build_name> command is now deprecated and replaced by oc logs build/<build_name>.
spec.rollingParams.updatePercent Field Is Replaced

The spec.rollingParams.updatePercent field in deployment configurations has been replaced with maxUnavailable and maxSurge.
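An equivalent deployment configuration excerpt using the new fields might look like this (values are illustrative):

strategy:
  type: Rolling
  rollingParams:
    maxUnavailable: 25%
    maxSurge: 25%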
Images can be edited to set fields such as labels or annotations.
Previously, the upgrade script used an incorrect image to upgrade the haproxy router. The script now uses the right image.
Previously, an upgrade would fail when a defined image stream or template did not exist. Now, the installation utility skips the incorrectly defined image stream or template and continues with the upgrade.
When using the oc new-app command with the --insecure-registry option, the option would not be set if the Docker daemon was not running. This issue has been fixed.
Using the oc edit command on Windows machines displayed errors with wrapping and file changes. These issues have been fixed.
Previously, pods created from the same image in the same service and deployment were not grouped together. Now, pods created with the same image run in the same service and deployment, grouped together.
Previously, using the oc export command could produce an error, and the export would fail. This issue has been fixed.
The recycler would previously fail if hidden files or directories were present. This issue has been fixed.
Previously, when viewing a build to completion on the web console after deleting and recreating the same build, no build spinner would show. This issue has been fixed.
You can now use custom self-signed certificates for the web console for specific host names.
Previously, the installation utility did not have an option to configure the deployment type. Now, you can pass the --deployment-type option to the installation utility to select a type; otherwise, the type already set in the installation utility is used.
There was an issue with the pip command not being available in the newest OpenShift release. This issue has been fixed.
Previously, the oc exec command could only be used on privileged containers. Now, users with permissions to create pods can use the oc exec command to access privileged containers.
There was an issue where using the iptables command with the -w option, which makes iptables wait to acquire the xtables lock, caused some SDN initializations to fail. This issue has been fixed.
When installing clustered etcd and defining variables for the IP and etcd interfaces on hosts using two network interfaces, the certificate would be populated with only the first network instead of the desired network. The issue has now been fixed.
Using a fieldSelector with a GET request would return a 500 BadRequest error. This issue has been fixed.
Previously, creating an application from an image stream could result in two builds being initiated. This was caused by the wrong image stream tag being used by the build process. The issue has been fixed.
The ose-ha-proxy router image was missing the X-Forwarded headers, causing the Jenkins application to redirect to HTTP instead of HTTPS. The issue has been fixed.
Previously, an error was present where the haproxy router did not expose statistics, even if the port was specified. The issue has been fixed.
Previously, some node hosts would not talk to the SDN due to routing table differences. An lbr0 entry was causing traffic to be routed incorrectly. The issue has been fixed.
When persistent volume claims (PVC) were created from a template, sometimes the same volume would be mounted to multiple PVCs. At the same time, the volume would show that only one PVC was being used. The issue has been fixed.
Previously, using an etcd storage location other than the default, as defined in the master configuration file, would result in an upgrade failure at the "generate etcd backup" stage. This issue has now been fixed.
Basic authentication passwords can now contain colons.
Previously, giving EmptyDir volumes a different default permission setting and group ownership could affect deploying the postgresql-92-rhel7 image. The issue has been fixed.
Previously, an error could occur when trying to perform an HA install using Ansible, due to a problem with SRC files. The issue has been fixed.
When installing an etcd cluster on hosts with different network interfaces, the install would fail. The issue has been fixed.
Previously, when changing the default project region from infra to primary, old router and registry pods were stuck in the terminating stage and could not be deleted, meaning that new router and registry pods could not be deployed. The issue has been fixed.
If, when upgrading to OpenShift Enterprise 3.1, the OpenShift Enterprise repository was not set, a Python error would occur. This issue has been fixed.
Some features in this release are currently in Technology Preview. These experimental features are not intended for production use. Please note the following scope of support on the Red Hat Customer Portal for these features:
The following features are in Technology Preview:
Binary builds and the Dockerfile source type for builds. (Fully supported starting in OpenShift Enterprise 3.1.1)
Pod autoscaling, using the HorizontalPodAutoscaler object. OpenShift compares pod CPU usage as a percentage of requested CPU and scales according to an indicated threshold. (Fully supported starting in OpenShift Enterprise 3.1.1)
Support for OpenShift Enterprise running on RHEL Atomic Host. (Fully supported starting in OpenShift Enterprise 3.1.1)
Containerized installations, meaning all OpenShift Enterprise components running in containers. (Fully supported starting in OpenShift Enterprise 3.1.1)
When pushing to an internal registry when multiple registries share the same NFS volume, there is a chance the push will fail. A workaround has been suggested.
When creating a build, in the event where there are not enough resources (possibly due to quota), the build will be pending indefinitely. As a workaround, free up resources, cancel the build, then start a new build.
Security, bug fix, and enhancement updates for OpenShift Enterprise 3.1 are released as asynchronous errata through the Red Hat Network. All OpenShift Enterprise 3.1 errata are available on the Red Hat Customer Portal. See the OpenShift Enterprise Life Cycle for more information about asynchronous errata.
Red Hat Customer Portal users can enable errata notifications in the account settings for Red Hat Subscription Management (RHSM). When errata notifications are enabled, users are notified via email whenever new errata relevant to their registered systems are released.
Red Hat Customer Portal user accounts must have systems registered and consuming OpenShift Enterprise entitlements for OpenShift Enterprise errata notification emails to be generated.
The following sections provide notes on enhancements and bug fixes for each asynchronous errata release of OpenShift Enterprise 3.1.
For any release, always review the instructions on upgrading your OpenShift cluster properly.
OpenShift Enterprise release 3.1.1 (RHSA-2016:0070) is now available. Ensure that you follow the instructions on upgrading your OpenShift cluster to this asynchronous release properly.
This release includes the following enhancements and bug fixes.
Installation of OpenShift Enterprise master and node components as containerized services, added as Technology Preview in OpenShift Enterprise 3.1.0, is now fully supported as an alternative to the standard RPM method. Both the quick and advanced installation methods support use of the containerized method. See RPM vs Containerized for more details on the differences when running as a containerized installation.
Installing OpenShift Enterprise on Red Hat Enterprise Linux (RHEL) Atomic Host 7.1.6 or later, added as Technology Preview in OpenShift Enterprise 3.1.0, is now fully supported for running containerized OpenShift services. See System Requirements for more details.
Binary builds and the Dockerfile source type for builds, added as Technology Preview in OpenShift Enterprise 3.1.0, are now fully supported.
Pod autoscaling using the HorizontalPodAutoscaler object, added as Technology Preview in OpenShift Enterprise 3.1.0, is now fully supported. OpenShift compares pod CPU usage as a percentage of requested CPU and scales according to an indicated threshold.
When creating an application from source in the web console, you can independently specify build environment variables and deployment environment variables on the creation page. Build environment variables created in this way also become available at runtime. (BZ#1280216)
When creating an application from source in the web console, all container ports are now exposed on the creation page under "Routing". (BZ#1247523)
Build trends are shown on the build configuration overview page.
Individual build configurations and deployment configurations can be deleted.
Any object in the web console can be edited like oc edit with a direct YAML editor, for when you need to tweak rarely used fields.
The web console scaling experience has been improved and now displays more information.
Empty replication controllers are shown in the Overview when they are not part of a service.
Users can dismiss web console alerts.
oc status now shows suggestions and warnings about conditions it detects in the current project.
oc start-build now allows --env and --build-loglevel to be passed as arguments.
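A usage sketch, with a hypothetical build configuration name and variable:

$ oc start-build <build_config> --env=EXAMPLE_VAR=value --build-loglevel=5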
oc secret now allows custom secret types to be created.
Secrets can be created for Docker configuration files using the new .docker/config.json format with the following syntax:
$ oc secrets new <secret_name> .dockerconfigjson=[path/to/].docker/config.json
oc new-build now supports the --to flag, which allows you to specify which image stream tag you want to push a build to. You can pass --to-docker to push to an external image registry. If you only want to test the build, pass --no-output, which only ensures that the build passes.
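For example, with a hypothetical repository and image stream tag:

$ oc new-build https://github.com/example/app.git --to=myapp:latest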
The user name of the person requesting a new project is now available to parameterize the initial project template as the parameter PROJECT_REQUESTING_USER.
When creating a new application from a Docker image, a warning now occurs if the image does not specify a user, because administrators may have disabled running as root inside of containers.
A new role, system:image-pusher, has been added that allows pushing images to the integrated registry.
Deleting a cluster role from the command line now deletes all role bindings associated with that role unless you pass the --cascade=false option.
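For example, to delete a cluster role while preserving its role bindings (the role name is hypothetical):

$ oc delete clusterrole <role_name> --cascade=false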
You can delete a tag using DELETE /oapi/v1/namespaces/<namespace>/imagestreamtags/<stream>:<tag>.
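A sketch of invoking this endpoint with curl, assuming a valid bearer token and master host:

$ curl -X DELETE -H "Authorization: Bearer <token>" \
    https://<master>:8443/oapi/v1/namespaces/<namespace>/imagestreamtags/<stream>:<tag>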
It is no longer valid to set route TLS configuration without also specifying a termination type. The type now defaults to edge if the user provided TLS certificates.
Docker builds can now be configured with custom Dockerfile paths.
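For example, a build configuration strategy excerpt along these lines (the path is hypothetical) points the Docker build at a non-default Dockerfile, relative to the build context:

strategy:
  type: Docker
  dockerStrategy:
    dockerfilePath: dockerfiles/app/Dockerfile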
The integrated Docker registry has been updated to version 2.2.1.
The LDAP group prune and sync commands have been promoted out of experimental and into oadm groups.
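A usage sketch, assuming an LDAP sync configuration file named ldap-sync-config.yaml:

$ oadm groups sync --sync-config=ldap-sync-config.yaml --confirm
$ oadm groups prune --sync-config=ldap-sync-config.yaml --confirm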
More tests and configuration warnings have been added to openshift ex diagnostics.
Builds are now updated with the Git commit used in a build after the build completes.
Routers now support overriding the host value in a route at startup. You can start multiple routers and serve the same route over different wildcards (with different configurations). See the help text for openshift-router.
The following features have entered into Technology Preview:
Dynamic provisioning of persistent storage volumes from Amazon EBS, Google Compute Disk, and OpenStack Cinder storage providers.
Deleting users and groups cascades to delete their role bindings across the cluster.
In clustered etcd environments, user logins could fail with a 401 Unauthorized error due to stale reads from etcd. This bug fix updates OpenShift to wait for access tokens to propagate to all etcd cluster members before returning the token to the user.
OpenShift Enterprise now supports DWARF debugging.
Builds can now retrieve sources from Git when the repository is provided with a user other than git.
When a build fails to start because of quota limits, if the quota is increased, the build is now handled correctly and starts.
When canceling a build within a few seconds of entering the running state, the build is now correctly marked "Cancelled" instead of "Failed".
The example syntax in the help text for oc attach has been fixed.
The man page for the tuned-profiles-atomic-openshift-node command was missing, and has now been restored.
An event is now created with an accompanying error message when a deployment cannot be created due to a quota limit.
The default templates for Jenkins, MySQL, MongoDB, and PostgreSQL incorrectly pointed to CentOS images instead of the correct RHEL-based image streams. These templates have been fixed.
An out of range panic issue has been fixed in the OpenShift SDN.
Previously, it was possible for core dumps to be generated after running OpenShift for several days. Several memory leaks have since been fixed to address this issue.
The Kubelet now exposes statistics from cAdvisor securely, using cluster permissions to control who can view metrics, enabling secure communication for Heapster metric collection.
A bug was fixed in which service endpoints could not be accessed reliably by IP address between different nodes.
When the ovs-multitenant plug-in is enabled, creating and deleting an application could previously leave behind OVS rules and a veth pair on the OVS bridge. Errors could be seen when checking the OVS interface. This bug fix ensures that ports for the deleted applications are properly removed.
If a node was under heavy load, it was possible for the node host subnet to not get created properly during installation. This bug fix bumps the timeout wait from 10 to 30 seconds to avoid the issue.
Various improvements have been made to ensure that OpenShift SDN can be installed and started properly.
The MySQL image can now handle MYSQL_USER=root being set. However, an error is produced if you set MYSQL_USER=root and also set MYSQL_PASSWORD and MYSQL_ROOT_PASSWORD at the same time.
The default haproxy "503" response lacked response headers, resulting in an invalid HTTP response. The response headers have been updated to fix this issue.
haproxy’s "Forwarded" header value is now RFC 7239 compliant.
The default strategies for cluster SCCs have been changed to RunAsAny for FSGroup and SupplementalGroups, to retain backwards-compatible behavior.
When creating a PV and PVC for a Cinder volume, it was possible for pods to not be created successfully due to a "Cloud provider not initialized properly" error. This has been fixed by ensuring that the related OpenShift instance ID is properly cached and used for volume management.
There was an issue with OpenShift Enterprise 3.1.1 where hosts with host names that resolved to IP addresses that were not local to the host would run into problems with liveness and readiness probes on newly-created haproxy routers. This was resolved in RHBA-2016:0293 by configuring the probes to use localhost as the hostname for pods with hostPort values.
If you created a router under the affected version, and your liveness or readiness probes unexpectedly fail for your router, then add host: localhost:
# oc edit dc/router
Apply the following changes:
spec:
  template:
    spec:
      containers:
        ...
        livenessProbe:
          httpGet:
            host: localhost (1)
            path: /healthz
            port: 1936
            scheme: HTTP
          initialDelaySeconds: 10
          timeoutSeconds: 1
        ...
        readinessProbe:
          httpGet:
            host: localhost (2)
            path: /healthz
            port: 1936
            scheme: HTTP
          timeoutSeconds: 1
(1) Add host: localhost to your liveness probe.
(2) Add host: localhost to your readiness probe.
OpenShift Enterprise release 3.1.1.11 is now available. The list of packages and bug fixes included in the update are documented in the RHBA-2017:0989 advisory. The list of container images included in the update are documented in the RHBA-2017:0990 advisory.
The container images in this release have been updated using the rhel:7.3-74 base image, where applicable.
To upgrade an existing OpenShift Enterprise 3.0 or 3.1 cluster to the latest 3.1 release, use the automated upgrade playbook. See Performing Automated In-place Cluster Upgrades for instructions.
OpenShift Enterprise release 3.1.1.11-2 is now available. The list of packages and bug fixes included in the update are documented in the RHBA-2017:1235 advisory. The list of container images included in the update are documented in the RHBA-2017:1236 advisory.
To upgrade an existing OpenShift Enterprise 3.0 or 3.1 cluster to the latest 3.1 release, use the automated upgrade playbook. See Performing Automated In-place Cluster Upgrades for instructions.
OpenShift Enterprise release 3.1.1.11-3 is now available. The list of packages and bug fixes included in the update are documented in the RHBA-2017:1665 advisory.
To upgrade an existing OpenShift Enterprise 3.0 or 3.1 cluster to the latest 3.1 release, use the automated upgrade playbook. See Performing Automated In-place Cluster Upgrades for instructions.