Red Hat OpenShift Container Platform provides developers and IT organizations with a hybrid cloud application platform for deploying both new and existing applications on secure, scalable resources with minimal configuration and management overhead. OpenShift Container Platform supports a wide selection of programming languages and frameworks, such as Java, JavaScript, Python, Ruby, and PHP.
Built on Red Hat Enterprise Linux and Kubernetes, OpenShift Container Platform provides a more secure and scalable multi-tenant operating system for today’s enterprise-class applications, while delivering integrated application runtimes and libraries. OpenShift Container Platform enables organizations to meet security, privacy, compliance, and governance requirements.
Red Hat OpenShift Container Platform 4.3 (RHBA-2020:0062) is now available. This release uses Kubernetes 1.16 with the CRI-O runtime. New features, changes, and known issues that pertain to OpenShift Container Platform 4.3 are included in this topic.
This is an increase of two versions of Kubernetes from OpenShift Container Platform 4.2, which used Kubernetes 1.14.
OpenShift Container Platform 4.3 clusters are available at https://cloud.redhat.com/openshift. The Red Hat OpenShift Cluster Manager application for OpenShift Container Platform allows you to deploy OpenShift clusters to either on-premises or cloud environments.
OpenShift Container Platform 4.3 is supported on Red Hat Enterprise Linux 7.6 or later, as well as Red Hat Enterprise Linux CoreOS 4.3.
You must use Red Hat Enterprise Linux CoreOS (RHCOS) for the control plane, or master, machines and can use either RHCOS or Red Hat Enterprise Linux 7.6 or later for compute, or worker, machines.
Because only Red Hat Enterprise Linux version 7.6 or later is supported for compute machines, you must not upgrade the Red Hat Enterprise Linux compute machines to version 8.
This release adds improvements related to the following components and concepts.
In OpenShift Container Platform 4.1, Red Hat introduced the concept of upgrade channels for recommending the appropriate upgrade versions to your cluster. Upgrade channels separate upgrade strategies and are also used to control the cadence of updates.
Channels are tied to a minor version of OpenShift Container Platform. For instance, OpenShift Container Platform 4.3 channels never include an upgrade to a 4.4 release. This ensures administrators make an explicit decision to upgrade to the next minor version of OpenShift Container Platform. Channels control only updates and have no impact on the version of the cluster you install; the openshift-install binary for a given patch level of OpenShift Container Platform always installs that patch level.
You must choose the upgrade channel version corresponding to the OpenShift Container Platform version you plan to upgrade to. OpenShift Container Platform 4.3 includes the upgrade from the previous 4.2 release.
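For example, a minimal sketch of selecting the stable channel for this release by patching the ClusterVersion resource; the stable-4.3 channel name is the conventional one for this release, but confirm the channel that applies to your cluster:
$ oc patch clusterversion version --type merge -p '{"spec":{"channel":"stable-4.3"}}'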
See OpenShift 4.x upgrades phased roll out for more information on the types of updates and upgrade channels.
Because upgrades are published to the channels as they are gradually rolled out to customers based on data from the Red Hat Site Reliability Engineering (SRE) team, you might not immediately see notification in the web console that updates from version 4.2.z to 4.3 are available at initial release.
You can now install an OpenShift Container Platform cluster that uses FIPS validated / Implementation Under Test cryptographic libraries. OpenShift Container Platform uses certain FIPS validated / Implementation Under Test modules within Red Hat Enterprise Linux (RHEL) and Red Hat Enterprise Linux CoreOS (RHCOS) for the operating system components that it uses. For more information, see Support for FIPS cryptography.
You can install a private cluster into an:
existing VPC on Amazon Web Services (AWS)
existing VPC on Google Cloud Platform (GCP)
existing Azure Virtual Network (VNet) on Microsoft Azure
To create a private cluster on these cloud platforms, you must provide an existing private VPC/VNet and subnets to host the cluster. The installation program configures the Ingress Operator and API server for access from only the private network.
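As an abridged, hedged sketch of such an install-config.yaml on AWS (the domain, subnet IDs, and region are placeholders, and required fields such as the pull secret are omitted), publish: Internal is the setting that restricts endpoints to the private network:
$ cat <<EOF > install-config.yaml
apiVersion: v1
baseDomain: example.com
metadata:
  name: example-private-cluster
platform:
  aws:
    region: us-east-1
    subnets:
    - subnet-0aaaaaaaaaaaaaaaa
    - subnet-0bbbbbbbbbbbbbbbb
publish: Internal
EOF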
Automated service CA rotation will be available in a future z-stream update of this release. Previously, the service CA did not automatically renew, which led to service disruption and required manual intervention. With this feature, the service CA and signing key auto-rotate before expiration, allowing administrators to plan for their environments in advance and avoid service disruption.
You can now encrypt data stored in etcd. Enabling etcd encryption for your cluster provides an additional layer of data security.
When you enable etcd encryption, the following OpenShift API server and Kubernetes API server resources are encrypted (see the example after this list):
Secrets
ConfigMaps
Routes
OAuth access tokens
OAuth authorize tokens
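For example, a minimal sketch of enabling etcd encryption by patching the cluster APIServer resource; aescbc is the encryption type that this release supports:
$ oc patch apiserver cluster --type merge \
    -p '{"spec":{"encryption":{"type":"aescbc"}}}'
Encryption of the existing resources proceeds in the background and can take several minutes or longer, depending on the size of the cluster.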
Performance improvements are now available for the PromQL query browser used in the OpenShift Container Platform web console.
The KubeletTooManyPods alert now uses the Pod capacity metric as a threshold instead of a fixed number.
When a machine's instance is deleted out of band, the machine no longer attempts to create a replacement instance; instead, the machine enters a failed phase. You can automatically repair damaged machines in a machine pool by configuring and deploying a machine health check.
The controller that observes a MachineHealthCheck resource checks for the status that you define. If a machine fails the health check, it is automatically deleted and a new one is created to take its place. When a machine is deleted, you see a machine-deleted event. To limit disruptive impact of the machine deletion, the controller drains and deletes only one node at a time.
To stop the check, you remove the resource.
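The following is a minimal sketch of a MachineHealthCheck resource; the worker-role label selector and the unhealthy-condition timeouts are illustrative values, not prescriptive ones:
$ oc apply -f - <<EOF
apiVersion: machine.openshift.io/v1beta1
kind: MachineHealthCheck
metadata:
  name: example-healthcheck
  namespace: openshift-machine-api
spec:
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-machine-role: worker
  unhealthyConditions:
  - type: Ready
    status: Unknown
    timeout: 300s
  - type: Ready
    status: "False"
    timeout: 300s
EOF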
The Log Forwarding API provides a way to ship container and node logs to destinations that are not necessarily managed by the OpenShift Container Platform cluster logging infrastructure. Destination endpoints can be on or off your OpenShift Container Platform cluster. Log forwarding provides an easier way to forward logs than using Fluentd plug-ins without requiring you to set the cluster to Unmanaged. See Forwarding logs using the Log Forwarding API for more information.
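As a hedged sketch of the Technology Preview API, the following forwards application logs to an external Elasticsearch instance; the endpoint address is a placeholder, and the v1alpha1 field names should be confirmed against the 4.3 documentation because Technology Preview APIs can change:
$ oc apply -f - <<EOF
apiVersion: logging.openshift.io/v1alpha1
kind: LogForwarding
metadata:
  name: instance
  namespace: openshift-logging
spec:
  disableDefaultForwarding: true
  outputs:
  - name: external-elasticsearch
    type: elasticsearch
    endpoint: elasticsearch.example.com:9200
  pipelines:
  - name: app-logs
    inputSource: logs.app
    outputRefs:
    - external-elasticsearch
EOF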
OpenShift Do (odo) has a few enhancements that focus on the user experience of application deployment:
PushTimeout has been added as a configurable wait parameter.
Both the Service Catalog and component creation have been improved with extended output and information prompts.
Architecture support has been expanded to IBM Z and Power platforms, providing binaries that are available for installation.
Helm is a package manager for Kubernetes and OpenShift Container Platform applications. It uses a packaging format called Helm charts to simplify defining, installing, and upgrading applications and services.
The Helm CLI is built and shipped with OpenShift Container Platform and is available for download from the web console’s CLI menu.
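As a quick sketch of Helm v3 usage once the CLI is downloaded (the release name and chart path are placeholders):
$ helm version
$ helm install my-release ./my-chart
$ helm list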
The new Project dashboard is now available from the Administrator and Developer perspectives. This dashboard provides the following information about a project:
status/health
external links
inventory
utilization
resource quota
activity and top consumers
The new location option NamespaceDashboard in the ConsoleLink Custom Resource Definition lets you add project-specific links to the project dashboard.
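A minimal sketch of a ConsoleLink that uses this location; the link name, URL, text, and namespace list are illustrative:
$ oc apply -f - <<EOF
apiVersion: console.openshift.io/v1
kind: ConsoleLink
metadata:
  name: example-namespace-dashboard-link
spec:
  href: 'https://example.com/dashboard'
  location: NamespaceDashboard
  text: Example Dashboard Link
  namespaceDashboard:
    namespaces:
    - my-project
EOF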
You can now integrate cluster-wide third-party user interfaces to develop, administer, and configure Operator-backed services with the ConsoleLink Custom Resource Definition.
The new ConsoleYAMLSample Custom Resource Definition provides the ability to dynamically add YAML examples to any Kubernetes resource at any time.
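For illustration, a minimal sketch of a ConsoleYAMLSample that attaches an example to the Service resource; the title, description, and embedded YAML are placeholders:
$ oc apply -f - <<EOF
apiVersion: console.openshift.io/v1
kind: ConsoleYAMLSample
metadata:
  name: example-service-sample
spec:
  targetResource:
    apiVersion: v1
    kind: Service
  title: Example Service
  description: A sample Service definition.
  yaml: |
    apiVersion: v1
    kind: Service
    metadata:
      name: example
    spec:
      selector:
        app: example
      ports:
      - port: 8080
EOF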
See Customizing the web console for more information.
You can now open a Red Hat Support case from the help menu in the web console.
You can now view your container vulnerabilities from the web console dashboard. This leverages the Quay Operator, which supports both on-premise and external Quay registries. Security vulnerabilities are only reported for images managed by Quay.
All user management resources are now available under the User Resource navigation section.
The ability to impersonate a user has also been added, which lets you view exactly what a user sees when navigating the console.
You can now create alert receivers to be informed about your cluster’s state. You can create PagerDuty and webhook alert types.
You can now use the Developer perspective to:
Create serverless applications and revisions, and split traffic between the revisions.
Delete an application and all its components.
Assign RBAC permissions to users within a project.
Bind an application with a Service using the binding connector.
Container Storage Interface (CSI) provisioners are now shown on the storage class creation page. Previously, storage classes were hardcoded in the user interface; CSI-based storage classes are dynamic in nature and do not have static names. Users can now list CSI-based provisioners on the storage class creation page and create storage classes from them.
The Kubernetes v1 NetworkPolicy features are available in OpenShift Container Platform, except for egress policy types and IPBlock. IPBlock is supported in NetworkPolicy with limitations: it supports ipBlock without except clauses. If you create a policy with an ipBlock section that includes an except clause, the SDN Pods log warnings and the entire ipBlock section of that policy is ignored.
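For example, a minimal policy sketch that uses ipBlock without an except clause and is therefore fully honored by the SDN; the CIDR is a placeholder:
$ oc apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-from-cidr
spec:
  podSelector: {}
  ingress:
  - from:
    - ipBlock:
        cidr: 172.17.0.0/16
EOF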
You can install a customized cluster on RHOSP 13 and 16 that uses Kuryr SDN. You can follow the installation guide for installing a cluster on OpenStack with Kuryr.
Updated guidance around Cluster maximums for OpenShift Container Platform 4.3 is now available.
Use the OpenShift Container Platform Limit Calculator to estimate cluster limits for your environment.
You can now deploy, manage, monitor, and migrate a Red Hat OpenShift Container Storage 4.2 cluster. See the Red Hat OpenShift Container Storage 4.2 Release Notes for more information.
Persistent volumes using iSCSI, previously in Technology Preview, are now fully supported in OpenShift Container Platform 4.3.
iSCSI raw block volumes, previously in Technology Preview, are now fully supported with OpenShift Container Platform 4.3.
Raw block volumes using Cinder are now in Technology Preview.
The Samples Operator automatically recognizes the cluster architecture during installation and does not install incompatible x86_64 content on Power and Z architectures.
The Samples Operator also uses Prometheus metrics to gather information about which imagestreams have failed to import, and if the Samples Operator has invalid configurations. An alert is sent if an imagestream fails to import or the Samples Operator has an invalid configuration.
The following enhancements are now available for the Image Registry Operator:
The registry management state is set as Removed on bare metal, vSphere, and Red Hat Virtualization platforms so other storage providers can be configured. New installations must set the registry state to Managed in addition to provisioning storage.
An alert is sent when the registry storage has changed, as this could result in data loss.
Assuming there is a registry running in a disconnected environment available to both the disconnected cluster and to the workstation from which the oc adm commands are run, you can now mirror the OperatorHub by following three steps (see the example after these steps):
Mirror the Operator catalog into a container image and push it to the disconnected registry using oc adm catalog build.
Parse the referenced Operator and app images and push them to the disconnected registry using oc adm catalog mirror.
Enable the mirror catalog in the disconnected cluster using oc apply -f ./manifests.
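As an illustrative sketch of these three steps, assuming a mirror registry reachable at <mirror_registry>:<port> (a placeholder) and the Red Hat Operators catalog:
$ oc adm catalog build \
    --appregistry-org redhat-operators \
    --to=<mirror_registry>:<port>/olm/redhat-operators:v1
$ oc adm catalog mirror \
    <mirror_registry>:<port>/olm/redhat-operators:v1 \
    <mirror_registry>:<port>
$ oc apply -f ./manifests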
See Using Operator Lifecycle Manager on restricted networks for details.
OpenShift Container Platform 4.3 introduces the following notable technical changes.
OpenShift Container Platform 4.3 supports Operator SDK v0.12.0 or later.
The Operator test tooling (scorecard v2) now includes the following improvements:
Categorizing Operator tests as required/optional.
Configuring test selection and pass/fail behavior.
Shipping custom tests.
For Helm-based Operators, improvements include:
Helm v3 support, starting with Operator SDK 0.14.0.
Role-based access control (RBAC) generation.
Ansible-based Operator enhancements include:
Support for Prometheus metrics.
Usage of Red Hat Universal Base Image (UBI).
Molecule-based end-to-end testing.
Lastly, the Golang-based Operator improvements include:
OpenAPI spec generation.
Kubernetes 1.14 support.
Removal of dep-based projects. All Go projects are now scaffolded to use Go modules. The operator-sdk new command’s --dep-manager flag has been removed.
Required Go version update from v1.10 to v1.13.
Support for Prometheus metrics.
Changes introduced by the new Log Forwarding API modify support for the Fluentd forward protocol starting with the OpenShift Container Platform 4.3 release. In the 4.3 release, you can still use the Fluentd forward protocol without using the new log forwarding feature, which is in Technology Preview.
To use the Fluentd forward protocol, you must create a ConfigMap object to configure out_forward instead of editing the secure-forward.conf section in the fluentd ConfigMap.
Additionally, you can add any certificates required by your configuration to a secret that is mounted to the Fluentd Pods. See Sending logs to external devices using Fluentd Forward plug-ins.
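A hedged sketch of creating that ConfigMap from a local secure-forward.conf file; the file contents depend on your forward destination:
$ oc create configmap secure-forward \
    --from-file=secure-forward.conf \
    -n openshift-logging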
In 4.3, the Fluentd forward method is deprecated and will be removed in a future release.
When you update to OpenShift Container Platform 4.3, any existing modifications to the secure-forward.conf section of the fluentd ConfigMap are removed. You can copy your current secure-forward.conf section before updating to use when creating the secure-forward ConfigMap object.
Due to changes introduced by the new Log Forwarding API, you can no longer forward logs to an external Elasticsearch instance by editing the Fluentd DaemonSet.
In previous versions, you could use the ES_HOST and OPS_HOST environment variables or configure the fluent-plugin-remote-syslog plug-in through the fluentd DaemonSet.
You can forward logs to an external Elasticsearch instance and other endpoints using the new Log Forwarding API feature or the Fluentd forward protocol. The documentation is now updated to reflect these changes.
The pipelines build strategy is now deprecated. Use OpenShift Pipelines instead.
The apps/v1beta1, apps/v1beta2, and extensions/v1beta1 workload APIs are now deprecated with the introduction of Kubernetes 1.16. The UsingDeprecatedAPIExtensionsV1Beta1 alert fires when you use one of the deprecated APIs. These deprecated APIs will be removed in the next version of OpenShift Container Platform, so it is critical that you migrate to supported APIs.
Service Catalog, Template Service Broker, Ansible Service Broker, and their associated Operators were deprecated in OpenShift Container Platform 4.2 and will be removed in a future OpenShift Container Platform release. If they are enabled in 4.3, the web console now warns the user that these features are still enabled.
The following alerts can be viewed from the Monitoring → Alerts page and have a Warning severity:
ServiceCatalogAPIServerEnabled
ServiceCatalogControllerManagerEnabled
TemplateServiceBrokerEnabled
AnsibleServiceBrokerEnabled
The following related APIs will be removed in a future release:
*.servicecatalog.k8s.io/v1beta1
*.automationbroker.io/v1alpha1
*.osb.openshift.io/v1
OperatorSources and CatalogSourceConfigs are deprecated from OperatorHub. The following related APIs will be removed in a future release:
operatorsources.operators.coreos.com/v1
catalogsourceconfigs.operators.coreos.com/v2
catalogsourceconfigs.operators.coreos.com/v1
Authentication
The Authentication Operator reported a static "available" string as a reason for the unavailability condition, which was unclear. This bug fix implements more precise reasons for unavailability conditions, and as a result, inspecting why the Operator is unavailable is more clear. (BZ#1740357)
The oauth-proxy process was reloading CA certificates for each request and storing them in memory. High memory consumption caused the oauth-proxy container to be killed. With this bug fix, CA certificates are now cached unless they change. As a result, memory consumption for the oauth-proxy process has dropped significantly when multiple requests against it are issued. (BZ#1759169)
Previously, the client CA certificate configured for the RequestHeader identity provider (IdP) was not announced among other certificates during the TLS handshake with the OAuth server. When login proxies tried to connect to the OAuth server, they would not use their client certificate, resulting in their requests being unauthenticated, which in turn left users of the IdP unable to log in to the cluster. This bug fix adds the configured client CA among the rest in the TLS configuration, and as a result, authentication using the RequestHeader IdP works as expected. (BZ#1764558)
The bootstrap user introduced in OpenShift Container Platform 4.1 internally made the CLI login flow always available. The message about how to retrieve an authentication token, which was present in OCP 3.x, no longer appeared for users that tried to log in from the CLI in cases where only web console flows were configured. With this bug fix, the bootstrap user identity provider (IdP) is no longer configured when it is disabled by the user. As a result, after the bootstrap IdP is disabled by following the steps from the OCP documentation, the message about how to retrieve an authentication token in web console-only scenarios is now displayed. (BZ#1781083)
Previously, the route to oauth-server did not react to Ingress domain changes, which degraded the Authentication Operator and caused the oauth-server to not authenticate properly. The oauth-server route now updates when an Ingress domain change is detected, allowing authentication to work in this scenario. (BZ#1707905)
Builds
Builds started very soon after an imagestream was created might not leverage local pullthrough imagestream tags when specified. The build would attempt to pull the image from the external image registry, and if the build was not set up with the authorization and certificates needed for that registry, the build would fail. The build controller is now updated to detect when its imagestream cache is missing the necessary information to allow for local pullthrough imagestream tags and to retrieve that information through other means. Builds can now successfully leverage local imagestream tag pullthrough. (BZ#1753731)
The build controller sometimes incorrectly assumed a build was instantiated from the build config endpoint when it was actually instantiated directly from the build endpoint. Therefore, confusing logging about non-existent build configs could appear in the build controller logs if a user instantiated an OpenShift build directly, as opposed to initiating a build request off of the build config API endpoint. The build controller is now updated to better check whether a build was instantiated from the build config endpoint and refrain from logging unnecessary error messages. The build controller logs no longer contain these confusing error messages for builds instantiated directly versus from the build config endpoint. (BZ#1767218), (BZ#1767219)
Cluster Version Operator
Previously, the update protocol Cincinnati, designed to facilitate automatic updates, used tags for payload references. This could yield different results when applying the same release of the same graph at different points. Now the payload reference uses the image SHA instead, if the container registry provides the manifestref. This guarantees the exact release version a cluster is going to use. (BZ#1686589)
Console Kubevirt Plugin
Previously, the specified volumeMode was not passed to newly created disks, so PVCs might not bind properly. The volumeMode is now passed properly to the newly created disks. (BZ#1753688)
Previously, the virtual machine detail page did not load properly when accessed directly by the URL. The page now loads properly. (BZ#1731480)
Previously, the kubevirt-storage-class-defaults ConfigMap setting was not reflected properly for VMware VM imports. Because of this, blockMode PVCs could not be used for VMware VM imports. The storage class defaults are now used properly when requesting VMware imported disks. (BZ#1762217)
Previously, the title for the Import VM wizard was incorrect and could be confusing. The wizard now has the correct title of Import Virtual Machine. (BZ#1768442)
Previously, the confirmation buttons for storage and network configuration in the VM migration wizard were located in the wrong place. These confirmation buttons are now located in the correct location. (BZ#1778783)
Previously, the Create Virtual Machine wizard did not prompt for confirmation before creating a VM, which meant the user could unexpectedly create a VM. With this fix, the user must click "Create Virtual Machine" on the review page before a VM is created. (BZ#1674407)
Previously, the Create Virtual Machine wizard had required fields that were not always intuitive when importing a VM. The Create Virtual Machine wizard has been redesigned to work as expected. (BZ#1710939)
Previously, error messages for validating VM names were not helpful. These error messages have been improved to be more descriptive. (BZ#1743938)
Containers
Previously, CRI-O was not properly filtering Podman containers during a restore operation. Because Podman containers do not have CRI-O-specific metadata, at startup, CRI-O would interpret the Podman containers it saw as CRI-O containers that were incorrectly created. It would therefore ask the storage library to delete the containers. This bug fix now properly filters Podman containers on CRI-O restore so that they are no longer deleted from storage upon startup. (BZ#1758500)
Etcd
Etcd would become overloaded with a large number of objects, causing the cluster to go down when etcd failed. Now, the etcd client balancer facilitates peer failovers in the event of a client connection timeout. (BZ#1706103)
Etcd would fail during the upgrade process and result in disaster recovery remediation steps. Now, etcd has been updated with a fixed gRPC package to prevent catastrophic cluster failure. (BZ#1733594)
Image Registry
After changing the storage type in the image registry Operator’s configuration, both the previous and new storage types appeared in the Operator’s status. Because of this behavior, the image registry Operator was not removed after you deleted its configuration. Now only the new storage type is displayed, so the image registry Operator is removed after you change the storage type that the image registry uses. (BZ#1722878)
Because it was possible for older imagestreams to have invalid names, image pruning failed when the specs for the imagestream’s tags were invalid. Now, the image pruner always prunes images when the associated imagestream has an invalid name. (BZ#1749256)
When the image registry Operator’s management state was Removed, it did not report itself as Available or the correct version number. Because of this issue, upgrades failed when the image registry Operator was set to Removed. Now when you set the image registry Operator’s status to Removed, it reports itself as Available and at the correct version. You can complete upgrades even if you remove the image registry from the cluster. (BZ#1753778)
It was possible to configure the image registry Operator with an invalid Azure container name, and the image registry did not deploy on Azure because of the invalid name. Now the image registry Operator’s API schema ensures that the Azure container name that you enter conforms to Azure’s API requirements and is valid, which ensures that the Operator can deploy. (BZ#1750675)
kube-apiserver
An unnecessary service monitoring object was created for each of the following controllers: kube-apiserver, kube-controller-manager, and kube-scheduler. The unused service monitoring object is no longer created. (BZ#1735509)
When a cluster was in a non-upgradeable state because either Technology Preview features or custom features were enabled, no alert was sent. The cluster now sends a TechPreviewNoupgrade alert through Prometheus if an upgrade is attempted on a cluster in a non-upgradeable state. (BZ#1731228)
kube-controller-manager
When defining a StatefulSet resource object, custom labels were not applied when creating PersistentVolumeClaim resource objects from the template specified by the volumeClaimTemplates parameter. Custom labels are now applied correctly to PersistentVolumeClaim objects created from the volumeClaimTemplates objects defined by a StatefulSet resource. (BZ#1753467)
Previously, if the lease ConfigMap for the Kubernetes Controller Manager (KCM) was deleted, KCM did not have permission to recreate the ConfigMap and was unable to do so. The KCM can now recreate the lease ConfigMap if it is deleted. (BZ#1780843)
Logging
Mismatches between the cluster version and the ClusterLogging version would cause ClusterLogging to fail to deploy. Now, the Kubernetes version is verified to ensure that it supports the deployed ClusterLogging version. (BZ#1765261)
The data in journald for facility values was not sanitized and values were incorrect, causing fluentd to emit error messages at the wrong level. Now, fluentd logs at the debug level and these errors are reported correctly. (BZ#1753936, BZ#1766187)
The oauth-proxy was misconfigured in a way that users were unable to log in after logging out. Now, the oauth-proxy has been reconfigured so that users can log in again after logging out. (BZ#1725517)
Eventrouter was not able to handle unknown event types, which would result in Eventrouter crashing. Now, Eventrouter properly handles unknown event types. (BZ#1753568)
Management Console
The Management Console Dashboard Details were unnecessarily watching the Infrastructure resources. As a result, errors regarding early web socket connection terminations were possible. Now, the Details card does not watch Infrastructure resources and only fetches the resource data once. Errors are not reported after implementing this fix. (BZ#1765083)
The console Operator would record an initial empty string value for the console URL before the router had a chance to provide the host name. Now, the Operator waits until the hostname is filled and eliminates the empty string value. (BZ#1768684)
Metering Operator
Previously, the containerImage field in the metering-operator CSV bundle referenced an image tag that was not listed in the image-references file that ART uses for substitution purposes. This meant that ART was not able to properly substitute the origin image listed in the containerImage field with the associated image-registry repository and sha256 tag. This bug fix replaces the image tag latest with release-4.3, which is what was defined in the image-references file. As a result, ART is now able to successfully substitute the metering-operator container image. (BZ#1782237)
Previously, the Hadoop Dockerfile.rhel copied the gcs-connector JAR file to the wrong location in the container. The path has been corrected to now point to the right location. (BZ#1767629)
Networking
Previously, not all related objects were deleted when the CNO was changed, which left stale network-attachment-definitions. The code has been refactored to now do this in a more generic way in OpenShift Container Platform 4.3 so that the related objects are cleaned up properly. (BZ#1755586)
Previously, some updates were dropped which caused events to be missed. Events are no longer dropped. (BZ#1747532)
Previously, in clusters that had high network traffic volumes with packet loss, a once-successful connection to a service could fail with a Connection reset by peer error. As a result, clients had to reconnect and retransmit. An update has been made to iptables rules to process TCP retransmits correctly. Established connections will remain open until they are closed. (BZ#1762298)
Previously, NetworkPolicy rule applications to new namespaces could occur slowly in clusters that had many namespaces, namespace changes, and NetworkPolicies that select namespaces. New namespaces could take significant amounts of time before they could be accessed from other namespaces. Due to an update in Namespace and NetworkPolicy code, NetworkPolicies should be applied promptly to new namespaces. (BZ#1752636)
Previously, SDN pods did not clean up Egress IP addresses when they restarted on a node, resulting in IP address conflicts. SDN pods now clean up stale Egress IP addresses as they start, preventing such conflicts from occurring. (BZ#1753216)
Previously, DNS names were queried every time they occurred in an EgressNetworkPolicy. Records were queried regardless of whether a particular DNS record had been refreshed by a previous query, resulting in slow network performance. DNS records are now queried based on unique names rather than per each EgressNetworkPolicy. As a result, DNS query performance has been significantly improved. (BZ#1684079)
Route creation between multiple service endpoints was not possible from the console. Now, the GUI has been updated to add or remove up to three alternative service endpoints. (BZ#1725006)
Node
Previously, when containers had a high (greater than 1) restart count, the kubelet could inject duplicate container metrics into the metrics stream, causing the /metrics endpoint on the kubelet to throw a 500 error. With this bug fix, only metrics of the most current container (running or stopped) are included. As a result, the /metrics endpoint now allows metrics to flow to Prometheus without causing a 500 error. (BZ#1779285)
Upstream changes were made to the long path names test. Pods with names longer than 255 characters were not logged and no warning was issued. Now, the long names test is removed and Pods with names longer than 255 characters log as expected. (BZ#1711544)
The LocalStorageCapacityIsolation feature was disabled, and users were unable to use the Statefulset.emptyDir.sizeLimit parameter. Now, the LocalStorageCapacityIsolation feature has been enabled and the Statefulset.emptyDir.sizeLimit parameter can be set. (BZ#1758434)
oc
Previously when using server-side print, the wide output option was ignored when used in a watch (oc get clusteroperators -o wide). The operation has been fixed to now properly recognize all the possible options when using server-side print. (BZ#1685189)
The oc explain command links to upstream documentation were out of date. These links have been updated and are now valid. (BZ#1727781)
Full usage menu information was printed along with bad flag error messages, causing the error message to be lost at times. Now, when an oc command is run with a bad flag, the bad flag error is the only information displayed. (BZ#1748777)
The oc status command was not displaying DaemonSets in a consistent format due to missing status code information. Now, DaemonSets, Deployments, and DeploymentConfigs are printed properly in the output of the oc status command. (BZ#1540560)
The oc version and openshift-install version commands would show as Dirty due to incorrectly set flags. These flags have been updated and the commands no longer display a Dirty GitTreeState or GitVersion. (BZ#1715001)
The oc status command would suggest oc set probe pod to verify that pods are still running, including pods that may have been owned by controllers. Now, pods that are owned by controllers are ignored for probe suggestions. (BZ#1712697)
Previously, the oc new-build help command was not properly filtering flags. This caused irrelevant flags to be printed when invoking oc new-build --help. This has been fixed, and now the help command only prints relevant output. (BZ#1737392)
openshift-apiserver
The ClusterResourceQuota in 4.2 and 4.3 was not allowing non-strings as limit values because the OpenAPI schema was wrong. Therefore, integer quota values could not be set in ClusterResourceQuota objects, even though doing so was previously possible in 4.1. The OpenAPI schema for ClusterResourceQuota has been fixed to allow integers, so integers can now be used as quota values in ClusterResourceQuota again. (BZ#1756417)
During upgrades, openshift-apiserver would report Degraded. The reason given for the degradation was MultipleAvailable, which was not understandable to the user. This bug fix now lists the reason for the degradation, so that no information is hidden from the user. (BZ#1767156)
Web Console
The console workload overview showed a restricted access error if the Knative Serverless TP1 Operator was installed and you were logged in as a non-admin user. With this bug fix, the Overview sidebar resources now work as expected for both normal and Knative-specific deployments. A non-admin user can now view the workloads. (BZ#1758628)
The topology view data model was originally a subset of the project Workloads page. As more features were added, the topology view grew to be similar but did not share the same code. As use cases became more complex, certain edge cases were being missed in the new code. In certain situations, the Pod list from the topology view was incorrect. With this bug fix, code logic is now shared between the topology view and project Workloads page. As a result, whether viewing the sidebar Pod list from topology or from the project Workloads list, the Pod details are now identical. (BZ#1760827)
Previously, when the Route object was created, the first port from the list of available ports was set instead of setting the selected port from the target-port dropdown menu. Because of this, the user was unable to select their desired target port. The port selected from the target port dropdown menu is now applied when creating a Route object; if no port is selected, the first port from the list is set. (BZ#1760836)
Previously, certain features, such as the name of the application and the build status, were not rendered in the Topology view on the Edge browser. With this bug fix, the Edge browser renders the application name and the build status as expected. (BZ#1760858)
In the web console Overview, a non-admin user was not able to view workloads when the Knative Operator was installed, even if a deployment that was not a Knative workload was selected. This bug fix adds a check in case there are no configurations found so that the system will not add Knative-specific resources in Overview. This enables a non-admin user to now view the workloads as expected. (BZ#1760810)
Previously, when the Topology context menu was open, the associated node was not easily identifiable. This caused confusion for users because they did not know which node the context menu referred to. Now when right-clicking a node to open the context menu, a visual hover, or drop shadow, is applied to the node for easier identification. (BZ#1776401)
Previously, the Import from Git form in the web console used a regular expression too restrictive to validate the Git URL, which disallowed some valid URLs. The regular expression has been updated to accept all valid Git URLs. (BZ#1766350), (BZ#1771851)
Error messages from the Developer console were duplicated. The console has been updated to reflect values from the client side. As a result, error messages are now clear and concise. (BZ#1688613)
Previously, the web console could experience a runtime error when visiting the Resources tab of an OLM operand resource. The web console could also freeze when trying to sort the Resources tab for an OLM operand resource. These issues are now resolved. (BZ#1756319)
Previously, visiting the OpenShift web console pod details page in Microsoft Edge could result in a runtime error, preventing the page from displaying. The issue is now resolved and the pod details page now displays correctly. (BZ#1768654)
Previously, if a dashboard card watched Prometheus results, the dashboard page’s performance could decrease due to an incorrect comparison between old alerts and new alerts. The comparison defect has been fixed. (BZ#1781053)
In previous versions, the documentation link on the Network Policy page was incorrect. It has been replaced with the correct link. (BZ#1692227)
Previously, Prometheus queries contained a range selector, which prevented the chart on the default page of the Prometheus UI from rendering. The queries no longer contain range selectors, so the query now renders properly. (BZ#1746979)
Recycle was the default value for the Persistent Volume Reclaim policy even though that option was deprecated, so Persistent Volumes contained deprecated values by default. The default Persistent Volume Reclaim policy is now Retain, so new Persistent Volumes do not contain deprecated values. (BZ#1751647)
Previously, after upgrading your cluster, the web console could use cached CSS stylesheets, which might cause some rendering issues when loading the console. The problem has been fixed, and the web console now uses the correct stylesheets after an upgrade. (BZ#1772687)
Previously, when using the web console in some situations part of the options menu was hidden behind other elements on the page. The options menu no longer appears behind other page elements and will expand in a viewable space on the page to ensure the entire menu is always visible. (BZ#1723254)
Previously, long node names could overflow the table column in the OpenShift console pods table. With this bug fix, they now correctly wrap. (BZ#1713193)
Previously, creating a report query using an example YAML would result in an error. This bug fix adds a new YAML example for report queries that contains all required fields so that an error does not occur. (BZ#1753124)
Previously on the Install Plan Details page, the namespace for associated catalog sources was set incorrectly. This resulted in broken links because the namespace did not exist. This bug fix uses the status.plan field of the InstallPlan resource to associate the catalog source with the correct namespace to build links from. Thus, the catalog source links now work as expected. (BZ#1767072)
Previously, unknown custom resources were automatically split into words to estimate what the user should see. However, some resources were split inappropriately. With this bug fix, custom resources now use the name as defined in the Custom Resource Definition, rather than being split into separate words. (BZ#1722811)
Some features in this release are currently in Technology Preview. These experimental features are not intended for production use. Note the following scope of support on the Red Hat Customer Portal for these features:
In the table below, features marked TP indicate Technology Preview and features marked GA indicate General Availability. Features marked as - indicate that the feature is removed from the release or deprecated.
Feature | OCP 4.1 | OCP 4.2 | OCP 4.3 |
---|---|---|---|
Prometheus Cluster Monitoring | GA | GA | GA |
Precision Time Protocol (PTP) | - | - | TP |
CRI-O for runtime Pods | GA | GA | GA |
 | TP | TP | TP |
Service Catalog | GA | - | - |
Template Service Broker | GA | - | - |
OpenShift Ansible Service Broker | GA | - | - |
Network Policy | GA | GA | GA |
Multus | GA | GA | GA |
New Add Project Flow | GA | GA | GA |
Search Catalog | GA | GA | GA |
Cron Jobs | GA | GA | GA |
Kubernetes Deployments | GA | GA | GA |
StatefulSets | GA | GA | GA |
Explicit Quota | GA | GA | GA |
Mount Options | GA | GA | GA |
System Containers for Docker, CRI-O | - | - | - |
Hawkular Agent | - | - | - |
Pod PreSets | - | - | - |
experimental-qos-reserved | TP | TP | TP |
Pod sysctls | GA | GA | GA |
Central Audit | - | - | - |
Static IPs for External Project Traffic | GA | GA | GA |
Template Completion Detection | GA | GA | GA |
 | GA | GA | GA |
Clustered MongoDB Template | - | - | - |
Clustered MySQL Template | - | - | - |
ImageStreams with Kubernetes Resources | GA | GA | GA |
Device Manager | GA | GA | GA |
Persistent Volume Resize | GA | GA | GA |
Huge Pages | GA | GA | GA |
CPU Pinning | GA | GA | GA |
Admission Webhooks | GA | GA | GA |
External provisioner for AWS EFS | TP | TP | TP |
Pod Unidler | TP | TP | TP |
Ephemeral Storage Limit/Requests | TP | TP | TP |
CephFS Provisioner | - | - | - |
Podman | TP | TP | TP |
Kuryr CNI Plug-in | - | TP | GA |
Sharing Control of the PID Namespace | TP | TP | TP |
Manila Provisioner | - | - | - |
Cluster Administrator console | GA | GA | GA |
Cluster Autoscaling | GA | GA | GA |
Container Storage Interface (CSI) | TP | GA | GA |
Operator Lifecycle Manager | GA | GA | GA |
Red Hat OpenShift Service Mesh | GA | GA | GA |
"Fully Automatic" Egress IPs | GA | GA | GA |
Pod Priority and Preemption | GA | GA | GA |
Multi-stage builds in Dockerfiles | GA | GA | GA |
OVN-Kubernetes Pod network provider | TP | TP | TP |
HPA custom metrics adapter based on Prometheus | TP | TP | TP |
Machine health checks | TP | TP | GA |
Persistent Storage with iSCSI | TP | TP | GA |
Raw Block with iSCSI | - | TP | GA |
Raw Block with Cinder | - | - | TP |
OperatorHub | - | GA | GA |
Three-node bare metal deployments | - | TP | TP |
SR-IOV Network Operator | - | TP | GA |
Helm CLI | - | - | TP |
Service Binding | - | - | TP |
Log forwarding | - | - | TP |
User workload monitoring | - | - | TP |
OpenShift Serverless | TP | TP | TP |
Compute Node Topology Manager | - | - | TP |
Metering | TP | GA | GA |
Cost Management | TP | TP | GA |
If you have Service Mesh installed, upgrade Service Mesh before upgrading OpenShift Container Platform. For a workaround, see Updating OpenShift Service Mesh from version 1.0.1 to 1.0.2.
Determination of active Pods when a rollout fails can be incorrect in the Topology view. (BZ#1760828)
When a user with limited cluster-wide permissions creates an application using the Container Image option in the Add page, and chooses the Image name from internal registry option, no imagestreams are detected in the project, though an imagestream exists. (BZ#1784264)
The ImageContentSourcePolicy is not supported by the registry at the time of release. (BZ#1787112)
In disconnected environments, Jenkins can be enabled to pull through by default. As a workaround, use this command to use Jenkins in disconnected environments:
$ oc tag <jenkins_source_image> jenkins:2 --reference-policy=source -n openshift
The OpenShift Cluster Version Operator (CVO) does not correctly mount SSL certificates from the host, which prevents cluster version updates when using MITM proxy checking. (BZ#1773419)
When adding defaultProxy and gitProxy under builds.config.openshift.io, the Jenkins pipeline build cannot retrieve the proxy configuration. (BZ#1753562)
When installing on Red Hat OpenStack Platform 13 or 16 where the OpenStack endpoints are configured with self-signed TLS certificates, the installation fails. (BZ#1786314, BZ#1769879, BZ#1735192)
Installer-provisioned infrastructure installations on OpenStack fail with a Security group rule already exists error when OpenStack Neutron is under heavy load. (BZ#1788062)
Clusters will display errors and abnormal states after etcd backup or restore functions are conducted during the etcd encryption migration process. (BZ#1776811)
RHCOS master and worker nodes may go into a NotReady,SchedulingDisabled state while upgrading from 4.2.12 to 4.3.0. (BZ#1786993)
The public cloud access image for RHEL cannot be used directly if you enable FIPS mode. This is caused by public cloud images not allowing kernel integrity checks. As a workaround, you must upload your own images. (BZ#1788051)
The Operator Lifecycle Manager (OLM) does not work in OpenShift Container Platform when Kuryr SDN is enabled. (BZ#1786217)
The oc adm catalog build and oc adm catalog mirror commands do not work for restricted clusters. (BZ#1773821)
When upgrading an OpenShift Container Platform cluster from 4.1 to 4.2 to 4.3, it is possible for the Node Tuning Operator tuned Pods to get stuck in the ContainerCreating state.
To confirm the issue, run:
$ oc get pods -n openshift-cluster-node-tuning-operator
One or more tuned Pods are stuck in the ContainerCreating state.
To resolve the issue, apply the following workaround. Run:
$ oc delete daemonset/tuned -n openshift-cluster-node-tuning-operator
$ oc get daemonset/tuned -n openshift-cluster-node-tuning-operator
$ oc get pods -n openshift-cluster-node-tuning-operator
Verify that the Pods are now in a Running state.
(BZ#1791916)
The Node Feature Discovery (NFD) Operator version 4.3 fails to deploy from OperatorHub on the OpenShift Container Platform web console. As a workaround, download the oc client for your operating system and place the kubeconfig file from the installer in ~/.kube/config. Run these commands to deploy the NFD Operator from the CLI and GitHub:
$ cd $GOPATH/src/openshift
$ git clone https://github.com/openshift/cluster-nfd-operator.git
$ cd cluster-nfd-operator
$ git checkout release-4.3
$ PULLPOLICY=Always make deploy
$ oc get pods -n openshift-nfd
Example output:
$ oc get pods -n openshift-nfd
NAME                           READY   STATUS    RESTARTS   AGE
nfd-master-gj4bh               1/1     Running   0          9m46s
nfd-master-hngrm               1/1     Running   0          9m46s
nfd-master-shwg5               1/1     Running   0          9m46s
nfd-operator-b74cbdc66-jsgqq   1/1     Running   0          10m
nfd-worker-87wpn               1/1     Running   2          9m47s
nfd-worker-d7kj8               1/1     Running   1          9m47s
nfd-worker-n4g7g               1/1     Running   1          9m47s
If a cluster-wide egress proxy is configured and then later unset, Pods for applications that have been previously deployed by OLM-managed Operators can enter a CrashLoopBackOff state. This is caused by the deployed Operator still being configured to rely on the proxy.
This issue applies to environment variables, Volumes, and VolumeMounts created by the cluster-wide egress proxy. The same issue occurs when setting environment variables, Volumes, and VolumeMounts using the SubscriptionConfig object.
A fix is planned for a future release of OpenShift Container Platform; however, you can work around the issue by deleting the Deployment using the CLI or web console. This triggers OLM to regenerate the Deployment and starts up Pods with the correct networking configuration.
Cluster administrators can get a list of all affected OLM-managed Deployments by running the following command:
$ oc get deployments --all-namespaces \
    -l olm.owner,olm.owner!=packageserver (1)
(1) Excludes packageserver, which is unaffected.
There is an issue with the Machine Config Operator (MCO) supporting Day 2 proxy support, which describes when an existing non-proxied cluster is reconfigured to use a proxy. The MCO should apply newly configured proxy CA certificates in a ConfigMap to the RHCOS trust bundle; this is not working. As a workaround, you must manually add the proxy CA certificate to your trust bundle and then update the trust bundle:
$ cp /opt/registry/certs/<my_root_ca>.crt /etc/pki/ca-trust/source/anchors/
$ update-ca-trust extract
$ oc adm drain <node>
$ systemctl reboot
When upgrading to a new OpenShift Container Platform z-stream release, connectivity to the API server might be interrupted as nodes are upgraded, causing API requests to fail. (BZ#1791162)
When upgrading to a new OpenShift Container Platform z-stream release, connectivity to routers might be interrupted as router Pods are updated. For the duration of the upgrade, some applications might not be consistently reachable. (BZ#1809665)
Git clone operations that go through an HTTPS proxy fail. Non-TLS (HTTP) proxies can be used successfully. (BZ#1750650)
Git clone operations fail in builds running behind a proxy if the source URIs use the git:// or ssh:// scheme. (BZ#1751738)
In OpenShift Container Platform 4.1, anonymous users could access discovery endpoints. Later releases revoked this access to reduce the possible attack surface for security exploits because some discovery endpoints are forwarded to aggregated API servers. However, unauthenticated access is preserved in upgraded clusters so that existing use cases are not broken.
If you are a cluster administrator for a cluster that has been upgraded from OpenShift Container Platform 4.1 to 4.3, you can either revoke or continue to allow unauthenticated access. It is recommended to revoke unauthenticated access unless there is a specific need for it. If you do continue to allow unauthenticated access, be aware of the increased risks.
If you have applications that rely on unauthenticated access, they might receive HTTP 403 errors if you revoke unauthenticated access.
Use the following script to revoke unauthenticated access to discovery endpoints:
## Snippet to remove unauthenticated group from all the cluster role bindings
$ for clusterrolebinding in cluster-status-binding discovery system:basic-user system:discovery system:openshift:discovery ;
do
### Find the index of unauthenticated group in list of subjects
index=$(oc get clusterrolebinding ${clusterrolebinding} -o json | jq 'select(.subjects!=null) | .subjects | map(.name=="system:unauthenticated") | index(true)');
### Remove the element at index from subjects array
oc patch clusterrolebinding ${clusterrolebinding} --type=json --patch "[{'op': 'remove','path': '/subjects/$index'}]";
done
This script removes unauthenticated subjects from the following cluster role bindings:
cluster-status-binding
discovery
system:basic-user
system:discovery
system:openshift:discovery
Security, bug fix, and enhancement updates for OpenShift Container Platform 4.3 are released as asynchronous errata through the Red Hat Network. All OpenShift Container Platform 4.3 errata are available on the Red Hat Customer Portal. See the OpenShift Container Platform Life Cycle for more information about asynchronous errata.
Red Hat Customer Portal users can enable errata notifications in the account settings for Red Hat Subscription Management (RHSM). When errata notifications are enabled, users are notified via email whenever new errata relevant to their registered systems are released.
Red Hat Customer Portal user accounts must have systems registered and consuming OpenShift Container Platform entitlements for OpenShift Container Platform errata notification emails to generate.
This section will continue to be updated over time to provide notes on enhancements and bug fixes for future asynchronous errata releases of OpenShift Container Platform 4.3. Versioned asynchronous releases, for example with the form OpenShift Container Platform 4.3.z, will be detailed in subsections. In addition, releases in which the errata text cannot fit in the space provided by the advisory will be detailed in subsections that follow.
For any OpenShift Container Platform release, always review the instructions on updating your cluster properly.
Issued: 2020-01-23
OpenShift Container Platform release 4.3 is now available. The list of container images and bug fixes included in the update is documented in the RHBA-2020:0062 advisory. The RPM packages included in the update are provided by the RHBA-2020:0063 advisory.
Space precluded documenting all of the container images for this release in the advisory. See the following article for notes on the container images in this release:
Issued: 2020-02-12
OpenShift Container Platform release 4.3.1 is now available. The list of packages included in the update is documented in the RHBA-2020:0390 advisory. The container images and bug fixes included in the update are provided by the RHBA-2020:0391 advisory.
Space precluded documenting all of the container images for this release in the advisory. See the following article for notes on the container images in this release:
To upgrade an existing OpenShift Container Platform 4.3 cluster to this latest release, see Updating a cluster by using the CLI for instructions.
Issued: 2020-02-19
OpenShift Container Platform release 4.3.2 is now available. The list of packages included in the update is documented in the RHBA-2020:0491 advisory. The container images and bug fixes included in the update are provided by the RHBA-2020:0492 advisory.
Space precluded documenting all of the container images for this release in the advisory. See the following article for notes on the container images in this release:
Amazon Web Services (AWS) installer-provisioned infrastructure and Red Hat OpenStack Platform (RHOSP) user-provisioned infrastructure were missing security group rules allowing bi-directional traffic between control plane hosts and workers on TCP and UDP ports 30000-32767. This caused newly introduced OVN Networking components to not work properly in clusters lacking these security group rules. Now security group rules are available to allow the aforementioned bi-directional traffic support. (BZ#1779469)
Previously, users would be given a Restricted Access error when trying to access the Installed Operators page in the web console. This happened because the console was trying to access the subscription resource outside of the current namespace to show subscription details. Users can now access the Installed Operators page. The Subscription tab will be hidden from users who cannot access the subscription resource. (BZ#1791101)
To upgrade an existing OpenShift Container Platform 4.3 cluster to this latest release, see Updating a cluster by using the CLI for instructions.
Issued: 2020-02-24
OpenShift Container Platform release 4.3.3 is now available. The list of packages included in the update is documented in the RHBA-2020:0527 advisory. The container images and bug fixes included in the update are provided by the RHBA-2020:0528 advisory.
Space precluded documenting all of the container images for this release in the advisory. See the following article for notes on the container images in this release:
The API group of KnativeServing resources, serving.knative.dev, is deprecated and has changed to operator.knative.dev in Serverless Operator 1.4. (BZ#1779469)
To upgrade an existing OpenShift Container Platform 4.3 cluster to this latest release, see Updating a cluster by using the CLI for instructions.
Issued: 2020-02-24
An update for jenkins-slave-base-rhel7-container is now available for OpenShift Container Platform 4.3. Details of the update are documented in the RHSA-2020:0562 advisory.
Issued: 2020-03-10
OpenShift Container Platform release 4.3.5 is now available. The list of packages included in the update is documented in the RHBA-2020:0675 advisory. The container images and bug fixes included in the update are provided by the RHBA-2020:0676 advisory.
Space precluded documenting all of the container images for this release in the advisory. See the following article for notes on the container images in this release:
To upgrade an existing OpenShift Container Platform 4.3 cluster to this latest release, see Updating a cluster by using the CLI for instructions.
If you are upgrading to this release from OpenShift Container Platform 4.2 or OpenShift Container Platform 4.3.3 or earlier, you must restart all Pods after the upgrade is complete. This is because the service CA is automatically rotated as of OpenShift Container Platform 4.3.5. The service CA is rotated during the upgrade and a restart is required afterward to ensure that all services are using the new service CA before the previous service CA expires. After this one-time manual restart, subsequent upgrades and rotations will ensure restart before the service CA expires without requiring manual intervention.
Issued: 2020-03-10
An update for skopeo is now available for OpenShift Container Platform 4.3. Details of the update are documented in the RHSA-2020:0679 advisory.
Issued: 2020-03-10
An update for podman is now available for OpenShift Container Platform 4.3. Details of the update are documented in the RHSA-2020:0680 advisory.
Issued: 2020-03-10
An update for openshift-enterprise-apb-base-container, openshift-enterprise-mariadb-apb, openshift-enterprise-mysql-apb, and openshift-enterprise-postgresql-apb is now available for OpenShift Container Platform 4.3. Details of the update are documented in the RHSA-2020:0681 advisory.
Issued: 2020-03-10
An update for openshift-enterprise-ansible-operator-container is now available for OpenShift Container Platform 4.3. Details of the update are documented in the RHSA-2020:0683 advisory.
Issued: 2020-03-24
OpenShift Container Platform release 4.3.8 is now available. The list of packages included in the update is documented in the RHBA-2020:0857 advisory. The container images and bug fixes included in the update are provided by the RHBA-2020:0858 advisory.
Space precluded documenting all of the container images for this release in the advisory. See the following article for notes on the container images in this release:
To upgrade an existing OpenShift Container Platform 4.3 cluster to this latest release, see Updating a cluster by using the CLI for instructions.
If you are upgrading to this release from OpenShift Container Platform 4.2 or OpenShift Container Platform 4.3.3 or earlier, you must restart all Pods after the upgrade is complete. This is because the service CA is automatically rotated as of OpenShift Container Platform 4.3.5. The service CA is rotated during the upgrade and a restart is required afterward to ensure that all services are using the new service CA before the previous service CA expires. After this one-time manual restart, subsequent upgrades and rotations will ensure restart before the service CA expires without requiring manual intervention.
Issued: 2020-03-24
An update for openshift-enterprise-builder-container and openshift-enterprise-cli-container is now available for OpenShift Container Platform 4.3. Details of the update are documented in the RHSA-2020:0863 advisory.
Issued: 2020-03-24
An update for openshift-enterprise-template-service-broker-operator-container is now available for OpenShift Container Platform 4.3. Details of the update are documented in the RHSA-2020:0866 advisory.
Issued: 2020-03-24
An update for openshift-clients is now available for OpenShift Container Platform 4.3. Details of the update are documented in the RHSA-2020:0928 advisory.
Issued: 2020-04-01
OpenShift Container Platform release 4.3.9 is now available. The list of packages included in the update is documented in the RHBA-2020:0929 advisory. The container images and bug fixes included in the update are provided by the RHBA-2020:0930 advisory.
Space precluded documenting all of the container images for this release in the advisory. See the following article for notes on the container images in this release:
To upgrade an existing OpenShift Container Platform 4.3 cluster to this latest release, see Updating a cluster by using the CLI for instructions.
If you are upgrading to this release from OpenShift Container Platform 4.2 or OpenShift Container Platform 4.3.3 or earlier, you must restart all Pods after the upgrade is complete. This is because the service CA is automatically rotated as of OpenShift Container Platform 4.3.5. The service CA is rotated during the upgrade and a restart is required afterward to ensure that all services are using the new service CA before the previous service CA expires. After this one-time manual restart, subsequent upgrades and rotations will ensure restart before the service CA expires without requiring manual intervention.
Issued: 2020-04-01
An update for ose-openshift-apiserver-container is now available for OpenShift Container Platform 4.3. Details of the update are documented in the RHSA-2020:0933 advisory.
Issued: 2020-04-01
An update for ose-openshift-controller-manager-container is now available for OpenShift Container Platform 4.3. Details of the update are documented in the RHSA-2020:0934 advisory.
Issued: 2020-04-08
OpenShift Container Platform release 4.3.10 is now available. The list of packages included in the update is documented in the RHBA-2020:1255 advisory. The container images and bug fixes included in the update are provided by the RHBA-2020:1262 advisory.
Space precluded documenting all of the container images for this release in the advisory. See the following article for notes on the container images in this release:
To upgrade an existing OpenShift Container Platform 4.3 cluster to this latest release, see Updating a cluster by using the CLI for instructions.
If you are upgrading to this release from OpenShift Container Platform 4.2 or OpenShift Container Platform 4.3.3 or earlier, you must restart all Pods after the upgrade is complete. This is because the service CA is automatically rotated as of OpenShift Container Platform 4.3.5. The service CA is rotated during the upgrade and a restart is required afterward to ensure that all services are using the new service CA before the previous service CA expires. After this one-time manual restart, subsequent upgrades and rotations will ensure restart before the service CA expires without requiring manual intervention.
Issued: 2020-04-08
An update for openshift is now available for OpenShift Container Platform 4.3. Details of the update are documented in the RHSA-2020:1276 advisory.
Issued: 2020-04-08
An update for openshift-enterprise-hyperkube-container is now available for OpenShift Container Platform 4.3. Details of the update are documented in the RHSA-2020:1277 advisory.
Issued: 2020-04-14
OpenShift Container Platform release 4.3.12 is now available. The list of packages included in the update is documented in the RHBA-2020:1392 advisory. The container images and bug fixes included in the update are provided by the RHBA-2020:1393 advisory.
Space precluded documenting all of the container images for this release in the advisory. See the following article for notes on the container images in this release:
To upgrade an existing OpenShift Container Platform 4.3 cluster to this latest release, see Updating a cluster by using the CLI for instructions.
If you are upgrading to this release from OpenShift Container Platform 4.2 or OpenShift Container Platform 4.3.3 or earlier, you must restart all Pods after the upgrade is complete. This is because the service CA is automatically rotated as of OpenShift Container Platform 4.3.5. The service CA is rotated during the upgrade and a restart is required afterward to ensure that all services are using the new service CA before the previous service CA expires. After this one-time manual restart, subsequent upgrades and rotations will ensure restart before the service CA expires without requiring manual intervention.
Issued: 2020-04-14
An update for podman is now available for OpenShift Container Platform 4.3. Details of the update are documented in the RHSA-2020:1396 advisory.
Issued: 2020-04-20
OpenShift Container Platform release 4.3.13 is now available. The list of packages included in the update is documented in the RHBA-2020:1481 advisory. The container images and bug fixes included in the update are provided by the RHBA-2020:1482 advisory.
Space precluded documenting all of the container images for this release in the advisory. See the following article for notes on the container images in this release:
To upgrade an existing OpenShift Container Platform 4.3 cluster to this latest release, see Updating a cluster by using the CLI for instructions.
If you are upgrading to this release from OpenShift Container Platform 4.2 or OpenShift Container Platform 4.3.3 or earlier, you must restart all Pods after the upgrade is complete. This is because the service CA is automatically rotated as of OpenShift Container Platform 4.3.5. The service CA is rotated during the upgrade and a restart is required afterward to ensure that all services are using the new service CA before the previous service CA expires. After this one-time manual restart, subsequent upgrades and rotations will ensure restart before the service CA expires without requiring manual intervention.
Issued: 2020-04-20
An update for runc is now available for OpenShift Container Platform 4.3. Details of the update are documented in the RHSA-2020:1485 advisory.
Issued: 2020-04-29
OpenShift Container Platform release 4.3.18 is now available. The list of packages included in the update is documented in the RHBA-2020:1528 advisory. The container images and bug fixes included in the update are provided by the RHBA-2020:1529 advisory.
Space precluded documenting all of the container images for this release in the advisory. See the following article for notes on the container images in this release:
With this release, IBM Power Systems are now compatible with OpenShift Container Platform. See Installing a cluster on IBM Power or Installing a cluster on IBM Power in a restricted network.
Restrictions
Note the following restrictions for OpenShift Container Platform on IBM Power:
OpenShift Container Platform for IBM Power Systems does not include the following Technology Preview features:
Container-native virtualization (CNV)
OpenShift Container Platform Serverless
The following OpenShift Container Platform features are unsupported:
Red Hat OpenShift Service Mesh
OpenShift Do (odo)
CodeReady Containers (CRC)
OpenShift Container Platform Pipelines based on Tekton
OpenShift Container Platform Metering
Multus CNI plug-in
Worker nodes must run Red Hat Enterprise Linux CoreOS.
Persistent storage must be of the Filesystem mode, using local volumes, Network File System (NFS), OpenStack Cinder, or Container Storage Interface (CSI).
With this release, IBM Z and LinuxONE are now compatible with OpenShift Container Platform 4.3. See Installing a cluster on IBM Z and LinuxONE for installation instructions.
Restrictions
Note the following restrictions for OpenShift Container Platform on IBM Z and LinuxONE:
OpenShift Container Platform for IBM Z does not include the following Technology Preview features:
Container-native virtualization (CNV)
OpenShift Container Platform Serverless
Log forwarding
Helm command-line interface (CLI) tool
Precision Time Protocol (PTP) hardware
The following OpenShift Container Platform features are unsupported:
Red Hat OpenShift Service Mesh
OpenShift Do (odo)
CodeReady Containers (CRC)
OpenShift Container Platform Pipelines based on Tekton
OpenShift Container Platform Metering
Multus CNI plug-in
OpenShift Container Platform upgrades phased rollout
FIPS cryptography
Encrypting data stored in etcd
Automatic repair of damaged machines with machine health checking
Tang mode disk encryption during OpenShift Container Platform deployment
Worker nodes must run Red Hat Enterprise Linux CoreOS.
Persistent shared storage must be of type Filesystem: NFS.
Other third-party storage vendors might provide Container Storage Interface (CSI)-enabled solutions that are certified to work with OpenShift Container Platform. Consult OperatorHub on OpenShift Container Platform, or your storage vendor, for more information.
When setting up your z/VM instance for the OpenShift Container Platform installation, you might need to temporarily give your worker nodes more virtual CPU capacity or add a third worker node. (BZ#1822770)
These features are available for OpenShift Container Platform on IBM Z for 4.3, but not for OpenShift Container Platform 4.3 on x86:
HyperPAV enabled on IBM System Z for the virtual machine for FICON-attached ECKD storage.
Previously, the web console would fail to show an operand if an invalid OLM descriptor was set by an Operator. The web console now tolerates invalid descriptors and shows the operand details. (BZ#1798130)
Previously, the web console would display the Create Project action for users that did not have permissions to create projects. This was confusing because users without the proper permissions would try to create a project and would be met with error messaging stating they could not create a project. The Create Project action is no longer visible to users that do not have permission to create a project. (BZ#1804708)
The upgradeable field was not being properly set by the Service Catalog Operators. This would display an incorrect upgrade status of Unknown after a fresh OpenShift Container Platform cluster installation. The upgradeable field is now properly set, so a cluster’s upgrade status is now accurate. (BZ#1813488)
The Image Registry Operator was not reporting new versions of OpenShift Container Platform if the Operator was set to Unmanaged. This caused upgrades to newer cluster versions to fail. Now when the Image Registry Operator is set to Unmanaged, new cluster versions are reported, allowing for successful upgrades. (BZ#1816656)
Previously, the web console was experiencing runtime errors on certain pages due to the ts-loader using the incorrect tsconfig.json in some cases. The ts-loader issue is resolved, allowing all web console pages to load properly. (BZ#1818980)
Previously, the client used to create pull secrets for the OpenShift Container Platform internal registry had a low rate limit. If a large number of namespaces were created in a short time window, it would take a long time for image registry pull secrets to be created. The client’s rate limit has been increased, so internal registry pull secrets are now created quickly, even with high traffic. (BZ#1819850)
Previously, the node-ca daemon did not tolerate the NoExecute taint. This caused the node-ca daemon to ignore certificates on nodes that had the NoExecute taint applied. This bug fix syncs the additionalTrustedCA to all nodes with taints, allowing all taints to be tolerated. (BZ#1820242)
The oc command for refreshing the CA certificate was missing the resource type on which to operate. This caused the command to return errors. The missing ConfigMap is now added, fixing the command errors. (BZ#1824921)
To upgrade an existing OpenShift Container Platform 4.3 cluster to this latest release, see Updating a cluster by using the CLI for instructions.
If you are upgrading to this release from OpenShift Container Platform 4.2 or OpenShift Container Platform 4.3.3 or earlier, you must restart all Pods after the upgrade is complete. This is because the service CA is automatically rotated as of OpenShift Container Platform 4.3.5. The service CA is rotated during the upgrade and a restart is required afterward to ensure that all services are using the new service CA before the previous service CA expires. After this one-time manual restart, subsequent upgrades and rotations will ensure restart before the service CA expires without requiring manual intervention.
Because of issues with coreos-installer, CoreOS cannot be installed on bare-metal nodes that use NVMe drives with 4K sectors. (BZ#1805249)
There is an issue with the fw_enabled_large_send setting on IBM Power systems that causes VXLAN packet drops and can cause deployments to fail. (BZ#1816254)
For clusters on an IBM Power infrastructure, guest virtual machines might not detect dynamically provisioned persistent volumes because of missing packages related to hotplug devices. As a result, you need to install the following packages and services, as shown in the sketch below: librtas, powerpc-utils, and ppc64-diag. (BZ#1811537)
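A minimal sketch of installing those packages on a RHEL guest, assuming the repositories that provide them are already enabled:
$ yum install -y librtas powerpc-utils ppc64-diag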
Issued: 2020-05-11
OpenShift Container Platform release 4.3.19 is now available. The list of packages included in the update is documented in the RHBA-2020:2005 advisory. The container images and bug fixes included in the update are provided by the RHBA-2020:2006 advisory.
Space precluded documenting all of the container images for this release in the advisory. See the following article for notes on the container images in this release:
Previously, if an imagestream was backed by a private registry that the internal image registry had credentials for but the consumer did not, the subsequent image pull would fail. Because local reference settings on imagestreams were ignored if imagestream access occurred soon after cluster startup or a restart of the OpenShift controller manager, the controller manager would store imagestreams in its cache with incomplete metadata. The OpenShift controller manager has been updated to refresh its imagestream cache if its metadata initialization was incomplete. This preserves the local reference imagestream policy even during the timing windows that occur immediately after cluster start or OpenShift controller manager restart. (BZ#1813420)
Previously, the OpenShift console Pod terminal did not correctly handle Unicode characters. This problem has been fixed, and Unicode characters now display correctly. (BZ#1821647)
A Multus-related DaemonSet mistakenly used the deprecated version extensions/v1beta1 rather than apps/v1 in its YAML definition. This sent an alert for clusters that had the deprecated API usage alert enabled. The DaemonSet has been updated to use the correct version name; therefore, the deprecated API usage alert is no longer sent. (BZ#1824866)
Before starting a build, the OpenShift Container Platform builder would parse the supplied Dockerfile and reconstruct a modified version of it to use for the build. This process included adding labels and handling substitutions of the images named in FROM instructions. The generated Dockerfile did not always correctly reconstruct ENV and LABEL instructions; sometimes the generated Dockerfile would include = characters, although the original Dockerfile did not include them. This caused the build to fail with a syntax error. When generating the modified Dockerfile, the original text for ENV and LABEL instructions is now used verbatim, fixing this issue. (BZ#1821861)
The Node Tuning Operator did not ship with fixes to address tuned daemon behavior related to BZ#1702724 and BZ#1774645. As a result, when an invalid profile was specified by the user, a Denial of Service (DoS) of the operand’s functionality occurred. Also, correcting the profile did not restore the operand’s functionality. This was fixed by applying the aforementioned bug fixes, allowing the tuned daemon to process and set a new, corrected profile. (BZ#1825007)
Previously, the OpenStack installer created security groups using remote_group_id to allow traffic origins. Using the remote_group_id in the security rules was very inefficient, triggering a lot of computation by the OVS agent to generate the flows. This process sometimes exceeded the time allocated for flow generation. In such cases, especially in environments already under stress, master nodes would be unable to communicate with worker nodes, leading the deployment to fail. Now IP prefixes for whitelisting traffic origins are used instead of the remote_group_id. This lessens the load on Neutron resources, reducing the occurrence of timeouts. (BZ#1825973)
Previously, tuned Pods did not mount /etc/sysctl.{conf,d/} from the host, which made it possible for tuned profiles to override settings provided by the host. Now /etc/sysctl.{conf,d/} is mounted from the host in tuned Pods, which prevents tuned profiles from overriding the host sysctl settings in /etc/sysctl.{conf,d/}. (BZ#1826167)
To upgrade an existing OpenShift Container Platform 4.3 cluster to this latest release, see Updating a cluster by using the CLI for instructions.
If you are upgrading to this release from OpenShift Container Platform 4.2 or OpenShift Container Platform 4.3.3 or earlier, you must restart all Pods after the upgrade is complete. This is because the service CA is automatically rotated as of OpenShift Container Platform 4.3.5. The service CA is rotated during the upgrade and a restart is required afterward to ensure that all services are using the new service CA before the previous service CA expires. After this one-time manual restart, subsequent upgrades and rotations will ensure restart before the service CA expires without requiring manual intervention.
Issued: 2020-05-11
An update for ose-cluster-image-registry-operator-container is now available for OpenShift Container Platform 4.3. Details of the update are documented in the RHSA-2020:2009 advisory.
Issued: 2020-05-19
OpenShift Container Platform release 4.3.21 is now available. The list of packages included in the update is documented in the RHBA-2020:2128 advisory. The container images and bug fixes included in the update are provided by the RHBA-2020:2129 advisory.
Space precluded documenting all of the container images for this release in the advisory. See the following article for notes on the container images in this release:
To upgrade an existing OpenShift Container Platform 4.3 cluster to this latest release, see Updating a cluster by using the CLI for instructions.
If you are upgrading to this release from OpenShift Container Platform 4.2 or OpenShift Container Platform 4.3.3 or earlier, you must restart all Pods after the upgrade is complete. This is because the service CA is automatically rotated as of OpenShift Container Platform 4.3.5. The service CA is rotated during the upgrade and a restart is required afterward to ensure that all services are using the new service CA before the previous service CA expires. After this one-time manual restart, subsequent upgrades and rotations will ensure restart before the service CA expires without requiring manual intervention.
Issued: 2020-05-26
OpenShift Container Platform release 4.3.22 is now available. The list of packages included in the update is documented in the RHBA-2020:2183 advisory. The container images and bug fixes included in the update are provided by the RHBA-2020:2184 advisory.
Space precluded documenting all of the container images for this release in the advisory. See the following article for notes on the container images in this release:
Previously, installations on the s390x and ppc64le architectures would not complete successfully because the Samples Operator was failing to report its version when running on those architectures. This problem has been fixed, and these installations now complete successfully. (BZ#1779934)
With this update, several environment variables involving TLS certificates and the ETCDCTL API version are written into /root/.profile. As a result, when users perform oc rsh, the etcdctl commands now work without having to set these variables manually. (BZ#1801430)
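For example, with those variables now written to /root/.profile, you can open a remote shell into an etcd member Pod and run etcdctl directly; this is a sketch in which the Pod name is a placeholder and openshift-etcd is assumed to be the namespace hosting the etcd Pods:
$ oc rsh -n openshift-etcd <etcd_pod_name>
sh-4.2# etcdctl member list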
To prevent problems with a single registry Pod from affecting registry availability, the registry now has two replicas by default whenever possible. (BZ#1810563)
Previously, Machine Health Checks and Machine Config were not visually separated. A dividing line has been added between these two items. (BZ#1819289)
Previously, the Fluentd buffer queue was not limited, and a high volume of incoming logs could flood the filesystem of a node and cause it to crash. As a result, applications would be rescheduled. To prevent this type of crash, the Fluentd buffer queue is now limited to a fixed number of chunks per output (default: 32). (BZ#1824427, BZ#1833226)
Previously, OpenShift Docker Strategy Builds where the Dockerfile had an ARG step would panic and fail prior to invoking buildah because the map needed for processing ARG steps was not initialized. With this update, the map is now initialized and OpenShift Docker Strategy Builds where the Dockerfile has an ARG step will not panic prior to invoking buildah. (BZ#1832975)
Previously, stale ImageStreamImport error messages made it unclear to users what current ImageStreamImport problems existed. With this update, the logic for updating ImageStreamImport error messages is enhanced to better determine whether successive errors were from different root causes, and update the error messages when appropriate. As a result, users get better guidance after successive failed attempts on what is needed to fix problems with ImageStreamImport. (BZ#1833019)
Previously, the cluster-network-operator did not remove deprecated security group rules on Kuryr bootstrapping when they were replaced by new rules. As a result, the deprecated rules were left in place during OCP upgrades between 4.3.z releases. The cluster-network-operator is now updated to remove deprecated security group rules so that Pods continue to have the correct host VM access restrictions after 4.3.z upgrades. (BZ#1834858)
To upgrade an existing OpenShift Container Platform 4.3 cluster to this latest release, see Updating a cluster by using the CLI for instructions.
If you are upgrading to this release from OpenShift Container Platform 4.2 or OpenShift Container Platform 4.3.3 or earlier, you must restart all Pods after the upgrade is complete. This is because the service CA is automatically rotated as of OpenShift Container Platform 4.3.5. The service CA is rotated during the upgrade and a restart is required afterward to ensure that all services are using the new service CA before the previous service CA expires. After this one-time manual restart, subsequent upgrades and rotations will ensure restart before the service CA expires without requiring manual intervention.
Issued: 2020-06-02
OpenShift Container Platform release 4.3.23 is now available. The list of packages included in the update is documented in the RHBA-2020:2255 advisory. The container images and bug fixes included in the update are provided by the RHBA-2020:2256 advisory.
Space precluded documenting all of the container images for this release in the advisory. See the following article for notes on the container images in this release:
To upgrade an existing OpenShift Container Platform 4.3 cluster to this latest release, see Updating a cluster by using the CLI for instructions.
If you are upgrading to this release from OpenShift Container Platform 4.2 or OpenShift Container Platform 4.3.3 or earlier, you must restart all Pods after the upgrade is complete. This is because the service CA is automatically rotated as of OpenShift Container Platform 4.3.5. The service CA is rotated during the upgrade and a restart is required afterward to ensure that all services are using the new service CA before the previous service CA expires. After this one-time manual restart, subsequent upgrades and rotations will ensure restart before the service CA expires without requiring manual intervention.
Issued: 2020-06-16
OpenShift Container Platform release 4.3.25 is now available. The list of packages included in the update is documented in the RHBA-2020:2435 advisory. The container images and bug fixes included in the update are provided by the RHBA-2020:2436 advisory.
Space precluded documenting all of the container images for this release in the advisory. See the following article for notes on the container images in this release:
Previously, users could run etcd-member-add.sh on the wrong master node, causing etcd to lose quorum. With this release, an additional check prevents the user from running the script if etcd is already running on the specified master node. (BZ#1804067)
Previously, if a sample imagestream from an earlier release was removed in a subsequent release, an upgrade to the subsequent release could fail if the missing imagestream was incorrectly tracked as needing to be updated. With this release, the upgrade process no longer attempts to update imagestreams which existed in a prior release but not in the release being upgraded to. (BZ#1811206)
With this release, users can collect data about configurations from ConfigMap objects to determine whether certificates are used for the cluster CA, and gather other cluster-related settings from the openshift-config namespace. (BZ#1825758)
With the release of Red Hat OpenShift Serverless Operator version 1.7.1, the Operator is generally available. The Tech Preview badge in the Developer perspective of the web console has been removed. (BZ#1829046)
With this release, users can collect anonymized data about unapproved Certificate Service Requests to help troubleshoot issues. (BZ#1835094)
Previously, the Samples operator would not finish an upgrade for the s390x architecture because it could not locate samples content that did not exist, which caused the overall upgrade to fail. With this release, the Samples operator no longer attempts to retrieve samples content during upgrades for s390x.
A workaround to get the Samples operator out of degraded state also exists. A cluster admin can run oc delete config.samples cluster to reset the Samples operator, as shown below. (BZ#1835996)
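For reference, the workaround as runnable commands; the get command is an assumption based on the same resource shorthand the delete uses, and simply verifies that the operator has recreated the resource:
$ oc delete config.samples cluster
$ oc get config.samples cluster -o yaml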
Previously, the Image Registry Operator could not be created when using IPI on Azure because API restrictions did not allow bootstrapping a config object that contained an empty container. In this release, the API restrictions have been removed. (BZ#1836753)
Previously, incorrect handling of certificate rotation caused Prometheus to be unable to obtain data from the /metrics endpoint. The issue has been resolved in this release. (BZ#1836939)
Previously, the web console stopped responding when a user created a PipelineRun using the CLI or YAML. With this update, checks have been added to avoid the web console error. (BZ#1839036)
Previously, if a sample template from an earlier release was removed in a subsequent release, an upgrade to the subsequent release could fail if the missing template was incorrectly tracked as needing to be updated. With this release, the upgrade process no longer attempts to update templates which existed in a prior release but not in the release being upgraded to. (BZ#1841996)
To upgrade an existing OpenShift Container Platform 4.3 cluster to this latest release, see Updating a cluster by using the CLI for instructions.
If you are upgrading to this release from OpenShift Container Platform 4.2 or OpenShift Container Platform 4.3.3 or earlier, you must restart all Pods after the upgrade is complete. This is because the service CA is automatically rotated as of OpenShift Container Platform 4.3.5. The service CA is rotated during the upgrade and a restart is required afterward to ensure that all services are using the new service CA before the previous service CA expires. After this one-time manual restart, subsequent upgrades and rotations will ensure restart before the service CA expires without requiring manual intervention.
Issued: 2020-06-16
An update for machine-config-operator-container is now available for OpenShift Container Platform 4.3. Details of the update are documented in the RHSA-2020:2439 advisory.
Issued: 2020-06-16
An update for Kubernetes is now available for OpenShift Container Platform 4.3. Details of the update are documented in the RHSA-2020:2440 advisory.
Issued: 2020-06-16
An update for Kubernetes is now available for OpenShift Container Platform 4.3. Details of the update are documented in the RHSA-2020:2441 advisory.
Issued: 2020-06-16
An update for openshift-enterprise-apb-tools-container is now available for OpenShift Container Platform 4.3. Details of the update are documented in the RHSA-2020:2442 advisory.
Issued: 2020-06-16
An update for containernetworking-plugins is now available for OpenShift Container Platform 4.3. Details of the update are documented in the RHSA-2020:2443 advisory.
Issued: 2020-06-23
OpenShift Container Platform release 4.3.26 is now available. The list of packages included in the update is documented in the RHBA-2020:2435 advisory. The container images and bug fixes included in the update are provided by the RHBA-2020:2436 advisory.
Space precluded documenting all of the container images for this release in the advisory. See the following article for notes on the container images in this release:
The jenkins-agent-nodejs-10-rhel7 and jenkins-agent-nodejs-12-rhel7 images are now added to OpenShift Container Platform. These new images allow Jenkins Pipelines to be upgraded to use either v10 or v12 of the Node.js Jenkins Agent. The Node.js v8 Jenkins Agent is now deprecated, but will continue to be provided. For existing clusters, you must manually upgrade the Node.js Jenkins Agent, which can be performed on a per-namespace basis. Follow these steps to complete the manual upgrade:
Select the project for which you want to upgrade the Jenkins Pipelines:
$ oc project <project_name>
Import the new Node.js Jenkins Agent image:
$ oc import-image nodejs openshift4/jenkins-agent-nodejs-10-rhel7 --from=registry.redhat.io/openshift4/jenkins-agent-nodejs-10-rhel7 --confirm
This command imports the v10 image. If you prefer v12, update the image specifications accordingly.
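For example, the analogous v12 import, assuming the jenkins-agent-nodejs-12-rhel7 naming noted above:
$ oc import-image nodejs openshift4/jenkins-agent-nodejs-12-rhel7 --from=registry.redhat.io/openshift4/jenkins-agent-nodejs-12-rhel7 --confirm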
Overwrite the current Node.js Jenkins Agent with the new one you imported:
$ oc label is nodejs role=jenkins-slave --overwrite
Verify in the Jenkins log that the new Jenkins Agent template is configured:
$ oc logs -f jenkins-1-<pod>
See Jenkins agent for more information.
Due to providing disconnected cluster support for samples in OpenShift Container Platform 4.2 and later, the Samples Operator had to allow applications of the samplesRegistry to override the CVO-based Jenkins imagestreams so that the CVO install payload mirror could be used. This made samplesRegistry overrides more difficult, since the Jenkins image spec from the CVO payload does not match the analogous specs in quay.io or registry.redhat.io, which customers might have decided to mirror. Also, the Jenkins image on those registries violated the special case support contract Red Hat provides for the Jenkins image, since it is part of the base OpenShift installation. This bug fix removes the use of the samplesRegistry override for Jenkins imagestreams, since the image registry can now handle importing the Jenkins imagestream when an install mirror is in place. Now the Jenkins imagestream imports work when you use samplesOverride to bring in the other sample imagestreams from locations outside of registry.redhat.io. (BZ#1826028)
Previously, the web console only showed name, namespace, and creation date when listing OLM Subscriptions on the Home → Search page. The web console now shows additional subscription details. (BZ#1827747)
The Azure infrastructure name is used for generated Azure containers and storage accounts. Therefore, if the Azure infrastructure name contained uppercase letters, the container would successfully be created, but the storage account creation would fail. This bug fix adjusts the container name creation logic to discard invalid characters, allowing the image registry to deploy on an infrastructure that contains invalid characters in its name. (BZ#1832144)
The CVO had a race condition where it would consider a timed-out update reconciliation cycle a successful update. This only happened for restricted network clusters where the Operator timed out attempting to fetch release image signatures. This bug caused the CVO to enter its shuffled-manifest reconciliation mode, which could break the cluster if the manifests were applied in an order that the components could not handle. The CVO now treats timed-out updates as failures, so it no longer enters reconciling mode before the update succeeds. (BZ#1844117)
To upgrade an existing OpenShift Container Platform 4.3 cluster to this latest release, see Updating a cluster by using the CLI for instructions.
If you are upgrading to this release from OpenShift Container Platform 4.2 or OpenShift Container Platform 4.3.3 or earlier, you must restart all Pods after the upgrade is complete. This is because the service CA is automatically rotated as of OpenShift Container Platform 4.3.5. The service CA is rotated during the upgrade and a restart is required afterward to ensure that all services are using the new service CA before the previous service CA expires. After this one-time manual restart, subsequent upgrades and rotations will ensure restart before the service CA expires without requiring manual intervention.
Issued: 2020-06-23
An update for python-psutil is now available for OpenShift Container Platform 4.3. Details of the update are documented in the RHSA-2020:2635 advisory.
Issued: 2020-06-30
OpenShift Container Platform release 4.3.27 is now available. The list of packages included in the update is documented in the RHBA-2020:2627 advisory. The container images and bug fixes included in the update are provided by the RHBA-2020:2628 advisory.
Space precluded documenting all of the container images for this release in the advisory. See the following article for notes on the container images in this release:
To upgrade an existing OpenShift Container Platform 4.3 cluster to this latest release, see Updating a cluster by using the CLI for instructions.
If you are upgrading to this release from OpenShift Container Platform 4.2 or OpenShift Container Platform 4.3.3 or earlier, you must restart all Pods after the upgrade is complete. This is because the service CA is automatically rotated as of OpenShift Container Platform 4.3.5. The service CA is rotated during the upgrade and a restart is required afterward to ensure that all services are using the new service CA before the previous service CA expires. After this one-time manual restart, subsequent upgrades and rotations will ensure restart before the service CA expires without requiring manual intervention.
Issued: 2020-07-07
OpenShift Container Platform release 4.3.28 is now available. The list of packages included in the update is documented in the RHBA-2020:2804 advisory. The container images and bug fixes included in the update are provided by the RHBA-2020:2805 advisory.
Space precluded documenting all of the container images for this release in the advisory. See the following article for notes on the container images in this release:
Previously, triggers did not work on v1.StatefulSet objects because the trigger controller did not recognize StatefulSet as GA. The issue has been resolved in this release. (BZ#1831888)
Previously, when logging out from the Kibana dashboard, it was still possible to log in again from a new browser tab without specifying the login credentials. This was caused by the signoff link pointing to an incorrect handler for the OAuth proxy that provided security for Kibana. The signoff link is now fixed, forcing login credentials when attempting to reaccess the Kibana dashboard. (BZ#1835578)
After upgrading Octavia from OpenStack 13 to 16, the UDP listener is supported and the strategy to enforce DNS resolution over the TCP protocol is removed. This change requires adding the new listener to the existing DNS service that specifies the UDP protocol, but the old Amphora image for the existing DNS load balancer does not support the new listener and causes the listener creation to fail. With this release, the DNS service that requires UDP is recreated, causing the load balancer to be recreated with the new Amphora version. Recreating the service and load balancer causes some downtime for DNS resolution. When this process is complete, the load balancer for the DNS service is created with all the required listeners. (BZ#1846459)
Previously, the Kubernetes volume plugin for Azure disks was not able to find attached Azure volumes because it expected new udev rules installed on the host operating system. As a result, any Pod that used the volumes could not start on RHEL 7. With this release, the Kubernetes volume plugin for Azure disks scans for attached Azure disks even with RHEL 7 udev rules, and Pods with Azure disk volumes are able to start on RHEL 7. (BZ#1847089)
Previously, the Terraform step openstack_networking_floatingip_associate_v2 did not list all its dependent steps, and the resulting omission of a dependent step sometimes caused a race condition that occasionally caused the Terraform job to fail, especially on overloaded systems. With this release, the dependent Terraform step is listed as depends_on to force the Terraform steps to run in the correct order. (BZ#1849171)
To upgrade an existing OpenShift Container Platform 4.3 cluster to this latest release, see Updating a cluster by using the CLI for instructions.
If you are upgrading to this release from OpenShift Container Platform 4.2 or OpenShift Container Platform 4.3.3 or earlier, you must restart all Pods after the upgrade is complete. This is because the service CA is automatically rotated as of OpenShift Container Platform 4.3.5. The service CA is rotated during the upgrade and a restart is required afterward to ensure that all services are using the new service CA before the previous service CA expires. After this one-time manual restart, subsequent upgrades and rotations will ensure restart before the service CA expires without requiring manual intervention.
Issued: 2020-07-14
OpenShift Container Platform release 4.3.29 is now available. The list of packages included in the update is documented in the RHBA-2020:2879 advisory. The container images and bug fixes included in the update are provided by the RHBA-2020:2872 advisory.
Space precluded documenting all of the container images for this release in the advisory. See the following article for notes on the container images in this release:
To upgrade an existing OpenShift Container Platform 4.3 cluster to this latest release, see Updating a cluster by using the CLI for instructions.
If you are upgrading to this release from OpenShift Container Platform 4.2 or OpenShift Container Platform 4.3.3 or earlier, you must restart all Pods after the upgrade is complete. This is because the service CA is automatically rotated as of OpenShift Container Platform 4.3.5. The service CA is rotated during the upgrade and a restart is required afterward to ensure that all services are using the new service CA before the previous service CA expires. After this one-time manual restart, subsequent upgrades and rotations will ensure restart before the service CA expires without requiring manual intervention.
Issued: 2020-08-05
OpenShift Container Platform release 4.3.31 is now available. The container images and bug fixes in the update are documented in the RHBA-2020:3180 advisory. The list of packages included in the update is provided by the RHBA-2020:3179 advisory.
Space precluded documenting all of the container images for this release in the advisory. See the following article for notes on the container images in this release:
To upgrade an existing OpenShift Container Platform 4.3 cluster to this latest release, see Updating a cluster by using the CLI for instructions.
If you are upgrading to this release from OpenShift Container Platform 4.2 or OpenShift Container Platform 4.3.3 or earlier, you must restart all Pods after the upgrade is complete. This is because the service CA is automatically rotated as of OpenShift Container Platform 4.3.5. The service CA is rotated during the upgrade and a restart is required afterward to ensure that all services are using the new service CA before the previous service CA expires. After this one-time manual restart, subsequent upgrades and rotations will ensure restart before the service CA expires without requiring manual intervention.
Issued: 2020-08-05
An update for openshift is now available for OpenShift Container Platform 4.3. Details of the update are documented in the RHSA-2020:3183 advisory.
Issued: 2020-08-05
An update for openshift-enterprise-hyperkube-container is now available for OpenShift Container Platform 4.3. Details of the update are documented in the RHSA-2020:3184 advisory.
Issued: 2020-08-19
OpenShift Container Platform release 4.3.33 is now available. The container images and bug fixes in the update are documented in the RHBA-2020:3259 advisory. The list of packages included in the update is provided by the RHBA-2020:3258 advisory.
Space precluded documenting all of the container images for this release in the advisory. See the following article for notes on the container images in this release:
To upgrade an existing OpenShift Container Platform 4.3 cluster to this latest release, see Updating a cluster by using the CLI for instructions.
If you are upgrading to this release from OpenShift Container Platform 4.2 or OpenShift Container Platform 4.3.3 or earlier, you must restart all Pods after the upgrade is complete. This is because the service CA is automatically rotated as of OpenShift Container Platform 4.3.5. The service CA is rotated during the upgrade and a restart is required afterward to ensure that all services are using the new service CA before the previous service CA expires. After this one-time manual restart, subsequent upgrades and rotations will ensure restart before the service CA expires without requiring manual intervention.
Issued: 2020-09-09
OpenShift Container Platform release 4.3.35 is now available. The container images and bug fixes in the update are documented in the RHBA-2020:3457 advisory. The list of packages included in the update is provided by the RHBA-2020:3458 advisory.
Space precluded documenting all of the container images for this release in the advisory. See the following article for notes on the container images in this release:
To upgrade an existing OpenShift Container Platform 4.3 cluster to this latest release, see Updating a cluster by using the CLI for instructions.
If you are upgrading to this release from OpenShift Container Platform 4.2 or OpenShift Container Platform 4.3.3 or earlier, you must restart all Pods after the upgrade is complete. This is because the service CA is automatically rotated as of OpenShift Container Platform 4.3.5. The service CA is rotated during the upgrade and a restart is required afterward to ensure that all services are using the new service CA before the previous service CA expires. After this one-time manual restart, subsequent upgrades and rotations will ensure restart before the service CA expires without requiring manual intervention.
Issued: 2020-09-09
An update for jenkins-2-plugins is now available for OpenShift Container Platform 4.3. Details of the update are documented in the RHSA-2020:3616 advisory.
Issued: 2020-09-23
OpenShift Container Platform release 4.3.38 is now available. The container images and bug fixes in the update are documented in the RHBA-2020:3609 advisory. The list of packages included in the update is provided by the RHBA-2020:3610 advisory.
Space precluded documenting all of the container images for this release in the advisory. See the following article for notes on the container images in this release:
To upgrade an existing OpenShift Container Platform 4.3 cluster to this latest release, see Updating a cluster by using the CLI for instructions.
If you are upgrading to this release from OpenShift Container Platform 4.2 or OpenShift Container Platform 4.3.3 or earlier, you must restart all Pods after the upgrade is complete. This is because the service CA is automatically rotated as of OpenShift Container Platform 4.3.5. The service CA is rotated during the upgrade and a restart is required afterward to ensure that all services are using the new service CA before the previous service CA expires. After this one-time manual restart, subsequent upgrades and rotations will ensure restart before the service CA expires without requiring manual intervention.
Issued: 2020-09-23
An update for jenkins and openshift is now available for OpenShift Container Platform 4.3. Details of the update are documented in the RHSA-2020:3808 advisory.
Issued: 2020-09-23
An update for openshift-enterprise-hyperkube-container and sriov-dp-admission-controller-container is now available for OpenShift Container Platform 4.3. Details of the update are documented in the RHSA-2020:3809 advisory.
Issued: 2020-10-20
OpenShift Container Platform release 4.3.40, which includes a security update for golang.org/x/crypto, is now available. The container images, bug fixes, and CVEs in the update are documented in the RHSA-2020:4264 advisory.
Space precluded documenting all of the container images for this release in the advisory. See the following article for notes on the container images in this release:
To upgrade an existing OpenShift Container Platform 4.3 cluster to this latest release, see Updating a cluster by using the CLI for instructions.
If you are upgrading to this release from OpenShift Container Platform 4.2 or OpenShift Container Platform 4.3.3 or earlier, you must restart all Pods after the upgrade is complete. This is because the service CA is automatically rotated as of OpenShift Container Platform 4.3.5. The service CA is rotated during the upgrade and a restart is required afterward to ensure that all services are using the new service CA before the previous service CA expires. After this one-time manual restart, subsequent upgrades and rotations will ensure restart before the service CA expires without requiring manual intervention.
Issued: 2020-10-20
An update for jenkins-2-plugins is now available for OpenShift Container Platform 4.3. Details of the update are documented in the RHSA-2020:4265 advisory.