- Previously, installing a cluster with a Dynamic Host Configuration Protocol (DHCP) network on Nutanix caused a failure. With this release, this issue is resolved. (OCPBUGS-38118)
- Previously, installing an AWS cluster in either the Commercial Cloud Services (C2S) region or the Secret Commercial Cloud Services (SC2S) region failed because the installation program added unsupported security groups to the load balancer. With this release, the installation program no longer adds unsupported security groups to the load balancer for a cluster that needs to be installed in either the C2S region or SC2S region. (OCPBUGS-33311)
- Previously, when installing a Google Cloud Platform (GCP) cluster where instances required that IP forwarding was not set, the installation failed. With this release, IP forwarding is disabled for all GCP machines and the issue is resolved. (OCPBUGS-49842)
- Previously, when installing a cluster on AWS in existing subnets, for bring your own virtual private cloud (BYO VPC) in edge zones, the installation program did not tag the subnet edge resource with `kubernetes.io/cluster/<InfraID>:shared`. With this release, all subnets that are used in the `install-config.yaml` file contain the required tags. (OCPBUGS-49792)
- Previously, a cluster that was created on Amazon Web Services (AWS) could fail to deprovision when the `ec2:ReleaseAddress` permission, which is required to release the Elastic IP (EIP) address, was missing. This issue occurred when the cluster was created with the minimum permissions in an existing virtual private cloud (VPC), including an unmanaged VPC or bring your own (BYO) VPC, and a BYO Public IPv4 Pool address. With this release, the `ec2:ReleaseAddress` permission is exported to the Identity and Access Management (IAM) policy generated during installation. (OCPBUGS-49735)
- Previously, when installing a cluster on Nutanix, the installation program could fail with a timeout while uploading images to Prism Central. This occurred in some slower Prism Central environments when the Prism API attempted to load the Red Hat Enterprise Linux CoreOS (RHCOS) image. The Prism API call timeout value was 5 minutes. With this release, the Prism API call timeout value is a configurable parameter, `platform.nutanix.prismAPICallTimeout`, in the `install-config.yaml` file, and the default timeout value is 10 minutes. (OCPBUGS-49148)
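  The new parameter can be sketched in the `install-config.yaml` file as follows; the timeout value shown and its format (an integer number of minutes) are assumptions to verify against the product documentation:

  ```yaml
  # install-config.yaml (fragment) - illustrative values
  platform:
    nutanix:
      # Timeout for Prism API calls such as RHCOS image uploads.
      # Assumed to be an integer number of minutes; the default is 10.
      prismAPICallTimeout: 15
  ```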
- Previously, the `oc adm node-image monitor` command failed because of a temporary API server disconnection and then displayed an error or end-of-file (EOF) message. With this release, the installation program ignores a temporary API server disconnection and the monitor command tries to connect to the API server again. (OCPBUGS-48714)
- Previously, when you deleted backend service resources on Google Cloud Platform (GCP), some resources to be deleted were not found. For example, the associated forwarding rules, health checks, and firewall rules were not deleted. With this release, the installation program tries to find the backend service by name first, then searches for forwarding rules, health checks, and firewall rules before it determines whether those results match a backend service. The algorithm for associating resources is reversed and the appropriate resources are deleted. There are no leaked backend service resources and the issue is resolved. Note that when you delete a private cluster, the forwarding rules, backend services, health checks, and firewall rules created by the Ingress Operator are not deleted. (OCPBUGS-48611)
- Previously, OpenShift Container Platform was not compliant with PCI-DSS/BaFin regulations. With this release, cross-tenant object replication in Microsoft Azure is unavailable. Consequently, the chance of unauthorized data access is reduced and strict adherence to data governance policies is ensured. (OCPBUGS-48118)
- Previously, when you installed OpenShift Container Platform on Amazon Web Services (AWS) and specified an edge machine pool without an instance type, in some instances the edge node failed. With this release, specifying an edge machine pool without an instance type requires the `ec2:DescribeInstanceTypeOfferings` permission, which the installation program uses to derive the correct instance type available, based on the AWS Local Zones or Wavelength Zones locations used. (OCPBUGS-47502)
- Previously, when the API server disconnected temporarily, the `oc adm node-image monitor` command reported an end-of-file (EOF) error. With this release, when the API server disconnects temporarily, the monitor command does not fail. (OCPBUGS-46391)
- Previously, when you specified the `HostedZoneRole` parameter in the `install-config.yaml` file while creating a shared Virtual Private Cloud (VPC), you also had to specify the `sts:AssumeRole` permission; otherwise, an error occurred. With this release, if you specify the `HostedZoneRole` parameter, the installation program validates that the `sts:AssumeRole` permission is present. (OCPBUGS-46046)
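  For context, a shared VPC `install-config.yaml` fragment that uses this parameter might look like the following sketch; the region, zone ID, account ID, and role name are placeholders:

  ```yaml
  # install-config.yaml (fragment) - placeholder values
  platform:
    aws:
      region: us-east-1
      hostedZone: Z0123456789EXAMPLE
      # Role in the account that owns the shared VPC; the installer
      # credentials must include the sts:AssumeRole permission for it.
      hostedZoneRole: arn:aws:iam::123456789012:role/shared-vpc-dns-role
  ```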
- Previously, when the `publicIpv4Pool` configuration parameter was used during installation, the `ec2:AllocateAddress` and `ec2:AssociateAddress` permissions were not validated. As a consequence, permission failures could occur during installation. With this release, the required permissions are validated before the cluster is installed and the issue is resolved. (OCPBUGS-45711)
- Previously, during a disconnected installation, when the `imageContentSources` parameter was configured with more than one mirror for a source, the command to create the agent ISO image could fail, depending on the sequence of the mirror configuration. With this release, multiple mirrors are handled correctly when the agent ISO is created and the issue is resolved. (OCPBUGS-45630)
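  As an illustration, a disconnected `install-config.yaml` file can define more than one mirror for the same source; the registry host names below are placeholders:

  ```yaml
  # install-config.yaml (fragment) - placeholder registries
  imageContentSources:
  - mirrors:
    - mirror1.example.com/ocp4/openshift-release-dev
    - mirror2.example.com/ocp4/openshift-release-dev
    source: quay.io/openshift-release-dev/ocp-release
  ```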
- Previously, when installing a cluster by using the Cluster API on installer-provisioned infrastructure, the installation program used a random `machineNetwork` parameter instead of the one that the user provided. With this release, the installation program uses the user-provided `machineNetwork` parameter. (OCPBUGS-45485)
- Previously, during an installation on Amazon Web Services (AWS), the installation program used the wrong load balancer when searching for the `hostedZone` ID, which caused an error. With this release, the correct load balancer is used and the issue is resolved. (OCPBUGS-45301)
- Previously, endpoint overrides in IBM Power Virtual Server were not conditional. As a consequence, endpoint overrides were created incorrectly and caused failures with virtual private endpoints (VPE). With this release, endpoint overrides are applied only for disconnected installations. (OCPBUGS-44922)
- Previously, during a shared Virtual Private Cloud (VPC) installation, the installation program added the records to a private DNS zone that it created instead of adding the records to the cluster's private DNS zone. As a consequence, the installation failed. With this release, the installation program searches for an existing private DNS zone and, if one is found, pairs that zone with the network that is supplied by the `install-config.yaml` file, and the issue is resolved. (OCPBUGS-44641)
- Previously, the `oc adm drain --delete-local-data` command was not supported in the 4.18 `oc` CLI tool. With this release, the command has been updated to `oc adm drain --delete-emptydir-data`. (OCPBUGS-44318)
- Previously, the US East (`wdc04`), US South (`dal13`), Sydney (`syd05`), and Toronto (`tor01`) regions were not supported for IBM Power Virtual Server. With this release, these regions, which include Power Edge Router (PER) capabilities, are supported for IBM Power Virtual Server. (OCPBUGS-44312)
- Previously, during a Google Cloud Platform (GCP) installation, when the installation program created filters with large amounts of returned data, for example for subnets, it exceeded the quota for the maximum number of times that a resource can be filtered in a specific period. With this release, all relevant filtering is moved to the client so that the filter quotas are not exceeded and the issue is resolved. (OCPBUGS-44193)
- Previously, during an Amazon Web Services (AWS) installation, the installation program validated all the tags in the `install-config.yaml` file only when you set `propogateTags` to `true`. With this release, the installation program validates all the tags in the `install-config.yaml` file. (OCPBUGS-44171)
- Previously, if the `RendezvousIP` value matched a substring in the `next-hop-address` field of a compute node configuration, a validation error was reported even though the `RendezvousIP` value must match a control plane host address only. With this release, the substring comparison for the `RendezvousIP` value is performed against control plane host addresses only, so the error no longer occurs. (OCPBUGS-44167)
- Previously, when you deleted a cluster in IBM Power Virtual Server, the Transit Gateway connections were always cleaned up, even for a user-provided Transit Gateway. With this release, if the `tgName` parameter is set, the installation program does not clean up the Transit Gateway connection when you delete a cluster. (OCPBUGS-44162)
- Previously, when installing a cluster on an IBM platform and adding an existing VPC to the cluster, the Cluster API Provider IBM Cloud would not add ports 443, 5000, and 6443 to the security group of the VPC. This situation prevented the VPC from being added to the cluster. With this release, a fix ensures that the Cluster API Provider IBM Cloud adds the ports to the security group of the VPC so that the VPC gets added to your cluster. (OCPBUGS-44068)
- Previously, the Cluster API Provider IBM Cloud module was very verbose. With this release, the verbosity of the module is reduced, which reduces the output in the `.openshift_install.log` file. (OCPBUGS-44022)
- Previously, when you deployed a cluster on an IBM Power Virtual Server zone, the load balancers were slow to create. As a consequence, the cluster failed. With this release, the Cluster API Provider IBM Cloud no longer waits until all load balancers are ready, and the issue is resolved. (OCPBUGS-43923)
- Previously, for the Agent-based Installer, all host validation status logs referred to the name of the first registered host. As a consequence, when a host validation failed, it was not possible to determine the problem host. With this release, each host validation log message correctly identifies the host to which it corresponds, and the issue is resolved. (OCPBUGS-43768)
- Previously, the `oc adm node-image create` command, which uses a container to generate the image while running the Agent-based Installer, showed only a basic error message when the image generation step failed. The message did not include the container log, so the underlying cause of the failure was hidden. With this release, to help troubleshooting, the `oc adm node-image create` command shows the container log when the image generation step fails, so the underlying issue is displayed. (OCPBUGS-43757)
- Previously, the Agent-based Installer failed to parse the `cloud_controller_manager` parameter in the `install-config.yaml` configuration file. This resulted in the Assisted Service API failing because it received an empty string, which in turn caused the installation of the cluster to fail on Oracle® Cloud Infrastructure (OCI). With this release, an update to the parsing logic ensures that the Agent-based Installer correctly interprets the `cloud_controller_manager` parameter so that the Assisted Service API receives the correct string value. As a result, the Agent-based Installer can now install a cluster on OCI. (OCPBUGS-43674)
- Previously, an update to the Azure SDK for Go removed the `SendCertificateChain` option, which changed the behavior of sending certificates. As a consequence, the full certificate chain was not sent. With this release, the option to send a full certificate chain is available and the issue is resolved. (OCPBUGS-43567)
- Previously, when installing a cluster on Google Cloud Platform (GCP) by using the Cluster API implementation, the installation program did not distinguish between internal and external load balancers while creating firewall rules. As a consequence, the firewall rule for internal load balancers was open to all IP address sources, that is, `0.0.0.0/0`. With this release, the Cluster API Provider GCP is updated to restrict firewall rules to the machine CIDR when using an internal load balancer. The firewall rule for internal load balancers is correctly limited to the machine networks, that is, the nodes in the cluster, and the issue is resolved. (OCPBUGS-43520)
- Previously, when installing a cluster on IBM Power Virtual Server, the required security group rules were not created. With this release, the missing security group rules for installation are identified and created and the issue is resolved. (OCPBUGS-43518)
- Previously, when you tried to add a compute node with the `oc adm node-image` command by using an instance that was previously created with Red Hat OpenStack Platform (RHOSP), the operation failed. With this release, the issue is resolved by correctly setting the user-managed networking configuration. (OCPBUGS-43513)
- Previously, when destroying a cluster on Google Cloud Platform (GCP), a forwarding rule incorrectly blocked the installation program. As a consequence, the destroy process failed to complete. With this release, the issue is resolved by the installation program setting its state correctly and marking all destroyed resources as deleted. (OCPBUGS-42789)
- Previously, when configuring an Agent-based Installer installation in a disconnected environment with more than one mirror for the same source, the installation could fail. This occurred because one of the mirrors was not checked. With this release, all mirrors are used when multiple mirrors are defined for the same source and the issue is resolved. (OCPBUGS-42705)
- Previously, you could not change the `AdditionalTrustBundlePolicy` parameter in the `install-config.yaml` file for the Agent-based Installer. The parameter was always set to `ProxyOnly`. With this release, you can set `AdditionalTrustBundlePolicy` to other values, for example, `Always`. By default, the parameter is set to `ProxyOnly`. (OCPBUGS-42670)
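  For example, the parameter can now be set in the `install-config.yaml` file as in the following sketch:

  ```yaml
  # install-config.yaml (fragment)
  # Always adds the additional trust bundle to all nodes, not only
  # to proxy configurations.
  additionalTrustBundlePolicy: Always
  ```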
- Previously, when you installed a cluster and tried to add a compute node with the `oc adm node-image` command, the operation could fail because the date, the time, or both on the node were inaccurate. With this release, the issue is resolved by applying the same Network Time Protocol (NTP) configuration from the target cluster `MachineConfig` chrony resource to the node ephemeral live environment. (OCPBUGS-42544)
- Previously, during installation, the name of the artifact that the `oc adm node-image create` command generated did not include `<arch>` in its file name. As a consequence, the file name was inconsistent with other generated ISOs. With this release, a patch fixes the name of the artifact that is generated by the `oc adm node-image create` command by also including the referenced architecture as part of the file name, and the issue is resolved. (OCPBUGS-42528)
- Previously, the Agent-based Installer set the `assisted-service` object to a debug logging mode. Unintentionally, the `pprof` module in the `assisted-service` object, which uses port `6060`, was then turned on. As a consequence, there was a port conflict and the Cloud Credential Operator (CCO) did not run. Because the requested vSphere secrets were not generated, the VMware vSphere Cloud Controller Manager (CCM) failed to initialize the nodes, and the cluster installation was blocked. With this release, the `pprof` module in the `assisted-service` object does not run when invoked by the Agent-based Installer. As a result, the CCO runs correctly and cluster installations on vSphere that use the Agent-based Installer succeed. (OCPBUGS-42525)
- Previously, when a compute node was trying to join a cluster, the rendezvous node rebooted before the process completed. Because the compute node could not communicate as expected with the rendezvous node, the installation was not successful. With this release, a patch is applied that fixes the race condition that caused the rendezvous node to reboot prematurely, and the issue is resolved. (OCPBUGS-41811)
- Previously, when using the Assisted Installer, selecting a multi-architecture image for the `s390x` CPU architecture on the Red Hat Hybrid Cloud Console could cause the installation to fail. The installation program reported an error that the new cluster was not created because the skip MCO reboot feature was not compatible with the `s390x` CPU architecture. With this release, the issue is resolved. (OCPBUGS-41716)
- Previously, a coding issue caused the Ansible script on RHOSP user-provisioned infrastructure installation to fail during the provisioning of compact clusters. This occurred when IPv6 was enabled for a three-node cluster. With this release, the issue is resolved and you can provision compact three-node clusters. (OCPBUGS-41538)
- Previously, a coding issue caused the Ansible script on RHOSP user-provisioned installation infrastructure to fail during the provisioning of compact clusters. This occurred when IPv6 was enabled for a three-node cluster. With this release, the issue is resolved and you can provision compact three-node clusters on RHOSP for user-provisioned installation infrastructure. (OCPBUGS-39402)
- Previously, the order of an Ansible Playbook was modified to run before the `metadata.json` file was created, which caused issues with older versions of Ansible. With this release, the playbook is more tolerant of missing files to accommodate older versions of Ansible, and the issue is resolved. (OCPBUGS-39285)
- Previously, when you installed a cluster, there were issues using a compute node because the date, the time, or both might have been inaccurate. With this release, a patch is applied to the live ISO time synchronization. The patch configures the `/etc/chrony.conf` file with the list of additional Network Time Protocol (NTP) servers that the user provides in the `agent-config.yaml` file, so that you can use a compute node without experiencing a cluster installation issue. (OCPBUGS-39231)
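  The additional NTP servers are supplied in the `agent-config.yaml` file; for example (the host names are placeholders):

  ```yaml
  # agent-config.yaml (fragment) - placeholder host names
  apiVersion: v1beta1
  kind: AgentConfig
  metadata:
    name: example-cluster
  additionalNTPSources:
  - ntp1.example.com
  - ntp2.example.com
  ```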
- Previously, when installing a cluster on bare metal using installer-provisioned infrastructure, the installation could time out if the network to the bootstrap virtual machine was slow. With this update, the timeout duration has been increased to cover a wider range of network performance scenarios. (OCPBUGS-39081)
- Previously, the `oc adm node-image create` command failed when run against a cluster in a restricted environment with a proxy because the command ignored the cluster-wide proxy setting. With this release, when the command is run, it includes the cluster proxy resource settings, if available, to ensure that the command runs successfully, and the issue is resolved. (OCPBUGS-38990)
- Previously, when installing a cluster on Google Cloud Platform (GCP) into a shared Virtual Private Cloud (VPC) with a bring your own (BYO) hosted zone, the installation could fail due to an error creating the private managed zone. With this release, a fix ensures that where there is a preexisting private managed zone, the installation program skips creating a new one, and the issue is resolved. (OCPBUGS-38966)
- Previously, an installer-provisioned installation on VMware vSphere to run OpenShift Container Platform 4.16 in a disconnected environment failed when the template could not be downloaded. With this release, the template is downloaded correctly and the issue is resolved. (OCPBUGS-38918)
- Previously, during installation, the `oc adm node-image create` command used the `kube-system/cluster-config-v1` resource to determine the platform type. With this release, the installation program uses the infrastructure resource, which provides more accurate information about the platform type. (OCPBUGS-38802)
- Previously, a rare condition on VMware vSphere Cluster API machines caused the vCenter session management to time out unexpectedly. With this release, keep-alive support is disabled in the current and later versions of Cluster API Provider vSphere, and the issue is resolved. (OCPBUGS-38657)
- Previously, when a folder was undefined and the data center was located in a data center folder, a wrong folder structure was created starting from the root of the vCenter server, because the Govmomi `DatacenterFolders.VmFolder` property resolved to the wrong path. With this release, the folder structure uses the data center inventory path and joins it with the virtual machine (VM) and cluster ID value, and the issue is resolved. (OCPBUGS-38599)
- Previously, the installation program on Google Cloud Platform (GCP) filtered addresses to find and delete internal addresses only. Because Cluster API Provider GCP also creates external addresses, those resources were left behind. With this release, external addresses are included in the cluster cleanup operation. (OCPBUGS-38571)
- Previously, if you specified an unsupported architecture in the `install-config.yaml` file, the installation program would fail with a `connection refused` message. With this update, the installation program correctly validates that the specified cluster architecture is compatible with OpenShift Container Platform, leading to successful installations. (OCPBUGS-38479)
- Previously, when you used the Agent-based Installer to install a cluster, `assisted-installer-controller` timed out or exited the installation process when `assisted-service` was unavailable on the rendezvous host. This situation caused the cluster installation to fail during certificate signing request (CSR) approval checks. With this release, an update to `assisted-installer-controller` ensures that the controller does not time out or exit if `assisted-service` is unavailable. The CSR approval check now works as expected. (OCPBUGS-38466)
- Previously, when the VMware vSphere vCenter cluster contained an ESXi host that did not have a standard port group defined and the installation program tried to select that host to import the OVA, the import failed and the error `Invalid Configuration for device 0` was reported. With this release, the installation program verifies whether a standard port group for an ESXi host is defined and, if not, continues until it locates an ESXi host with a defined standard port group, or reports an error message if it fails to locate one, resolving the issue. (OCPBUGS-37945)
- Previously, due to an EFI Secure Boot failure in the SCOS, the virtual machine (VM) failed to boot when the FCOS pivoted to the SCOS. With this release, Secure Boot is disabled only when it is enabled in the `coreos.ovf` configuration file, and the issue is resolved. (OCPBUGS-37736)
- Previously, when deprecated fields were used together with supported fields in the installation program on VMware vSphere, a validation error message was reported. With this release, warning messages are added that specify that using deprecated fields with the installation program on VMware vSphere is not recommended. (OCPBUGS-37628)
- Previously, if you tried to install a second cluster using existing Azure Virtual Networks (VNets) on Microsoft Azure, the installation failed. When the front-end IP address of the API server load balancer was not specified, the Cluster API fixed the address to `10.0.0.100`. Because this IP address was already taken by the first cluster, the second load balancer failed to install. With this release, a dynamic IP address check determines whether the default IP address is available. If it is unavailable, the next available address is selected, and you can install the second cluster successfully with a different load balancer IP address. (OCPBUGS-37442)
- Previously, the installation program attempted to download the OVA on VMware vSphere whether the template field was defined or not. With this update, the issue is resolved. The installation program verifies whether the template field is defined. If the template field is not defined, the OVA is downloaded. If the template field is defined, the OVA is not downloaded. (OCPBUGS-36494)
- Previously, when installing a cluster on IBM Cloud, the installation program checked only the first group of 50 subnets when searching for subnet details by name. With this release, pagination support is provided to search all subnets. (OCPBUGS-36236)
- Previously, when installing with Cluster API Provider Google Cloud Platform (GCP) into a shared Virtual Private Cloud (VPC) without the required `compute.firewalls.create` permission, the installation failed because no firewall rules were created. With this release, a fix ensures that creating the firewall rule is skipped during installation when the permission is not available, and the issue is resolved. (OCPBUGS-35262)
- Previously, for the Agent-based Installer, a networking layout defined through nmstate could result in a configuration error if a host did not have an entry in the `interfaces` section that matched an entry in the `networkConfig` section. However, if the entry in the `networkConfig` section uses a physical interface name, the entry in the `interfaces` section is not required. This fix ensures that the configuration does not result in an error if an entry in the `networkConfig` section has a physical interface name and does not have a corresponding entry in the `interfaces` table. (OCPBUGS-34849)
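  To illustrate, a host entry like the following sketch no longer triggers a validation error, because the `networkConfig` section names the physical interface directly and the host-level `interfaces` table is omitted; all values are placeholders:

  ```yaml
  # agent-config.yaml hosts fragment - placeholder values
  hosts:
  - hostname: master-0
    # No host-level "interfaces" MAC-mapping table is needed when
    # networkConfig refers to a physical interface name directly.
    networkConfig:
      interfaces:
      - name: eno1
        type: ethernet
        state: up
        ipv4:
          enabled: true
          dhcp: false
          address:
          - ip: 192.0.2.10
            prefix-length: 24
  ```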
- Previously, the `container-tools` module was enabled by default on the RHEL node. With this release, the `container-tools` module is disabled so that the correct package is installed from among conflicting repositories. (OCPBUGS-34844)