From 9f9578c47ccc88ae02b7496193103d1910ce214d Mon Sep 17 00:00:00 2001 From: Andrea Hoffer Date: Tue, 4 Nov 2025 13:11:23 -0500 Subject: [PATCH] Playing with how modularizing release notes might look --- modules/rn-about-release.adoc | 26 + modules/rn-async-errata.adoc | 19 + modules/rn-bug-fixes.adoc | 264 ++ modules/rn-compatibility.adoc | 4 + modules/rn-deprecated-features.adoc | 17 + modules/rn-deprecated-removed-tables.adoc | 272 ++ modules/rn-known-issues.adoc | 62 + modules/rn-new-features.adoc | 144 + modules/rn-notable-changes.adoc | 15 + modules/rn-ocp-4-20-0.adoc | 22 + modules/rn-ocp-4-20-1.adoc | 62 + modules/rn-removed-features.adoc | 33 + modules/rn-technology-preview.adoc | 755 +++++ release_notes/ocp-4-20-release-notes.adoc | 3028 +-------------------- 14 files changed, 1739 insertions(+), 2984 deletions(-) create mode 100644 modules/rn-about-release.adoc create mode 100644 modules/rn-async-errata.adoc create mode 100644 modules/rn-bug-fixes.adoc create mode 100644 modules/rn-compatibility.adoc create mode 100644 modules/rn-deprecated-features.adoc create mode 100644 modules/rn-deprecated-removed-tables.adoc create mode 100644 modules/rn-known-issues.adoc create mode 100644 modules/rn-new-features.adoc create mode 100644 modules/rn-notable-changes.adoc create mode 100644 modules/rn-ocp-4-20-0.adoc create mode 100644 modules/rn-ocp-4-20-1.adoc create mode 100644 modules/rn-removed-features.adoc create mode 100644 modules/rn-technology-preview.adoc diff --git a/modules/rn-about-release.adoc b/modules/rn-about-release.adoc new file mode 100644 index 000000000000..c9b1dc571e1b --- /dev/null +++ b/modules/rn-about-release.adoc @@ -0,0 +1,26 @@ +[id="ocp-4-20-about-this-release_{context}"] += About this release + +// TODO: Update with the relevant information closer to release. +{product-title} (link:https://access.redhat.com/errata/RHSA-2025:9562[RHSA-2025:9562]) is now available. This release uses link:https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.33.md[Kubernetes 1.33] with CRI-O runtime. New features, changes, and known issues that pertain to {product-title} {product-version} are included in this topic. + +{product-title} {product-version} clusters are available at https://console.redhat.com/openshift. From the {hybrid-console}, you can deploy {product-title} clusters to either on-premises or cloud environments. + +You must use {op-system} machines for the control plane and for the compute machines. +//Removed the note per https://issues.redhat.com/browse/GRPA-3517 +//Removed paragraph about the RHEL package because mode workers are removed from 4.19, per Scott Dodson +//Even-numbered release lifecycle verbiage (Comment in for even-numbered releases) + +Starting from {product-title} 4.14, the Extended Update Support (EUS) phase for even-numbered releases increases the total available lifecycle to 24 months on all supported architectures, including `x86_64`, 64-bit ARM (`aarch64`), {ibm-power-name} (`ppc64le`), and {ibm-z-name} (`s390x`) architectures. Beyond this, Red{nbsp}Hat also offers a 12-month additional EUS add-on, denoted as _Additional EUS Term 2_, that extends the total available lifecycle from 24 months to 36 months. The Additional EUS Term 2 is available on all architecture variants of {product-title}. For more information about support for all versions, see the link:https://access.redhat.com/support/policy/updates/openshift[Red Hat {product-title} Life Cycle Policy]. 
+ +//Odd-numbered release lifecycle verbiage (Comment in for odd-numbered releases) +//// +The support lifecycle for odd-numbered releases, such as {product-title} {product-version}, on all supported architectures, including `x86_64`, 64-bit ARM (`aarch64`), {ibm-power-name} (`ppc64le`), and {ibm-z-name} (`s390x`) architectures is 18 months. For more information about support for all versions, see the link:https://access.redhat.com/support/policy/updates/openshift[Red Hat {product-title} Life Cycle Policy]. + +Commencing with the {product-title} 4.14 release, Red{nbsp}Hat is simplifying the administration and management of Red{nbsp}Hat shipped cluster Operators with the introduction of three new life cycle classifications; Platform Aligned, Platform Agnostic, and Rolling Stream. These life cycle classifications provide additional ease and transparency for cluster administrators to understand the life cycle policies of each Operator and form cluster maintenance and upgrade plans with predictable support boundaries. For more information, see link:https://access.redhat.com/webassets/avalon/j/includes/session/scribe/?redirectTo=https%3A%2F%2Faccess.redhat.com%2Fsupport%2Fpolicy%2Fupdates%2Fopenshift_operators[OpenShift Operator Life Cycles]. +//// + +// Added in 4.14. Language came directly from Kirsten Newcomer. +{product-title} is designed for FIPS. When running {op-system-base-full} or {op-system-first} booted in FIPS mode, {product-title} core components use the {op-system-base} cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the `x86_64`, `ppc64le`, and `s390x` architectures. + +For more information about the NIST validation program, see link:https://csrc.nist.gov/Projects/cryptographic-module-validation-program/validated-modules[Cryptographic Module Validation Program]. For the latest NIST status for the individual versions of {op-system-base} cryptographic libraries that have been submitted for validation, see link:https://access.redhat.com/articles/2918071#fips-140-2-and-fips-140-3-2[Compliance Activities and Government Standards]. diff --git a/modules/rn-async-errata.adoc b/modules/rn-async-errata.adoc new file mode 100644 index 000000000000..26a323fd6550 --- /dev/null +++ b/modules/rn-async-errata.adoc @@ -0,0 +1,19 @@ + +[id="ocp-release-asynchronous-errata-updates_{context}"] += Asynchronous errata updates + +Security, bug fix, and enhancement updates for {product-title} {product-version} are released as asynchronous errata through the Red{nbsp}Hat Network. All {product-title} {product-version} errata is https://access.redhat.com/downloads/content/290/[available on the Red Hat Customer Portal]. See the https://access.redhat.com/support/policy/updates/openshift[{product-title} Life Cycle] for more information about asynchronous errata. + +Red{nbsp}Hat Customer Portal users can enable errata notifications in the account settings for Red{nbsp}Hat Subscription Management (RHSM). When errata notifications are enabled, users are notified through email whenever new errata relevant to their registered systems are released. + +[NOTE] +==== +Red{nbsp}Hat Customer Portal user accounts must have systems registered and consuming {product-title} entitlements for {product-title} errata notification emails to generate. +==== + +This section will continue to be updated over time to provide notes on enhancements and bug fixes for future asynchronous errata releases of {product-title} {product-version}. 
Versioned asynchronous releases, for example with the form {product-title} {product-version}.z, will be detailed in subsections. In addition, releases in which the errata text cannot fit in the space provided by the advisory will be detailed in subsections that follow.
+
+[IMPORTANT]
+====
+For any {product-title} release, always review the instructions on xref:../updating/updating_a_cluster/updating-cluster-web-console.adoc#updating-cluster-web-console[updating your cluster] properly.
+====
diff --git a/modules/rn-bug-fixes.adoc b/modules/rn-bug-fixes.adoc
new file mode 100644
index 000000000000..2f9d7db91bf8
--- /dev/null
+++ b/modules/rn-bug-fixes.adoc
@@ -0,0 +1,264 @@
+
+[id="ocp-release-bug-fixes_{context}"]
+= Bug fixes
+//Bug fix work for TELCODOCS-750
+//Bare Metal Hardware Provisioning / OS Image Provider
+//Bare Metal Hardware Provisioning / baremetal-operator
+//Bare Metal Hardware Provisioning / cluster-baremetal-operator
+//Bare Metal Hardware Provisioning / ironic
+//CNF Platform Validation
+//Cloud Native Events / Cloud Event Proxy
+//Cloud Native Events / Cloud Native Events
+//Cloud Native Events / Hardware Event Proxy
+//Cloud Native Events
+//Driver Toolkit
+//Installer / Assisted installer
+//Installer / OpenShift on Bare Metal IPI
+//Networking / ptp
+//Node Feature Discovery Operator
+//Performance Addon Operator
+//Telco Edge / HW Event Operator
+//Telco Edge / RAN
+//Telco Edge / Core
+
+//Telco Edge / TALO
+//Telco Edge / ZTP
+
+
+//[id="ocp-release-note-api-auth-bug-fixes_{context}"]
+//== API Server and Authentication
+
+[id="ocp-release-note-bare-metal-hardware-bug-fixes_{context}"]
+== Bare Metal Hardware Provisioning
+
+* Before this update, when installing a dual-stack cluster on bare metal by using installer-provisioned infrastructure, the installation failed because the Virtual Media URL was IPv4 instead of IPv6. As IPv4 was unreachable, the bootstrap failed on the virtual machine (VM) and cluster nodes were not created. With this release, when you install a dual-stack cluster on bare metal for installer-provisioned infrastructure, the dual-stack cluster uses the IPv6 Virtual Media URL and the issue is resolved. (link:https://issues.redhat.com/browse/OCPBUGS-60240[OCPBUGS-60240])
+
+* Before this update, when installing a cluster with the bare metal as a service (BMaaS) API, an ambiguous validation error was reported. When you set an image URL without a checksum, BMaaS failed to validate the deployment image source information. With this release, when you do not provide a required checksum for an image, a clear message is reported. (link:https://issues.redhat.com/browse/OCPBUGS-57472[OCPBUGS-57472])
+
+* Before this update, when installing a cluster using bare metal, if cleaning was not disabled, the hardware tried to delete any Software RAID configuration before it ran the `coreos-installer` tool. With this release, the issue is resolved. (link:https://issues.redhat.com/browse/OCPBUGS-56029[OCPBUGS-56029])
+
+* Before this update, when you used a Redfish system ID, such as `redfish://host/redfish/v1/` instead of `redfish://host/redfish/v1/Self`, in a Baseboard Management Controller (BMC) URL, a registration error about invalid JSON was reported. This issue was caused by a bug in the Bare Metal Operator (BMO). With this release, BMO handles URLs without a Redfish system ID as a valid address without causing a JSON parsing issue. This fix improves the software handling of a missing Redfish system ID in BMC URLs. 
(link:https://issues.redhat.com/browse/OCPBUGS-55717[OCPBUGS-55717])
+
+* Before this update, virtual media boot attempts sometimes failed because some SuperMicro models, such as `ars-111gl-nhr`, used a different virtual media device string than other SuperMicro machines. With this release, an extra conditional check is added to the sushy library code to check for the specific model affected and to adjust its behavior. As a result, SuperMicro `ars-111gl-nhr` machines can boot from virtual media. (link:https://issues.redhat.com/browse/OCPBUGS-55434[OCPBUGS-55434])
+
+* Before this update, RAM disk logs did not include clear file separators, which occasionally caused the content to overlap on a single line. As a consequence, users could not parse RAM disk logs. With this release, RAM disk logs include clear file headers to indicate the boundary between the content of each file. As a result, the readability of RAM disk logs for users is improved. (link:https://issues.redhat.com/browse/OCPBUGS-55381[OCPBUGS-55381])
+
+* Before this update, during Ironic Python Agent (IPA) deployments, the RAM disk logs in the `metal3-ramdisk-logs` container did not include `NetworkManager` logs. The absence of `NetworkManager` logs hindered effective debugging, which affected network issue resolution. With this release, the existing RAM disk logs in the `metal3-ramdisk-logs` container of a metal3 pod include the entire journal from the host rather than just the `dmesg` and IPA logs. As a result, IPA logs provide comprehensive `NetworkManager` data for improved debugging. (link:https://issues.redhat.com/browse/OCPBUGS-55350[OCPBUGS-55350])
+
+* Before this update, when the provisioning network was disabled in the cluster configuration, you could create a bare-metal host with a driver that required a network boot, for example, Intelligent Platform Management Interface (IPMI) or Redfish without virtual media. As a result, boot failures occurred during inspection or provisioning because the correct DHCP options could not be identified. With this release, when you create a bare-metal host in this scenario, the host fails to register and the reported error references the disabled provisioning network. To create the host, you must enable the provisioning network or use a virtual-media-based driver, for example, Redfish virtual media. (link:https://issues.redhat.com/browse/OCPBUGS-54965[OCPBUGS-54965])
+
+[id="ocp-release-note-cloud-compute-bug-fixes_{context}"]
+== Cloud Compute
+
+* Before this update, {aws-short} compute machine sets could include a null value for the `userDataSecret` parameter.
+Using a null value sometimes caused machines to get stuck in the `Provisioning` state. With this release, the `userDataSecret` parameter requires a value.
+(link:https://issues.redhat.com/browse/OCPBUGS-55135[OCPBUGS-55135])
+
+* Before this update, {product-title} clusters on {aws-short} that were created with version 4.13 or earlier could not update to version 4.19.
+Clusters that were created with version 4.14 and later have an {aws-short} `cloud-conf` ConfigMap by default, and this ConfigMap is required starting in {product-title} 4.19.
+With this release, the Cloud Controller Manager Operator creates a default `cloud-conf` ConfigMap when none is present on the cluster.
+This change enables clusters that were created with version 4.13 or earlier to update to version 4.19. 
+
(link:https://issues.redhat.com/browse/OCPBUGS-59251[OCPBUGS-59251])
+
+* Before this update, a `failed to find machine for node ...` error appeared in the logs when the `InternalDNS` address for a machine was not set as expected.
+As a consequence, the user might interpret this error as the machine not existing.
+With this release, the log message reads `failed to find machine with InternalDNS matching ...`.
+As a result, the user has a clearer indication of why the match is failing.
+(link:https://issues.redhat.com/browse/OCPBUGS-19856[OCPBUGS-19856])
+
+* Before this update, a bug fix altered the availability set configuration by changing the fault domain count to use the maximum available value instead of being fixed at 2.
+This inadvertently caused scaling issues for compute machine sets that were created prior to the bug fix, because the controller attempted to modify immutable availability sets.
+With this release, availability sets are no longer modified after creation, allowing affected compute machine sets to scale properly.
+(link:https://issues.redhat.com/browse/OCPBUGS-56380[OCPBUGS-56380])
+
+* Before this update, compute machine sets migrating from the Cluster API to the Machine API got stuck in the `Migrating` state.
+As a consequence, the compute machine set could not finish transitioning to use a different authoritative API or perform further reconciliation of the `MachineSet` object status.
+With this release, the migration controllers watch for changes in Cluster API resources and react to authoritative API transitions.
+As a result, compute machine sets successfully transition from the Cluster API to the Machine API.
+(link:https://issues.redhat.com/browse/OCPBUGS-56487[OCPBUGS-56487])
+
+* Before this update, the `MachineHealthCheck` custom resource definition (CRD) did not document the default value for the `maxUnhealthy` field.
+With this release, the CRD documents the default value.
+(link:https://issues.redhat.com/browse/OCPBUGS-61314[OCPBUGS-61314])
+
+* Before this update, it was possible to specify the use of the `CapacityReservationsOnly` capacity reservation behavior and Spot Instances in the same machine template.
+As a consequence, machines with these two incompatible settings were created.
+With this release, validation of machine templates ensures that these two incompatible settings are not used in the same machine template.
+As a result, machines with these two incompatible settings cannot be created. (link:https://issues.redhat.com/browse/OCPBUGS-60943[OCPBUGS-60943])
+
+* Before this update, on clusters that support migrating Machine API resources to Cluster API resources, deleting a nonauthoritative machine did not delete the corresponding authoritative machine.
+As a consequence, orphaned machines that should have been cleaned up remained on the cluster and could cause a resource leak.
+With this release, deleting a nonauthoritative machine triggers propagation of the deletion to the corresponding authoritative machine.
+As a result, deletion requests on nonauthoritative machines correctly cascade, preventing orphaned authoritative machines and ensuring consistency in machine cleanup.
+(link:https://issues.redhat.com/browse/OCPBUGS-55985[OCPBUGS-55985])
+
+* Before this update, on clusters that support migrating Machine API resources to Cluster API resources, the {cluster-capi-operator} could create an authoritative Cluster API compute machine set in the `Paused` state. 
+As a consequence, the newly created Cluster API compute machine set could not reconcile or scale machines even though it was using the authoritative API. +With this release, the Operator now ensures that Cluster API compute machine sets are created in an unpaused state when the Cluster API is authoritative. +As a result, newly created Cluster API compute machine sets are reconciled immediately and scaling and machine lifecycle operations proceed as intended when the Cluster API is authoritative. +(link:https://issues.redhat.com/browse/OCPBUGS-56604[OCPBUGS-56604]) + +* Before this update, scaling large numbers of nodes was slow because scaling requires reconciling each machine several times and each machine was reconciled individually. +With this release, up to ten machines can be reconciled concurrently. +This change improves the processing speed for machines during scaling. +(link:https://issues.redhat.com/browse/OCPBUGS-59376[OCPBUGS-59376]) + +* Before this update, the {cluster-capi-operator} status controller used an unsorted list of related objects, leading to status updates when there were no functional changes. +As a consequence, users would see significant noise in the {cluster-capi-operator} object and in logs due to continuous and unnecessary status updates. +With this release, the status controller logic sorts the list of related objects before comparing them for changes. +As a result, a status update only occurs when there is a change to the Operator's state. +(link:https://issues.redhat.com/browse/OCPBUGS-56805[OCPBUGS-56805], link:https://issues.redhat.com/browse/OCPBUGS-58880[OCPBUGS-58880]) + +* Before this update, the `config-sync-controller` component of the Cloud Controller Manager Operator did not display logs. +The issue is resolved in this release. +(link:https://issues.redhat.com/browse/OCPBUGS-56508[OCPBUGS-56508]) + +* Before this update, the Control Plane Machine Set configuration used availability zones from compute machine sets. +This is not a valid configuration. +As a consequence, the Control Plane Machine Set could not be generated when the control plane machines were in a single zone while compute machine sets spanned multiple zones. +With this release, the Control Plane Machine Set derives an availability zone configuration from existing control plane machines. +As a result, the Control Plane Machine Set generates a valid zone configuration that accurately reflects the current control plane machines. +(link:https://issues.redhat.com/browse/OCPBUGS-52448[OCPBUGS-52448]) + +* Before this update, the controller that annotates a Machine API compute machine set did not check whether the Machine API was authoritative before adding scale-from-zero annotations. +As a consequence, the controller repeatedly added these annotations and caused a loop of continuous changes to the `MachineSet` object. +With this release, the controller checks the value of the `authoritativeAPI` field before adding scale-from-zero annotations. +As a result, the controller avoids the looping behavior by only adding these annotations to a Machine API compute machine set when the Machine API is authoritative. +(link:https://issues.redhat.com/browse/OCPBUGS-57581[OCPBUGS-57581]) + +* Before this update, the Machine API Operator attempted to reconcile `Machine` resources on platforms other than {aws-short} where the `.status.authoritativeAPI` field was not populated. +As a consequence, compute machines remained in the `Provisioning` state indefinitely and never became operational. 
+
With this release, the Machine API Operator now populates the empty `.status.authoritativeAPI` field with the corresponding value in the machine specification.
+A guard is also added to the controllers to handle cases where this field might still be empty.
+As a result, `Machine` and `MachineSet` resources are reconciled properly and compute machines no longer remain in the `Provisioning` state indefinitely.
+(link:https://issues.redhat.com/browse/OCPBUGS-56849[OCPBUGS-56849])
+
+* Before this update, the Machine API Provider Azure used an old version of the Azure SDK, which used an old API version that did not support referencing a Capacity Reservation group.
+As a consequence, creating a Machine API machine that referenced a Capacity Reservation group in another subscription resulted in an Azure API error.
+With this release, the Machine API Provider Azure uses a version of the Azure SDK that supports this configuration.
+As a result, creating a Machine API machine that references a Capacity Reservation group in another subscription works as expected.
+(link:https://issues.redhat.com/browse/OCPBUGS-55372[OCPBUGS-55372])
+
+* Before this update, the two-way synchronization controller on clusters that support migrating Machine API resources to Cluster API resources did not correctly compare the machine specification when converting an authoritative Cluster API machine template to a Machine API machine set.
+As a consequence, changes to the Cluster API machine template specification were not synchronized to the Machine API machine set.
+With this release, changes to the comparison logic resolve the issue.
+As a result, the Machine API machine set synchronizes correctly after the Cluster API machine set references the new Cluster API machine template.
+(link:https://issues.redhat.com/browse/OCPBUGS-56010[OCPBUGS-56010])
+
+* Before this update, the two-way synchronization controller on clusters that support migrating Machine API resources to Cluster API resources did not delete the machine template when its corresponding Machine API machine set was deleted.
+As a consequence, unneeded Cluster API machine templates persisted in the cluster and cluttered the `openshift-cluster-api` namespace.
+With this release, the two-way synchronization controller correctly handles deletion synchronization for the machine template.
+As a result, deleting a Machine API authoritative machine set deletes the corresponding Cluster API machine template.
+(link:https://issues.redhat.com/browse/OCPBUGS-57195[OCPBUGS-57195])
+
+* Before this update, the two-way synchronization controller on clusters that support migrating Machine API resources to Cluster API resources prematurely reported a successful migration.
+As a consequence, if any errors occurred when updating the status of related objects, the operation was not retried.
+With this release, the controller ensures that all related object statuses are written before reporting a successful status.
+As a result, the controller handles errors during migration better.
+(link:https://issues.redhat.com/browse/OCPBUGS-57040[OCPBUGS-57040])
+
+[id="ocp-release-note-cloud-credential-operator-bug-fixes_{context}"]
+== Cloud Credential Operator
+
+* Before this update, the `ccoctl` command unnecessarily required the `baseDomainResourceGroupName` parameter when creating the OpenID Connect (OIDC) issuer and managed identities for a private cluster by using {entra-first}. As a consequence, an error was displayed when `ccoctl` tried to create private clusters. 
With this release, the `baseDomainResourceGroupName` parameter is removed as a requirement. As a result, the process for creating a private cluster on {azure-full} is logical and consistent with expectations. (link:https://issues.redhat.com/browse/OCPBUGS-34993[OCPBUGS-34993])
+
+[id="ocp-release-note-cluster-autoscaler-bug-fixes_{context}"]
+== Cluster Autoscaler
+
+* Before this update, the cluster autoscaler attempted to include machine objects that were in a deleting state. As a consequence, the cluster autoscaler count of machines was inaccurate. This issue caused the cluster autoscaler to add additional taints that were not needed. With this release, the autoscaler accurately counts the machines. (link:https://issues.redhat.com/browse/OCPBUGS-60035[OCPBUGS-60035])
+
+* Before this update, when you created a cluster autoscaler object with the Cluster Autoscaler Operator enabled in the cluster, two `cluster-autoscaler-default` pods in the `openshift-machine-api` namespace were sometimes created at the same time and one of the pods was immediately killed. With this release, only one pod is created. (link:https://issues.redhat.com/browse/OCPBUGS-57041[OCPBUGS-57041])
+
+//[id="ocp-release-note-cluster-override-admin-operator-bug-fixes_{context}"]
+//== Cluster Resource Override Admission Operator
+
+[id="ocp-release-note-cluster-version-operator-bug-fixes_{context}"]
+== Cluster Version Operator
+
+* Before this update, the status of the `ClusterVersion` condition could incorrectly show `ImplicitlyEnabled` instead of `ImplicitlyEnabledCapabilities`. With this release, the `ClusterVersion` condition type is fixed and changed from `ImplicitlyEnabled` to `ImplicitlyEnabledCapabilities`. (link:https://issues.redhat.com/browse/OCPBUGS-56114[OCPBUGS-56114])
+
+[id="ocp-release-note-config-operator-bug-fixes_{context}"]
+== config-operator
+
+* Before this update, the cluster incorrectly switched to the `CustomNoUpgrade` state without the correct `featureGate` configuration. As a consequence, empty `featureGates` and subsequent controller panics occurred. With this release, the `featureGate` configuration for the `CustomNoUpgrade` cluster state matches the default, which prevents empty `featureGates` and subsequent controller panics. (link:https://issues.redhat.com/browse/OCPBUGS-57187[OCPBUGS-57187])
+
+[id="ocp-release-note-dev-console-bug-fixes_{context}"]
+== Dev Console
+
+* Before this update, some entries on the *Quick Starts* page displayed duplicate link buttons. With this update, the duplicates are removed, and the link buttons are correctly displayed. (link:https://issues.redhat.com/browse/OCPBUGS-60373[OCPBUGS-60373])
+
+* Before this update, the onboarding modal that displayed when you first logged in was missing visuals and images, which made the modal messaging unclear. With this release, the missing elements are added to the modal. As a result, the onboarding experience provides complete visuals consistent with the overall console design. (link:https://issues.redhat.com/browse/OCPBUGS-57392[OCPBUGS-57392])
+
+* Before this update, importing multiple files in the YAML editor copied the existing content and appended the new file, which created duplicates. With this release, the import behavior is fixed. As a result, the YAML editor displays only the new file content without duplication. 
(link:https://issues.redhat.com/browse/OCPBUGS-45297[OCPBUGS-45297])
+
+[id="ocp-release-note-etcd-bug-fixes_{context}"]
+== etcd
+
+* Before this update, the timeout on one etcd member caused context deadlines to be exceeded. As a consequence, all members were declared unhealthy, even though some were reachable. With this release, if one member times out, other members are no longer incorrectly marked as unhealthy. (link:https://issues.redhat.com/browse/OCPBUGS-60941[OCPBUGS-60941])
+
+* Before this update, when you deployed {sno} with many IPs on the primary interface, the IP in the etcd certificate did not match the IP in the config map that the API server used to connect to etcd. As a consequence, the API server pod failed during {sno} deployment, which caused cluster initialization issues. With this release, the single IP in the etcd config map matches the IP in the certificate for {sno} deployments. As a result, the API server connects to etcd by using the correct IP included in the etcd certificate, which prevents pod failure during cluster initialization. (link:https://issues.redhat.com/browse/OCPBUGS-55404[OCPBUGS-55404])
+
+* Before this update, during temporary downtime of the API server, the Cluster etcd Operator reported incorrect information, such as messages that the `openshift-etcd` namespace was non-existent. With this update, the Cluster etcd Operator status message correctly indicates API server unavailability instead of suggesting the absence of the `openshift-etcd` namespace. As a result, the Cluster etcd Operator status accurately reflects the presence of the `openshift-etcd` namespace, enhancing system reliability. (link:https://issues.redhat.com/browse/OCPBUGS-44570[OCPBUGS-44570])
+
+[id="ocp-release-note-extensions-olmv1-bug-fixes_{context}"]
+== Extensions ({olmv1})
+
+* Before this update, the preflight custom resource definition (CRD) safety check in {olmv1} blocked updates if it detected changes in the description fields of a CRD. With this update, the preflight CRD safety check does not block updates when there are changes to documentation fields. (link:https://issues.redhat.com/browse/OCPBUGS-55051[OCPBUGS-55051])
+
+* Before this update, the catalogd and Operator Controller components did not display the correct version and commit information in the {oc-first}. With this update, the correct commit and version information is displayed. (link:https://issues.redhat.com/browse/OCPBUGS-23055[OCPBUGS-23055])
+
+//[id="ocp-release-note-image-streams-bug-fixes_{context}"]
+//== ImageStreams
+
+[id="ocp-release-note-installer-bug-fixes_{context}"]
+== Installer
+
+* Before this update, when you installed a Konflux-built cluster on {ibm-power-server-name}, the installation could fail due to errors in semantic versioning (SemVer) parsing. With this release, the parsing issue has been resolved so that the installation can continue successfully. (link:https://issues.redhat.com/browse/OCPBUGS-61120[OCPBUGS-61120])
+
+* Before this update, when you installed a cluster on {azure-short} Stack Hub with user-provisioned infrastructure, the API and API-int load balancers could fail to be created. 
As a consequence, the installation failed. With this release, the user-provisioned infrastructure templates are updated so that the load balancers are created. As a result, installation is successful. (link:https://issues.redhat.com/browse/OCPBUGS-60545[OCPBUGS-60545])
+
+* Before this update, when you installed a cluster on {gcp-short}, the installation program read and processed the `install-config.yaml` file even when an unrecoverable error was reported about not finding a matching public DNS zone. This error was due to an invalid `baseDomain` parameter. As a consequence, cluster administrators recreated the `install-config.yaml` file unnecessarily. With this release, when the installation program reports this error, the installation program does not read and process the `install-config.yaml` file. (link:https://issues.redhat.com/browse/OCPBUGS-59430[OCPBUGS-59430])
+
+* Before this update, {ibm-cloud-title} was omitted from the list of platforms that supported {sno} installation in the validation code. As a consequence, users could not install a single-node configuration on {ibm-cloud-title} because of a validation error. With this release, {ibm-cloud-title} support for single-node installations is enabled. As a result, users can complete single-node installations on {ibm-cloud-title}. (link:https://issues.redhat.com/browse/OCPBUGS-59220[OCPBUGS-59220])
+
+* Before this update, installing {sno} on `platform: None` with user-provisioned infrastructure was not supported, which led to installation failures. With this release, {sno} installation on `platform: None` is supported. (link:https://issues.redhat.com/browse/OCPBUGS-58216[OCPBUGS-58216])
+
+* Before this update, when you installed {product-title} on {aws-first}, the check for disabling Machine Config Operator (MCO) boot image management did not consider edge compute machine pools. When determining whether to disable boot image management, the installation program only checked the first compute machine pool entry in the `install-config.yaml` file. As a consequence, when you specified multiple compute pools but only the second had a custom Amazon Machine Image (AMI), the installation program did not disable MCO boot image management and the MCO could overwrite the custom AMI. With this release, the installation program checks all edge compute machine pools for custom images. As a result, boot image management is disabled when a custom image is specified in any machine pool. (link:https://issues.redhat.com/browse/OCPBUGS-57803[OCPBUGS-57803])
+
+* Before this update, the Agent-based Installer set the permissions for the etcd directory `/var/lib/etcd/member` as `0755` when using an {sno} deployment instead of `0700`, which is correctly set on a multi-node deployment. With this release, the etcd directory `/var/lib/etcd/member` permissions are set to `0700` for {sno} deployments. (link:https://issues.redhat.com/browse/OCPBUGS-57021[OCPBUGS-57201])
+
+* Before this update, when you used the Agent-based Installer, pressing the TAB key immediately after escaping the NetworkManager Text User Interface (TUI) sometimes failed to register, which caused the cursor to remain on `Configure Network` instead of moving to `Quit`. As a consequence, you were not able to quit the agent console application that verifies whether the current host can retrieve release images. With this release, the TAB key is always registered. 
(link:https://issues.redhat.com/browse/OCPBUGS-56934[OCPBUGS-56934])
+
+* Before this update, when you used the Agent-based Installer, exiting the NetworkManager TUI would sometimes result in a blank screen, rather than displaying an error or proceeding with the installation. With this update, the blank screen is not displayed. (link:https://issues.redhat.com/browse/OCPBUGS-56880[OCPBUGS-56880])
+
+* Before this update, installing a cluster on {vmw-full} failed when the API VIP and the ingress VIP used one load balancer IP address. With this release, the API VIP and the ingress VIP are now distinct in `machineNetworks` and the issue is resolved. (link:https://issues.redhat.com/browse/OCPBUGS-56601[OCPBUGS-56601])
+
+* Before this update, when you used the Agent-based Installer, setting the `additionalTrustBundlePolicy` field had no effect. As a consequence, other overrides such as the `fips` parameter were ignored. With this update, the `additionalTrustBundlePolicy` parameter is correctly imported and other overrides are not ignored. (link:https://issues.redhat.com/browse/OCPBUGS-56596[OCPBUGS-56596])
+
+* Before this update, the lack of detailed logging in the cluster destroy logic for {vmw-full} meant it was unclear why virtual machines (VMs) were not properly removed. Additionally, missing power state information could cause the destroy operation to enter an infinite loop. With this update, logging for the destroy operation is enhanced to indicate when specific cleanup actions begin, include vCenter names, and display a warning if the operation fails to find VMs. As a result, the destroy process provides detailed, actionable logs. (link:https://issues.redhat.com/browse/OCPBUGS-56262[OCPBUGS-56262])
+
+* Before this update, when you used the Agent-based Installer to install a cluster in a disconnected environment, exiting the NetworkManager Text User Interface (TUI) returned you to the agent console application that checks whether release images can be pulled from a registry. With this update, you are not returned to the agent console application when you exit the NetworkManager TUI. (link:https://issues.redhat.com/browse/OCPBUGS-56223[OCPBUGS-56223])
+
+* Before this update, the Agent-based Installer did not validate the values used to enable disk encryption, which potentially prevented disk encryption from being enabled. With this release, validation for correct disk encryption values is performed during image creation. (link:https://issues.redhat.com/browse/OCPBUGS-54885[OCPBUGS-54885])
+
+* Before this update, the resources containing the configuration for the vSphere connection could break because of a mismatch between the UI and API. With this release, the UI uses the updated API definition. (link:https://issues.redhat.com/browse/OCPBUGS-54434[OCPBUGS-54434])
+
+* Before this update, when you used the Agent-based Installer, some validation checks for the `hostPrefix` parameter were not performed when generating the ISO image. As a consequence, invalid `hostPrefix` values were detected only when users failed to boot using the ISO. With this update, these validation checks are performed during ISO generation and cause an immediate failure. (link:https://issues.redhat.com/browse/OCPBUGS-53473[OCPBUGS-53473])
+
+* Before this update, some systemd services in the Agent-based Installer continued to run after being stopped, which caused confusing log messages during cluster installation. With this update, these services are correctly stopped. 
(link:https://issues.redhat.com/browse/OCPBUGS-53107[OCPBUGS-53107])
+
+* Before this update, if the proxy configuration for an {azure-first} cluster was deleted while installing a cluster, the program reported an unreadable error and the proxy connection timed out. With this release, when the proxy configuration for the cluster is deleted while installing a cluster, the program reports a readable error message and the issue is resolved. (link:https://issues.redhat.com/browse/OCPBUGS-45805[OCPBUGS-45805])
+
+* Before this update, after an installation was completed, the `kubeconfig` file generated by the Agent-based Installer did not contain the ingress router certificate authority (CA). With this release, the `kubeconfig` file contains the ingress router CA upon the completion of a cluster installation. (link:https://issues.redhat.com/browse/OCPBUGS-45256[OCPBUGS-45256])
+
+* Before this update, the Agent-based Installer announced a complete cluster installation without first checking whether Operators were in a stable state. Consequently, messages about a completed installation might have appeared even if there were still issues with any of the Operators. With this release, the Agent-based Installer waits until Operators are in a stable state before declaring the cluster installation to be complete. (link:https://issues.redhat.com/browse/OCPBUGS-18658[OCPBUGS-18658])
+
+* Before this update, the installation program did not prevent you from attempting to install {sno} on bare metal on installer-provisioned infrastructure. As a consequence, the installation failed because it was not supported. With this release, {product-title} prevents {sno} cluster installations on unsupported platforms. (link:https://issues.redhat.com/browse/OCPBUGS-6508[OCPBUGS-6508])
+
+[id="ocp-release-note-kube-controller-manager-bug-fixes_{context}"]
+== Kube Controller Manager
+
+* Before this update, the `cluster-policy-controller` crashed when an invalid volume type was provided. With this release, the code no longer panics. As a result, the `cluster-policy-controller` logs an error that identifies the invalid volume type. (link:https://issues.redhat.com/browse/OCPBUGS-62053[OCPBUGS-62053])
+
+* Before this update, the `cluster-policy-controller` container exposed port `10357` on all networks (the bind address was set to `0.0.0.0`). The port was exposed outside the node's host network because the KCM pod manifest set `hostNetwork` to `true`. This port is used solely for the container's probe. With this enhancement, the bind address is updated to listen only on localhost. As a result, node security is improved because the port is not exposed outside the node network. (link:https://issues.redhat.com/browse/OCPBUGS-53290[OCPBUGS-53290])
diff --git a/modules/rn-compatibility.adoc b/modules/rn-compatibility.adoc
new file mode 100644
index 000000000000..f61302cc35a2
--- /dev/null
+++ b/modules/rn-compatibility.adoc
@@ -0,0 +1,4 @@
+[id="ocp-4-20-add-on-support-status_{context}"]
+= {product-title} layered and dependent component support and compatibility
+
+The scope of support for layered and dependent components of {product-title} changes independently of the {product-title} version. To determine the current support status and compatibility for an add-on, refer to its release notes. For more information, see the link:https://access.redhat.com/support/policy/updates/openshift[Red Hat {product-title} Life Cycle Policy]. 
diff --git a/modules/rn-deprecated-features.adoc b/modules/rn-deprecated-features.adoc new file mode 100644 index 000000000000..06316f68357b --- /dev/null +++ b/modules/rn-deprecated-features.adoc @@ -0,0 +1,17 @@ +[id="ocp-release-deprecated-features_{context}"] += Deprecated features + +[id="ocp-release-amd-sev-deprecation_{context}"] +Deprecation of AMD Secure Encrypted Virtualization:: ++ +The use of Confidential Computing with AMD Secure Encrypted Virtualization (AMD SEV) on {gcp-first} has been deprecated and might be removed in a future release. ++ +You can use AMD Secure Encrypted Virtualization Secure Nested Paging (AMD SEV-SNP) instead. + +Docker v2 registries deprecated:: ++ +Support for Docker v2 registries is deprecated and is planned for removal in a future release. A registry that supports the Open Container Initiative (OCI) specification will be required for all mirroring operations in a future release. Additionally, `oc-mirror` v2 now only generates custom catalog images in the OCI format, whereas the deprecated `oc-mirror` v1 still supports the Docker v2 format. + +Red{nbsp}Hat Marketplace is deprecated:: ++ +The Red{nbsp}Hat Marketplace is deprecated. Customers who use the partner software from the Marketplace should contact the software vendor about how to migrate from the Marketplace Operator to an Operator in the Red{nbsp}Hat Ecosystem Catalog. It is expected that the Marketplace index will be removed in an upcoming {product-title} release. For more information, see link:https://access.redhat.com/articles/7130828[Sunset of the Red Hat Marketplace, operated by IBM]. diff --git a/modules/rn-deprecated-removed-tables.adoc b/modules/rn-deprecated-removed-tables.adoc new file mode 100644 index 000000000000..1768559ffa36 --- /dev/null +++ b/modules/rn-deprecated-removed-tables.adoc @@ -0,0 +1,272 @@ += Deprecated and removed features + +== Images deprecated and removed features + +.Images deprecated and removed tracker +[cols="4,1,1,1",options="header"] +|==== +|Feature |4.18 |4.19 |4.20 + +|Cluster Samples Operator +|Deprecated +|Deprecated +|Deprecated +|==== + + +[id="ocp-release-note-install-dep-rem_{context}"] +== Installation deprecated and removed features + +.Installation deprecated and removed tracker +[cols="4,1,1,1",options="header"] +|==== +|Feature |4.18 |4.19 |4.20 + +|`--cloud` parameter for `oc adm release extract` +|Deprecated +|Deprecated +|Deprecated + +|CoreDNS wildcard queries for the `cluster.local` domain +|Deprecated +|Deprecated +|Deprecated + +|`compute.platform.openstack.rootVolume.type` for {rh-openstack} +|Deprecated +|Deprecated +|Deprecated + +|`controlPlane.platform.openstack.rootVolume.type` for {rh-openstack} +|Deprecated +|Deprecated +|Deprecated + +|`ingressVIP` and `apiVIP` settings in the `install-config.yaml` file for installer-provisioned infrastructure clusters +|Deprecated +|Deprecated +|Deprecated + +|Package-based {op-system-base} compute machines +|Deprecated +|Removed +|Removed + +|`platform.aws.preserveBootstrapIgnition` parameter for {aws-first} +|Deprecated +|Deprecated +|Deprecated + +|Installing a cluster on {aws-short} with compute nodes in {aws-short} Outposts +|Deprecated +|Deprecated +|Deprecated +|==== + +// No deprecated or removed features for 3 consecutive releases +// +// [id="ocp-release-note-monitoring-dep-rem_{context}"] +// == Monitoring deprecated and removed features + +// .Monitoring deprecated and removed tracker +// [cols="4,1,1,1",options="header"] +// |==== +// |Feature |4.18 |4.19 |4.20 +// |==== + 
+[id="ocp-release-note-machine-manage-dep-rem_{context}"] +== Machine Management deprecated and removed features + +.Machine management deprecated and removed tracker +[cols="4,1,1,1",options="header"] +|==== +|Feature |4.18 |4.19|4.20 + +|Confidential Computing with AMD Secure Encrypted Virtualization for {gcp-first} +|General Availability +|General Availability +|Deprecated +|==== + +[id="ocp-release-note-networking-dep-rem_{context}"] +== Networking deprecated and removed features + +.Networking deprecated and removed tracker +[cols="4,1,1,1",options="header"] +|==== +|Feature |4.18 |4.19 |4.20 + +|iptables +|Deprecated +|Deprecated +|Deprecated + +|==== + + +[id="ocp-release-note-node-dep-rem_{context}"] +== Node deprecated and removed features + +.Node deprecated and removed tracker +[cols="4,1,1,1",options="header"] +|==== +|Feature |4.18 |4.19 |4.20 + +|`ImageContentSourcePolicy` (ICSP) objects +|Deprecated +|Deprecated +|Deprecated + +|Kubernetes topology label `failure-domain.beta.kubernetes.io/zone` +|Deprecated +|Deprecated +|Deprecated + +|Kubernetes topology label `failure-domain.beta.kubernetes.io/region` +|Deprecated +|Deprecated +|Deprecated + +|cgroup v1 +|Deprecated +|Removed +|Removed +|==== + + +[id="ocp-release-note-cli-dep-rem_{context}"] +== OpenShift CLI (oc) deprecated and removed features + +.OpenShift CLI (oc) deprecated and removed tracker +[cols="4,1,1,1",options="header"] +|==== +|Feature |4.18 |4.19|4.20 + +|oc-mirror plugin v1 +|Deprecated +|Deprecated +|Deprecated + +|Docker v2 registries +|General Availability +|General Availability +|Deprecated +|==== + + +[id="ocp-release-note-operators-dep-rem_{context}"] +== Operator lifecycle and development deprecated and removed features + +// "Operator lifecycle" refers to OLMv0 and "development" refers to Operator SDK + +.Operator lifecycle and development deprecated and removed tracker +[cols="4,1,1,1",options="header"] +|==== +|Feature |4.18 |4.19 |4.20 + +|Operator SDK +|Deprecated +|Removed +|Removed + +|Scaffolding tools for Ansible-based Operator projects +|Deprecated +|Removed +|Removed + +|Scaffolding tools for Helm-based Operator projects +|Deprecated +|Removed +|Removed + +|Scaffolding tools for Go-based Operator projects +|Deprecated +|Removed +|Removed + +|Scaffolding tools for Hybrid Helm-based Operator projects +|Removed +|Removed +|Removed + +|Scaffolding tools for Java-based Operator projects +|Removed +|Removed +|Removed + +// Do not remove the SQLite database... 
entry until otherwise directed by the Operator Framework PM
+|SQLite database format for Operator catalogs
+|Deprecated
+|Deprecated
+|Deprecated
+|====
+
+
+//[id="ocp-hardware-an-driver-dep-rem_{context}"]
+//== Specialized hardware and driver enablement deprecated and removed features
+
+//.Specialized hardware and driver enablement deprecated and removed tracker
+//[cols="4,1,1,1",options="header"]
+//|====
+//|Feature |4.18 |4.19 |4.20
+//|====
+
+
+== Storage deprecated and removed features
+
+.Storage deprecated and removed tracker
+[cols="4,1,1,1",options="header"]
+|====
+|Feature |4.18 |4.19 |4.20
+
+|Shared Resources CSI Driver Operator
+|Removed
+|Removed
+|Removed
+|====
+
+
+//[id="ocp-clusters-dep-rem_{context}"]
+//== Updating clusters deprecated and removed features
+
+//.Updating clusters deprecated and removed tracker
+//[cols="4,1,1,1",options="header"]
+//|====
+//|Feature |4.18 |4.19 |4.20
+//|====
+
+
+[id="ocp-release-note-web-console-dep-rem_{context}"]
+== Web console deprecated and removed features
+
+.Web console deprecated and removed tracker
+[cols="4,1,1,1",options="header"]
+|====
+|Feature |4.18 |4.19 |4.20
+
+|`useModal` hook for dynamic plugin SDK
+|General Availability
+|Deprecated
+|Deprecated
+
+|PatternFly 4
+|Deprecated
+|Removed
+|Removed
+
+|====
+
+
+[id="ocp-release-note-workloads-dep-rem_{context}"]
+== Workloads deprecated and removed features
+
+.Workloads deprecated and removed tracker
+[cols="4,1,1,1",options="header"]
+|====
+|Feature |4.18 |4.19 |4.20
+
+|`DeploymentConfig` objects
+|Deprecated
+|Deprecated
+|Deprecated
+|====
diff --git a/modules/rn-known-issues.adoc b/modules/rn-known-issues.adoc
new file mode 100644
index 000000000000..9953bd95391b
--- /dev/null
+++ b/modules/rn-known-issues.adoc
@@ -0,0 +1,62 @@
+
+[id="ocp-release-known-issues_{context}"]
+= Known issues
+
+* There is a known issue with Gateway API and {aws-first}, {gcp-first}, and {azure-first} private clusters. The load balancer that is provisioned for a gateway is always configured to be external, which can cause errors or unexpected behavior:
++
+--
+** In an {aws-short} private cluster, the load balancer becomes stuck in the `pending` state and reports the error: `Error syncing load balancer: failed to ensure load balancer: could not find any suitable subnets for creating the ELB`.
+
+** In {gcp-short} and {azure-short} private clusters, the load balancer is provisioned with an external IP address, when it should not have an external IP address.
+--
++
+There is no supported workaround for this issue. (link:https://issues.redhat.com/browse/OCPBUGS-57440[OCPBUGS-57440])
+
+* When running a pod in an isolated user namespace, the UID/GID inside a pod container no longer matches the UID/GID on the host. For file system ownership to work correctly, the Linux kernel uses ID-mapped mounts, which translate user IDs between the container and the host at the virtual file system (VFS) layer.
++
+However, not all file systems currently support ID-mapped mounts; examples include the Network File System (NFS) and other network or distributed file systems. Because such file systems do not support ID-mapped mounts, pods running within user namespaces can fail to access mounted NFS volumes. This behavior is not specific to {product-title}. It applies to all Kubernetes distributions based on Kubernetes v1.33 and later.
++
+When upgrading to {product-title} 4.20, clusters are unaffected until you opt in to user namespaces. 
After enabling user namespaces, any pod that is using an NFS-backed persistent volume from a vendor that does not support ID-mapped mounts might experience access or permission issues when running in a user namespace. For more information about enabling user namespaces, see xref:../nodes/pods/nodes-pods-user-namespaces.adoc#nodes-pods-user-namespaces-configuring_nodes-pods-user-namespaces[Configuring Linux user namespace support].
++
+[NOTE]
+====
+Existing {product-title} 4.19 clusters are unaffected until you explicitly enable user namespaces, which is a Technology Preview feature in {product-title} 4.19.
+====
+
+* When installing a cluster on {azure-short}, if you set any of the `compute.platform.azure.identity.type`, `controlplane.platform.azure.identity.type`, or `platform.azure.defaultMachinePlatform.identity.type` field values to `None`, your cluster is unable to pull images from the Azure Container Registry.
+You can avoid this issue by providing a user-assigned identity or by leaving the identity field blank.
+In both cases, the installation program generates a user-assigned identity. (link:https://issues.redhat.com/browse/OCPBUGS-56008[OCPBUGS-56008])
+
+* There is a known issue in the unified software catalog view of the console. When you select *Ecosystem* -> *Software Catalog*, you must enter an existing project name or create a new project to view the software catalog. The project selection field does not affect how catalog content is installed on the cluster. As a workaround, enter any existing project name to view the software catalog. (link:https://issues.redhat.com/browse/OCPBUGS-61870[OCPBUGS-61870])
+
+* Starting with {product-title} 4.20, the default maximum open files soft limit for containers is lower. As a consequence, end users might experience application failures. To work around this problem, increase the ulimit configuration of the container runtime (CRI-O). (link:https://issues.redhat.com/browse/OCPBUGS-62095[OCPBUGS-62095])
+
+* Deleting and recreating test workloads with a BlueField-3 NIC causes clock jumps due to inconsistent PTP synchronization. This disrupts time synchronization in test workloads. The time synchronization stabilizes when the workloads are stable. (link:https://issues.redhat.com/browse/RHEL-93579[RHEL-93579])
+
+* Event logs for GNR-D interfaces are ambiguous due to identical three-letter prefixes ("eno"). As a consequence, affected interfaces are not clearly identified during state changes. To work around this problem, change interfaces used by ptp-operator to follow the "path" naming convention, ensuring per-clock events are identified correctly based on interface names and clearly indicate which clock is affected by state changes. For more information, see link:https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/configuring_and_managing_networking/consistent-network-interface-device-naming_configuring-and-managing-networking#network-interface-naming-policies_consistent-network-interface-device-naming[Network interface naming policies]. (link:https://issues.redhat.com/browse/OCPBUGS-62817[OCPBUGS-62817])
+
+[id="ocp-installer-known-issues_{context}"]
+
+* When you install a cluster on {aws-short}, if you do not configure {aws-short} credentials before running any `openshift-install create` command, the installation program fails. 
(link:https://issues.redhat.com/browse/OCPBUGS-56658[OCPBUGS-56658])
+
+[id="ocp-telco-core-release-known-issues_{context}"]
+
+* On systems using specific AMD EPYC processors, some low-level system interrupts, for example `AMD-Vi`, might contain CPUs in the CPU mask that overlaps with CPU-pinned workloads. This behavior is because of the hardware design. These specific error-reporting interrupts are generally inactive and there is currently no known performance impact. (link:https://issues.redhat.com/browse/OCPBUGS-57787[OCPBUGS-57787])
+
+* Currently, pods that use a `guaranteed` QoS class and request whole CPUs might not restart automatically after a node reboot or kubelet restart. The issue might occur in nodes configured with a static CPU Manager policy and using the `full-pcpus-only` specification, and when most or all CPUs on the node are already allocated by such workloads. As a workaround, manually delete and re-create the affected pods. (link:https://issues.redhat.com/browse/OCPBUGS-43280[OCPBUGS-43280])
+
+* The Performance Profile Creator tool fails to analyze a `must-gather` archive if the archive contains a custom namespace directory that ends with the suffix `nodes`. The failure occurs because of the tool's search logic, which incorrectly reports an error for multiple matches. As a workaround, rename the custom namespace directory so that it does not end with the `nodes` suffix, and run the tool again. (link:https://issues.redhat.com/browse/OCPBUGS-60218[OCPBUGS-60218])
+
+* Currently, on clusters with SR-IOV network virtual functions configured, a race condition might occur between system services responsible for network device renaming and the TuneD service managed by the Node Tuning Operator. As a consequence, the TuneD profile might become degraded after the node restarts, leading to performance degradation. As a workaround, restart the TuneD pod to restore the profile state. (link:https://issues.redhat.com/browse/OCPBUGS-41934[OCPBUGS-41934])
+
+[id="ocp-telco-ran-release-known-issues_{context}"]
+
+* The SuperMicro ARS-111GL-NHR server is unable to access virtual media during boot when the virtual media image is served through an IPv6 address. As a consequence, you cannot use virtual media on the SuperMicro ARS-111GL-NHR server model with an IPv6 network configuration. (link:https://issues.redhat.com/browse/OCPBUGS-60070[OCPBUGS-60070])
+
+* A known latency issue currently affects systems running on 4th Gen Intel Xeon processors. (link:https://issues.redhat.com/browse/OCPBUGS-46528[OCPBUGS-46528])
+
+* When attempting a simultaneous BIOS and BMC firmware update on Dell R740, the BMC update might fail, leaving the server powered down and unresponsive. This issue occurs when the update process does not complete successfully, causing the system to remain in a non-operational state. (link:https://issues.redhat.com/browse/OCPBUGS-62009[OCPBUGS-62009])
+
+* Updating the BMC firmware might fail if you configure the server with an incorrect network share location or invalid credentials, causing the server to remain powered off and unable to recover. 
(link:https://issues.redhat.com/browse/OCPBUGS-62010[OCPBUGS-62010]) + +[id="ocp-storage-core-release-known-issues_{context}"] diff --git a/modules/rn-new-features.adoc b/modules/rn-new-features.adoc new file mode 100644 index 000000000000..584bfe30ac41 --- /dev/null +++ b/modules/rn-new-features.adoc @@ -0,0 +1,144 @@ +[id="ocp-4-20-new-features-and-enhancements_{context}"] += New features and enhancements + +This release adds improvements related to the following components and concepts: + +[id="ocp-release-notes-api_{context}"] +== API server + +Extended loopback certificate validity to three years for kube-apiserver:: ++ +Before this update, the self-signed loopback certificate for the Kubernetes API Server expired after one year. With this release, the expiration date of the certificate is extended to three years. + +Dry-run option is connected to `oc delete istag`:: ++ +Before this update, deleting an `istag` resource with the `--dry-run=server` option unintentionally caused actual deletion of the image from the server. This unexpected deletion occurred due to the `dry-run` option being implemented incorrectly in the `oc delete istag` command. With this release, the `dry-run` option is wired to the `oc delete istag` command. As a result, the accidental deletion of image objects is prevented and the `istag` object remains intact when using the `--dry-run=server` option. + +No service interruptions for certificate-related issues:: ++ +With this update, self-signed loopback certificates in API servers are prevented from expiring, which ensures a stable and secure connection within {product-title} 4.16.z. This enhancement backports a solution from a newer version by cherry-picking a specific pull request and applying it to the selected version. This reduces the likelihood of service interruptions due to certificate-related issues, providing a more reliable user experience in {product-title} 4.16.z deployments. + +Enhanced communication matrix for TCP ports:: ++ +With this update, the communication flows matrix for {product-title} is enhanced. The feature automatically generates services for open ports 17697 (TCP) and 6080 (TCP) on the primary node, and ensures that all open ports have corresponding endpoint slices. This results in an accurate and up-to-date communication flows matrix that is more secure, comprehensive, and reliable. + +[id="ocp-release-notes-edge-computing_{context}"] +== Edge computing + +NetworkPolicy support for the {lvms} Operator:: ++ +The {lvms} Operator now applies Kubernetes `NetworkPolicy` objects during installation to restrict network communication to only the required components. This feature enforces default network isolation for {lvms} deployments on {product-title} clusters. + +Support for hostname labeling for persistent volumes created by using the {lvms} Operator:: ++ +When you create a persistent volume (PV) by using the {lvms} Operator, the PV now includes the `kubernetes.io/hostname` label. This label shows which node the PV is located on, making it easier to identify the node associated with a workload. This change only applies to newly created PVs. Existing PVs are not modified. + +Default namespace for the {lvms} Operator:: ++ +The default namespace for the {lvms} Operator is now `openshift-lvm-storage`. You can still install {lvms} in a custom namespace.
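++
+For example, a minimal Operator Lifecycle Manager (OLM) sketch for installing {lvms} into the new default namespace might look like the following. This is a hedged illustration only: the `stable-4.20` channel and the `lvms-operator` package name are assumptions, so verify them against your Operator catalog before applying anything:
++
+[source,yaml]
+----
+apiVersion: v1
+kind: Namespace
+metadata:
+  name: openshift-lvm-storage # the new default namespace
+---
+apiVersion: operators.coreos.com/v1
+kind: OperatorGroup
+metadata:
+  name: lvms-operator-group
+  namespace: openshift-lvm-storage
+spec:
+  targetNamespaces:
+  - openshift-lvm-storage
+---
+apiVersion: operators.coreos.com/v1alpha1
+kind: Subscription
+metadata:
+  name: lvms
+  namespace: openshift-lvm-storage
+spec:
+  channel: stable-4.20 # assumed channel name
+  name: lvms-operator # assumed package name
+  source: redhat-operators
+  sourceNamespace: openshift-marketplace
+----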
+ +SiteConfig CR to ClusterInstance CR migration tool:: ++ +{product-title} {product-version} introduces the `siteconfig-converter` tool to help migrate managed clusters from using a `SiteConfig` custom resource (CR) to a `ClusterInstance` CR. Using a `SiteConfig` CR to define a managed cluster is deprecated and will be removed in a future release. The `ClusterInstance` CR provides a more unified and generic approach to defining clusters and is the preferred method for managing cluster deployments in the {ztp} workflow. ++ +Using the `siteconfig-converter` tool, you can convert `SiteConfig` CRs to `ClusterInstance` CRs and then incrementally migrate one or more clusters at a time. Existing and new pipelines run in parallel, so you can migrate clusters in a controlled, phased manner and without downtime. ++ +[NOTE] +==== +The `siteconfig-converter` tool does not convert `SiteConfig` CRs that use the deprecated `spec.clusters.extraManifestPath` field. +==== ++ +For more information, see xref:../edge_computing/ztp-migrate-clusterinstance.adoc#ztp-migrate-clusterinstance[Migrating from SiteConfig CRs to ClusterInstance CRs]. + +[id="ocp-release-notes-extensions_{context}"] +== Extensions ({olmv1}) + +Deploying cluster extensions that use webhooks (Technology Preview):: ++ +With this release, you can deploy cluster extensions that use webhooks on clusters with the `TechPreviewNoUpgrade` feature set enabled. ++ +For more information, see xref:../extensions/ce/managing-ce.adoc#olmv1-supported-extensions_managing-ce[Supported extensions]. + +[id="ocp-release-notes-hcp_{context}"] +== Hosted control planes + +include::snippets/hcp-snippet.adoc[] + +// Because {hcp} releases asynchronously from {product-title}, it has its own release notes. For more information, see xref:../hosted_control_planes/hosted-control-planes-release-notes.adoc#hosted-control-planes-release-notes[{hcp-capital} release notes]. + +[id="ocp-release-notes-storage_{context}"] +== Storage + +NetworkPolicy support for the {secrets-store-operator}:: ++ +The {secrets-store-operator} version 4.20 is now based on the upstream v1.5.2 release. The {secrets-store-operator} now applies Kubernetes `NetworkPolicy` objects during installation to restrict network communication to only the required components. + +Volume populators are generally available:: ++ +The volume populators feature allows you to create pre-populated volumes. ++ +{product-title} 4.20 introduces a new field, `dataSourceRef`, for volume populator functionality. This field expands the objects that you can use as a data source for pre-populating volumes from only persistent volume claims (PVCs) and snapshots to any appropriate custom resource (CR). ++ +{product-title} now ships `volume-data-source-validator`, which reports events on PVCs that use a volume populator without a corresponding `VolumePopulator` instance. Previous {product-title} versions did not require `VolumePopulator` instances, so if you are upgrading from 4.12 or later, you might receive events about unregistered populators. If you installed `volume-data-source-validator` yourself previously, you can remove your version. ++ +The volume populators feature, which was introduced in {product-title} 4.12 as a Technology Preview, is now generally available. ++ +Volume population is enabled by default. However, {product-title} does not ship with any volume populators.
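++
+As a hedged sketch, a PVC that requests pre-population through `dataSourceRef` might look like the following. The `SampleSource` kind, its `example.storage.k8s.io` API group, and the storage class name are hypothetical stand-ins for whichever populator you install and register with a corresponding `VolumePopulator` object:
++
+[source,yaml]
+----
+apiVersion: v1
+kind: PersistentVolumeClaim
+metadata:
+  name: populated-pvc
+spec:
+  accessModes:
+  - ReadWriteOnce
+  resources:
+    requests:
+      storage: 10Gi
+  storageClassName: my-storage-class # illustrative storage class
+  dataSourceRef: # can reference any appropriate CR, not only PVCs and snapshots
+    apiGroup: example.storage.k8s.io # hypothetical populator API group
+    kind: SampleSource # hypothetical populator CR kind
+    name: sample-source-instance
+----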
++ +For more information about volume populators, see xref:../storage/container_storage_interface/persistent-storage-csi-vol-populators.adoc[Volume populators]. + +Performance plus for Azure Disk is generally available:: ++ +By enabling performance plus, the input/output operations per second (IOPS) and throughput limits can be increased for the following types of disks that are 513 GiB and larger: ++ +-- +* Azure Premium solid-state drives (SSD) + +* Standard SSDs + +* Standard hard disk drives (HDD) +-- ++ +This feature is generally available in {product-title} 4.20. ++ +For more information about performance plus, see xref:../storage/container_storage_interface/persistent-storage-csi-azure.adoc#performance-plus-for-azure-disk[Performance plus for Azure Disk]. + +Changed block tracking (Developer Preview):: ++ +Changed block tracking enables efficient and incremental backups and disaster recovery for persistent volumes (PVs) managed by Container Storage Interface (CSI) drivers that support this feature. ++ +Changed block tracking allows consumers to request a list of blocks that have changed between two snapshots, which is useful for backup solution vendors. By only backing up changed blocks, rather than entire volumes, backup processes are more efficient. ++ +:FeatureName: Changed block tracking ++ +-- +include::snippets/developer-preview.adoc[] +-- ++ +For more information about changed block tracking, see the link:https://access.redhat.com/solutions/7131061[Knowledgebase article]. + +AWS EFS One Zone volume support is generally available:: ++ +{product-title} 4.20 introduces AWS Elastic File System (EFS) One Zone volume support as generally available. With this feature, if file system Domain Name System (DNS) resolution fails, the EFS CSI driver can fall back to mount targets. A mount target serves as a network endpoint that allows AWS EC2 instances or other AWS compute instances within a Virtual Private Cloud (VPC) to connect to, and mount, an EFS file system. ++ +For more information about One Zone, see xref:../storage/container_storage_interface/persistent-storage-csi-aws-efs.adoc#one-zone-file-systems[Support for One Zone]. + +[id="ocp-release-notes-web-console_{context}"] +== Web console + +Support for custom application icons in the Import flow:: ++ +Before this update, the *Container image* form flow provided only a limited set of predefined icons for applications. ++ +With this update, you can add custom icons when you import applications through the *Container image* form. For existing applications, apply the `app.openshift.io/custom-icon` annotation to add a custom icon to the corresponding *Topology* node. ++ +As a result, you can better identify applications in the *Topology* view and organize your projects more clearly. diff --git a/modules/rn-notable-changes.adoc b/modules/rn-notable-changes.adoc new file mode 100644 index 000000000000..0c351303051e --- /dev/null +++ b/modules/rn-notable-changes.adoc @@ -0,0 +1,15 @@ +[id="ocp-release-notable-technical-changes_{context}"] += Notable technical changes + +MachineOSConfig naming changes:: ++ +The name of the `MachineOSConfig` object used with {image-mode-os-on-lower} must now be the same as the machine config pool where you want to deploy the custom layered image. Previously, you could use any name. This change was made to prevent attempts to use multiple `MachineOSConfig` objects with each machine config pool.
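++
+For example, a `MachineOSConfig` object that targets the `worker` machine config pool must now also be named `worker`. The following is a hedged sketch only: the push secret name, the image pull spec, and the Containerfile content are illustrative, so verify the exact field names against the `MachineOSConfig` API version on your cluster before use:
++
+[source,yaml]
+----
+apiVersion: machineconfiguration.openshift.io/v1
+kind: MachineOSConfig
+metadata:
+  name: worker # must match the machine config pool name
+spec:
+  machineConfigPool:
+    name: worker
+  containerFile:
+  - content: |-
+      FROM configs AS final
+      RUN dnf install -y tree && dnf clean all
+  imageBuilder:
+    imageBuilderType: Job
+  renderedImagePushSecret:
+    name: push-secret # illustrative secret name
+  renderedImagePushSpec: registry.example.com/my-org/layered-image:latest # illustrative pull spec
+----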
+ +oc-mirror plugin v2 verifies credentials and certificates before mirroring operations:: ++ +With this update, the oc-mirror plugin v2 now verifies information such as registry credentials, DNS name, and SSL certificates before populating the cache and beginning mirroring operations. +This prevents users from discovering certain problems only after the cache is populated and mirroring has begun. + +{vmw-full} 7 and VMware Cloud Foundation 4 end of general support:: ++ +Broadcom has ended general support for {vmw-full} 7 and VMware Cloud Foundation (VCF) 4. If your existing {product-title} cluster is running on either of these platforms, you must plan to migrate or upgrade your VMware infrastructure to a supported version. {product-title} supports installation on {vmw-short} 8 Update 1 or later, or VCF 5 or later. diff --git a/modules/rn-ocp-4-20-0.adoc b/modules/rn-ocp-4-20-0.adoc new file mode 100644 index 000000000000..521bcf1c8541 --- /dev/null +++ b/modules/rn-ocp-4-20-0.adoc @@ -0,0 +1,22 @@ +//Update with relevant advisory information +[id="ocp-4-20-0-ga_{context}"] += RHSA-2025:9562 - {product-title} {product-version}.0 image release, bug fix, and security update advisory + +Issued: 21 October 2025 + +{product-title} release {product-version}.0, which includes security updates, is now available. The list of bug fixes that are included in the update is documented in the link:https://access.redhat.com/errata/RHSA-2025:9562[RHSA-2025:9562] advisory. The RPM packages that are included in the update are provided by the link:https://access.redhat.com/errata/RHEA-2025:4782[RHEA-2025:4782] advisory. + +Space precluded documenting all of the container images for this release in the advisory. + +You can view the container images in this release by running the following command: + +[source,terminal] +---- +$ oc adm release info 4.20.0 --pullspecs +---- + +[id="ocp-4-20-0-updating_{context}"] +== Updating +To update an {product-title} 4.20 cluster to this latest release, see xref:../updating/updating_a_cluster/updating-cluster-cli.adoc#updating-cluster-cli[Updating a cluster using the CLI]. + +//replace 4.y.z for the correct values for the release. You do not need to update oc to run this command. diff --git a/modules/rn-ocp-4-20-1.adoc b/modules/rn-ocp-4-20-1.adoc new file mode 100644 index 000000000000..5b5e7883d453 --- /dev/null +++ b/modules/rn-ocp-4-20-1.adoc @@ -0,0 +1,62 @@ +//4.20.1 +[id="ocp-4-20-1_{context}"] += RHSA-2025:19003 - {product-title} {product-version}.1 image release, bug fix, and security update advisory + +Issued: 28 October 2025 + +{product-title} release {product-version}.1, which includes security updates, is now available. The list of bug fixes that are included in the update is documented in the link:https://access.redhat.com/errata/RHSA-2025:19003[RHSA-2025:19003] advisory. The RPM packages that are included in the update are provided by the link:https://access.redhat.com/errata/RHEA-2025:19001[RHEA-2025:19001] advisory. + +Space precluded documenting all of the container images for this release in the advisory. + +You can view the container images in this release by running the following command: + +[source,terminal] +---- +$ oc adm release info 4.20.1 --pullspecs +---- + +[id="ocp-4-20-1-known-issues_{context}"] +== Known issues + +* Starting with {product-title} 4.20, there is a decrease in the default maximum open files soft limit for containers. As a consequence, end users might experience application failures.
To work around this problem, increase the container runtime (CRI-O) ulimit configuration. (link:https://issues.redhat.com/browse/OCPBUGS-62095[OCPBUGS-62095]) + +[id="ocp-4-20-1-bug-fixes_{context}"] +== Bug fixes + +* Before this update, iDRAC10 hardware provisioning was failing due to an incorrect data type for the Dell Original Equipment Manufacturer (OEM) `Target` property and the use of an incorrect virtual media slot. As a result, users were unable to provision Dell iDRAC10 servers. With this release, the Dell iDRAC10 can be provisioned. (link:https://issues.redhat.com/browse/OCPBUGS-52427[OCPBUGS-52427]) + +* Before this release, two identical copies of the same controller were updating the same certificate authority (CA) bundle in a config map, causing them to receive different metadata inputs, rewrite each other's changes, and create duplicate events. With this release, the controllers use optimistic updating and server-side apply to avoid update events and handle update conflicts. As a result, metadata updates no longer trigger duplicate events, and the expected metadata is set correctly. (link:https://issues.redhat.com/browse/OCPBUGS-55217[OCPBUGS-55217]) + + +* Before this update, when installing a cluster on {ibm-power-server-title}, you could only specify a name for an existing Transit Gateway or virtual private cloud (VPC). As the uniqueness of names was not guaranteed, this could cause conflicts and installation failures. With this release, you can use Universally Unique Identifiers (UUIDs) for a Transit Gateway and VPC. By using unique identifiers, the installation program can unambiguously identify the correct Transit Gateway or VPC. This prevents the naming conflicts and the issue is resolved. (link:https://issues.redhat.com/browse/OCPBUGS-59678[OCPBUGS-59678]) + +* Before this update, the Cloud event proxy for the Precision Time Protocol (PTP) Operator incorrectly parsed BF3 Network Interface Card (NIC) names, causing the interface alias to be formatted incorrectly. As a consequence, the incorrect parsing caused end users to misinterpret cloud events. With this release, the Cloud event proxy has been updated to correctly parse BF3 NIC names in the PTP Operator. As a result, the fix improves parsing of BF3 NIC names, ensuring correct event publication for the PTP Operator. (link:https://issues.redhat.com/browse/OCPBUGS-60466[OCPBUGS-60466]) + +* Before this update, a pod with a secondary interface in an OVN-Kubernetes localnet network (mapped to the `br-ex` bridge) could communicate with pods on the same node that used the default network for connectivity only if the localnet IP addresses were within the same subnet as the host network. With this release, the localnet IP addresses can be drawn from any subnet; in this generalized case, an external router outside the cluster is expected to connect the localnet subnet to the host network. (link:https://issues.redhat.com/browse/OCPBUGS-61453[OCPBUGS-61453]) + +* Before this update, the Precision Time Protocol (PTP) Operator wrongly parsed network interface controller (NIC) names. As a result, interface aliases were incorrectly formatted and this impacted identifying a PTP hardware clock (PHC) when using Mellanox cards to send clock state events. With this release, the PTP Operator now correctly parses the NIC names so that the generated aliases align with Mellanox naming conventions. Mellanox cards can now accurately identify a PHC when sending clock state events.
(link:https://issues.redhat.com/browse/OCPBUGS-61581[OCPBUGS-61581]) + +* Before this update, the `cluster in workload identity mode` warning was missing when only the `token-auth-azure` annotation was set, which could lead to misconfiguration. This update adds a check for the `token-auth-azure` annotation when showing the warning. As a result, clusters that use only Azure Workload Identity now show the `cluster in workload identity mode` warning as expected. (link:https://issues.redhat.com/browse/OCPBUGS-61861[OCPBUGS-61861]) + +* Before this update, the YAML editor in the web console defaulted to indenting YAML files with 4 spaces. With this release, the default indentation is 2 spaces, to align with common YAML style recommendations. (link:https://issues.redhat.com/browse/OCPBUGS-61990[OCPBUGS-61990]) + +* Before this update, deploying hosted control planes in version 4.20 or later with user-supplied `ignition-server-serving-cert` and `ignition-server-ca-cert` secrets, along with the `disable-pki-reconciliation` annotation, caused the system to remove the user-supplied ignition secrets and the `ignition-server` pods to fail. With this release, the `ignition-server` secrets are preserved during reconciliation because the delete action for the `disable-pki-reconciliation` annotation was removed, ensuring that the `ignition-server` pods start. (link:https://issues.redhat.com/browse/OCPBUGS-62006[OCPBUGS-62006]) + +* Before this update, if the `OVNKube-controller` on a node failed to process updates and configure its local OVN database, the `OVN-controller` could connect to this stale database. This caused the `OVN-controller` to consume outdated `EgressIP` configurations and send incorrect Gratuitous ARPs (GARPs) for an IP address that might have already moved to a different node. With this release, the `OVN-controller` is blocked from sending these GARPs during the time when the `OVNKube-controller` is not processing updates. As a result, network disruptions are prevented by ensuring GARPs are not sent based on stale database information. (link:https://issues.redhat.com/browse/OCPBUGS-62273[OCPBUGS-62273]) + +* Before this update, upgrading a `ClusterExtension` could fail when unhandled Custom Resource Definition (CRD) changes produced a large JSON diff for the validation status. This diff often exceeded the Kubernetes 32 KB limit, causing the status update to fail and leaving users with no information about why the upgrade did not occur. With this release, the diff output is truncated and summarized for unhandled scenarios instead of including the full JSON diff. This ensures the status updates remain within size limits, allowing them to post successfully and provide users with clear, actionable error messages. (link:https://issues.redhat.com/browse/OCPBUGS-62722[OCPBUGS-62722]) + +* Before this update, gRPC connection logs were set at a highly verbose log level. This generated an excessive number of messages, which caused the logs to overflow. With this release, the gRPC connection logs have been moved to the V(4) log level. Consequently, the logs no longer overflow, as these specific messages are now less verbose by default. (link:https://issues.redhat.com/browse/OCPBUGS-62844[OCPBUGS-62844]) + +* Before this update, `oc-mirror` did not display its version in its output. As a consequence, users could not identify which `oc-mirror` version, and therefore which fixes, they were running, hindering efficient debugging.
With this release, `oc-mirror` now displays its version in the output, aiding faster debugging and ensuring correct fix application. (link:https://issues.redhat.com/browse/OCPBUGS-62283[OCPBUGS-62283]) + +* Before this update, a bug occurred when the `cluster-api-operator` kubeconfig controller tried to use a regenerated authentication token secret before the token value was fully populated. This caused users to experience recurring, transient reconciliation errors every 30 minutes, which briefly put the Operator into a degraded state. With this release, the controller now waits for the authentication token to be populated within the secret before proceeding, preventing the Operator from going into a degraded state and eliminating the recurring errors. (link:https://issues.redhat.com/browse/OCPBUGS-62755[OCPBUGS-62755]) + +* Before this update, in {product-title} 4.19.9, the Cluster Version Operator began requiring bearer token authentication in metrics requests. As a consequence, this broke the metrics scraper on hosted control plane clusters because their scrapers provided no client authentication. With this release, the Cluster Version Operator no longer requires client authentication for metrics requests in hosted control plane clusters. (link:https://issues.redhat.com/browse/OCPBUGS-62867[OCPBUGS-62867]) + +* Before this update, during failover, the system's duplicate address detection (DAD) could incorrectly disable the egress IPv6 address if it was briefly present on both nodes, breaking the connection. With this release, the egress IPv6 address is configured to skip the DAD check during failover, guaranteeing uninterrupted egress IPv6 traffic after an egress IP address successfully moves to a different node and ensuring greater network stability. (link:https://issues.redhat.com/browse/OCPBUGS-62913[OCPBUGS-62913]) + + +[id="ocp-4-20-1-updating_{context}"] +== Updating +To update an {product-title} 4.20 cluster to this latest release, see xref:../updating/updating_a_cluster/updating-cluster-cli.adoc#updating-cluster-cli[Updating a cluster using the CLI]. diff --git a/modules/rn-removed-features.adoc b/modules/rn-removed-features.adoc new file mode 100644 index 000000000000..e710be1eb770 --- /dev/null +++ b/modules/rn-removed-features.adoc @@ -0,0 +1,33 @@ + +[id="ocp-release-removed-features_{context}"] += Removed features + +Removed Kubernetes APIs:: ++ +{product-title} 4.20 removed the following Kubernetes APIs. You must migrate your manifests, automation, and API clients to use the new, supported API versions before updating to 4.20. For more information about migrating removed APIs, see the link:https://kubernetes.io/docs/reference/using-api/deprecation-guide/[Kubernetes documentation].
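++
+As a quick pre-update check, you can review the `APIRequestCount` resources to see whether any clients on the cluster still use a removed API version. This sketch assumes the standard `oc` client and simply filters for the `v1beta1` admission registration resources listed in the following table:
++
+[source,terminal]
+----
+$ oc get apirequestcounts | grep 'v1beta1.admissionregistration.k8s.io'
+----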
++ +.Kubernetes APIs removed from {product-title} 4.20 +[cols="2,2,2,1",options="header",] +|=== +|Resource |Removed API |Migrate to |Notable changes + +|`MutatingWebhookConfiguration` +|`admissionregistration.k8s.io/v1beta1` +|`admissionregistration.k8s.io/v1` +|link:https://kubernetes.io/docs/reference/using-api/deprecation-guide/#webhook-resources-v122[Yes] + +|`ValidatingAdmissionPolicy` +|`admissionregistration.k8s.io/v1beta1` +|`admissionregistration.k8s.io/v1` +|link:https://kubernetes.io/docs/reference/using-api/deprecation-guide/#webhook-resources-v122[Yes] + +|`ValidatingAdmissionPolicyBinding` +|`admissionregistration.k8s.io/v1beta1` +|`admissionregistration.k8s.io/v1` +|link:https://kubernetes.io/docs/reference/using-api/deprecation-guide/#webhook-resources-v122[Yes] + +|`ValidatingWebhookConfiguration` +|`admissionregistration.k8s.io/v1beta1` +|`admissionregistration.k8s.io/v1` +|link:https://kubernetes.io/docs/reference/using-api/deprecation-guide/#webhook-resources-v122[Yes] +|=== diff --git a/modules/rn-technology-preview.adoc b/modules/rn-technology-preview.adoc new file mode 100644 index 000000000000..97f2724da60f --- /dev/null +++ b/modules/rn-technology-preview.adoc @@ -0,0 +1,755 @@ + +[id="ocp-release-technology-preview-tables_{context}"] += Technology Preview features status + +Some features in this release are currently in Technology Preview. These experimental features are not intended for production use. Note the following scope of support on the Red{nbsp}Hat Customer Portal for these features: + +link:https://access.redhat.com/support/offerings/techpreview[Technology Preview Features Support Scope] + +In the following tables, features are marked with the following statuses: + +* _Not Available_ +* _Technology Preview_ +* _General Availability_ +* _Deprecated_ +* _Removed_ + + + +[id="ocp-release-notes-auth-tech-preview_{context}"] +== Authentication and authorization Technology Preview features + +.Authentication and authorization Technology Preview tracker +[cols="4,1,1,1",options="header"] +|==== +|Feature |4.18 |4.19 |4.20 + +|Pod security admission restricted enforcement +|Technology Preview +|Technology Preview +|Technology Preview + +|Direct authentication with an external OIDC identity provider +|Not Available +|Technology Preview +|Technology Preview + +|==== + + +[id="ocp-release-notes-edge-computing-tp-features_{context}"] +== Edge computing Technology Preview features + +.Edge computing Technology Preview tracker +[cols="4,1,1,1",options="header"] +|==== +|Feature |4.18 |4.19 |4.20 + +|Accelerated provisioning of {ztp} +|Technology Preview +|Technology Preview +|Technology Preview + +|Enabling disk encryption with TPM and PCR protection +|Technology Preview +|Technology Preview +|Technology Preview + +|Configuring a local arbiter node +|Not Available +|Technology Preview +|General Availability + +|Configuring a two-node OpenShift cluster with fencing +|Not Available +|Not Available +|Technology Preview +|==== + + +[id="ocp-release-notes-extensions-tech-preview_{context}"] +== Extensions Technology Preview features + +// "Extensions" refers to OLMv1 + +.Extensions Technology Preview tracker +[cols="4,1,1,1",options="header"] +|==== +|Feature |4.18 |4.19 |4.20 + +|{olmv1-first} +|General Availability +|General Availability +|General Availability + +|{olmv1} runtime validation of container images using sigstore signatures +|Technology Preview +|Technology Preview +|Technology Preview + +|{olmv1} permissions preflight check for cluster extensions +|Not
Available +|Technology Preview +|Technology Preview + +|{olmv1} deploying a cluster extension in a specified namespace +|Not Available +|Technology Preview +|Technology Preview + +|{olmv1} deploying a cluster extension that uses webhooks +|Not Available +|Not Available +|Technology Preview +|==== + + +[id="ocp-release-notes-installing-tech-preview_{context}"] +== Installation Technology Preview features + +.Installation Technology Preview tracker +[cols="4,1,1,1",options="header"] +|==== +|Feature |4.18 |4.19 |4.20 + +// All GA in 4.17 notes for oci-first +|Adding kernel modules to nodes with kvc +|Technology Preview +|Technology Preview +|Technology Preview + +|Enabling NIC partitioning for SR-IOV devices +|General Availability +|General Availability +|General Availability + +|User-defined labels and tags for {gcp-first} +|General Availability +|General Availability +|General Availability + +|Installing a cluster on Alibaba Cloud by using Assisted Installer +|Technology Preview +|Technology Preview +|Technology Preview + +|Installing a cluster on {azure-first} with confidential VMs +|Technology Preview +|General Availability +|General Availability + +|Dedicated disk for etcd on {azure-full} +|Not Available +|Not Available +|Technology Preview + +|Mount shared entitlements in BuildConfigs in RHEL +|Technology Preview +|Technology Preview +|Technology Preview + +|OpenShift zones support for vSphere host groups +|Not Available +|Technology Preview +|Technology Preview + +|Selectable Cluster Inventory +|Technology Preview +|Technology Preview +|Technology Preview + +|Installing a cluster on {gcp-short} using the Cluster API implementation +|General Availability +|General Availability +|General Availability + +|Enabling a user-provisioned DNS on {gcp-short} +|Not Available +|Technology Preview +|Technology Preview + +|Installing a cluster on {vmw-full} with multiple network interface controllers +|Technology Preview +|Technology Preview +|General Availability + +|Using bare metal as a service +|Not Available +|Technology Preview +|Technology Preview + +|Changing the CVO log level +|Not Available +|Not Available +|Technology Preview +|==== + + +[id="ocp-release-notes-mco-tech-preview_{context}"] +== Machine Config Operator Technology Preview features + +.Machine Config Operator Technology Preview tracker +[cols="4,1,1,1",options="header"] +|==== +|Feature |4.18 |4.19 |4.20 + +|Improved MCO state reporting (`oc get machineconfignode`) +|Technology Preview +|Technology Preview +|General Availability + +|Image mode for OpenShift/On-cluster RHCOS image layering for {aws-short} and {gcp-short} +|Technology Preview +|General Availability +|General Availability + +|Image mode for OpenShift/On-cluster RHCOS image layering for {vmw-short} +|Not available +|Not available +|Technology Preview + +|==== + + +[id="ocp-release-notes-machine-management-tech-preview_{context}"] +== Machine management Technology Preview features + +.Machine management Technology Preview tracker +[cols="4,1,1,1",options="header"] +|==== +|Feature |4.18 |4.19 |4.20 + +|Managing machines with the Cluster API for {aws-full} +|Technology Preview +|Technology Preview +|Technology Preview + +|Managing machines with the Cluster API for {gcp-full} +|Technology Preview +|Technology Preview +|Technology Preview + +|Managing machines with the Cluster API for {ibm-power-server-name} +|Technology Preview +|Technology Preview +|Technology Preview + +|Managing machines with the Cluster API for {azure-full} +|Technology Preview +|Technology 
Preview +|Technology Preview + +|Managing machines with the Cluster API for {rh-openstack} +|Technology Preview +|Technology Preview +|Technology Preview + +|Managing machines with the Cluster API for {vmw-full} +|Technology Preview +|Technology Preview +|Technology Preview + +|Managing machines with the Cluster API for bare metal +|Not Available +|Technology Preview +|Technology Preview + +|Cloud controller manager for {ibm-power-server-name} +|Technology Preview +|Technology Preview +|Technology Preview + +|Adding multiple subnets to an existing {vmw-full} cluster by using compute machine sets +|Technology Preview +|Technology Preview +|Technology Preview + +|Configuring Trusted Launch for {azure-full} virtual machines by using machine sets +|Technology Preview +|General Availability +|General Availability + +|Configuring {azure-short} confidential virtual machines by using machine sets +|Technology Preview +|General Availability +|General Availability +|==== + + +[id="ocp-release-notes-monitoring-tech-preview_{context}"] +== Monitoring Technology Preview features + +.Monitoring Technology Preview tracker +[cols="4,1,1,1",options="header"] +|==== +|Feature |4.18 |4.19 |4.20 + +|Metrics Collection Profiles +|Technology Preview +|General Availability +|General Availability + +|==== + + +[id="ocp-release-notes-multi-arch-tech-preview_{context}"] +== Multi-Architecture Technology Preview features + +.Multi-Architecture Technology Preview tracker +[cols="4,1,1,1",options="header"] +|==== +|Feature |4.18 |4.19 |4.20 + +|`kdump` on `arm64` architecture +|Technology Preview +|Technology Preview +|General Availability + +|`kdump` on `s390x` architecture +|Technology Preview +|Technology Preview +|General Availability + +|`kdump` on `ppc64le` architecture +|Technology Preview +|Technology Preview +|General Availability + +|Support for configuring the image stream import mode behavior +|Technology Preview +|Technology Preview +|Technology Preview +|==== + + +[id="ocp-release-notes-networking-tech-preview_{context}"] +== Networking Technology Preview features + +.Networking Technology Preview tracker +[cols="4,1,1,1",options="header"] +|==== +|Feature |4.18 |4.19 |4.20 + +|eBPF manager Operator +|Technology Preview +|Technology Preview +|Technology Preview + +|Advertise using L2 mode the MetalLB service from a subset of nodes, using a specific pool of IP addresses +|Technology Preview +|Technology Preview +|Technology Preview + +|Updating the interface-specific safe sysctls list +|Technology Preview +|Technology Preview +|Technology Preview + +|Egress service custom resource +|Technology Preview +|Technology Preview +|Technology Preview + +|VRF specification in `BGPPeer` custom resource +|Technology Preview +|Technology Preview +|Technology Preview + +|VRF specification in `NodeNetworkConfigurationPolicy` custom resource +|Technology Preview +|General Availability +|General Availability + +|Host network settings for SR-IOV VFs +|General Availability +|General Availability +|General Availability + +|Integration of MetalLB and FRR-K8s +|General Availability +|General Availability +|General Availability + +|Automatic leap seconds handling for PTP grandmaster clocks +|General Availability +|General Availability +|General Availability + +|PTP events REST API v2 +|General Availability +|General Availability +|General Availability + +|OVN-Kubernetes customized `br-ex` bridge on bare metal +|General Availability +|General Availability +|General Availability + +|OVN-Kubernetes customized `br-ex` bridge on 
{vmw-short} and {rh-openstack} +|Technology Preview +|Technology Preview +|Technology Preview + +|Live migration to OVN-Kubernetes from OpenShift SDN +|Not Available +|Not Available +|Not Available + +|User-defined network segmentation +|General Availability +|General Availability +|General Availability + +|Dynamic configuration manager +|Technology Preview +|Technology Preview +|Technology Preview + +|SR-IOV Network Operator support for Intel C741 Emmitsburg Chipset +|Technology Preview +|Technology Preview +|Technology Preview + +|SR-IOV Network Operator support on ARM architecture +|General Availability +|General Availability +|General Availability + +|Gateway API and Istio for Ingress management +|Technology Preview +|General Availability +|General Availability + +|Dual-port NIC for PTP ordinary clock +|Not Available +|Technology Preview +|Technology Preview + +|DPU Operator +|Not Available +|Technology Preview +|Technology Preview + +|Fast IPAM for the Whereabouts IPAM CNI plugin +|Not Available +|Technology Preview +|Technology Preview + +|Unnumbered BGP peering +|Not Available +|Technology Preview +|General Availability + +|Load balancing across the aggregated bonded interface with xmitHashPolicy +|Not Available +|Not Available +|Technology Preview + +|PF Status Relay Operator for high availability with SR-IOV networks +|Not Available +|Not Available +|Technology Preview + +|Preconfigured user-defined network endpoints using {mtv-short} +|Not Available +|Not Available +|Technology Preview + +|Unassisted holdover for PTP devices +|Not Available +|Not Available +|Technology Preview + +|==== + + +[id="ocp-release-notes-nodes-tech-preview_{context}"] +== Node Technology Preview features + +.Nodes Technology Preview tracker +[cols="4,1,1,1",options="header"] +|==== +|Feature |4.18 |4.19 |4.20 + +|`MaxUnavailableStatefulSet` featureset +|Technology Preview +|Technology Preview +|Technology Preview + +|sigstore support +|Technology Preview +|Technology Preview +|General Availability + +|Default sigstore `openshift` cluster image policy +|Technology Preview +|Technology Preview +|Technology Preview + +|Linux user namespace support +|Technology Preview +|Technology Preview +|General Availability + +|Attribute-Based GPU Allocation +|Not Available +|Not Available +|Technology Preview +|==== + + +[id="ocp-release-notes-oc-cli-tech-preview_{context}"] +== OpenShift CLI (oc) Technology Preview features + +.OpenShift CLI (`oc`) Technology Preview tracker +[cols="4,1,1,1",options="header"] +|==== +|Feature |4.18 |4.19 |4.20 + +|oc-mirror plugin v2 +|General Availability +|General Availability +|General Availability + +|oc-mirror plugin v2 enclave support +|General Availability +|General Availability +|General Availability + +|oc-mirror plugin v2 delete functionality +|General Availability +|General Availability +|General Availability +|==== + + +[id="ocp-release-notes-operator-lifecycle-tech-preview_{context}"] +== Operator lifecycle and development Technology Preview features + +// "Operator lifecycle" refers to OLMv0 and "development" refers to Operator SDK + +.Operator lifecycle and development Technology Preview tracker +[cols="4,1,1,1",options="header"] +|==== +|Feature |4.18 |4.19 |4.20 + +|{olmv1-first} +|General Availability +|General Availability +|General Availability + +|Scaffolding tools for Hybrid Helm-based Operator projects +|Removed +|Removed +|Removed + +|Scaffolding tools for Java-based Operator projects +|Removed +|Removed +|Removed +|==== + 
+[id="ocp-release-notes-rhcos-tech-preview_{context}"] +== {rh-openstack-first} Technology Preview features + +.{rh-openstack} Technology Preview tracker +[cols="4,1,1,1",options="header"] +|==== +|Feature |4.18 |4.19 |4.20 + +|{rh-openstack} integration into the {cluster-capi-operator} +|Technology Preview +|Technology Preview +|Technology Preview + +|Control plane with `rootVolumes` and `etcd` on local disk +|General Availability +|General Availability +|General Availability + +|Hosted control planes on {rh-openstack} 17.1 +|Not Available +|Technology Preview +|Technology Preview +|==== + + +[id="ocp-release-notes-scalability-tech-preview_{context}"] +== Scalability and performance Technology Preview features + +.Scalability and performance Technology Preview tracker +[cols="4,1,1,1",options="header"] +|==== +|Feature |4.18 |4.19 |4.20 + +|{factory-prestaging-tool} +|Technology Preview +|Technology Preview +|Technology Preview + +|Hyperthreading-aware CPU manager policy +|Technology Preview +|Technology Preview +|Technology Preview + +|Mount namespace encapsulation +|Technology Preview +|Technology Preview +|Technology Preview + +|Node Observability Operator +|Technology Preview +|Technology Preview +|Technology Preview + +|Increasing the etcd database size +|Technology Preview +|Technology Preview +|Technology Preview + +|Using {rh-rhacm} `PolicyGenerator` resources to manage {ztp} cluster policies +|Technology Preview +|General Availability +|General Availability + +|Pinned Image Sets +|Technology Preview +|Technology Preview +|Technology Preview + +|Configuring NUMA-aware scheduler replicas and high availability +|Not available +|Not available +|Technology Preview +|==== + + +//[id="ocp-release-notes-special-hardware-tech-preview_{context}"] +//== Specialized hardware and driver enablement Technology Preview features + +//.Specialized hardware and driver enablement Technology Preview tracker +//[cols="4,1,1,1",options="header"] +//|==== +//|Feature |4.18 |4.19 |4.20 +//|==== + + +[id="ocp-release-notes-storage-tech-preview_{context}"] +== Storage Technology Preview features + +.Storage Technology Preview tracker +[cols="4,1,1,1",options="header"] +|==== +|Feature |4.18 |4.19 |4.20 + +|AWS EFS One Zone volume +|Not Available +|Not Available +|General Availability + +|Automatic device discovery and provisioning with Local Storage Operator +|Technology Preview +|Technology Preview +|Technology Preview + +|Azure File CSI snapshot support +|Technology Preview +|Technology Preview +|Technology Preview + +|Azure File cross-subscription support +|Not Available +|General Availability +|General Availability + +|Azure Disk performance plus +|Not Available +|Not Available +|General Availability + +|Configuring fsGroupChangePolicy per namespace +|Not Available +|Not Available +|General Availability + +|Shared Resources CSI Driver in OpenShift Builds +|Technology Preview +|Technology Preview +|Technology Preview + +|{secrets-store-operator} +|General Availability +|General Availability +|General Availability + +|CIFS/SMB CSI Driver Operator +|General Availability +|General Availability +|General Availability + +|VMware vSphere multiple vCenter support +|General Availability +|General Availability +|General Availability + +|Disabling/enabling storage on vSphere +|Technology Preview +|General Availability +|General Availability + +|Increasing max number of volumes per node for vSphere +|Not Available +|Technology Preview +|Technology Preview + +|RWX/RWO SELinux mount option +|Developer Preview 
+|Developer Preview +|Technology Preview + +|Migrating CNS Volumes Between Datastores +|Developer Preview +|General Availability +|General Availability + +|CSI volume group snapshots +|Technology Preview +|Technology Preview +|Technology Preview + +|GCP PD supports C3/N4 instance types and hyperdisk-balanced disks +|General Availability +|General Availability +|General Availability + +|OpenStack Manila support for CSI resize +|General Availability +|General Availability +|General Availability + +|Volume Attribute Classes +|Not Available +|Technology Preview +|Technology Preview + +|Volume populators +|Technology Preview +|Technology Preview +|General Availability +|==== + + +[id="ocp-release-notes-web-console-tech-preview_{context}"] +== Web console Technology Preview features + +.Web console Technology Preview tracker +[cols="4,1,1,1",options="header"] +|==== +|Feature |4.18 |4.19 |4.20 + +|{ols-official} in the {product-title} web console +|Technology Preview +|Technology Preview +|Technology Preview +|==== diff --git a/release_notes/ocp-4-20-release-notes.adoc b/release_notes/ocp-4-20-release-notes.adoc index c711ebcba7cd..0237220404df 100644 --- a/release_notes/ocp-4-20-release-notes.adoc +++ b/release_notes/ocp-4-20-release-notes.adoc @@ -10,3001 +10,61 @@ Red{nbsp}Hat {product-title} provides developers and IT organizations with a hyb Built on {op-system-base-full} and Kubernetes, {product-title} provides a more secure and scalable multitenant operating system for today's enterprise-class applications, while delivering integrated application runtimes and libraries. {product-title} enables organizations to meet security, privacy, compliance, and governance requirements. -[id="ocp-4-20-about-this-release_{context}"] -== About this release +// About this release +include::modules/rn-about-release.adoc[leveloffset=+1] -// TODO: Update with the relevant information closer to release. -{product-title} (link:https://access.redhat.com/errata/RHSA-2025:9562[RHSA-2025:9562]) is now available. This release uses link:https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.33.md[Kubernetes 1.33] with CRI-O runtime. New features, changes, and known issues that pertain to {product-title} {product-version} are included in this topic. +// {product-title} layered and dependent component support and compatibility +include::modules/rn-compatibility.adoc[leveloffset=+1] -{product-title} {product-version} clusters are available at https://console.redhat.com/openshift. From the {hybrid-console}, you can deploy {product-title} clusters to either on-premises or cloud environments. +// New features and enhancements +// Categories are L2; individual items are DL:: +include::modules/rn-new-features.adoc[leveloffset=+1] -You must use {op-system} machines for the control plane and for the compute machines. -//Removed the note per https://issues.redhat.com/browse/GRPA-3517 -//Removed paragraph about the RHEL package because mode workers are removed from 4.19, per Scott Dodson -//Even-numbered release lifecycle verbiage (Comment in for even-numbered releases) +// Notable technical changes +// individual items are DL:: +include::modules/rn-notable-changes.adoc[leveloffset=+1] -Starting from {product-title} 4.14, the Extended Update Support (EUS) phase for even-numbered releases increases the total available lifecycle to 24 months on all supported architectures, including `x86_64`, 64-bit ARM (`aarch64`), {ibm-power-name} (`ppc64le`), and {ibm-z-name} (`s390x`) architectures. 
Beyond this, Red{nbsp}Hat also offers a 12-month additional EUS add-on, denoted as _Additional EUS Term 2_, that extends the total available lifecycle from 24 months to 36 months. The Additional EUS Term 2 is available on all architecture variants of {product-title}. For more information about support for all versions, see the link:https://access.redhat.com/support/policy/updates/openshift[Red Hat {product-title} Life Cycle Policy]. +// Deprecated and removed features +// Categories are L2 +// individual tables stay as is +include::modules/rn-deprecated-removed-tables.adoc[leveloffset=+1] -//Odd-numbered release lifecycle verbiage (Comment in for odd-numbered releases) -//// -The support lifecycle for odd-numbered releases, such as {product-title} {product-version}, on all supported architectures, including `x86_64`, 64-bit ARM (`aarch64`), {ibm-power-name} (`ppc64le`), and {ibm-z-name} (`s390x`) architectures is 18 months. For more information about support for all versions, see the link:https://access.redhat.com/support/policy/updates/openshift[Red Hat {product-title} Life Cycle Policy]. +// Deprecated features +// individual items are DL:: +// leveloffset +2 +include::modules/rn-deprecated-features.adoc[leveloffset=+2] -Commencing with the {product-title} 4.14 release, Red{nbsp}Hat is simplifying the administration and management of Red{nbsp}Hat shipped cluster Operators with the introduction of three new life cycle classifications; Platform Aligned, Platform Agnostic, and Rolling Stream. These life cycle classifications provide additional ease and transparency for cluster administrators to understand the life cycle policies of each Operator and form cluster maintenance and upgrade plans with predictable support boundaries. For more information, see link:https://access.redhat.com/webassets/avalon/j/includes/session/scribe/?redirectTo=https%3A%2F%2Faccess.redhat.com%2Fsupport%2Fpolicy%2Fupdates%2Fopenshift_operators[OpenShift Operator Life Cycles]. -//// +// Removed features +// individual items are DL:: +// leveloffset +2 +include::modules/rn-removed-features.adoc[leveloffset=+2] -// Added in 4.14. Language came directly from Kirsten Newcomer. -{product-title} is designed for FIPS. When running {op-system-base-full} or {op-system-first} booted in FIPS mode, {product-title} core components use the {op-system-base} cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the `x86_64`, `ppc64le`, and `s390x` architectures. +// Bug fixes +// Categories are L2 +// individual items stay bullets +include::modules/rn-bug-fixes.adoc[leveloffset=+1] -For more information about the NIST validation program, see link:https://csrc.nist.gov/Projects/cryptographic-module-validation-program/validated-modules[Cryptographic Module Validation Program]. For the latest NIST status for the individual versions of {op-system-base} cryptographic libraries that have been submitted for validation, see link:https://access.redhat.com/articles/2918071#fips-140-2-and-fips-140-3-2[Compliance Activities and Government Standards]. +// Technology Preview features status +// Categories are L2 +// individual tables stay as is +include::modules/rn-technology-preview.adoc[leveloffset=+1] -[id="ocp-4-20-add-on-support-status_{context}"] -== {product-title} layered and dependent component support and compatibility -The scope of support for layered and dependent components of {product-title} changes independently of the {product-title} version. 
To determine the current support status and compatibility for an add-on, refer to its release notes. For more information, see the link:https://access.redhat.com/support/policy/updates/openshift[Red Hat {product-title} Life Cycle Policy]. +// Known issues +// individual items stay bullets +include::modules/rn-known-issues.adoc[leveloffset=+1] -[id="ocp-4-20-new-features-and-enhancements_{context}"] -== New features and enhancements +// Asynchronous errata updates +// Each release is L2 +// TODO: Could either make each subheading underneath a DL::, or add them each as separate modules +// This example does DL:: +include::modules/rn-async-errata.adoc[leveloffset=+1] -This release adds improvements related to the following components and concepts: +// Include each new z-stream as separate module with leveloffset +2 +// Headings adjusted down a level -[id="ocp-release-notes-api_{context}"] -=== API server +// RHSA-2025:19003 - {product-title} {product-version}.1 image release, bug fix, and security update advisory +include::modules/rn-ocp-4-20-1.adoc[leveloffset=+2] -==== Extended loopback certificate validity to three years for kube-apiserver - -Before this update, the self-signed loopback certificate for the Kubernetes API Server expired after one year. With this release, the expiration date of the certificate is extended to three years. - -==== Dry-run option is connected to 'oc delete istag' - -Before this update, deleting an `istag` resource with the `--dry-run=server` option unintentionally caused actual deletion of the image from the server. This unexpected deletion occurred due to the `dry-run` option being implemented incorrectly in the `oc delete istag` command. With this release, the `dry-run` option is wired to the `oc delete istag` command. As a result, the accidental deletion of image objects is prevented and the `istag` object remains intact when using the `--dry-run=server` option. - -//[id="ocp-release-notes-auth_{context}"] -//=== Authentication and authorization - -[id="ocp-release-notes-service-interruptions_{context}"] -==== No service interruptions for certificate-related issues - -With this update, self-signed loopback certificates in API servers are prevented from expiring, and ensures a stable and secure connection within Kubernetes 4.16.z. This enhancement backports a solution from a newer version, cherry-picks a specific pull request and applies it to the selected version. This reduces the likelihood of service interruptions due to certificate-related issues, providing a more reliable user experience in Kubernetes 4.16.z deployments. - -[id="ocp-release-notes-communication-flows_{context}"] -==== Enhanced communication matrix for TCP ports - -With this update, the communication flows matrix for {product-title} is enhanced. The feature automatically generates services for open ports 17697 (TCP) and 6080 (TCP) on the primary node, and ensures that all open ports have corresponding endpoint slices. This results in accurate and up-to-date communication flows matrixes, improves the overall security and efficiency of the communication matrix, and provides a more comprehensive and reliable communication matrix for users. 
- -//[id="ocp-release-notes-auth_{context}"] -//=== Authentication and authorization - -//[id="ocp-release-notes-documentation_{context}"] -//=== Documentation - -[id="ocp-release-notes-edge-computing_{context}"] -=== Edge computing - -[id="ocp-release-edge-computing-networkpolicy-support-for-lvms_{context}"] -==== NetworkPolicy support for the {lvms} Operator - -The {lvms} Operator now applies Kubernetes `NetworkPolicy` objects during installation to restrict network communication to only the required components. This feature enforces default network isolation for {lvms} deployments on {product-title} clusters. - -[id="ocp-release-edge-computing-hostname-label-for-pv_{context}"] -==== Support for hostname labelling for persistent volumes created by using the {lvms} Operator - -When you create a persistent volume (PV) by using the {lvms} Operator, the PV now includes the `kubernetes.io/hostname` label. This label shows which node the PV is located on, making it easier to identify the node associated with a workload. This change only applies to newly created PVs. Existing PVs are not modified. - -[id="ocp-release-edge-computing-default-namespace_{context}"] -==== Default namespace for the {lvms} Operator - -The default namespace for the {lvms} Operator is now `openshift-lvm-storage`. You can still install {lvms} in a custom namespace. - -[id="ocp-release-edge-computing-default-clusterinstance_{context}"] -==== SiteConfig CR to ClusterInstance CR migration tool - -{product-title} {product-version} introduces the `siteconfig-converter` tool to help migrate managed clusters from using a `SiteConfig` custom resource (CR) to a `ClusterInstance` CR. Using a `SiteConfig` CR to define a managed cluster is deprecated and will be removed in a future release. The `ClusterInstance` CR provides a more unified and generic approach to defining clusters and is the preferred method for managing cluster deployments in the {ztp} workflow. - -Using the `siteconfig-converter` tool, you can convert `SiteConfig` CRs to `ClusterInstance` CRs and then incrementally migrate one or more clusters at a time. Existing and new pipelines run in parallel, so you can migrate clusters in a controlled, phased manner and without downtime. - -[NOTE] -==== -The `siteconfig-converter` tool does not convert SiteConfig CRs that use the deprecated `spec.clusters.extraManifestPath` field. -==== - -For more information, see xref:../edge_computing/ztp-migrate-clusterinstance.adoc#ztp-migrate-clusterinstance[Migrating from SiteConfig CRs to ClusterInstance CRs]. - -[id="ocp-release-notes-etcd_{context}"] -=== etcd - -With this update, the Cluster etcd Operator introduces alert levels for the `etcdDatabaseQuotaLowSpace` alert, offering administrators timely notifications about low etcd quota usage. This proactive alert system aims to prevent API server instability and allows for effective resource management in managed OpenShift clusters. The alert levels are `info`, `warning`, and `critical`, providing a more granular approach to monitoring etcd quota usage, which results in dynamic etcd quota management and improved overall cluster performance. - -[id="ocp-4-20-edge-computing-arbiter-node_{context}"] -==== Configuring a local arbiter node - -You can configure an {product-title} cluster with two control plane nodes and one local arbiter node to retain high availability (HA) while reducing infrastructure costs for your cluster. 
- -A local arbiter node is a lower-cost, co-located machine that participates in control plane quorum decisions. Unlike a standard control plane node, the arbiter node does not run the full set of control plane services. You can use this configuration to maintain HA in your cluster with only two fully provisioned control plane nodes instead of three. - -This feature is now Generally Available. - -For more information, see xref:../installing/installing_with_agent_based_installer/installing-with-agent-based-installer.adoc#installing-ocp-agent-local-arbiter-node_installing-with-agent-based-installer[Configuring a local arbiter node]. - -[id="ocp-4-20-edge-computing-two-node-fencing_{context}"] -==== Configuring a two-node OpenShift cluster with fencing (Technology Preview) - -A two-node OpenShift cluster with fencing provides high availability (HA) with a reduced hardware footprint. This configuration is designed for distributed or edge environments where deploying a full three-node control plane cluster is not practical. - -A two-node cluster does not include compute nodes. The two control plane machines run user workloads in addition to managing the cluster. - -[NOTE] -==== -You can deploy a two-node OpenShift cluster with fencing by using either the user-provisioned infrastructure method or the installer-provisioned infrastructure method. -==== - -For more information, see xref:../installing/installing_two_node_cluster/installing_tnf/installing-two-node-fencing.adoc#installing-two-node-fencing_installing-two-node-fencing[Preparing to install a two-node OpenShift cluster with fencing]. - -[id="ocp-release-notes-extensions_{context}"] -=== Extensions ({olmv1}) - -[id="ocp-release-notes-extensions-webhooks-tp_{context}"] -==== Deploying cluster extensions that use webhooks (Technology Preview) - -With this release, you can deploy cluster extensions that use webhooks on clusters with the `TechPreviewNoUpgrade` feature set enabled. - -For more information, see xref:../extensions/ce/managing-ce.adoc#olmv1-supported-extensions_managing-ce[Supported extensions]. - -[id="ocp-release-notes-hcp_{context}"] -=== Hosted control planes - -include::snippets/hcp-snippet.adoc[] - -// Because {hcp} releases asynchronously from {product-title}, it has its own release notes. For more information, see xref:../hosted_control_planes/hosted-control-planes-release-notes.adoc#hosted-control-planes-release-notes[{hcp-capital} release notes]. - -[id="ocp-release-notes-ibm-power_{context}"] -=== {ibm-power-title} - -The {ibm-power-name} release on {product-title} {product-version} adds improvements and new capabilities to {product-title} components. - -This release introduces support for the following features on {ibm-power-title}: - -* Enable accelerators on {ibm-power-name} - -[id="ocp-release-notes-ibm-z_{context}"] -=== {ibm-z-title} and {ibm-linuxone-title} - -The {ibm-z-name} and {ibm-linuxone-name} release on {product-title} {product-version} adds improvements and new capabilities to {product-title} components. - -This release introduces support for the following features on {ibm-z-name} and {ibm-linuxone-name}: - -* Enable accelerators on {ibm-z-name} - -[id="ocp-release-notes-ibm-z-power-support-matrix_{context}"] -=== {ibm-power-title}, {ibm-z-title}, and {ibm-linuxone-title} support matrix - -Starting in {product-title} 4.14, Extended Update Support (EUS) is extended to the {ibm-power-name} and the {ibm-z-name} platform. 
For more information, see the link:https://access.redhat.com/support/policy/updates/openshift-eus[OpenShift EUS Overview]. - -.CSI Volumes -[cols="2,1,1",options="header"] -|==== -|Feature |{ibm-power-name} |{ibm-z-name} and {ibm-linuxone-name} - -|Cloning -|Supported -|Supported - -|Expansion -|Supported -|Supported - -|Snapshot -|Supported -|Supported -|==== - -.Multus CNI plugins -[cols="2,1,1",options="header"] -|==== -|Feature |{ibm-power-name} |{ibm-z-name} and {ibm-linuxone-name} - -|Bridge -|Supported -|Supported - -|Host-device -|Supported -|Supported - -|IPAM -|Supported -|Supported - -|IPVLAN -|Supported -|Supported -|==== - -.{product-title} features -[cols="3,1,1",options="header"] -|==== -|Feature |{ibm-power-name} |{ibm-z-name} and {ibm-linuxone-name} - -|Adding compute nodes to on-premise clusters using {oc-first} -|Supported -|Supported - -|Alternate authentication providers -|Supported -|Supported - -|Agent-based Installer -|Supported -|Supported - -|Assisted Installer -|Supported -|Supported - -|Automatic Device Discovery with Local Storage Operator -|Unsupported -|Supported - -|Automatic repair of damaged machines with machine health checking -|Unsupported -|Unsupported - -|Cloud controller manager for {ibm-cloud-name} -|Supported -|Unsupported - -|Controlling overcommit and managing container density on nodes -|Unsupported -|Unsupported - -|CPU manager -|Supported -|Supported - -|Cron jobs -|Supported -|Supported - -|Descheduler -|Supported -|Supported - -|Egress IP -|Supported -|Supported - -|Encrypting data stored in etcd -|Supported -|Supported - -|FIPS cryptography -|Supported -|Supported - -|Helm -|Supported -|Supported - -|Horizontal pod autoscaling -|Supported -|Supported - -|Hosted control planes -|Supported -|Supported - -|IBM Secure Execution -|Unsupported -|Supported - -|Installer-provisioned Infrastructure Enablement for {ibm-power-server-name} -|Supported -|Unsupported - -|Installing on a single node -|Supported -|Supported - -|IPv6 -|Supported -|Supported - -|Monitoring for user-defined projects -|Supported -|Supported - -|Multi-architecture compute nodes -|Supported -|Supported - -|Multi-architecture control plane -|Supported -|Supported - -|Multipathing -|Supported -|Supported - -|Network-Bound Disk Encryption - External Tang Server -|Supported -|Supported - -|Non-volatile memory express drives (NVMe) -|Supported -|Unsupported - -|nx-gzip for Power10 (Hardware Acceleration) -|Supported -|Unsupported - -|oc-mirror plugin -|Supported -|Supported - -|OpenShift CLI (`oc`) plugins -|Supported -|Supported - -|Operator API -|Supported -|Supported - -|OpenShift Virtualization -|Unsupported -|Supported - -|OVN-Kubernetes, including IPsec encryption -|Supported -|Supported - -|PodDisruptionBudget -|Supported -|Supported - -|Precision Time Protocol (PTP) hardware -|Unsupported -|Unsupported - -|{openshift-local-productname} -|Unsupported -|Unsupported - -|Scheduler profiles -|Supported -|Supported - -|Secure Boot -|Unsupported -|Supported - -|Stream Control Transmission Protocol (SCTP) -|Supported -|Supported - -|Support for multiple network interfaces -|Supported -|Supported - -|The `openshift-install` utility to support various SMT levels on {ibm-power-name} (Hardware Acceleration) -|Supported -|Unsupported - -|Three-node cluster support -|Supported -|Supported - -|Topology Manager -|Supported -|Unsupported - -|z/VM Emulated FBA devices on SCSI disks -|Unsupported -|Supported - -|4K FCP block device -|Supported -|Supported -|==== - -.Operators 
-[cols="2,1,1",options="header"] -|==== -|Feature |{ibm-power-name} |{ibm-z-name} and {ibm-linuxone-name} - -|{cert-manager-operator} -|Supported -|Supported - -|Cluster Logging Operator -|Supported -|Supported - -|Cluster Resource Override Operator -|Supported -|Supported - -|Compliance Operator -|Supported -|Supported - -|Cost Management Metrics Operator -|Supported -|Supported - -|File Integrity Operator -|Supported -|Supported - -|HyperShift Operator -|Supported -|Supported - -|{ibm-power-server-name} Block CSI Driver Operator -|Supported -|Unsupported - -|Ingress Node Firewall Operator -|Supported -|Supported - -|Local Storage Operator -|Supported -|Supported - -|MetalLB Operator -|Supported -|Supported - -|Network Observability Operator -|Supported -|Supported - -|NFD Operator -|Supported -|Supported - -|NMState Operator -|Supported -|Supported - -|OpenShift Elasticsearch Operator -|Supported -|Supported - -|Vertical Pod Autoscaler Operator -|Supported -|Supported -|==== - -.Persistent storage options -[cols="2,1,1",options="header"] -|==== -|Feature |{ibm-power-name} |{ibm-z-name} and {ibm-linuxone-name} -|Persistent storage using iSCSI -|Supported ^[1]^ -|Supported ^[1]^,^[2]^ - -|Persistent storage using local volumes (LSO) -|Supported ^[1]^ -|Supported ^[1]^,^[2]^ - -|Persistent storage using hostPath -|Supported ^[1]^ -|Supported ^[1]^,^[2]^ - -|Persistent storage using Fibre Channel -|Supported ^[1]^ -|Supported ^[1]^,^[2]^ - -|Persistent storage using Raw Block -|Supported ^[1]^ -|Supported ^[1]^,^[2]^ - -|Persistent storage using EDEV/FBA -|Supported ^[1]^ -|Supported ^[1]^,^[2]^ -|==== -[.small] --- -1. Persistent shared storage must be provisioned by using either {rh-storage-first} or other supported storage protocols. -2. Persistent non-shared storage must be provisioned by using local storage, such as iSCSI, FC, or by using LSO with DASD, FCP, or EDEV/FBA. --- - -[id="ocp-release-notes-insights-operator-enhancements_{context}"] -=== Insights Operator - -[id="ocp-insights-operator-logging-update_{context}"] -==== Support for obtaining `virt-launcher` logs across the cluster - -With this release, command line logs from `virt-launcher` pods can be collected across a Kubernetes cluster. JSON-encoded logs are saved at the path `namespaces//pods//virt-launcher.json`, which facilitates troubleshooting and debugging of virtual machines. - -[id="ocp-release-notes-installation-and-update_{context}"] -=== Installation and update - -[id="ocp-4-20-install-update-cvo-log-levels_{context}"] -==== Changing the CVO log level (Technology Preview) - -With this release, the Cluster Version Operator (CVO) log level verbosity can be changed by the cluster administrator. - -For more information, see xref:../updating/troubleshooting_updates/gathering-data-cluster-update.adoc#changing-log-data_gathering-data-cluster-update[Changing CVO log level]. - -[id="ocp-4-20-installation-and-update-vsphere-multiple-nics_{context}"] -==== Installing a cluster on {vmw-full} with multiple network interface controllers (Generally Available) - -{product-title} 4.18 enabled you to install a {vmw-full} cluster with multiple network interface controllers (NICs) for a node as a Technology Preview feature. This feature is now Generally Available. - -For more information, see xref:../installing/installing_vsphere/ipi/installing-vsphere-installer-provisioned-customizations.adoc#installation-vsphere-multiple-nics_installing-vsphere-installer-provisioned-customizations[Configuring multiple NICs]. 
- -For an existing {vmw-short} cluster, you can add multiple subnets by using xref:../machine_management/creating_machinesets/creating-machineset-vsphere.adoc#machineset-vsphere-multiple-nics_creating-machineset-vsphere[compute machine sets]. - -[id="ocp-release-notes-installation-gcp-xpn-dns-zones_{context}"] -==== Installing a cluster on {gcp-full} into a shared VPC specifying a DNS private zone in a third project - -With this release, you can specify the location of a DNS private zone when installing a cluster on {gcp-short} into a shared VPC. The private zone can be located in a service project that is distinct from the host project or main service project. - -For more information, see xref:../installing/installing_gcp/installation-config-parameters-gcp.adoc#installation-configuration-parameters-additional-gcp_installation-config-parameters-gcp[Additional {gcp-short} configuration parameters]. - -[id="ocp-release-notes-installation-azure-encrypted-vnet_{context}"] -==== Installing a cluster on {azure-full} with virtual network encryption - -With this release, you can install a cluster on {azure-short} using encrypted virtual networks. You are required to use {azure-short} virtual machines that have the `premiumIO` parameter set to `true`. See Microsoft's documentation about link:https://learn.microsoft.com/en-us/azure/virtual-network/how-to-create-encryption?tabs=portal[Creating a virtual network with encryption] and link:https://learn.microsoft.com/en-us/azure/virtual-network/virtual-network-encryption-overview#requirements[Requirements and Limitations] for more information. - -[id="ocp-release-notes-installation-firewall-updates_{context}"] -==== Firewall requirements when installing a cluster that uses {ibm-title} Cloud Paks - -With this release, if you install a cluster using {ibm-title} Cloud Paks, you must allow outbound access to `icr.io` and `cp.icr.io` on port 443. This access is required for {ibm-title} Cloud Pak container images. For more information, see xref:../installing/install_config/configuring-firewall.adoc#configuring-firewall[Configuring your firewall]. - -[id="ocp-release-notes-installation-azure-confidential-vms_{context}"] -==== Installing a cluster on {azure-full} using Intel TDX Confidential VMs - -With this release, you can install a cluster on {azure-short} using Intel-based Confidential VMs. The following machine sizes are now supported: - -* DCesv5-series -* DCedsv5-series -* ECesv5-series -* ECedsv5-series - -For more information, see xref:../installing/installing_azure/ipi/installing-azure-customizations.adoc#installation-azure-confidential-vms_installing-azure-customizations[Enabling confidential VMs]. - -[id="ocp-release-notes-installation-azure-dedicated-disk-etcd_{context}"] -==== Dedicated disk for etcd on {azure-full} (Technology Preview) - -With this release, you can install your {product-title} cluster on {azure-short} with a dedicated data disk for `etcd`. This configuration attaches a separate managed disk to each control plane node and uses it only for `etcd` data, which can improve cluster performance and stability. This feature is available as a Technology Preview. For more information, see xref:../installing/installing_azure/ipi/installing-azure-customizations.adoc#installation-azure-dedicated-disks_installing-azure-customizations[Configuring a dedicated disk for etcd]. 
- -[id="ocp-release-notes-installation-bm-multiarch_{context}"] -==== Multi-architecture support for bare metal -With this release, you can install a bare-metal environment that supports multi-architecture capabilities. You can provision both `x86_64` and `aarch64` architectures from an existing `x86_64` cluster by using virtual media, meaning you can manage a diverse hardware environment more efficiently. - -For more information, see xref:../post_installation_configuration/configuring-multi-arch-compute-machines/multi-architecture-configuration.adoc#configuring-your-cluster-with-multi-architecture-compute-machines[Configuring your cluster with multi-architecture compute machines]. - -[id="ocp-release-notes-installation-nic-firmware-updates_{context}"] -==== Support for updating the host firmware components of NICs for bare metal -With this release, the `HostFirmwareComponents` resource for bare metal describes network interface controllers (NICs). To update NIC host firmware components, the server must support Redfish and must permit you to use Redfish to update NIC firmware. - -For more information, see xref:../installing/installing_bare_metal/bare-metal-postinstallation-configuration.adoc#bmo-about-the-hostfirmwarecomponents-resource_bare-metal-postinstallation-configuration[About the HostFirmwareComponents resource]. - -[id="ocp-4-20-admin-ack-updating_{context}"] -==== Required administrator acknowledgment when updating from {product-title} 4.19 to 4.20 - -In {product-title} 4.17, a previously xref:../release_notes/ocp-4-20-release-notes.adoc#ocp-4-20-removed-kube-apis_release-notes[removed Kubernetes API] was inadvertently reintroduced. It has been removed again in {product-title} 4.20. - -Before a cluster can be updated from {product-title} 4.19 to 4.20, a cluster administrator must manually provide acknowledgment. This safeguard helps to prevent update issues that could occur if workloads, tools, or other components still depend on the Kubernetes API that has been removed in {product-title} 4.20. - -Administrators must take the following actions before proceeding with the cluster update: - -. Evaluate the cluster for the use of APIs that will be removed. -. Migrate the affected manifests, workloads, and API clients to use the supported API version. -. Provide the administrator acknowledgment that all necessary updates have been made. - -All {product-title} 4.19 clusters require this administrator acknowledgment before they can be updated to {product-title} 4.20. - -For more information, see xref:../updating/preparing_for_updates/updating-cluster-prepare.adoc#kube-api-removals_updating-cluster-prepare[Kubernetes API removals]. - -[id="ocp-release-notes-using-UUIDs-for-Transit-Gateway-and-VPC_{context}"] -==== Using UUIDs for a Transit Gateway and Virtual Private Cloud (VPC) - -Previously, when installing a cluster on {ibm-power-server-title}, you could only specify a name for an existing Transit Gateway or Virtual Private Cloud (VPC). As the uniqueness of names was not guaranteed, this could cause conflicts and installation failures. With this release, you can use Universally Unique Identifiers (UUIDs) for a Transit Gateway and VPC. By using unique identifiers, the installation program can unambiguously identify the correct Transit Gateway or VPC. This prevents the naming conflicts and the issue is resolved. 
- -[id="ocp-release-notes-machine-config-operator_{context}"] -=== Machine Config Operator - -[id="ocp-release-notes-machine-config-operator-boot_{context}"] -==== Updated boot images for vSphere now supported (Technology Preview) - -Updated boot images is now supported as a Technology Preview feature for {vmw-first} clusters. This feature allows you configure your cluster to update the node boot image whenever you update your cluster. By default, the boot image in your cluster is not updated along with your cluster. For more information, see xref:../machine_configuration/mco-update-boot-images.adoc#mco-update-boot-images[Updated boot images]. - -[id="ocp-release-notes-machine-config-operator-ocl-ga_{context}"] -==== {image-mode-os-on-caps} reboot improvements - -The following machine configuration changes no longer cause a reboot of nodes with on-cluster custom layered images: - -* Modifying the configuration files in the `/var` or `/etc` directory -* Adding or modifying a systemd service -* Changing SSH keys -* Removing mirroring rules from `ICSP`, `ITMS`, and `IDMS` objects -* Changing the trusted CA, by updating the `user-ca-bundle` configmap in the `openshift-config` namespace - -For more information, see xref:../machine_configuration/mco-coreos-layering.adoc#coreos-layering-configuring-on-limitations_mco-coreos-layering[On-cluster image mode known limitations]. - -[id="ocp-release-notes-machine-config-operator-boot-image_{context}"] -==== {image-mode-os-on-caps} status reporting improvements - -When {image-mode-os-lower} is configured, there are improvements to error reporting including the following changes: - -* In certain scenarios after the custom layered image has been built and pushed, errors could cause the build process to fail. If this happens, the MCO now reports the errors and the `machineosbuild` object and builder pod are reported as failed. - -* The `oc describe mcp` output has a new `ImageBuildDegraded` status field that reports if a custom layered image build has failed. - -[id="ocp-release-notes-machine-config-operator-cert-changes_{context}"] -==== Setting the kernel type parameter is now supported on {image-mode-os-on-lower} nodes - -You can now use the `kernelType` parameter in a `MachineConfig` object on nodes with on-cluster custom layered images in order to install a realtime kernel on the node. Previously, on nodes with on-cluster custom layered images the `kernelType` parameter was ignored. For information, see xref:../machine_configuration/machine-configs-configure.adoc#nodes-nodes-rtkernel-arguments_machine-configs-configure[Adding a real-time kernel to nodes]. - -[id="ocp-release-notes-machine-config-operator-pin_{context}"] -==== Pinning images to nodes - -In clusters with slow, unreliable connections to an image registry, you can use a `PinnedImageSet` object to pull the images in advance, before they are needed, then associate those images with a machine config pool. This ensures that the images are available to the nodes in that pool when needed. The `must-gather` for the Machine Config Operator includes all `PinnedImageSet` objects in the cluster. For more information, see xref:../machine_configuration/machine-config-pin-preload-images-about.adoc#machine-config-pin-preload-images_machine-config-operator[Pinning images to nodes]. 
- -[id="ocp-release-notes-machine-config-operator-mcn_{context}"] -==== Improved MCO state reporting is now generally available - -The machine config nodes custom resource, which you can use to monitor the progress of machine configuration updates to nodes, is now generally available. - -You can now view the status of updates to custom machine config pools in addition to the control plane and worker pools. The functionality for the feature has not changed. However, some of the information in the command output and in the status fields in the `MachineConfigNode` object has been updated. The `must-gather` for the Machine Config Operator now includes all `MachineConfigNodes` objects in the cluster. For more information, see xref:../machine_configuration/index.adoc#checking-mco-node-status_machine-config-overview[About checking machine config node status]. - -[id="ocp-release-notes-auth-hostmount-anyuid-v2-scc_{context}"] -==== Enabling direct - -This release includes a new security context constraint (SCC), named `hostmount-anyuid-v2`. This SCC provides the same features as the `hostmount-anyuid` SCC, but contains `seLinuxContext: RunAsAny`. This SCC was added because the `hostmount-anyuid` SCC was intended to allow trusted pods to access any paths on the host, but SELinux prevents containers from accessing most paths. The `hostmount-anyuid-v2` allows host file system access as any UID, including UID 0, and is intended to be used instead of the `privileged` SCC. Grant with caution. - -[id="ocp-release-notes-machine-management_{context}"] -=== Machine management - -[id="ocp-4-20-capi-aws-capacity-preferences_{context}"] -==== Additional {aws-short} Capacity Reservation configuration options - -On clusters that manage machines with the Cluster API, you can specify additional constraints to determine whether your compute machines use {aws-short} capacity reservations. For more information, see xref:../machine_management/cluster_api_machine_management/cluster_api_provider_configurations/cluster-api-config-options-aws.adoc#machine-feature-agnostic-capacity-reservation_cluster-api-config-options-aws[Capacity Reservation configuration options]. - -[id="ocp-release-notes-machine-management-ca-scale-up_{context}"] -==== Cluster autoscaler scale up delay - -You can now configure a delay before the cluster autoscaler recognizes newly pending pods and schedules the pods to a new node by using the `spec.scaleUp.newPodScaleUpDelay` parameter in the `ClusterAutoscaler` CR. If the node remains unscheduled after the delay, the cluster autoscaler can scale up a new node. This delay gives the cluster autoscaler additional time to locate an appropriate node or it can wait for space on an existing pod to become available. For more information, see xref:../machine_management/applying-autoscaling.adoc#configuring-clusterautoscaler_applying-autoscaling[Configuring the cluster autoscaler]. 
- -[id="ocp-4-20-release-notes-monitoring_{context}"] -=== Monitoring - -The in-cluster monitoring stack for this release includes the following new and modified features: - -[id="ocp-4-20-monitoring-updates-to-monitoring-stack-components-and-dependencies"] -==== Updates to monitoring stack components and dependencies - -This release includes the following version updates for in-cluster monitoring stack components and dependencies: - -* Prometheus to 3.5.0 -* Prometheus Operator to 0.85.0 -* Metrics Server to 0.8.0 -* Thanos to 0.39.2 -* kube-state-metrics agent to 2.16.0 -* prom-label-proxy to 0.12.0 - -[id="ocp-4-20-monitoring-changes-to-alerting-rules"] -==== Changes to alerting rules - -[NOTE] -==== -Red{nbsp}Hat does not guarantee backward compatibility for recording rules or alerting rules. -==== - -* The expression for the `AlertmanagerClusterFailedToSendAlerts` alert has changed. The alert now evaluates the rate over a longer time period, from `5m` to `15m`. - -[id="ocp-4-20-monitoring-support-log-verbosity-for-metrics-server"] -==== Support log verbosity configuration for Metrics Server - -With this release, you can configure log verbosity for Metrics Server. You can set a numeric verbosity level to control the amount of logged information, where higher numbers increase the logging detail. - -For more information, see xref:../observability/monitoring/configuring-core-platform-monitoring/storing-and-recording-data.adoc#setting-log-levels-for-monitoring-components_storing-and-recording-data[Setting log levels for monitoring components]. - -[id="ocp-release-notes-networking_{context}"] -=== Networking - -[id="ocp-4-20-networking-gateway-api-ossm-version-bump_{context}"] -==== Support for Gateway API Inference Extension - -{product-title} {product-version} updates {SMProductName} to version 3.1.0, which now supports {rhoai-full}. This version update incorporates essential CVE fixes, resolves other bugs, and upgrades Istio to version 1.26.2 for improved security and performance. See the link:https://docs.redhat.com/en/documentation/red_hat_openshift_service_mesh/3.1/html/release_notes/ossm-release-notes[{SMProductShortName} 3.1.0 release notes] for more information. - -[id="ocp-4-20-support-for-bgp-routing-protocol_{context}"] -==== Support for the BGP routing protocol - -The Cluster Network Operator (CNO) now supports enabling Border Gateway Protocol (BGP) routing. With BGP, you can import and export routes to the underlying provider network and use multi-homing, link redundancy, and fast convergence. BGP configuration is managed with the `FRRConfiguration` custom resource (CR). - -When upgrading from an earlier version of {product-title} in which you installed the MetalLB Operator, you must manually migrate your custom frr-k8s configurations from the `metallb-system` namespace to the `openshift-frr-k8s` namespace. To move these CRs, enter the following commands: - -. To create the `openshift-frr-k8s` namespace, enter the following command: -+ -[source,terminal] ----- -$ oc create namespace openshift-frr-k8s ----- - -. To automate the migration, create a `migrate.sh` file with the following content: -+ -[source,bash] ----- -#!/bin/bash -OLD_NAMESPACE="metallb-system" -NEW_NAMESPACE="openshift-frr-k8s" -FILTER_OUT="metallb-" -oc get frrconfigurations.frrk8s.metallb.io -n "${OLD_NAMESPACE}" -o json |\ - jq -r '.items[] | select(.metadata.name | test("'"${FILTER_OUT}"'") | not)' |\ - jq -r '.metadata.namespace = "'"${NEW_NAMESPACE}"'"' |\ - oc create -f - ----- - -. 
To run the migration script, enter the following command: -+ -[source,terminal] ----- -$ bash migrate.sh ----- - -. To verify that the migration succeeded, enter the following command: -+ -[source,terminal] ----- -$ oc get frrconfigurations.frrk8s.metallb.io -n openshift-frr-k8s ----- - -After the migration is complete, you can remove the `FRR-K8s` custom resources from the `metallb-system` namespace. - -For more information, see xref:../networking/advanced_networking/bgp_routing/about-bgp-routing.adoc#about-bgp-routing[About BGP routing]. - -[id="ocp-4-20-support-for-route-advertisements-cudns-with-bgp_{context}"] -==== Support for route advertisements for cluster user-defined networks (CUDNs) with Border Gateway Protocol (BGP) - -With route advertisements enabled, the OVN-Kubernetes network plugin supports the direct advertisement of routes for pods and services associated with cluster user-defined networks (CUDNs) to the provider network. This feature enables some of the following benefits: - -- Learns routes to pods dynamically -- Advertises routes dynamically -- Enables layer 3 notifications of EgressIP failovers in addition to the layer 2 ones based on gratuitous ARPs. -- Supports external route reflectors, which reduces the number of BGP connections required in large networks - -For more information, see xref:../networking/advanced_networking/route_advertisements/about-route-advertisements.adoc#about-route-advertisements[About route advertisements]. - -[id="ocp-4-20-tech-preview-PreconfiguredUDNAddresses_{context}"] -==== Preconfigured user-defined network endpoints only for use with {mtv-first} (Technology Preview) - -Preconfigured user-defined network endpoints is available as a Technology Preview and controlled by the feature gate, `PreconfiguredUDNAddresses`. You can now explicitly control the overlay network configuration including: IP address, MAC address, and default gateway. This feature is available for Layer 2 as part of the `ClusterUserDefinedNetwork` (CUDN) custom resource (CR). Administrators can preconfigure endpoints to migrate KubeVirt virtual machines (VMs) without disruption. To enable the feature use the new fields, `reservedSubnets`, `infrastructureSubnets`, and `defaultGatewayIPs`, found in the CUDN CR. For more information about the configurations, see xref:../networking/multiple_networks/primary_networks/about-user-defined-networks.adoc#nw-udn-additional-config-details_about-user-defined-networks[Additional configuration details for user-defined networks]. Currently, static IP addresses are only supported for the `ClusterUserDefinedNetworks` CR and only for use with {mtv-short}. - -[id="ocp-4-20-migration-configure-ovs-nmstate_{context}"] -==== Support for migrating a configured br-ex bridge to NMState - -If you used the `configure-ovs.sh` shell script to set a `br-ex` bridge during cluster installation, you can migrate the `br-ex` bridge to NMState as a postinstallation task. For more information, see xref:../installing/installing_bare_metal/bare-metal-postinstallation-configuration.adoc#migrating-br-ex-bridge-nmstate_bare-metal-postinstallation-configuration[Migrating a configured br-ex bridge to NMState]. - -[id="ocp-release-notes-ptp-logging-config_{context}"] -==== Configuring enhanced PTP logging - -You can now configure enhanced log reduction for the PTP Operator to reduce the volume of logs generated by the `linuxptp-daemon`. - -This feature provides a periodic summary of filtered logs, which is not available with basic log reduction. 
-
-For more information, see xref:../networking/advanced_networking/ptp/configuring-ptp.adoc#cnf-configuring-enhanced-log-reduction-for-linuxptp_configuring-ptp[Configuring enhanced PTP logging].
-
-[id="ocp-4-20-networking-arm-dual-oc_{context}"]
-==== PTP ordinary clocks with added redundancy on AArch64 nodes (Technology Preview)
-
-With this release, you can configure PTP ordinary clocks with added redundancy on AArch64 architecture nodes that use the following dual-port NICs only:
-
-* NVIDIA ConnectX-7 series
-* NVIDIA BlueField-3 series, in NIC mode
-
-This feature is available as a Technology Preview. For more information, see xref:../networking/advanced_networking/ptp/about-ptp.adoc#ptp-dual-ports-oc_about-ptp[Using dual-port NICs to improve redundancy for PTP ordinary clocks].
-
-[id="ocp-release-notes-bond-cni-load-balancing_{context}"]
-==== Load balancing configuration with bond CNI plugin (Technology Preview)
-
-With this release, you can specify the transmit hash policy for load balancing across the aggregated interfaces by setting the `xmitHashPolicy` field in the bond CNI plugin configuration. This feature is available as a Technology Preview.
-
-For more information, see xref:../networking/multiple_networks/secondary_networks/creating-secondary-nwt-other-cni.adoc#nw-multus-bond-cni-object_configuring-additional-network-cni[Configuration for a Bond CNI secondary network].
-
-[id="ocp-4-20-networking-namespaced-sriov-app-owners_{context}"]
-==== SR-IOV network management in application namespaces
-
-With {product-title} {product-version}, you can now create and manage SR-IOV networks directly within your application namespaces. This new feature provides greater control over your network configurations and helps simplify your workflow.
-
-Previously, creating an SR-IOV network required a cluster administrator to configure it for you. Now, you can manage these resources directly in your own namespace, which offers several key benefits:
-
-* Increased autonomy and control: You can now create your own `SriovNetwork` objects, removing the need to involve a cluster administrator for network configuration tasks.
-
-* Enhanced security: Managing resources within your own namespace improves security by providing better separation between applications and helps prevent unintentional misconfigurations.
-
-* Simplified permissions: You can now simplify permissions and reduce operational overhead by using namespaced SR-IOV networks.
-
-For more information, see xref:../networking/hardware_networks/configuring-namespaced-sriov-resources.adoc#configuring-namespaced-sriov-resources[Configuring namespaced SR-IOV resources].
-
-[id="ocp-4-20-unnumbered-bgp-peering_{context}"]
-==== Unnumbered BGP peering
-
-With this release, {product-title} includes unnumbered BGP peering, which was previously available as a Technology Preview feature.
-You can use the `spec.interface` field of the BGP peer custom resource to configure unnumbered BGP peering.
-
-For more information, see xref:../networking/ingress_load_balancing/metallb/metallb-frr-k8s.adoc#nw-metallb-frrconfiguration-crd-interface[Configuring the integration of MetalLB and FRR-K8s].
-
-[id="ocp-4-20-networking-pfrs-operator_{context}"]
-==== High availability for pod-level bonding on SR-IOV networks (Technology Preview)
-
-This Technology Preview feature introduces the PF Status Relay Operator. The Operator uses Link Aggregation Control Protocol (LACP) as a health check to detect upstream switch failures, enabling high availability for workloads that use pod-level bonding with SR-IOV network virtual functions (VFs).
-
-Without this feature, an upstream switch can fail while the underlying physical function (PF) still reports an `up` state. VFs attached to the PF also remain up, causing pods to send traffic to a dead endpoint and leading to packet loss.
-
-The PF Status Relay Operator prevents this by monitoring the LACP status of the PF. When a failure is detected, the Operator forces the link state of the attached VFs down, triggering the pod's bond to fail over to a backup path. This ensures the workload remains available and minimizes packet loss.
-
-For more information, see xref:../networking/hardware_networks/configure-lacp-for-sriov.adoc#sriov-lacp-sriov[High availability for pod-level bonds on SR-IOV networks].
-
-[id="ocp-4-20-network-policies_{context}"]
-==== Network policies for additional namespaces
-With this release, {product-title} deploys Kubernetes network policies to additional system namespaces to control ingress and egress traffic. Future releases might include network policies for more system namespaces and Red{nbsp}Hat Operators.
-
-[id="ocp-4-20-ptp-holdover_{context}"]
-==== Unassisted holdover for PTP devices (Technology Preview)
-With this release, the PTP Operator provides unassisted holdover as a Technology Preview feature. When the upstream timing signal is lost, the PTP Operator automatically places PTP devices configured as either a boundary clock or a time slave clock into holdover mode. Automatic placement into holdover mode helps to maintain a continuous and stable time source for cluster nodes, minimizing time synchronization disruptions.
-
-[NOTE]
-====
-This feature is available only for nodes with Intel E810-XXVDA4T network interface cards.
-====
-
-For more information, see xref:../networking/advanced_networking/ptp/configuring-ptp.adoc#configuring-ptp[Configuring PTP devices].
-
-[id="ocp-release-notes-nodes_{context}"]
-=== Nodes
-
-[id="ocp-release-notes-machine-config-operator-sigtore_{context}"]
-==== sigstore support is now generally available
-
-Support for sigstore `ClusterImagePolicy` and `ImagePolicy` objects is now generally available. The API version is now `config.openshift.io/v1`. For more information, see xref:../nodes/nodes-sigstore-using.adoc#nodes-sigstore-using[Manage secure signatures with sigstore].
-
-[NOTE]
-====
-The default `openshift` cluster image policy is Technology Preview and is active only in clusters that have enabled Technology Preview features.
-====
-
-[id="ocp-release-notes-machine-config-operator-sigtore-pki_{context}"]
-==== Support for sigstore bring your own PKI (BYOPKI) image validation
-
-You can now use sigstore `ClusterImagePolicy` and `ImagePolicy` objects to generate BYOPKI configuration in the `policy.json` file, which enables you to verify image signatures with link:https://developers.redhat.com/articles/2025/09/08/verify-cosign-bring-your-own-pki-signature-openshift?source=sso#configure_openshift_for_pki_verification[BYOPKI]. For more information, see xref:../nodes/nodes-sigstore-using.adoc#nodes-sigstore-configure-parameters_nodes-sigstore-using[About cluster and image policy parameters].
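-
-As a rough sketch only, based on the parameters described in the linked article, a `ClusterImagePolicy` object for BYOPKI verification might look like the following; the scope, certificate data, and certificate subject are placeholders:
-
-[source,yaml]
-----
-apiVersion: config.openshift.io/v1
-kind: ClusterImagePolicy
-metadata:
-  name: example-byopki
-spec:
-  scopes:
-  - quay.io/example # placeholder registry scope to verify
-  policy:
-    rootOfTrust:
-      policyType: PKI
-      pki:
-        caRootsData: <base64-encoded-CA-root-certificates> # placeholder
-        pkiCertificateSubject:
-          email: security@example.com # placeholder signing identity
-----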
- -[id="ocp-release-notes-machine-config-operator-namespace_{context}"] -==== Linux user namespace support is now generally available - -Support for deploying pods and containers into Linux user namespaces is now generally available and enabled by default. Running pods and containers in individual user namespaces can mitigate several vulnerabilities that a compromised container can pose to other pods and the node itself. This change also includes two new security context constraints, `restricted-v3` and `nested-container`, that are specifically designed for use with user namespaces. You can also configure the `/proc` file system in pods as `unmasked`. For more information, see xref:../nodes/pods/nodes-pods-user-namespaces.adoc#nodes-pods-user-namespaces[Running pods in Linux user namespaces]. - -[id="ocp-release-notes-machine-config-operator-in-place_{context}"] -==== Adjust pod resource levels without pod disruption - -By using the in-place pod resizing feature, you can apply a resize policy to change the CPU and memory resources for containers within a running pod without re-creating or restarting the pod. For more information, see xref:../nodes/pods/nodes-pods-adjust-resources-in-place.adoc#nodes-pods-adjust-resources-in-place[Manually adjust pod resource levels]. - -[id="ocp-release-notes-machine-config-operator-mount-oci_{context}"] -==== Mounting an OCI image into a pod - -You can you use an image volume to mount an Open Container Initiative (OCI)-compliant container image or artifact directly into a pod. For more information, see xref:../nodes/pods/nodes-pods-image-volume.adoc#odes-pods-image-volume[Mounting an OCI image into a pod]. - -[id="ocp-release-notes-machine-config-operator-allocate-gpu_{context}"] -==== Allocating specific GPUs to pods (Technology Preview) - -You can now enable pods to request GPUs based on specific device attributes, such as product name, GPU memory capacity, compute capability, vendor name, and driver version. These attributes are exposed by the by using a third-party DRA resource driver that you install. For more information, see xref:../nodes/pods/nodes-pods-allocate-dra.adoc#nodes-pods-allocate-dra[Allocating GPUs to pods]. - -[id="ocp-release-notes-openshift-cli_{context}"] -=== OpenShift CLI (oc) - -[id="ocp-oc-adm-upgrade-recommend_{context}"] -==== Introducing the oc adm upgrade recommend command (General Availability) - -Formerly Technology Preview and now Generally Available, the `oc adm upgrade recommend` command allows system administrators to perform a pre-update check on their {product-title} clusters using the command line interface (CLI). The pre-update check helps identify potential issues, enabling users to address them before initiating an update. By running the precheck command and inspecting the output, users can prepare for updating their cluster and make informed decisions about when to start an update. - -For more information, see xref:../updating/updating_a_cluster/updating-cluster-cli.adoc#update-upgrading-cli[Updating a cluster by using the CLI]. - -[id="ocp-oc-adm-upgrade-status_{context}"] -==== Introducing the oc adm upgrade status command (General Availability) - -Formerly Technology Preview and now Generally Available, the `oc adm upgrade status` command allows cluster administrators to get high-level summary information about the state of their {product-title} cluster update using the command line interface (CLI). 
-
-The command is not currently supported on Hosted Control Plane (HCP) clusters.
-
-For more information, see xref:../updating/updating_a_cluster/updating-cluster-cli.adoc#update-upgrading-cli[Updating a cluster by using the CLI].
-
-[id="ocp-release-notes-openshift-cli-mirror-environment-var-imagepaths_{context}"]
-==== oc-mirror v2 mirrors container images in environment variables of deployment templates
-
-Operand images, dynamically deployed by Operator controllers at runtime, are typically referenced by environment variables within the controller’s deployment template.
-
-Before {product-title} 4.20, the oc-mirror plugin v2 attempted to mirror every value in these environment variables, including non-image references such as log levels, which caused failures. With this update, the plugin identifies and mirrors only the container images referenced in these environment variables.
-
-For more information, see xref:../disconnected/about-installing-oc-mirror-v2.adoc#oc-mirror-imageset-config-parameters-v2_about-installing-oc-mirror-v2[ImageSet configuration parameters for oc-mirror plugin v2].
-
-[id="ocp-release-notes-osdk_{context}"]
-=== Operator development
-
-[id="ocp-release-notes-osdk-base-images_{context}"]
-==== Supported Operator base images
-
-include::snippets/osdk-release-notes-operator-images.adoc[]
-
-[id="ocp-release-notes-olm_{context}"]
-=== Operator lifecycle
-
-[id="ocp-release-notes-olm-operatorhub-rename_{context}"]
-==== Red{nbsp}Hat Operator catalogs moved from OperatorHub to the software catalog in the console
-
-With this release, the Red{nbsp}Hat-provided Operator catalogs have moved from OperatorHub to the software catalog and the *Operators* navigation item is renamed to *Ecosystem* in the console. The unified software catalog presents Operators, Helm charts, and other installable content in the same console view.
-
-* To access the Red{nbsp}Hat-provided Operator catalogs in the console, select *Ecosystem* -> *Software Catalog*.
-* To manage, update, and remove installed Operators, select *Ecosystem* -> *Installed Operators*.
-
-[NOTE]
-====
-Currently, the console only supports managing Operators by using {olmv0-first}. If you want to use {olmv1} to install and manage cluster extensions, such as Operators, you must use the CLI.
-====
-
-To manage the default or custom catalog sources, you still interact with the `OperatorHub` custom resource (CR) in the console or CLI.
-
-[id="ocp-release-notes-postinstallation-configuration_{context}"]
-=== Postinstallation configuration
-
-[id="ocp-release-notes-enabling-sts-existing-cluster_{context}"]
-==== Enabling {aws-full} {sts-first} on an existing cluster
-
-With this release, you can configure your {aws-short} {product-title} cluster to use {sts-short} even if you did not do so during installation.
-
-For more information, see xref:../post_installation_configuration/changing-cloud-credentials-configuration.adoc#enabling-aws-sts-existing-cluster_changing-cloud-credentials-configuration[Enabling AWS Security Token Service (STS) on an existing cluster].
-
-[id="ocp-release-notes-rhcos_{context}"]
-=== {op-system-first}
-
-[id="ocp-4-20-kdump-ga-support"]
-==== Investigate kernel crashes with kdump (General Availability)
-
-With this update, `kdump` is now Generally Available for all supported architectures, including `x86_64`, `arm64`, `s390x`, and `ppc64le`. This enhancement enables users to diagnose and resolve kernel problems more efficiently.
-
-[id="ocp-4-20-coreos-ignition-2-20_{context}"]
-==== Ignition update to version 2.20.0
-
-{op-system} introduces version 2.20.0 of Ignition. This enhancement supports partitioning disks with mounted partitions by using the `partx` utility, which is now included with `dracut` module installations. Additionally, this update adds support for Proxmox Virtual Environment.
-
-[id="ocp-4-20-coreos-butane-0-23_{context}"]
-==== Butane update to version 0.23.0
-
-{op-system} now includes Butane version 0.23.0.
-
-[id="ocp-4-20-coreos-rust-afterburn-5-7_{context}"]
-==== Afterburn update to version 5.7.0
-
-{op-system} now includes Afterburn version 5.7.0. This update adds support for Proxmox Virtual Environment.
-
-[id="ocp-4-20-coreos-installer-update_{context}"]
-==== `coreos-installer` update to version 0.23.0
-
-With this release, the `coreos-installer` utility is updated to version 0.23.0.
-
-[id="ocp-release-notes-scalability-and-performance_{context}"]
-=== Scalability and performance
-
-[id="ocp-release-notes-numa-resources-operator-replicas_{context}"]
-==== Configuring NUMA-aware scheduler replicas and high availability (Technology Preview)
-
-In {product-title} {product-version}, the NUMA Resources Operator enables high availability (HA) mode by default. In this mode, the NUMA Resources Operator creates one scheduler replica for each control-plane node in the cluster to ensure redundancy. This default behavior occurs if the `spec.replicas` field is not specified in the `NUMAResourcesScheduler` custom resource. Alternatively, you can explicitly set a specific number of scheduler replicas to override the default HA behavior, or disable the scheduler entirely by setting the `spec.replicas` field to `0`. The maximum number of replicas is 3, even if the number of control plane nodes exceeds 3.
-
-For more information, see xref:../scalability_and_performance/cnf-numa-aware-scheduling.adoc#cnf-managing-ha-nrop_numa-aware[Managing high availability (HA) for the NUMA-aware scheduler].
-
-[id="ocp-release-notes-numa-resources-operator-schedulable-control-planes_{context}"]
-==== NUMA Resources Operator now supports schedulable control plane nodes
-
-With this release, the NUMA Resources Operator can now manage control plane nodes that are configured as schedulable. This capability allows you to deploy topology-aware workloads on control plane nodes, which is especially useful in resource-constrained environments such as compact clusters.
-
-This enhancement helps the NUMA Resources Operator schedule your NUMA-aware pods on the node with the most suitable NUMA topology, even on control plane nodes.
-
-For more information, see xref:../scalability_and_performance/cnf-numa-aware-scheduling.adoc#cnf-numa-resource-operator-support-scheduling-cp_numa-aware[NUMA Resources Operator support for schedulable control-plane nodes].
-
-[id="ocp-4-20-receive-packet-steering-disabled_{context}"]
-==== Receive Packet Steering (RPS) is now disabled by default
-
-With this release, Receive Packet Steering (RPS) is no longer configured when a performance profile is applied. The RPS configuration affects containers that perform networking system calls, such as `send`, directly within latency-sensitive threads. To avoid latency impacts when RPS is not configured, move networking calls to helper threads or processes.
-
-The previous RPS configuration resolved latency issues at the expense of overall pod kernel networking performance. The current default configuration promotes transparency by requiring developers to address the underlying application design instead of obscuring performance impacts.
-
-To revert to the previous behavior, add the `performance.openshift.io/enable-rps` annotation to the `PerformanceProfile` manifest:
-
-[source,yaml]
-----
-apiVersion: performance.openshift.io/v2
-kind: PerformanceProfile
-metadata:
-  name: example-performanceprofile
-  annotations:
-    performance.openshift.io/enable-rps: "enable"
-----
-
-[NOTE]
-====
-This action restores the prior functionality at the cost of globally reducing networking performance for all pods.
-====
-
-[id="ocp-release-notes-intel-sierra-forest-support_{context}"]
-==== Performance tuning for worker nodes with Intel Sierra Forest CPUs
-
-With this release, you can use the `PerformanceProfile` custom resource to configure worker nodes on machines equipped with Intel Sierra Forest CPUs. These CPUs are supported when configured with a single NUMA domain (NPS=1).
-
-[id="ocp-release-notes-amd-turin-support_{context}"]
-==== Performance tuning for worker nodes with AMD Turin CPUs
-
-With this release, you can use the `PerformanceProfile` custom resource to configure worker nodes on machines equipped with AMD Turin CPUs. These CPUs are fully supported when configured with a single NUMA domain (NPS=1).
-
-[id="ocp-release-notes-hitless-tls-certificate-rotation_{context}"]
-==== Hitless TLS certificate rotation for the Kubernetes API
-
-This new feature enhances TLS certificate rotations in {product-title}, helping to ensure the expected 95% cluster availability. It is particularly beneficial for high-transaction-rate clusters and {sno} deployments, ensuring seamless operation even under heavy loads.
-
-[id="ocp-release-notes-additional-cluster-requirements-for-etcd_{context}"]
-==== Additional cluster latency requirements for etcd
-
-With this update, the etcd product documentation includes additional requirements for reducing {product-title} cluster latency, clarifying the prerequisites and setup procedures for using etcd. In addition, etcd now supports Transport Layer Security (TLS) 1.3, which enhances security and performance for data transmission and enables etcd to comply with the latest security standards, reducing potential vulnerabilities. The improved encryption ensures more secure communication between etcd and its clients. For more information, see xref:../etcd/etcd-practices.adoc#recommended-cluster-latency-etcd_etcd-practices[Cluster latency requirements for etcd].
-
-//[id="ocp-release-notes-security_{context}"]
-//=== Security
-
-[id="ocp-release-notes-storage_{context}"]
-=== Storage
-
-[id="release-notes-sscsi-network-policies_{context}"]
-==== NetworkPolicy support for the {secrets-store-operator}
-
-The {secrets-store-operator} version 4.20 is now based on the upstream v1.5.2 release. The {secrets-store-operator} now applies Kubernetes `NetworkPolicy` objects during installation to restrict network communication to only the required components.
-
-[id="ocp-release-notes-storage-vol-populators_{context}"]
-==== Volume populators are generally available
-
-The volume populators feature allows you to create pre-populated volumes.
-
-{product-title} 4.20 introduces a new `dataSourceRef` field for volume populator functionality that expands the objects that can be used as a data source for pre-population of volumes, from only persistent volume claims (PVCs) and snapshots to any appropriate custom resource (CR).
-
-{product-title} now ships `volume-data-source-validator`, which reports events on PVCs that use a volume populator without a corresponding `VolumePopulator` instance. Previous {product-title} versions did not require `VolumePopulator` instances, so if you are upgrading from 4.12 or later, you might receive events about unregistered populators. If you previously installed `volume-data-source-validator` yourself, you can remove your version.
-
-The volume populators feature, which was introduced in {product-title} 4.12 as a Technology Preview feature, is now supported as generally available.
-
-Volume population is enabled by default. However, {product-title} does not ship with any volume populators.
-
-For more information about volume populators, see xref:../storage/container_storage_interface/persistent-storage-csi-vol-populators.adoc[Volume populators].
-
-[id="ocp-release-notes-storage-performance-plus-azure_{context}"]
-==== Performance plus for Azure Disk is generally available
-
-By enabling performance plus, the input/output operations per second (IOPS) and throughput limits can be increased for the following types of disks that are 513 GiB and larger:
-
-* Azure Premium solid-state drives (SSD)
-
-* Standard SSDs
-
-* Standard hard disk drives (HDD)
-
-This feature is generally available in {product-title} 4.20.
-
-For more information about performance plus, see xref:../storage/container_storage_interface/persistent-storage-csi-azure.adoc#performance-plus-for-azure-disk[Performance plus for Azure Disk].
-
-[id="ocp-release-notes-storage-change-block-tracking_{context}"]
-==== Changed block tracking (Developer Preview)
-
-Changed block tracking enables efficient and incremental backups and disaster recovery for persistent volumes (PVs) managed by Container Storage Interface (CSI) drivers that support this feature.
-
-Changed block tracking allows consumers to request a list of blocks that have changed between two snapshots, which is useful for backup solution vendors. By backing up only changed blocks, rather than entire volumes, backup processes are more efficient.
-
-:FeatureName: Changed block tracking
-include::snippets/developer-preview.adoc[]
-
-For more information about changed block tracking, see this link:https://access.redhat.com/solutions/7131061[KB article].
-
-[id="ocp-release-notes-storage-efs-zonal-vol-support_{context}"]
-==== AWS EFS One Zone volume support is generally available
-
-{product-title} 4.20 introduces AWS Elastic File System (EFS) One Zone volume support as generally available. With this feature, if file system Domain Name System (DNS) resolution fails, the EFS CSI driver can fall back to mount targets. A mount target serves as a network endpoint that allows AWS EC2 instances or other AWS compute instances within a Virtual Private Cloud (VPC) to connect to, and mount, an EFS file system.
-
-For more information about One Zone, see xref:../storage/container_storage_interface/persistent-storage-csi-aws-efs.adoc#one-zone-file-systems[Support for One Zone].
- -[id="ocp-release-notes-storage-storage-security-policy_{context}"] -==== Configuring fsGroupChangePolicy and seLinuxChangePolicy at namespace and pod level - -Certain operations of a volume can cause pod startup delays, which might cause pod timeouts. - -*fsGroup:* -For volumes with many files, pod startup timeouts can occur because, by default, {product-title} recursively changes ownership and permissions for the contents of each volume to match the `fsGroup` specified in a pod’s `securityContext` when that volume is mounted. This can be time consuming, slowing pod startup. You can use the `fsGroupChangePolicy` parameter inside a `securityContext` to control the way that {product-title} checks and manages ownership and permissions for a volume. - -Changing this parameter at the pod level was introduced in {product-title} 4.10. In 4.20, you can set this parameter at the namespace level, in addition to the pod level, as a generally available feature. - -*SELinux:* -SELinux (Security-Enhanced Linux) is a security mechanism that assigns security labels (contexts) to all objects (files, processes, network ports, etc.) on a system. These labels determine what a process can access. When a pod starts, the container runtime recursively relabels all files on a volume to match a pod’s SELinux context. For volumes with a lot of files, this can significantly increase pod startup times. Mount option specifies avoiding recursive relabeling of all files by attempting to mount the volume with the correct SELinux label directly using the -o context mount option, thus helping to avoid pod timeout problems. - -*RWOP and SELinux mount option:* -ReadWriteOncePod (RWOP) persistent volumes use the SELinux mount feature by default. Mount option was introduced in {product-title} 4.15 as a Technology Preview feature, and became generally available in 4.16. - -*RWO and RWX and SELinux mount option:* -ReadWriteOnce (RWO) and ReadWriteMany (RWX) volumes use recursive relabeling by default. Mount option for RWO/RWX was introduced in {product-title} 4.17 as a Developer Preview feature, but is now supported in 4.20 as a Technology Preview feature. - -[IMPORTANT] -==== -In a future {product-title} version, RWO and RWX volumes will use mount option by default. -==== - -To assist you with the upcoming move to the mount option default, {product-title} 4.20 reports SELinux-related conflicts when creating pods, and on running pods, to make you aware of potential conflicts, and to help you resolve them. For more information about this reporting, see this link:https://access.redhat.com/solutions/7131398[KB article]. - -If you are unable to resolve the SELinux-related conflicts, you can proactively opt-out of the future move to mount option as default for selected pods or namespaces. - -In {product-title} 4.20, you can evaluate the mount option feature for RWO and RWX volumes as a Technology Preview feature. - -:FeatureName: RWO/RWX SELinux mount -include::snippets/technology-preview.adoc[] - -For more information about fsGroup, see xref:../storage/understanding-persistent-storage.adoc#using_fsGroup_overview_understanding-persistent-storage[Reducing pod timeouts using fsGroup]. - -For more information about SELinux, see xref:../storage/understanding-persistent-storage.adoc#using_selinuxChangePolicy_overview_understanding-persistent-storage[Reducing pod timeouts using seLinuxChangePolicy]. 
- - -[id="ocp-release-notes-storage-honor-vol-reclaim-policy_{context}"] -==== Always honor persistent volume reclaim policy is generally available - -Before {product-title} 4.18, the persistent volume (PV) reclaim policy was not always applied. - -For a bound PV and persistent volume claim (PVC) pair, the ordering of PV-PVC deletion determined whether the PV delete reclaim policy was applied or not. The PV applied the reclaim policy if the PVC was deleted before deleting the PV. However, if the PV was deleted before deleting the PVC, then the reclaim policy was not applied. As a result of that behavior, the associated storage asset in the external infrastructure was not removed. - -Starting with {product-title} 4.18, the PV reclaim policy is consistently always applied as a Technical Preview feature. With {product-title} 4.20, this feature is generally available. - -For more information, see xref:../storage/understanding-persistent-storage.adoc#reclaiming_understanding-persistent-storage[Reclaim policy for persistent volumes]. - -[id="ocp-release-notes-storage-csi-manila-multiple-cdir_{context}"] -==== Manila CSI driver allows multiple CIDRs when creating NFS volumes is generally available - -By default, {product-title} creates Manila storage classes that provide access to all IPv4 clients, with the possibility of updating it to a single IP address or subnet. In {product-title} 4.20, you can limit client access by defining custom storage classes that use multiple client IP addresses or subnets by using the `nfs-ShareClient` parameter. - -This feature is generally available in {product-title} 4.20. - -For more information, see xref:../storage/container_storage_interface/persistent-storage-csi-manila.adoc#persistent-storage-csi-manila-share-access-rules_persistent-storage-csi-manila[Customizing Manila share access rules]. - -[id="ocp-release-notes-storage-csi-aws-efs-cross-account-procedure-rewrite_{context}"] -==== AWS EFS cross account procedure revision - -To enhance usability and provide both Security Token Service (STS) and non-STS support, the Amazone Web Serivces (AWS) Elastic File Service (EFS) cross account support procedure has been revised. - -To view the revised procedure, see xref:../storage/container_storage_interface/persistent-storage-csi-aws-efs.adoc#persistent-storage-csi-efs-cross-account_persistent-storage-csi-aws-efs[AWS EFS cross account support]. - -[id="ocp-release-notes-web-console_{context}"] -=== Web console - -==== Support for custom application icons in the Import flow - -Before this update, the *Container image* form flow provided only a limited set of predefined icons for applications. - -With this update, you can add custom icons when you import applications through the *Container image* form. For existing applications, apply the `app.openshift.io/custom-icon` annotation to add a custom icon to the corresponding *Topology* node. - -As a result, you can better identify applications in the *Topology* view and organize your projects more clearly. - -[id="ocp-release-notable-technical-changes_{context}"] -== Notable technical changes - -[id="notable-technical-changes-mosc-naming_{context}"] -=== MachineOSConfig naming changes - -The name of the `MachineOSConfig` object used with {image-mode-os-on-lower} must now be the same as the machine config pool where you want to deploy the custom layered image. Previously, you could use any name. This change was made to prevent attempts to use multiple `MachineOSConfig` objects with each machine config pool. 
- -[id="ocp-4-20-oc-mirror-v2-verify-creds_{context}"] -=== oc-mirror plugin v2 verifies credentials and certificates before mirroring operations - -With this update, the oc-mirror plugin v2 now verifies information such as registry credentials, DNS name, and SSL certificates before populating the cache and beginning mirroring operations. -This prevents users from discovering certain problems only after the cache is populated and mirroring has begun. - -[id="vmw-7-vcf-4-eogs_{context}"] -=== {vmw-full} 7 and VMware Cloud Foundation 4 end of general support - -Broadcom has ended general support for {vmw-full} 7 and VMware Cloud Foundation (VCF) 4. If your existing {product-title} cluster is running on either of these platforms, you must plan to migrate or upgrade your VMware infrastructure to a supported version. {product-title} supports installation on {vmw-short} 8 Update 1 or later, or VCF 5 or later. - -[id="ocp-release-deprecated-removed-features_{context}"] -== Deprecated and removed features - - -[id="ocp-release-note-images-dep-rem_{context}"] -=== Images deprecated and removed features - -.Images deprecated and removed tracker -[cols="4,1,1,1",options="header"] -|==== -|Feature |4.18 |4.19 |4.20 - -|Cluster Samples Operator -|Deprecated -|Deprecated -|Deprecated -|==== - - -[id="ocp-release-note-install-dep-rem_{context}"] -=== Installation deprecated and removed features - -.Installation deprecated and removed tracker -[cols="4,1,1,1",options="header"] -|==== -|Feature |4.18 |4.19 |4.20 - -|`--cloud` parameter for `oc adm release extract` -|Deprecated -|Deprecated -|Deprecated - -|CoreDNS wildcard queries for the `cluster.local` domain -|Deprecated -|Deprecated -|Deprecated - -|`compute.platform.openstack.rootVolume.type` for {rh-openstack} -|Deprecated -|Deprecated -|Deprecated - -|`controlPlane.platform.openstack.rootVolume.type` for {rh-openstack} -|Deprecated -|Deprecated -|Deprecated - -|`ingressVIP` and `apiVIP` settings in the `install-config.yaml` file for installer-provisioned infrastructure clusters -|Deprecated -|Deprecated -|Deprecated - -|Package-based {op-system-base} compute machines -|Deprecated -|Removed -|Removed - -|`platform.aws.preserveBootstrapIgnition` parameter for {aws-first} -|Deprecated -|Deprecated -|Deprecated - -|Installing a cluster on {aws-short} with compute nodes in {aws-short} Outposts -|Deprecated -|Deprecated -|Deprecated -|==== - -// No deprecated or removed features for 3 consecutive releases -// -// [id="ocp-release-note-monitoring-dep-rem_{context}"] -// === Monitoring deprecated and removed features - -// .Monitoring deprecated and removed tracker -// [cols="4,1,1,1",options="header"] -// |==== -// |Feature |4.18 |4.19 |4.20 -// |==== - -[id="ocp-release-note-machine-manage-dep-rem_{context}"] -=== Machine Management deprecated and removed features - -.Machine management deprecated and removed tracker -[cols="4,1,1,1",options="header"] -|==== -|Feature |4.18 |4.19|4.20 - -|Confidential Computing with AMD Secure Encrypted Virtualization for {gcp-first} -|General Availability -|General Availability -|Deprecated -|==== - -[id="ocp-release-note-networking-dep-rem_{context}"] -=== Networking deprecated and removed features - -.Networking deprecated and removed tracker -[cols="4,1,1,1",options="header"] -|==== -|Feature |4.18 |4.19 |4.20 - -|iptables -|Deprecated -|Deprecated -|Deprecated - -|==== - - -[id="ocp-release-note-node-dep-rem_{context}"] -=== Node deprecated and removed features - -.Node deprecated and removed tracker 
-[cols="4,1,1,1",options="header"] -|==== -|Feature |4.18 |4.19 |4.20 - -|`ImageContentSourcePolicy` (ICSP) objects -|Deprecated -|Deprecated -|Deprecated - -|Kubernetes topology label `failure-domain.beta.kubernetes.io/zone` -|Deprecated -|Deprecated -|Deprecated - -|Kubernetes topology label `failure-domain.beta.kubernetes.io/region` -|Deprecated -|Deprecated -|Deprecated - -|cgroup v1 -|Deprecated -|Removed -|Removed -|==== - - -[id="ocp-release-note-cli-dep-rem_{context}"] -=== OpenShift CLI (oc) deprecated and removed features - -.OpenShift CLI (oc) deprecated and removed tracker -[cols="4,1,1,1",options="header"] -|==== -|Feature |4.18 |4.19|4.20 - -|oc-mirror plugin v1 -|Deprecated -|Deprecated -|Deprecated - -|Docker v2 registries -|General Availability -|General Availability -|Deprecated -|==== - - -[id="ocp-release-note-operators-dep-rem_{context}"] -=== Operator lifecycle and development deprecated and removed features - -// "Operator lifecycle" refers to OLMv0 and "development" refers to Operator SDK - -.Operator lifecycle and development deprecated and removed tracker -[cols="4,1,1,1",options="header"] -|==== -|Feature |4.18 |4.19 |4.20 - -|Operator SDK -|Deprecated -|Removed -|Removed - -|Scaffolding tools for Ansible-based Operator projects -|Deprecated -|Removed -|Removed - -|Scaffolding tools for Helm-based Operator projects -|Deprecated -|Removed -|Removed - -|Scaffolding tools for Go-based Operator projects -|Deprecated -|Removed -|Removed - -|Scaffolding tools for Hybrid Helm-based Operator projects -|Removed -|Removed -|Removed - -|Scaffolding tools for Java-based Operator projects -|Removed -|Removed -|Removed - -// Do not remove the SQLite database... entry until otherwise directed by the Operator Framework PM -|SQLite database format for Operator catalogs -|Deprecated -|Deprecated -|Deprecated -|==== - - -//[id="ocp-hardware-an-driver-dep-rem_{context}"] -//=== Specialized hardware and driver enablement deprecated and removed features - -//.Specialized hardware and driver enablement deprecated and removed tracker -//[cols="4,1,1,1",options="header"] -//|==== -//|Feature |4.18 |4.19 |4.20 -//|==== - - -=== Storage deprecated and removed features - -.Storage deprecated and removed tracker -[cols="4,1,1,1",options="header"] -|==== -|Feature |4.18 |4.19 |4.20 - -|Shared Resources CSI Driver Operator -|Removed -|Removed -|Removed -|==== - - -//[id="ocp-clusters-dep-rem_{context}"] -//=== Updating clusters deprecated and removed features - -//.Updating clusters deprecated and removed tracker -//[cols="4,1,1,1",options="header"] -//|==== -//|Feature |4.18 |4.19 |4.20 -//|==== - - -[id="ocp-release-note-web-console-dep-rem_{context}"] -=== Web console deprecated and removed features - -.Web console deprecated and removed tracker -[cols="4,1,1,1",options="header"] -|==== -|Feature |4.18 |4.19 |4.20 - -|`useModal` hook for dynamic plugin SDK -|General Availability -|Deprecated -|Deprecated - -|Patternfly 4 -|Deprecated -|Removed -|Removed - -|==== - - -[id="ocp-release-note-workloads-dep-rem_{context}"] -=== Workloads deprecated and removed features - -.Workloads deprecated and removed tracker -[cols="4,1,1,1",options="header"] -|==== -|Feature |4.18 |4.19 |4.20 - -|`DeploymentConfig` objects -|Deprecated -|Deprecated -|Deprecated -|==== - -[id="ocp-release-deprecated-features_{context}"] -=== Deprecated features - -[id="ocp-release-amd-sev-deprecation_{context}"] -==== Deprecation of AMD Secure Encrypted Virtualization - -The use of Confidential Computing with AMD Secure 
-
-You can use AMD Secure Encrypted Virtualization Secure Nested Paging (AMD SEV-SNP) instead.
-
-[id="ocp-4-20-docker-v2-registries-removed_{context}"]
-==== Docker v2 registries deprecated
-
-Support for Docker v2 registries is deprecated and is planned for removal in a future release. A registry that supports the Open Container Initiative (OCI) specification will be required for all mirroring operations in a future release. Additionally, `oc-mirror` v2 now only generates custom catalog images in the OCI format, whereas the deprecated `oc-mirror` v1 still supports the Docker v2 format.
-
-[id="ocp-4-20-sunset-redhat-marketplace_{context}"]
-==== Red{nbsp}Hat Marketplace is deprecated
-
-The Red{nbsp}Hat Marketplace is deprecated. Customers who use partner software from the Marketplace should contact the software vendor about how to migrate from the Marketplace Operator to an Operator in the Red{nbsp}Hat Ecosystem Catalog. It is expected that the Marketplace index will be removed in an upcoming {product-title} release. For more information, see link:https://access.redhat.com/articles/7130828[Sunset of the Red Hat Marketplace, operated by IBM].
-
-[id="ocp-release-removed-features_{context}"]
-=== Removed features
-
-[id="ocp-4-20-removed-kube-apis_{context}"]
-==== Removed Kubernetes APIs
-
-{product-title} 4.20 removed the following Kubernetes APIs. You must migrate your manifests, automation, and API clients to use the new, supported API versions before updating to 4.20. For more information about migrating removed APIs, see the link:https://kubernetes.io/docs/reference/using-api/deprecation-guide/[Kubernetes documentation]. A minimal migration example follows the table.
-
-.Kubernetes APIs removed from {product-title} 4.20
-[cols="2,2,2,1",options="header"]
-|===
-|Resource |Removed API |Migrate to |Notable changes
-
-|`MutatingWebhookConfiguration`
-|`admissionregistration.k8s.io/v1beta1`
-|`admissionregistration.k8s.io/v1`
-|link:https://kubernetes.io/docs/reference/using-api/deprecation-guide/#webhook-resources-v122[Yes]
-
-|`ValidatingAdmissionPolicy`
-|`admissionregistration.k8s.io/v1beta1`
-|`admissionregistration.k8s.io/v1`
-|link:https://kubernetes.io/docs/reference/using-api/deprecation-guide/#webhook-resources-v122[Yes]
-
-|`ValidatingAdmissionPolicyBinding`
-|`admissionregistration.k8s.io/v1beta1`
-|`admissionregistration.k8s.io/v1`
-|link:https://kubernetes.io/docs/reference/using-api/deprecation-guide/#webhook-resources-v122[Yes]
-
-|`ValidatingWebhookConfiguration`
-|`admissionregistration.k8s.io/v1beta1`
-|`admissionregistration.k8s.io/v1`
-|link:https://kubernetes.io/docs/reference/using-api/deprecation-guide/#webhook-resources-v122[Yes]
-|===
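-
-In many cases, migrating a manifest starts with updating its `apiVersion`, as in the following minimal sketch. The object name is illustrative and the empty `webhooks` list is a placeholder; see the linked deprecation guide for the behavioral changes noted in the table:
-
-[source,yaml]
-----
-apiVersion: admissionregistration.k8s.io/v1 <1>
-kind: ValidatingWebhookConfiguration
-metadata:
-  name: example-webhook
-webhooks: []
-----
-<1> Previously `admissionregistration.k8s.io/v1beta1`, which is no longer served in {product-title} 4.20.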
-
-[id="ocp-release-bug-fixes_{context}"]
-== Bug fixes
-//Bug fix work for TELCODOCS-750
-//Bare Metal Hardware Provisioning / OS Image Provider
-//Bare Metal Hardware Provisioning / baremetal-operator
-//Bare Metal Hardware Provisioning / cluster-baremetal-operator
-//Bare Metal Hardware Provisioning / ironic
-//CNF Platform Validation
-//Cloud Native Events / Cloud Event Proxy
-//Cloud Native Events / Cloud Native Events
-//Cloud Native Events / Hardware Event Proxy
-//Cloud Native Events
-//Driver Toolkit
-//Installer / Assisted installer
-//Installer / OpenShift on Bare Metal IPI
-//Networking / ptp
-//Node Feature Discovery Operator
-//Performance Addon Operator
-//Telco Edge / HW Event Operator
-//Telco Edge / RAN
-//Telco Edge / Core
-//Telco Edge / TALO
-//Telco Edge / ZTP
-
-
-//[id="ocp-release-note-api-auth-bug-fixes_{context}"]
-//=== API Server and Authentication
-
-[id="ocp-release-note-bare-metal-hardware-bug-fixes_{context}"]
-=== Bare Metal Hardware Provisioning
-
-* Before this update, when installing a dual-stack cluster on bare metal by using installer-provisioned infrastructure, the installation failed because the Virtual Media URL was IPv4 instead of IPv6. As IPv4 was unreachable, the bootstrap failed on the virtual machine (VM) and cluster nodes were not created. With this release, when you install a dual-stack cluster on bare metal for installer-provisioned infrastructure, the dual-stack cluster uses an IPv6 Virtual Media URL and the issue is resolved. (link:https://issues.redhat.com/browse/OCPBUGS-60240[OCPBUGS-60240])
-
-* Before this update, when installing a cluster with the bare metal as a service (BMaaS) API, an ambiguous validation error was reported. When you set an image URL without a checksum, BMaaS failed to validate the deployment image source information. With this release, when you do not provide a required checksum for an image, a clear message is reported. (link:https://issues.redhat.com/browse/OCPBUGS-57472[OCPBUGS-57472])
-
-* Before this update, when you installed a cluster on bare metal, if cleaning was not disabled, the hardware tried to delete any Software RAID configuration before it ran the `coreos-installer` tool. With this release, the issue is resolved. (link:https://issues.redhat.com/browse/OCPBUGS-56029[OCPBUGS-56029])
-
-* Before this update, using a Baseboard Management Controller (BMC) URL without a Redfish system ID, such as `redfish://host/redfish/v1/` instead of `redfish://host/redfish/v1/Self`, caused a registration error about invalid JSON. This issue was caused by a bug in the Bare Metal Operator (BMO). With this release, the BMO handles URLs without a Redfish system ID as a valid address without causing a JSON parsing issue. This fix improves the handling of a missing Redfish system ID in BMC URLs. (link:https://issues.redhat.com/browse/OCPBUGS-55717[OCPBUGS-55717])
-
-* Before this update, virtual media boot attempts sometimes failed because some Supermicro models, such as `ars-111gl-nhr`, used a different virtual media device string than other Supermicro machines. With this release, an extra conditional check is added to the sushy library code to detect the affected model and adjust its behavior. As a result, the Supermicro `ars-111gl-nhr` can boot from virtual media. (link:https://issues.redhat.com/browse/OCPBUGS-55434[OCPBUGS-55434])
-
-* Before this update, RAM Disk logs did not include clear file separators, which occasionally caused the content to overlap on a single line. As a consequence, users could not parse RAM Disk logs. With this release, RAM Disk logs include clear file headers to indicate the boundary between the content of each file. As a result, the readability of RAM Disk logs for users is improved. (link:https://issues.redhat.com/browse/OCPBUGS-55381[OCPBUGS-55381])
-
-* Before this update, during Ironic Python Agent (IPA) deployments, the RAM disk logs in the `metal3-ramdisk-logs` container did not include `NetworkManager` logs. The absence of `NetworkManager` logs hindered effective debugging, which affected network issue resolution. With this release, the existing RAM disk logs in the `metal3-ramdisk-logs` container of a metal3 pod include the entire journal from the host rather than just the `dmesg` and IPA logs.
-As a result, IPA logs provide comprehensive `NetworkManager` data for improved debugging. (link:https://issues.redhat.com/browse/OCPBUGS-55350[OCPBUGS-55350])
-
-* Before this update, when the provisioning network was disabled in the cluster configuration, you could create a bare-metal host with a driver that required a network boot, for example, Intelligent Platform Management Interface (IPMI) or Redfish without virtual media. As a result, boot failures occurred during inspection or provisioning because the correct DHCP options could not be identified. With this release, when you create a bare-metal host in this scenario, the host fails to register and the reported error references the disabled provisioning network. To create the host, you must enable the provisioning network or use a virtual-media-based driver, for example, Redfish virtual media. (link:https://issues.redhat.com/browse/OCPBUGS-54965[OCPBUGS-54965])
-
-[id="ocp-release-note-cloud-compute-bug-fixes_{context}"]
-=== Cloud Compute
-
-* Before this update, {aws-short} compute machine sets could include a null value for the `userDataSecret` parameter.
-Using a null value sometimes caused machines to get stuck in the `Provisioning` state. With this release, the `userDataSecret` parameter requires a value.
-(link:https://issues.redhat.com/browse/OCPBUGS-55135[OCPBUGS-55135])
-
-* Before this update, {product-title} clusters on {aws-short} that were created with version 4.13 or earlier could not update to version 4.19.
-Clusters that were created with version 4.14 and later have an {aws-short} `cloud-conf` ConfigMap by default, and this ConfigMap is required starting in {product-title} 4.19.
-With this release, the Cloud Controller Manager Operator creates a default `cloud-conf` ConfigMap when none is present on the cluster.
-This change enables clusters that were created with version 4.13 or earlier to update to version 4.19.
-(link:https://issues.redhat.com/browse/OCPBUGS-59251[OCPBUGS-59251])
-
-* Before this update, a `failed to find machine for node ...` error appeared in the logs when the `InternalDNS` address for a machine was not set as expected.
-As a consequence, the user might interpret this error as the machine not existing.
-With this release, the log message reads `failed to find machine with InternalDNS matching ...`.
-As a result, the user has a clearer indication of why the match is failing.
-(link:https://issues.redhat.com/browse/OCPBUGS-19856[OCPBUGS-19856])
-
-* Before this update, a bug fix altered the availability set configuration by changing the fault domain count to use the maximum available value instead of being fixed at 2.
-This change inadvertently caused scaling issues for compute machine sets that were created prior to the bug fix, because the controller attempted to modify immutable availability sets.
-With this release, availability sets are no longer modified after creation, allowing affected compute machine sets to scale properly.
-(link:https://issues.redhat.com/browse/OCPBUGS-56380[OCPBUGS-56380])
-
-* Before this update, compute machine sets migrating from the Cluster API to the Machine API got stuck in the `Migrating` state.
-As a consequence, the compute machine set could not finish transitioning to use a different authoritative API or perform further reconciliation of the `MachineSet` object status.
-With this release, the migration controllers watch for changes in Cluster API resources and react to authoritative API transitions.
-As a result, compute machine sets successfully transition from the Cluster API to the Machine API.
-(link:https://issues.redhat.com/browse/OCPBUGS-56487[OCPBUGS-56487])
-
-* Before this update, the `MachineHealthCheck` custom resource definition (CRD) did not document the default value of the `maxUnhealthy` field.
-With this release, the CRD documents the default value.
-(link:https://issues.redhat.com/browse/OCPBUGS-61314[OCPBUGS-61314])
-
-* Before this update, it was possible to specify the use of the `CapacityReservationsOnly` capacity reservation behavior and Spot Instances in the same machine template.
-As a consequence, machines with these two incompatible settings were created.
-With this release, validation of machine templates ensures that these two incompatible settings are not used in the same machine template.
-As a result, machines with these two incompatible settings cannot be created. (link:https://issues.redhat.com/browse/OCPBUGS-60943[OCPBUGS-60943])
-
-* Before this update, on clusters that support migrating Machine API resources to Cluster API resources, deleting a nonauthoritative machine did not delete the corresponding authoritative machine.
-As a consequence, orphaned machines that should have been cleaned up remained on the cluster and could cause a resource leak.
-With this release, deleting a nonauthoritative machine triggers propagation of the deletion to the corresponding authoritative machine.
-As a result, deletion requests on nonauthoritative machines correctly cascade, preventing orphaned authoritative machines and ensuring consistency in machine cleanup.
-(link:https://issues.redhat.com/browse/OCPBUGS-55985[OCPBUGS-55985])
-
-* Before this update, on clusters that support migrating Machine API resources to Cluster API resources, the {cluster-capi-operator} could create an authoritative Cluster API compute machine set in the `Paused` state.
-As a consequence, the newly created Cluster API compute machine set could not reconcile or scale machines even though it was using the authoritative API.
-With this release, the Operator now ensures that Cluster API compute machine sets are created in an unpaused state when the Cluster API is authoritative.
-As a result, newly created Cluster API compute machine sets are reconciled immediately and scaling and machine lifecycle operations proceed as intended when the Cluster API is authoritative.
-(link:https://issues.redhat.com/browse/OCPBUGS-56604[OCPBUGS-56604])
-
-* Before this update, scaling large numbers of nodes was slow because scaling requires reconciling each machine several times and each machine was reconciled individually.
-With this release, up to ten machines can be reconciled concurrently.
-This change improves the processing speed for machines during scaling.
-(link:https://issues.redhat.com/browse/OCPBUGS-59376[OCPBUGS-59376])
-
-* Before this update, the {cluster-capi-operator} status controller used an unsorted list of related objects, leading to status updates when there were no functional changes.
-As a consequence, users would see significant noise in the {cluster-capi-operator} object and in logs due to continuous and unnecessary status updates.
-With this release, the status controller logic sorts the list of related objects before comparing them for changes.
-As a result, a status update only occurs when there is a change to the Operator's state.
-(link:https://issues.redhat.com/browse/OCPBUGS-56805[OCPBUGS-56805], link:https://issues.redhat.com/browse/OCPBUGS-58880[OCPBUGS-58880])
-
-* Before this update, the `config-sync-controller` component of the Cloud Controller Manager Operator did not display logs.
-The issue is resolved in this release.
-(link:https://issues.redhat.com/browse/OCPBUGS-56508[OCPBUGS-56508])
-
-* Before this update, the Control Plane Machine Set configuration used availability zones from compute machine sets.
-This was not a valid configuration.
-As a consequence, the Control Plane Machine Set could not be generated when the control plane machines were in a single zone while compute machine sets spanned multiple zones.
-With this release, the Control Plane Machine Set derives an availability zone configuration from existing control plane machines.
-As a result, the Control Plane Machine Set generates a valid zone configuration that accurately reflects the current control plane machines.
-(link:https://issues.redhat.com/browse/OCPBUGS-52448[OCPBUGS-52448])
-
-* Before this update, the controller that annotates a Machine API compute machine set did not check whether the Machine API was authoritative before adding scale-from-zero annotations.
-As a consequence, the controller repeatedly added these annotations and caused a loop of continuous changes to the `MachineSet` object.
-With this release, the controller checks the value of the `authoritativeAPI` field before adding scale-from-zero annotations.
-As a result, the controller avoids the looping behavior by only adding these annotations to a Machine API compute machine set when the Machine API is authoritative.
-(link:https://issues.redhat.com/browse/OCPBUGS-57581[OCPBUGS-57581])
-
-* Before this update, the Machine API Operator attempted to reconcile `Machine` resources on platforms other than {aws-short} where the `.status.authoritativeAPI` field was not populated.
-As a consequence, compute machines remained in the `Provisioning` state indefinitely and never became operational.
-With this release, the Machine API Operator now populates the empty `.status.authoritativeAPI` field with the corresponding value in the machine specification.
-A guard is also added to the controllers to handle cases where this field might still be empty.
-As a result, `Machine` and `MachineSet` resources are reconciled properly and compute machines no longer remain in the `Provisioning` state indefinitely.
-(link:https://issues.redhat.com/browse/OCPBUGS-56849[OCPBUGS-56849])
-
-* Before this update, the Machine API Provider Azure used an old version of the Azure SDK, which used an old API version that did not support referencing a Capacity Reservation group.
-As a consequence, creating a Machine API machine that referenced a Capacity Reservation group in another subscription resulted in an Azure API error.
-With this release, the Machine API Provider Azure uses a version of the Azure SDK that supports this configuration.
-As a result, creating a Machine API machine that references a Capacity Reservation group in another subscription works as expected.
-(link:https://issues.redhat.com/browse/OCPBUGS-55372[OCPBUGS-55372])
-
-* Before this update, the two-way synchronization controller on clusters that support migrating Machine API resources to Cluster API resources did not correctly compare the machine specification when converting an authoritative Cluster API machine template to a Machine API machine set.
-As a consequence, changes to the Cluster API machine template specification were not synchronized to the Machine API machine set.
-With this release, changes to the comparison logic resolve the issue.
-As a result, the Machine API machine set synchronizes correctly after the Cluster API machine set references the new Cluster API machine template.
-(link:https://issues.redhat.com/browse/OCPBUGS-56010[OCPBUGS-56010])
-
-* Before this update, the two-way synchronization controller on clusters that support migrating Machine API resources to Cluster API resources did not delete the machine template when its corresponding Machine API machine set was deleted.
-As a consequence, unneeded Cluster API machine templates persisted in the cluster and cluttered the `openshift-cluster-api` namespace.
-With this release, the two-way synchronization controller correctly handles deletion synchronization for the machine template.
-As a result, deleting a Machine API authoritative machine set deletes the corresponding Cluster API machine template.
-(link:https://issues.redhat.com/browse/OCPBUGS-57195[OCPBUGS-57195])
-
-* Before this update, the two-way synchronization controller on clusters that support migrating Machine API resources to Cluster API resources prematurely reported a successful migration.
-As a consequence, if any errors occurred when updating the status of related objects, the operation was not retried.
-With this release, the controller ensures that all related object statuses are written before reporting a successful status.
-As a result, the controller handles errors during migration better.
-(link:https://issues.redhat.com/browse/OCPBUGS-57040[OCPBUGS-57040])
-
-[id="ocp-release-note-cloud-credential-operator-bug-fixes_{context}"]
-=== Cloud Credential Operator
-
-* Before this update, the `ccoctl` command unnecessarily required the `baseDomainResourceGroupName` parameter when creating the OpenID Connect (OIDC) issuer and managed identities for a private cluster by using {entra-first}. As a consequence, an error was displayed when `ccoctl` tried to create private clusters. With this release, the `baseDomainResourceGroupName` parameter is removed as a requirement. As a result, the process for creating a private cluster on {azure-full} is logical and consistent with expectations. (link:https://issues.redhat.com/browse/OCPBUGS-34993[OCPBUGS-34993])
-
-[id="ocp-release-note-cluster-autoscaler-bug-fixes_{context}"]
-=== Cluster Autoscaler
-
-* Before this update, the cluster autoscaler attempted to include machine objects that were in a deleting state. As a consequence, the cluster autoscaler count of machines was inaccurate. This issue caused the cluster autoscaler to add additional taints that were not needed. With this release, the autoscaler accurately counts the machines. (link:https://issues.redhat.com/browse/OCPBUGS-60035[OCPBUGS-60035])
-
-* Before this update, when you created a cluster autoscaler object with the Cluster Autoscaler Operator enabled in the cluster, two `cluster-autoscaler-default` pods in the `openshift-machine-api` namespace were sometimes created at the same time and one of the pods was immediately killed. With this release, only one pod is created.
-(link:https://issues.redhat.com/browse/OCPBUGS-57041[OCPBUGS-57041])
-
-//[id="ocp-release-note-cluster-override-admin-operator-bug-fixes_{context}"]
-//=== Cluster Resource Override Admission Operator
-
-[id="ocp-release-note-cluster-version-operator-bug-fixes_{context}"]
-=== Cluster Version Operator
-
-* Before this update, the status of the `ClusterVersion` condition could incorrectly show `ImplicitlyEnabled` instead of `ImplicitlyEnabledCapabilities`. With this release, the `ClusterVersion` condition type is fixed and changed from `ImplicitlyEnabled` to `ImplicitlyEnabledCapabilities`. (link:https://issues.redhat.com/browse/OCPBUGS-56114[OCPBUGS-56114])
-
-[id="ocp-release-note-config-operator-bug-fixes_{context}"]
-=== config-operator
-
-* Before this update, the cluster incorrectly switched to the `CustomNoUpgrade` state without the correct `featureGate` configuration. As a consequence, empty `featureGates` and subsequent controller panics occurred. With this release, the `featureGate` configuration for the `CustomNoUpgrade` cluster state matches the default, which prevents empty `featureGates` and subsequent controller panics. (link:https://issues.redhat.com/browse/OCPBUGS-57187[OCPBUGS-57187])
-
-[id="ocp-release-note-dev-console-bug-fixes_{context}"]
-=== Dev Console
-
-* Before this update, some entries on the *Quick Starts* page displayed duplicate link buttons. With this update, the duplicates are removed, and the link buttons are correctly displayed. (link:https://issues.redhat.com/browse/OCPBUGS-60373[OCPBUGS-60373])
-
-* Before this update, the onboarding modal that displayed when you first logged in was missing visuals and images, which made the modal messaging unclear. With this release, the missing elements are added to the modal. As a result, the onboarding experience provides complete visuals consistent with the overall console design. (link:https://issues.redhat.com/browse/OCPBUGS-57392[OCPBUGS-57392])
-
-* Before this update, importing multiple files in the YAML editor copied the existing content and appended the new file, which created duplicates. With this release, the import behavior is fixed. As a result, the YAML editor displays only the new file content without duplication. (link:https://issues.redhat.com/browse/OCPBUGS-45297[OCPBUGS-45297])
-
-[id="ocp-release-note-etcd-bug-fixes_{context}"]
-=== etcd
-
-* Before this update, a timeout on one etcd member caused context deadlines to be exceeded. As a consequence, all members were declared unhealthy, even though some were reachable. With this release, if one member times out, other members are no longer incorrectly marked as unhealthy. (link:https://issues.redhat.com/browse/OCPBUGS-60941[OCPBUGS-60941])
-
-* Before this update, when you deployed {sno} with many IPs on the primary interface, the IP in the etcd certificate did not match the IP in the config map that the API server used to connect to etcd. As a consequence, the API server pod failed during {sno} deployment, which caused cluster initialization issues. With this release, the single IP in the etcd config map matches the IP in the certificate for {sno} deployments.
-As a result, the API server connects to etcd by using the correct IP included in the etcd certificate, which prevents pod failure during cluster initialization. (link:https://issues.redhat.com/browse/OCPBUGS-55404[OCPBUGS-55404])
-
-* Before this update, during temporary downtime of the API server, the Cluster etcd Operator reported incorrect information, such as messages that the `openshift-etcd` namespace was non-existent. With this update, the Cluster etcd Operator status message correctly indicates API server unavailability instead of suggesting the absence of the `openshift-etcd` namespace. As a result, the Cluster etcd Operator status accurately reflects the presence of the `openshift-etcd` namespace, enhancing system reliability. (link:https://issues.redhat.com/browse/OCPBUGS-44570[OCPBUGS-44570])
-
-[id="ocp-release-note-extensions-olmv1-bug-fixes_{context}"]
-=== Extensions ({olmv1})
-
-* Before this update, the preflight custom resource definition (CRD) safety check in {olmv1} blocked updates if it detected changes in the description fields of a CRD. With this update, the preflight CRD safety check does not block updates when there are changes to documentation fields. (link:https://issues.redhat.com/browse/OCPBUGS-55051[OCPBUGS-55051])
-
-* Before this update, the catalogd and Operator Controller components did not display the correct version and commit information in the {oc-first}. With this update, the correct commit and version information is displayed. (link:https://issues.redhat.com/browse/OCPBUGS-23055[OCPBUGS-23055])
-
-//[id="ocp-release-note-image-streams-bug-fixes_{context}"]
-//=== ImageStreams
-
-[id="ocp-release-note-installer-bug-fixes_{context}"]
-=== Installer
-
-* Before this update, when you installed a Konflux-built cluster on {ibm-power-server-name}, the installation could fail due to errors in semantic versioning (SemVer) parsing. With this release, the parsing issue has been resolved so that the installation can continue successfully. (link:https://issues.redhat.com/browse/OCPBUGS-61120[OCPBUGS-61120])
-
-* Before this update, when you installed a cluster on {azure-short} Stack Hub with user-provisioned infrastructure, the API and API-int load balancers could fail to be created. As a consequence, the installation failed. With this release, the user-provisioned infrastructure templates are updated so that the load balancers are created. As a result, installation is successful. (link:https://issues.redhat.com/browse/OCPBUGS-60545[OCPBUGS-60545])
-
-* Before this update, when you installed a cluster on {gcp-short}, the installation program read and processed the `install-config.yaml` file even when an unrecoverable error was reported about not finding a matching public DNS zone. This error was due to an invalid `baseDomain` parameter. As a consequence, cluster administrators recreated the `install-config.yaml` file unnecessarily. With this release, when the installation program reports this error, it does not read and process the `install-config.yaml` file. (link:https://issues.redhat.com/browse/OCPBUGS-59430[OCPBUGS-59430])
-
-* Before this update, {ibm-cloud-title} was omitted from the list of platforms that supported {sno} installation in the validation code. As a consequence, users could not install a single-node configuration on {ibm-cloud-title} because of a validation error. With this release, {ibm-cloud-title} support for single-node installations is enabled.
-As a result, users can complete single-node installations on {ibm-cloud-title}. (link:https://issues.redhat.com/browse/OCPBUGS-59220[OCPBUGS-59220])
-
-* Before this update, installing {sno} on `platform: None` with user-provisioned infrastructure was not supported, which led to installation failures. With this release, {sno} installation on `platform: None` is supported. (link:https://issues.redhat.com/browse/OCPBUGS-58216[OCPBUGS-58216])
-
-* Before this update, when you installed {product-title} on {aws-first}, the check for disabling Machine Config Operator (MCO) boot image management did not consider edge compute machine pools. When determining whether to disable boot image management, the installation program only checked the first compute machine pool entry in the `install-config.yaml` file. As a consequence, when you specified multiple compute pools but only the second had a custom Amazon Machine Image (AMI), the installation program did not disable MCO boot image management and the MCO could overwrite the custom AMI. With this release, the installation program checks all edge compute machine pools for custom images. As a result, boot image management is disabled when a custom image is specified in any machine pool. (link:https://issues.redhat.com/browse/OCPBUGS-57803[OCPBUGS-57803])
-
-* Before this update, the Agent-based Installer set the permissions for the etcd directory `/var/lib/etcd/member` to `0755` when using an {sno} deployment instead of `0700`, which is correctly set on a multi-node deployment. With this release, the etcd directory `/var/lib/etcd/member` permissions are set to `0700` for {sno} deployments. (link:https://issues.redhat.com/browse/OCPBUGS-57021[OCPBUGS-57021])
-
-* Before this update, when you used the Agent-based Installer, pressing the TAB key immediately after escaping the NetworkManager Text User Interface (TUI) sometimes failed to register, which caused the cursor to remain on `Configure Network` instead of moving to `Quit`. As a consequence, you were not able to quit the agent console application that verifies whether the current host can retrieve release images. With this release, the TAB key is always registered. (link:https://issues.redhat.com/browse/OCPBUGS-56934[OCPBUGS-56934])
-
-* Before this update, when you used the Agent-based Installer, exiting the NetworkManager TUI would sometimes result in a blank screen, rather than displaying an error or proceeding with the installation. With this update, the blank screen is not displayed. (link:https://issues.redhat.com/browse/OCPBUGS-56880[OCPBUGS-56880])
-
-* Before this update, installing a cluster on {vmw-full} failed when the API VIP and the ingress VIP used one load balancer IP address. With this release, the API VIP and the ingress VIP are now distinct in `machineNetworks` and the issue is resolved. (link:https://issues.redhat.com/browse/OCPBUGS-56601[OCPBUGS-56601])
-
-* Before this update, when you used the Agent-based Installer, setting the `additionalTrustBundlePolicy` field had no effect. As a consequence, other overrides such as the `fips` parameter were ignored. With this update, the `additionalTrustBundlePolicy` parameter is correctly imported and other overrides are not ignored. (link:https://issues.redhat.com/browse/OCPBUGS-56596[OCPBUGS-56596])
-
-* Before this update, the lack of detailed logging in the cluster destroy logic for {vmw-full} meant it was unclear why virtual machines (VMs) were not properly removed.
-Additionally, missing power state information could cause the destroy operation to enter an infinite loop. With this update, logging for the destroy operation is enhanced to indicate when specific cleanup actions begin, include vCenter names, and display a warning if the operation fails to find VMs. As a result, the destroy process provides detailed, actionable logs. (link:https://issues.redhat.com/browse/OCPBUGS-56262[OCPBUGS-56262])
-
-* Before this update, when you used the Agent-based Installer to install a cluster in a disconnected environment, exiting the NetworkManager Text User Interface (TUI) returned you to the agent console application that checks whether release images can be pulled from a registry. With this update, you are not returned to the agent console application when you exit the NetworkManager TUI. (link:https://issues.redhat.com/browse/OCPBUGS-56223[OCPBUGS-56223])
-
-* Before this update, the Agent-based Installer did not validate the values used to enable disk encryption, which potentially prevented disk encryption from being enabled. With this release, validation for correct disk encryption values is performed during image creation. (link:https://issues.redhat.com/browse/OCPBUGS-54885[OCPBUGS-54885])
-
-* Before this update, the resources that contain the vSphere connection configuration could break because of a mismatch between the UI and the API. With this release, the UI uses the updated API definition. (link:https://issues.redhat.com/browse/OCPBUGS-54434[OCPBUGS-54434])
-
-* Before this update, when you used the Agent-based Installer, some validation checks for the `hostPrefix` parameter were not performed when generating the ISO image. As a consequence, invalid `hostPrefix` values were detected only when users failed to boot using the ISO. With this update, these validation checks are performed during ISO generation and cause an immediate failure when the values are invalid. (link:https://issues.redhat.com/browse/OCPBUGS-53473[OCPBUGS-53473])
-
-* Before this update, some systemd services in the Agent-based Installer continued to run after being stopped, which caused confusing log messages during cluster installation. With this update, these services are correctly stopped. (link:https://issues.redhat.com/browse/OCPBUGS-53107[OCPBUGS-53107])
-
-* Before this update, if the proxy configuration for an {azure-first} cluster was deleted while installing a cluster, the program reported an unreadable error and the proxy connection timed out. With this release, when the proxy configuration for the cluster is deleted while installing a cluster, the program reports a readable error message and the issue is resolved. (link:https://issues.redhat.com/browse/OCPBUGS-45805[OCPBUGS-45805])
-
-* Before this update, after an installation was completed, the `kubeconfig` file generated by the Agent-based Installer did not contain the ingress router certificate authority (CA). With this release, the `kubeconfig` file contains the ingress router CA upon the completion of a cluster installation. (link:https://issues.redhat.com/browse/OCPBUGS-45256[OCPBUGS-45256])
-
-* Before this update, the Agent-based Installer announced a complete cluster installation without first checking whether Operators were in a stable state. Consequently, messages about a completed installation might have appeared even if there were still issues with any of the Operators. With this release, the Agent-based Installer waits until Operators are in a stable state before declaring the cluster installation to be complete.
-(link:https://issues.redhat.com/browse/OCPBUGS-18658[OCPBUGS-18658])
-
-* Before this update, the installation program did not prevent you from attempting to install {sno} on bare metal on installer-provisioned infrastructure. As a consequence, the installation failed because it was not supported. With this release, {product-title} prevents {sno} cluster installations on unsupported platforms. (link:https://issues.redhat.com/browse/OCPBUGS-6508[OCPBUGS-6508])
-
-[id="ocp-release-note-kube-controller-manager-bug-fixes_{context}"]
-=== Kube Controller Manager
-
-* Before this update, the `cluster-policy-controller` crashed when an invalid volume type was provided. With this release, the code no longer panics. As a result, the `cluster-policy-controller` logs an error that identifies the invalid volume type. (link:https://issues.redhat.com/browse/OCPBUGS-62053[OCPBUGS-62053])
-
-* Before this update, the `cluster-policy-controller` container exposed port `10357` on all networks because the bind address was set to `0.0.0.0`. The port was exposed outside the node's host network because the KCM pod manifest set `hostNetwork` to `true`. This port is used solely for the container's probe. With this release, the bind address is updated to listen on localhost only. As a result, node security is improved because the port is not exposed outside the node network. (link:https://issues.redhat.com/browse/OCPBUGS-53290[OCPBUGS-53290])
-
-[id="ocp-release-note-kubernetes-api-server-bug-fixes_{context}"]
-=== Kubernetes API Server
-
-* Before this update, concurrent map iteration and kube-apiserver validation caused crashes. As a consequence, API server disruptions and `list watch` storms occurred. With this release, the concurrent map iteration and validation issue is resolved. As a result, API server crashes are prevented, and cluster stability is improved. (link:https://issues.redhat.com/browse/OCPBUGS-61347[OCPBUGS-61347])
-
-* Before this update, the validation cost of resource quantity and `IntOrString` fields was incorrectly calculated due to improper consideration of the maximum field length in Common Expression Language (CEL) validation. As a consequence, users encountered validation errors due to incorrect string length consideration in CEL validation. With this release, CEL validation correctly accounts for the maximum length of `IntOrString` fields. As a result, users can submit valid resource requests without CEL validation errors. (link:https://issues.redhat.com/browse/OCPBUGS-59756[OCPBUGS-59756])
-
-* Before this update, the `node-system-admin-signer` validity was limited to one year and was not extended or refreshed at 2.5 years. This issue prevented issuing the `node-system-admin-client` for two years. With this release, the `node-system-admin-signer` validity is extended to three years, and issuing the `node-system-admin-client` for a two-year period is enabled. (link:https://issues.redhat.com/browse/OCPBUGS-59527[OCPBUGS-59527])
-
-* Before this update, a cluster installation failure occurred on {ibm-title} and {azure-first} systems due to incompatibility with the `ShortCertRotation` feature gate. As a consequence, the cluster installation failed, and caused nodes to remain offline. With this release, the fix removes the `ShortCertRotation` feature gate during a cluster installation on {ibm-title} and {azure-first} systems. As a result, cluster installations are successful on these platforms.
-(link:https://issues.redhat.com/browse/OCPBUGS-57202[OCPBUGS-57202])
-
-* Before this update, the `admissionregistration.k8s.io/v1beta1` API was served incorrectly in {product-title} version 4.17, despite being intended for deprecation and removal. This led to dependency issues for users. With this release, the deprecated API filter is registered for a phased removal, and requires administrative acknowledgment for upgrades. As a result, users do not encounter deprecated API errors in {product-title} version 4.20, and the system stability is improved. (link:https://issues.redhat.com/browse/OCPBUGS-55465[OCPBUGS-55465])
-
-* Before this update, certificate rotation controllers repeatedly copied and rewrote each other's changes. As a consequence, users experienced excessive event spamming and potential etcd overload. With this release, the conflict between the certificate rotation controllers is resolved. As a result, excessive event spamming is eliminated, the load on etcd is reduced, and system stability is improved. (link:https://issues.redhat.com/browse/OCPBUGS-55217[OCPBUGS-55217])
-
-* Before this update, user secrets were logged in audit logs after enabling `WriteRequestBodies` profile settings. As a consequence, sensitive data was visible in the audit log. With this release, the `MachineConfig` object is removed from the audit log response, which prevents user secrets from being logged. As a result, secrets and credentials do not appear in audit logs. (link:https://issues.redhat.com/browse/OCPBUGS-52466[OCPBUGS-52466])
-
-* Before this update, Operator conditions were tested by using synthesized methods instead of deploying and scheduling pods by using the deployment controller, which caused incorrect test results. As a consequence, users experienced test failures due to the use of synthesized conditions instead of real pod creation. With this release, the Kubernetes deployment controller is used for testing Operator conditions, which improves pod deployment reliability. (link:https://issues.redhat.com/browse/OCPBUGS-43777[OCPBUGS-43777])
-
-
-[id="ocp-release-note-machine-config-operator-bug-fixes_{context}"]
-=== Machine Config Operator
-
-* Before this update, an external actor could uncordon a node that the Machine Config Operator (MCO) was draining. As a consequence, the MCO and the scheduler would schedule and unschedule pods at the same time, prolonging the drain process. With this release, the MCO attempts to cordon the node again if an external actor uncordons it during the drain process. As a result, the MCO and scheduler no longer schedule and remove pods at the same time. (link:https://issues.redhat.com/browse/OCPBUGS-61516[OCPBUGS-61516])
-
-* Before this update, during an update from {product-title} 4.18.21 to {product-title} 4.19.6, the Machine Config Operator (MCO) failed due to multiple labels in the `capacity.cluster-autoscaler.kubernetes.io/labels` annotation in one or more machine sets. With this release, the MCO now accepts multiple labels in the `capacity.cluster-autoscaler.kubernetes.io/labels` annotation and no longer fails during the update to {product-title} 4.19.6. (link:https://issues.redhat.com/browse/OCPBUGS-60119[OCPBUGS-60119])
-
-* Before this update, the Machine Config Operator (MCO) certificate management failed during an Azure Red Hat OpenShift (ARO) upgrade to 4.19 due to missing infrastructure status fields.
-As a consequence, certificates were refreshed without the required subject alternative name (SAN) IPs, causing connectivity issues for upgraded ARO clusters. With this release, the MCO now adds and retains SAN IPs during certificate management in ARO, preventing immediate rotation on upgrade to 4.19. (link:https://issues.redhat.com/browse/OCPBUGS-59780[OCPBUGS-59780])
-
-* Before this update, when updating from a version of {product-title} prior to 4.15, the `MachineConfigNode` custom resource definitions (CRDs) were installed as a Technology Preview (TP) feature, causing the update to fail. This feature was fully introduced in {product-title} 4.16. With this release, the update no longer deploys the Technology Preview CRDs, ensuring a successful upgrade. (link:https://issues.redhat.com/browse/OCPBUGS-59723[OCPBUGS-59723])
-
-* Before this update, the Machine Config Operator (MCO) was updating node boot images without checking whether the current boot image was from the {gcp-first} or {aws-first} Marketplace. As a consequence, the MCO would override a marketplace boot image with a standard {product-title} image. With this release, for {aws-short} images, the MCO has a lookup table that contains all of the standard {product-title} installer Amazon Machine Images (AMIs), which it references before updating the boot image. For {gcp-first} images, the MCO checks the URL header before updating the boot image. As a result, the MCO no longer updates machine sets that have a marketplace boot image. (link:https://issues.redhat.com/browse/OCPBUGS-57426[OCPBUGS-57426])
-
-* Before this update, {product-title} updates that shipped a change to CoreDNS templates would restart the `coredns` pod before the image pull for the updated base operating system (OS) image. As a consequence, a race occurred when the operating system update manager failed the image pull because of network errors, causing the update to stall. With this release, a retry update operation is added to the Machine Config Operator (MCO) to work around this race condition. (link:https://issues.redhat.com/browse/OCPBUGS-43406[OCPBUGS-43406])
-
-
-[id="ocp-release-note-management-console-bug-fixes_{context}"]
-=== Management Console
-
-* Before this update, the YAML editor in the web console would default to indenting YAML files with 4 spaces. With this release, the default indentation has changed to 2 spaces to align with recommendations. (link:https://issues.redhat.com/browse/OCPBUGS-61990[OCPBUGS-61990])
-
-* Before this update, expanding the terminal in the web console caused the session to close because the {product-title} logo and header overlapped the terminal view. With this release, the terminal layout is fixed so that it expands correctly. As a result, you can expand or collapse the terminal without connection loss or input interruption. (link:https://issues.redhat.com/browse/OCPBUGS-61819[OCPBUGS-61819])
-
-* Before this update, visiting the `/auth/error` page without the required state cookie showed a blank page and prevented error details from displaying. With this release, error handling is improved in the front-end code. As a result, the `/auth/error` page displays error content, making it easier to diagnose and resolve problems. (link:https://issues.redhat.com/browse/OCPBUGS-60912[OCPBUGS-60912])
-
-* Before this update, the order of items in the **PersistentVolumeClaim** action menu was not defined, causing the **Delete PersistentVolumeClaim** option to display in the middle of the list.
-With this release, the option is reordered so that it displays last in the menu. As a result, the action list is consistent and easier to navigate. (link:https://issues.redhat.com/browse/OCPBUGS-60756[OCPBUGS-60756])
-
-* Before this update, clicking **Download log** on the Build logs page added `undefined` to the downloaded file name, and clicking **Raw logs** did not open the raw log in a new tab. With this release, the file name is corrected, and clicking **Raw logs** opens the raw log as expected. (link:https://issues.redhat.com/browse/OCPBUGS-60753[OCPBUGS-60753])
-
-* Before this update, entering a wrong value in a web console form field caused multiple exclamation icons to display. With this release, only one icon displays when a field value is invalid. As a result, error messages in all fields now display clearly. (link:https://issues.redhat.com/browse/OCPBUGS-60428[OCPBUGS-60428])
-
-* Before this update, some entries on the Quick Starts page displayed duplicate link buttons. With this release, the duplicates are removed, and links now display as intended, resulting in a cleaner and clearer page layout. (link:https://issues.redhat.com/browse/OCPBUGS-60373[OCPBUGS-60373])
-
-* Before this update, the console included an outdated security instruction, `X-XSS-Protection`, when sending pages to your browser. With this release, the instruction is removed. As a result, the console runs securely in modern browsers. (link:https://issues.redhat.com/browse/OCPBUGS-60130[OCPBUGS-60130])
-
-* Before this update, the error message on the **Events** page would erroneously show the placeholder `{ error }` instead of an error message. With this release, the error message is shown. (link:https://issues.redhat.com/browse/OCPBUGS-60010[OCPBUGS-60010])
-
-* Before this update, the console displayed the **Registry poll interval** drop-down menu for managed `CatalogSource` objects, but any change you made was automatically reverted. With this release, the drop-down menu is hidden for managed sources. As a result, the console no longer shows a menu option that cannot be applied. (link:https://issues.redhat.com/browse/OCPBUGS-59725[OCPBUGS-59725])
-
-* Before this update, selecting the **Resource** menu on the Deploy from image page caused the view to jump to the top due to improper focus handling. With this release, the focus behavior is corrected so that the page stays in place when you open the menu. As a result, your scroll position is preserved during selection. (link:https://issues.redhat.com/browse/OCPBUGS-59586[OCPBUGS-59586])
-
-* Before this update, the **Get started** message occupied too much space when you did not have a project, preventing the **No resources found** message from fully displaying. This update reduces the space used by the **Get started** message. As a result, all messages now display completely on the page. (link:https://issues.redhat.com/browse/OCPBUGS-59483[OCPBUGS-59483])
-
-* Before this update, improperly nested `flags` within `properties` in `console-crontab-plugin.json` caused the plugin to break. With this release, the nesting in the JSON file is fixed, resolving the conflict with OCPBUGS-58858. As a result, the plugin now loads and displays the `CronTabs` correctly. (link:https://issues.redhat.com/browse/OCPBUGS-59418[OCPBUGS-59418])
-
-* Before this update, starting a job from the console always reset its `backoffLimit` to 6, overriding your configured value. With this release, the configured `backoffLimit` is preserved when you start a job in the console.
-As a result, jobs behave consistently between the console and the CLI. (link:https://issues.redhat.com/browse/OCPBUGS-59382[OCPBUGS-59382])
-
-* Before this update, the YAML editor component did not handle some edge cases where the content could not be parsed into a JavaScript object, which caused errors in some situations. With this release, the component was updated to handle these edge cases reliably and the errors no longer occur. (link:https://issues.redhat.com/browse/OCPBUGS-59196[OCPBUGS-59196])
-
-* Before this update, the **Namespace** column displayed on the MachineSets list page even when you viewed a single project, because the code did not correctly scope the columns. With this release, the column logic is fixed. As a result, the MachineSets list no longer shows the **Namespace** column for project-scoped views. (link:https://issues.redhat.com/browse/OCPBUGS-58334[OCPBUGS-58334])
-
-* Before this update, navigating to a storage class page with multiple path elements in the `href` displayed a blank tab. With this release, the plugin is fixed so that the tab content displays correctly after switching. As a result, storage class pages no longer show blank tabs. (link:https://issues.redhat.com/browse/OCPBUGS-58258[OCPBUGS-58258])
-
-* Before this update, editing a `HorizontalPodAutoscaler` (HPA) with a `ContainerResource` type caused a runtime error because the code did not define the `e.resource` variable. With this release, the `e.resource` variable is defined and the runtime error is fixed in the form editor. As a result, editing an HPA with the `ContainerResource` type no longer fails. (link:https://issues.redhat.com/browse/OCPBUGS-58208[OCPBUGS-58208])
-
-* Before this update, the `TELEMETER_CLIENT_DISABLED` setting in the `ConsoleConfig` ConfigMap caused gaps in the telemetry, which limited troubleshooting. With this release, the telemetry client is temporarily disabled to resolve `Too Many Requests` errors. As a result, telemetry data is collected reliably, removing limits on troubleshooting. (link:https://issues.redhat.com/browse/OCPBUGS-58094[OCPBUGS-58094])
-
-* Before this update, clicking **Configure** in `AlertmanagerReceiversNotConfigured` failed with the error `navigate is not a function` because the code did not handle the configuration correctly. With this release, the issue is fixed. As a result, `AlertmanagerReceiversNotConfigured` now opens as expected. (link:https://issues.redhat.com/browse/OCPBUGS-56986[OCPBUGS-56986])
-
-* Before this update, the **CronTab** list page returned an error when a `CronTab` resource was missing optional entries in its `spec` because the console did not validate them properly. With this release, the necessary validation is added. As a result, the **CronTab** list page loads correctly even when some `spec` fields are not defined. (link:https://issues.redhat.com/browse/OCPBUGS-56830[OCPBUGS-56830])
-
-* Before this update, users without a project saw only part of the Roles list because of insufficient role-based access control (RBAC) permissions. With this release, the access logic is fixed. As a result, these users can no longer open the Roles page, keeping sensitive data secure. (link:https://issues.redhat.com/browse/OCPBUGS-56707[OCPBUGS-56707])
-
-* Before this release, when there were no Quick Starts on the **Quick Starts** page, a plain text message was shown. With this release, cluster administrators are given actions to add or manage Quick Starts.
(link:https://issues.redhat.com/browse/OCPBUGS-56629[OCPBUGS-56629]) - -* Before this update, the generated console dynamic plugin API documentation used the wrong `k8s` utility function names, such as `k8sGetResource` instead of `k8sGet`. With this update, the documentation uses the correct function names with their export name aliases. As a result, the API documentation is clearer for console dynamic plugin developers working with `k8s` utility functions. (link:https://issues.redhat.com/browse/OCPBUGS-56248[OCPBUGS-56248]) - -* Before this update, unused code in the deployment and deployment configuration menus caused unnecessary menu items to display. With this release, the unused menu item definitions are removed, improving code maintainability and reducing potential issues in future updates. (link:https://issues.redhat.com/browse/OCPBUGS-56245[OCPBUGS-56245]) - - -* Before this update, the `/metrics` endpoint was not correctly parsing a bearer token from the authorization header on internal Prometheus scrape requests, which caused `TokenReviews` to fail and all of these requests to be denied with a 401 response. This triggered a `TargetDown` alert for the console metrics endpoint. With this release, the metrics endpoint handler was updated to correctly parse a bearer token from the authorization header for `TokenReview`. This made the `TokenReview` step behave as expected, and resolved the `TargetDown` alert. (link:https://issues.redhat.com/browse/OCPBUGS-56148[OCPBUGS-56148]) - -* Before this update, creating a node without a disk triggered a JavaScript `TypeError` when you accessed nodes in the console. With this release, the filter property initializes correctly. As a result, the node list displays without errors. (link:https://issues.redhat.com/browse/OCPBUGS-56050[OCPBUGS-56050]) - -* Before this update, the `VirtualizedTable` hid the `Started` column on smaller screens, which broke default sorting and disrupted the `PipelineRun` list. With this release, the default sorted column adjusts based on screen size, preventing the table from breaking. As a result, the `PipelineRun` list page remains stable and displays correctly on smaller screens. (link:https://issues.redhat.com/browse/OCPBUGS-56044[OCPBUGS-56044]) - -* Before this update, the cluster switcher allowed users to access {rh-rhacm-first} by choosing the **All Clusters** option. With this release, {rh-rhacm} is accessed from the perspective selector by choosing the **Fleet Management** perspective. (link:https://issues.redhat.com/browse/OCPBUGS-55946[OCPBUGS-55946]) - -* Before this update, the web console displayed an outdated message about a 60-day update limit in versions 4.16 and later, even though the limit was removed. With this update, the outdated message is removed. As a result, the web console shows only current update information. (link:https://issues.redhat.com/browse/OCPBUGS-55919[OCPBUGS-55919]) - -* Before this update, the web console home page showed the wrong icon for `Info` alerts, which caused a mismatch in alert severity. With this release, the severity icons are fixed so they match correctly. As a result, the console shows alert severity clearly. (link:https://issues.redhat.com/browse/OCPBUGS-55806[OCPBUGS-55806]) - -* Before this update, a dependency issue prevented the Console Operator from including the required `FeatureGate` resource for Content Security Policy (CSP) APIs. With this release, the missing `FeatureGate` resource is added to the `openshift/api` dependency.
As a result, CSP APIs now work as expected in the console. (link:https://issues.redhat.com/browse/OCPBUGS-55698[OCPBUGS-55698]) - -* Before this update, clicking the accordion in the **Critical alerts** section of the notification drawer did nothing, so the section stayed expanded. With this release, the accordion is fixed. As a result, you can now collapse the section when critical alerts are present. (link:https://issues.redhat.com/browse/OCPBUGS-55633[OCPBUGS-55633]) - -* Before this update, additional HTTP client configurations increased the plugin initial loading time, which slowed overall {product-title} performance. With this update, the client configuration is fixed, reducing plugin load time and improving page load speed. (link:https://issues.redhat.com/browse/OCPBUGS-55514[OCPBUGS-55514]) - -* Before this update, the custom masthead logo replaced the default OpenShift logo in all themes, even when the light theme was set to use the default. With this release, the correct behavior is restored so the default OpenShift logo displays in the light theme when no custom logo is set. As a result, logos now display correctly in both light and dark themes, improving visual consistency. (link:https://issues.redhat.com/browse/OCPBUGS-55208[OCPBUGS-55208]) - -* Before this update, changing or removing a custom logo in the Console Operator configuration left outdated `ConfigMaps` in the `openshift-console` namespace due to delayed synchronization. With this release, the console operator removes these outdated `ConfigMaps` when the custom logo configuration changes. As a result, `ConfigMaps` in the `openshift-console` namespace remain accurate and up-to-date. (link:https://issues.redhat.com/browse/OCPBUGS-54780[OCPBUGS-54780]) - -* Before this update, the **Raw logs** page decoded Chinese log messages incorrectly, making them unreadable. With this release, the decoding is corrected. As a result, the page now displays Chinese log messages correctly. (link:https://issues.redhat.com/browse/OCPBUGS-52165[OCPBUGS-52165]) - -* Before this update, opening a modal on a Networking page caused some web console plugin panels, such as the **OpenShift Lightspeed UI** or the **Troubleshooting** panel, to disappear. With this release, the conflict is resolved between networking modals and web console plugins. As a result, modals on the Networking pages no longer hide other console panels. (link:https://issues.redhat.com/browse/OCPBUGS-49709[OCPBUGS-49709]) - -* Before this update, the console server did not handle Content Security Policy (CSP) directives correctly when run locally with JSON input because it did not support the `MultiValue` type. With this release, the console accepts CSP directives as `MultiValue` instead of JSON for local use. As a result, you can now pass separate CSP directives more easily during console development. (link:https://issues.redhat.com/browse/OCPBUGS-49291[OCPBUGS-49291]) - -* Before this update, importing multiple files in the YAML editor copied the existing content and appended the new file, creating duplicates. With this release, the import behavior is fixed. As a result, the YAML editor displays only the new file content without duplication. (link:https://issues.redhat.com/browse/OCPBUGS-45297[OCPBUGS-45297]) - -* Before this update, only one plugin using the `CreateProjectModal` extension could display its modal, causing conflicts when multiple plugins used the same extension point. As a result, there was no way to control which plugin extension was rendered. With this release, the plugin extensions resolve in the same order as their definitions in the cluster console Operator configuration. As a result, administrators can control which `CreateProjectModal` extension displays in the console by reordering the list, as the sketch after this item shows. (link:https://issues.redhat.com/browse/OCPBUGS-43792[OCPBUGS-43792])
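-+
-A minimal sketch of the ordering, assuming two hypothetical plugins that both provide a `CreateProjectModal` extension; the entry listed first in `spec.plugins` is rendered:
-+
-[source,yaml]
-----
-apiVersion: operator.openshift.io/v1
-kind: Console
-metadata:
-  name: cluster
-spec:
-  plugins:
-  - plugin-a # hypothetical plugin whose CreateProjectModal extension is rendered
-  - plugin-b # hypothetical plugin whose extension is not rendered
-----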
- -* Before this update, the console did not display the header defined by the `ResourceYAMLEditor` property, so the YAML view opened without it. With this release, the property is fixed. As a result, headers such as **Simple pod** now display correctly. (link:https://issues.redhat.com/browse/OCPBUGS-32157[OCPBUGS-32157]) - -[id="ocp-release-note-monitoring-bug-fixes_{context}"] -=== Monitoring - -* Before this update, the `KubeNodeNotReady` and `KubeNodeReadinessFlapping` alerts did not filter out cordoned nodes. As a consequence, users received alerts for nodes under maintenance, resulting in false positives. With this release, cordoned nodes are filtered from alerts. As a result, the number of false positives during maintenance is reduced. link:https://issues.redhat.com/browse/OCPBUGS-60692[OCPBUGS-60692] - -* Before this update, the `KubeAggregatedAPIErrors` alert was based on the sum of errors across all instances of an API. As a consequence, users were more likely to get alerted as the number of instances grew. With this release, alerts are evaluated at the instance level, rather than the API level. As a result, the number of false alarms is reduced, because the error threshold is no longer reached sooner by being evaluated cluster-wide rather than for each instance. link:https://issues.redhat.com/browse/OCPBUGS-60691[OCPBUGS-60691] - -* Before this update, the `KubeStatefulSetReplicasMismatch` alert did not fire when the `StatefulSet` controller failed to create pods. As a consequence, users were not notified when the `StatefulSet` did not reach the desired number of replicas. With this release, the alert now fires correctly when the controller cannot create pods. As a result, users are alerted whenever the `StatefulSet` replicas do not match the configured amount. link:https://issues.redhat.com/browse/OCPBUGS-60689[OCPBUGS-60689] - -* Before this update, the Cluster Monitoring Operator logged warnings about insecure Transport Layer Security (TLS) ciphers, which could raise concerns about security. With this release, the secure TLS settings are configured, removing the cipher warnings from the logs and ensuring the Operator reports correct, secure TLS configurations. link:https://issues.redhat.com/browse/OCPBUGS-58475[OCPBUGS-58475] - -* Before this update, the monitoring dashboard in the {product-title} web console sometimes displayed large negative CPU utilization values due to incorrect assumptions about intermediate results. As a consequence, users could see negative CPU utilization in the web console. With this release, CPU utilization values are properly calculated and the web console no longer shows negative utilization values. link:https://issues.redhat.com/browse/OCPBUGS-57481[OCPBUGS-57481] - -* Before this update, when a new secret was created or updated in any namespace, `Alertmanager` was reconciled even if that secret was not referenced in the `AlertmanagerConfig` resource. As a consequence, the Prometheus Operator generated excessive API calls, causing increased CPU usage on control plane nodes. With this release, `Alertmanager` only reconciles secrets that the `AlertmanagerConfig` resource explicitly references, as in the sketch after this item. (link:https://issues.redhat.com/browse/OCPBUGS-56158[OCPBUGS-56158])
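-+
-A hedged sketch of such an explicit reference, assuming a hypothetical webhook receiver; all names are placeholders:
-+
-[source,yaml]
-----
-apiVersion: monitoring.coreos.com/v1beta1
-kind: AlertmanagerConfig
-metadata:
-  name: example-config
-  namespace: example-ns
-spec:
-  route:
-    receiver: example-webhook
-  receivers:
-  - name: example-webhook
-    webhookConfigs:
-    - urlSecret: # only secrets referenced like this now trigger reconciliation
-        name: webhook-url
-        key: url
-----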
- -* Before this update, Metrics Server logged the following warning even though functionality was not affected: -+ -[source,terminal] ----- -setting componentGlobalsRegistry in SetFallback. We recommend calling componentGlobalsRegistry.Set() right after parsing flags to avoid using feature gates before their final values are set by the flags. ----- -+ -With this release, the warning message no longer appears in the `metrics-server` logs. link:https://issues.redhat.com/browse/OCPBUGS-41851[OCPBUGS-41851] - -* Before this update, the `KubeCPUOvercommit` alert would not trigger on multi-node clusters even after CPU-consuming spikes over the permitted limits. With this release, the alert expression is adjusted to correctly account for multi-node clusters. As a result, the `KubeCPUOvercommit` alert triggers correctly after such instances. link:https://issues.redhat.com/browse/OCPBUGS-35095[OCPBUGS-35095] - -* Before this update, users could set `prometheus`, `prometheus_replica`, or `cluster` as Prometheus external labels in the `cluster-monitoring-config` and `user-workload-monitoring-config` config maps. This was not recommended and could cause issues with the cluster. With this release, the config maps no longer accept these reserved external labels. link:https://issues.redhat.com/browse/OCPBUGS-18282[OCPBUGS-18282] - - - -[id="ocp-release-note-networking-bug-fixes_{context}"] -=== Networking - -* Before this update, an `NMState` service failure occurred in {product-title} deployments because of a `NetworkManager-wait-online` dependency issue in baremetal and multiple network interface controller (NIC) environments. As a consequence, an incorrect network configuration caused deployment failures. With this release, the `NetworkManager-wait-online` dependency for baremetal deployments is updated, which reduces deployment failures and ensures `NMState` service stability. (link:https://issues.redhat.com/browse/OCPBUGS-61824[OCPBUGS-61824]) - -* Before this release, the event data was not immediately available when the `cloud-event-proxy` container or pod rebooted. This caused the `getCurrentState` function to incorrectly return a `clockclass` of `0`. With this release, the `getCurrentState` function no longer returns an incorrect `clockclass` and instead returns an HTTP `400 Bad Request` or `404 Not Found` error. (link:https://issues.redhat.com/browse/OCPBUGS-59969[OCPBUGS-59969]) - -* Before this update, the `HorizontalPodAutoscaler` object temporarily scaled the `istiod-openshift-gateway` deployment to two replicas. This caused a Continuous Integration (CI) failure because the tests expected one replica. With this release, the `HorizontalPodAutoscaler` object scaling verifies that the `istiod-openshift-gateway` resource has at least one replica to continue deployment. (link:https://issues.redhat.com/browse/OCPBUGS-59894[OCPBUGS-59894]) - -* Previously, the DNS Operator did not set the `readOnlyRootFilesystem` parameter to `true` in its configuration or for the configuration of its operands. As a result, the DNS Operator and its operands had write access to root file systems. With this release, the DNS Operator now sets the `readOnlyRootFilesystem` parameter to `true`, so that the DNS Operator and its operands now have read-only access to root file systems. This update provides enhanced security for your cluster. The sketch after this item illustrates the setting. (link:https://issues.redhat.com/browse/OCPBUGS-59781[OCPBUGS-59781])
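-+
-For illustration only, the parameter maps to the standard Kubernetes container security context field; the container name and image here are placeholders:
-+
-[source,yaml]
-----
-spec:
-  containers:
-  - name: dns
-    image: registry.example.com/coredns:latest
-    securityContext:
-      readOnlyRootFilesystem: true # the container root file system is mounted read-only
-----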
- -* Before this update, when the Gateway API feature was enabled, it installed an Istio control plane configured with one pod replica and an associated `PodDisruptionBudget` setting. The `PodDisruptionBudget` setting prevented the only pod replica from being evicted, blocking cluster upgrades. With this release, the Ingress Operator prevents the Istio control plane from being configured with the `PodDisruptionBudget` setting. Cluster upgrades are no longer blocked by the pod replica. (link:https://issues.redhat.com/browse/OCPBUGS-58358[OCPBUGS-58358]) - -* Before this update, the Cluster Network Operator (CNO) stopped during a cluster upgrade when the `whereabouts-shim` network attachment was enabled. This issue occurred because of a missing `release.openshift.io/version` annotation in the `openshift-multus` namespace. With this release, the missing annotation is now added to the cluster, so that the CNO no longer stops during a cluster upgrade when the `whereabouts-shim` attachment is enabled. The cluster upgrade can now continue as expected. (link:https://issues.redhat.com/browse/OCPBUGS-57643[OCPBUGS-57643]) - -* Before this update, the Ingress Operator added resources, most notably gateway resources, to the `status.relatedObjects` parameter of the Cluster Operator even if the CRDs for those resources did not exist. Additionally, the Ingress Operator specified a namespace for the `istios` and `GatewayClass` resources, which are both cluster-scoped resources. As a result of these configurations, the `relatedObjects` parameter contained misleading information. With this release, an update to the status controller of the Ingress Operator ensures that the controller checks if these resources already exist and also checks the related feature gates before adding any of these resources to the `relatedObjects` parameter. The controller no longer specifies namespaces for the `GatewayClass` and `istio` resources. This update ensures that the `relatedObjects` parameter contains accurate information for the `GatewayClass` and `istio` resources. (link:https://issues.redhat.com/browse/OCPBUGS-57433[OCPBUGS-57433]) - -* Before this update, a cluster upgrade caused inconsistent egress IP address allocation due to stale Network Address Translation (NAT) handling. This issue occurred only when you deleted an egress IP pod while the OVN-Kubernetes controller for an egress node was down. As a consequence, duplicate Logical Router Policies and egress IP address usage occurred, which caused inconsistent traffic flow and outage. With this release, egress IP address allocation cleanup ensures consistent and reliable egress IP address allocation in {product-title} 4.20 clusters. (link:https://issues.redhat.com/browse/OCPBUGS-57179[OCPBUGS-57179]) - -* Previously, when on-premise installer-provisioned infrastructure (IPI) deployments used the Cilium container network interface (CNI), the firewall rule that redirected traffic to the load balancer was ineffective. With this release, the rule works with the Cilium CNI and `OVNKubernetes`. (link:https://issues.redhat.com/browse/OCPBUGS-57065[OCPBUGS-57065]) - -* Before this update, one of the `keepalived` health check scripts was failing due to missing permissions. This could cause the ingress VIP to be misplaced when shared ingress services were in use. With this release, the necessary permission was added back to the container so the health check now works correctly.
(link:https://issues.redhat.com/browse/OCPBUGS-55681[OCPBUGS-55681]) - -* Before this update, stale IP addresses existed in the `address_set` list of the corresponding DNS rule for the `EgressFirewall` CRD. Instead of being removed, these stale addresses continued to get added to the `address_set`, causing memory leak issues. With this release, when the time-to-live (TTL) expiration for an IP address is reached, the IP address gets removed from the `address_set` list after a 5-second grace period. (link:https://issues.redhat.com/browse/OCPBUGS-38735[OCPBUGS-38735]) - -* Before this update, certain traffic patterns with large packets running between {product-title} nodes and pods triggered an {product-title} host to send Internet Control Message Protocol (ICMP) fragmentation needed messages to another {product-title} host. This situation lowered the viable maximum transmission unit (MTU) in the cluster. As a consequence, executing the `ip route show cache` command displayed a cached route with a lower MTU than the physical link. Packets were dropped and {product-title} components were degraded because the host did not send pod-to-pod traffic with the large packets. With this release, the `nftables` rules prevent the {product-title} nodes from lowering their MTU in response to these traffic patterns. (link:https://issues.redhat.com/browse/OCPBUGS-37733[OCPBUGS-37733]) - -* Before this update, you could not override the node IP address selection process for deployments that ran on installer-provisioned infrastructure. This limitation impacted user-managed load balancers that did not use VIP addresses on a machine network, and this caused problems in environments that had multiple IP addresses. With this release, deployments that run on installer-provisioned infrastructure now support the `NODEIP_HINT` parameter for the `nodeip-configuration` systemd service. This support update ensures that the correct node IP address is used, even when the VIP addresses are not on the same subnet. (link:https://issues.redhat.com/browse/OCPBUGS-36859[OCPBUGS-36859]) - -[id="ocp-release-note-node-bug-fixes_{context}"] -=== Node - -* Before this update, in certain configurations, the kubelet's `podresources` API might have reported memory that was assigned to both active and terminated pods, instead of reporting memory assigned to only active pods. As a consequence, this inaccurate reporting might have affected workload placement by the NUMA-aware scheduler. With this release, kubelet's `podresources` no longer reports resources for terminated pods, which results in accurate workload placement by the NUMA-aware scheduler. (link:https://issues.redhat.com/browse/OCPBUGS-56785[OCPBUGS-56785]) - -* Before this release, the CRI-O container runtime failed to recognize the terminated state of a stateful set pod when the backend storage went down, causing the pod to remain in a `Terminating` state due to an inability to detect that the container process no longer existed. This caused resource inefficiency and potential service disruption. With this release, CRI-O now correctly recognizes terminated pods, improving StatefulSet termination flow.
(link:https://issues.redhat.com/browse/OCPBUGS-55485[OCPBUGS-55485]) - -* Before this update, if a CPU-pinned container within a `Guaranteed` QoS pod had a cgroups quota defined, rounding and small delays in kernel CPU time accounting could cause throttling of the CPU-pinned process, even if the quota was set to allow 100% consumption for each allocated CPU. With this release, when `cpu-manager-policy=static` and the qualifications for static CPU assignment are satisfied, that is, containers have `Guaranteed` QoS with integer CPU requests, the CFS quota is disabled. (link:https://issues.redhat.com/browse/OCPBUGS-14051[OCPBUGS-14051]) - - - -[id="ocp-release-note-node-tuning-operator-bug-fixes_{context}"] -=== Node Tuning Operator (NTO) - -* Before this update, the `iommu.passthrough=1` kernel argument caused an NVIDIA GPU validator failure on Advanced RISC Machine (ARM) CPUs in {product-title} 4.18. With this release, the kernel argument is removed from the default `Tuned` CR for ARM-based environments. (link:https://issues.redhat.com/browse/OCPBUGS-52853[OCPBUGS-52853]) - - - -[id="ocp-release-note-observability-bug-fixes_{context}"] -=== Observability - -* Before this update, clicking a link that pointed to the developer perspective did not switch the console to that perspective. As a consequence, a blank page displayed. With this release, the perspective changes when you click the link and the page displays correctly. (link:https://issues.redhat.com/browse/OCPBUGS-59215[OCPBUGS-59215]) - -* Before this update, the **Troubleshooting** panel only worked in the admin perspective even though you could open the panel in all perspectives. As a consequence, when opening the panel in another perspective, the panel was non-operational. With this release, the **Troubleshooting** panel can only be opened from the admin perspective. (link:https://issues.redhat.com/browse/OCPBUGS-58166[OCPBUGS-58166]) - - -[id="ocp-release-note-oc-mirror-bug-fixes_{context}"] -=== oc-mirror - -* Before this update, `oc-mirror` counted mirrored Helm images incorrectly and failed to note all mirrored Helm images. As a consequence, an incorrect Helm image count was displayed. With this release, the count is fixed and all Helm images are mirrored correctly. As a result, the total mirrored images count for Helm charts in `oc-mirror` is accurate. (link:https://issues.redhat.com/browse/OCPBUGS-59949[OCPBUGS-59949]) - -* Before this update, the `--parallel-images` flag accepted invalid input, such as values less than 1 or greater than the total number of images. As a consequence, parallel image copying failed when the flag was set to an out-of-range value, which limited the number of images that could be mirrored. With this release, the issue with invalid `--parallel-images` values is fixed, and values between 1 and the total number of images are accepted. As a result, users can set the `--parallel-images` flag to any value in the valid range. (link:https://issues.redhat.com/browse/OCPBUGS-58467[OCPBUGS-58467]) - -* Before this update, high `oc-mirror v2` concurrency defaults caused registry overload and led to request rejections. As a consequence, container image pushes failed. With this release, concurrency defaults for `oc-mirror v2` are reduced to avoid registry rejections, and the image push success rate is improved. A hedged usage sketch follows this item. (link:https://issues.redhat.com/browse/OCPBUGS-57370[OCPBUGS-57370])
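-+
-A usage sketch only, assuming a local `imageset-config.yaml` and a placeholder target registry; exact flags depend on your mirroring workflow, and the `--parallel-images` value must be between 1 and the total number of images:
-+
-[source,terminal]
-----
-$ oc mirror --v2 -c imageset-config.yaml --workspace file://work \
-  --parallel-images 4 docker://registry.example.com
-----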
- -* Before this update, a bug occurred due to a mismatch between image digests and blocked image tags in the `ImageSetConfig` parameter. This bug caused users to see images from various cloud providers in a mirrored set, although they were blocked. With this release, the `ImageSetConfig` parameter is updated to support regular expressions in the `blockedImages` list, which allows the exclusion of images that match a regular expression pattern. (link:https://issues.redhat.com/browse/OCPBUGS-56117[OCPBUGS-56117]) - -* Before this update, the system umask value was set to `0077` for Security Technical Implementation Guide (STIG) compliance, and caused the `disk2mirror` workflow to stop uploading {product-title} release images. As a consequence, users could not upload {product-title} release images due to the umask command restriction. With this release, `oc-mirror` handles the faulty umask value and alerts the user. The {product-title} release images are uploaded correctly when the system umask is set to `0077`. (link:https://issues.redhat.com/browse/OCPBUGS-55374[OCPBUGS-55374]) - -* Before this update, an invalid Helm chart that was incorrectly included in an image set configuration (ISC) caused an error message while running the `m2d` workflow. With this release, the error message for invalid Helm charts in `m2d` workflows is updated, and error message clarity is improved. (link:https://issues.redhat.com/browse/OCPBUGS-54473[OCPBUGS-54473]) - -* Before this update, multiple release collections occurred due to duplicate channel selection. As a consequence, duplicate release images were collected, which caused unnecessary storage usage. With this release, each release is collected only once. As a result, duplicate release collection is eliminated, which ensures efficient storage with faster access. (link:https://issues.redhat.com/browse/OCPBUGS-52562[OCPBUGS-52562]) - -* Before this update, `oc-mirror` did not check the availability of the specific {product-title} version, which caused it to continue with non-existent versions. As a consequence, users assumed that the mirroring was successful because no error messages were received. With this release, `oc-mirror` returns an error when a non-existent {product-title} version is specified, in addition to a reason for the issue. As a result, users are aware of unavailable versions and can take appropriate action. (link:https://issues.redhat.com/browse/OCPBUGS-51157[OCPBUGS-51157]) - -[id="ocp-release-note-openshift-api-server-bug-fixes_{context}"] -=== OpenShift API Server - -* Before this update, on a cluster upgraded from {product-title} 4.16, or earlier, there might be previously generated image pull secrets that cannot be deleted due to the presence of the `openshift.io/legacy-token` finalizer, if the internal Image Registry was removed. With this release, the issue no longer occurs. (link:https://issues.redhat.com/browse/OCPBUGS-52193[OCPBUGS-52193]) - -* Before this update, deleting an `istag` resource with the `--dry-run=server` option unintentionally caused actual deletion of the image from the server. This unexpected deletion occurred due to the `dry-run` option being implemented incorrectly in the `oc delete istag` command. With this release, the `dry-run` option is correctly wired to the `oc delete istag` command. As a result, the accidental deletion of image objects is prevented and the `istag` object remains intact when using the `--dry-run=server` option, as in the example after this item. (link:https://issues.redhat.com/browse/OCPBUGS-35855[OCPBUGS-35855])
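-+
-A brief illustrative example of the server-side dry run; the image stream tag is a placeholder:
-+
-[source,terminal]
-----
-$ oc delete istag example-app:latest --dry-run=server
-----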
- -[id="ocp-release-note-oc-cli-bug-fixes_{context}"] -=== OpenShift CLI (oc) - -* Before this update, the `oc adm node-image create` command failed to create an International Organization for Standardization (ISO) image if the target cluster did not have a debug SSH key stored in the `99-worker-ssh` config map, which is not a requirement for generating an image. With this release, the ISO image can successfully be created without this key in the `99-worker-ssh` config map. (link:https://issues.redhat.com/browse/OCPBUGS-60600[OCPBUGS-60600]) - -* Before this update, a panic occurred during `oc describe templateinstance` due to a nil pointer dereference in `TemplateInstanceDescriber`. With this release, the nil pointer dereference in the `oc describe templateinstance` command was fixed by checking for a nil secret before describing parameters. (link:https://issues.redhat.com/browse/OCPBUGS-60281[OCPBUGS-60281]) - -* Before this update, the `oc login -u` command in external OIDC environments succeeded but removed user credentials, causing subsequent `oc` commands to fail. With this release, the `oc login -u` command no longer modifies the kubeconfig file. As a result, user credentials are preserved and subsequent `oc` commands work correctly. (link:https://issues.redhat.com/browse/OCPBUGS-58393[OCPBUGS-58393]) - -* Before this update, the `oc adm node-image create` command did not provide descriptive error messages after failures. With this release, the command provides error messages when it fails. (link:https://issues.redhat.com/browse/OCPBUGS-55048[OCPBUGS-55048]) - -* Before this update, the must-gather pod could be scheduled on a node marked with a `NotReady` taint, resulting in deployment to an unavailable node and subsequent log collection failures. With this release, the scheduler now accounts for node taints and automatically applies a node selector to the pod specification. This change ensures that must-gather pods are not scheduled on tainted nodes, thereby preventing log collection failures. (link:https://issues.redhat.com/browse/OCPBUGS-50992[OCPBUGS-50992]) - -* Before this update, when using the `oc adm node-image create` command to add nodes to a cluster, the command erroneously modified the existing permissions of the target assets folder when saving the ISO on disk. With this release, the fix ensures that the copying operation preserves the destination folder permissions. (link:https://issues.redhat.com/browse/OCPBUGS-49897[OCPBUGS-49897]) - -[id="ocp-release-note-openshift-controller-bug-fixes_{context}"] -=== OpenShift Controller - -* Before this update, the build controller looked for secrets that were linked for general use, not specifically for the image pull. With this release, when searching for default image pull secrets, the builds use `ImagePullSecrets` that are linked to the service account. (link:https://issues.redhat.com/browse/OCPBUGS-57918[OCPBUGS-57918]) - -* Before this update, incorrectly formatted proxy environment variables in the build pod led to build failures because an external binary rejected their format. With this release, incorrectly formatted proxy environment variables are excluded, so builds no longer fail.
(link:https://issues.redhat.com/browse/OCPBUGS-54695[OCPBUGS-54695]) - -[id="ocp-release-note-olm-bug-fixes_{context}"] -=== {olmv0-first} - -* Before this update, bundle unpack jobs did not inherit control plane tolerations for the catalog Operator when they were created. As a result, bundle unpack jobs ran on worker nodes only. If no worker nodes were available due to taints, cluster administrators could not install or update Operators on the cluster. With this release, {olmv0} adopts control plane tolerations for bundle unpack jobs and the jobs can run as part of the control plane. (link:https://issues.redhat.com/browse/OCPBUGS-58349[OCPBUGS-58349]) - -* Before this update, when an Operator supplied more than one API in an Operator group namespace, {olmv0} made unnecessary update calls to the cluster roles that were created for the Operator group. As a result, these unnecessary calls caused churn for etcd and the API server. With this update, {olmv0} does not make unnecessary update calls to the cluster role objects in Operator groups. (link:https://issues.redhat.com/browse/OCPBUGS-57222[OCPBUGS-57222]) - -* Before this update, if the `olm-operator` pod crashed during cluster updates due to mislabeled resources, the notification message used the `info` label. With this update, crash notification messages due to mislabeled resources use the `error` label instead. (link:https://issues.redhat.com/browse/OCPBUGS-53161[OCPBUGS-53161]) - -* Before this update, the catalog Operator scheduled catalog snapshots for every 5 minutes. On clusters with many namespaces and subscriptions, snapshots failed and cascaded across catalog sources. As a result, the spikes in CPU loads effectively blocked installing and updating Operators. With this update, catalog snapshots are scheduled for every 30 minutes to allow enough time for the snapshots to resolve. (link:https://issues.redhat.com/browse/OCPBUGS-43966[OCPBUGS-43966]) - -//[id="ocp-release-note-pao-bug-fixes_{context}"] -//=== Performance Addon Operator - -//[id="ocp-release-note-samples-operator-bug-fixes_{context}"] -//=== Samples Operator - - -[id="ocp-release-note-service-catalog-bug-fixes_{context}"] -=== Service Catalog - -* Before this update, setting an invalid certificate secret name in the service annotation `service.beta.openshift.io/serving-cert-secret-name` would cause the service Certificate Authority (CA) Operator to hot loop. With this release, the Operator stops retrying to create the secret after 10 tries. The number of retries cannot be changed. (link:https://issues.redhat.com/browse/OCPBUGS-61966[OCPBUGS-61966]) - -[id="ocp-release-note-storage-bug-fixes_{context}"] -=== Storage - -* Before this update, resizing or cloning small {gcp-first} Hyperdisk volumes (for example, from 4Gi to 5Gi) would fail due to an Input/Output Operations Per Second (IOPS) validation error from the {gcp-full} API. This occurred because the Container Storage Interface (CSI) driver did not automatically adjust the provisioned IOPS to meet the minimum requirements of the new volume size. With this release, the driver has been updated to correctly calculate and provide the required IOPS during volume expansion operations. Users can now successfully resize and clone these smaller Hyperdisk volumes. (link:https://issues.redhat.com/browse/OCPBUGS-62117[OCPBUGS-62117]) - -* Before this update, a race condition would sometimes cause an intermittent failure, or _flake_, when a Persistent Volume Claim (PVC) was resized too quickly after being created. This resulted in an error where the system would incorrectly report that the bound Persistent Volume (PV) could not be found. With this release, the timing issue was fixed, so resizing a PVC right after its creation works as expected, as in the sketch after this item. (link:https://issues.redhat.com/browse/OCPBUGS-61546[OCPBUGS-61546])
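-+
-For reference, a resize is an update to the storage request of the PVC; a hedged sketch with a placeholder claim name:
-+
-[source,terminal]
-----
-$ oc patch pvc example-pvc -p '{"spec":{"resources":{"requests":{"storage":"5Gi"}}}}'
-----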
- - -//[id="ocp-release-note-rhcos-bug-fixes_{context}"] -//=== {op-system-first} - -[id="ocp-release-technology-preview-tables_{context}"] -== Technology Preview features status - -Some features in this release are currently in Technology Preview. These experimental features are not intended for production use. Note the following scope of support on the Red{nbsp}Hat Customer Portal for these features: - -link:https://access.redhat.com/support/offerings/techpreview[Technology Preview Features Support Scope] - -In the following tables, features are marked with the following statuses: - -* _Not Available_ -* _Technology Preview_ -* _General Availability_ -* _Deprecated_ -* _Removed_ - - - -[id="ocp-release-notes-auth-tech-preview_{context}"] -=== Authentication and authorization Technology Preview features - -.Authentication and authorization Technology Preview tracker -[cols="4,1,1,1",options="header"] -|==== -|Feature |4.18 |4.19 |4.20 - -|Pod security admission restricted enforcement -|Technology Preview -|Technology Preview -|Technology Preview - -|Direct authentication with an external OIDC identity provider -|Not Available -|Technology Preview -|Technology Preview - -|==== - - -[id="ocp-release-notes-edge-computing-tp-features_{context}"] -=== Edge computing Technology Preview features - -.Edge computing Technology Preview tracker -[cols="4,1,1,1",options="header"] -|==== -|Feature |4.18 |4.19 |4.20 - -|Accelerated provisioning of {ztp} -|Technology Preview -|Technology Preview -|Technology Preview - -|Enabling disk encryption with TPM and PCR protection -|Technology Preview -|Technology Preview -|Technology Preview - -|Configuring a local arbiter node -|Not Available -|Technology Preview -|General Availability - -|Configuring a two-node OpenShift cluster with fencing -|Not Available -|Not Available -|Technology Preview -|==== - - -[id="ocp-release-notes-extensions-tech-preview_{context}"] -=== Extensions Technology Preview features - -// "Extensions" refers to OLMv1 - -.Extensions Technology Preview tracker -[cols="4,1,1,1",options="header"] -|==== -|Feature |4.18 |4.19 |4.20 - -|{olmv1-first} -|General Availability -|General Availability -|General Availability - -|{olmv1} runtime validation of container images using sigstore signatures -|Technology Preview -|Technology Preview -|Technology Preview - -|{olmv1} permissions preflight check for cluster extensions -|Not Available -|Technology Preview -|Technology Preview - -|{olmv1} deploying a cluster extension in a specified namespace -|Not Available -|Technology Preview -|Technology Preview - -|{olmv1} deploying a cluster extension that uses webhooks -|Not Available -|Not Available -|Technology Preview -|==== - - -[id="ocp-release-notes-installing-tech-preview_{context}"] -=== Installation Technology Preview features - -.Installation Technology Preview tracker -[cols="4,1,1,1",options="header"] -|==== -|Feature |4.18 |4.19 |4.20 - -// All GA in 4.17 notes for oci-first -|Adding kernel modules to nodes with kvc -|Technology Preview -|Technology Preview -|Technology Preview - -|Enabling NIC partitioning for SR-IOV devices -|General Availability -|General Availability -|General Availability - -|User-defined labels and tags for {gcp-first} -|General
Availability -|General Availability -|General Availability - -|Installing a cluster on Alibaba Cloud by using Assisted Installer -|Technology Preview -|Technology Preview -|Technology Preview - -|Installing a cluster on {azure-first} with confidential VMs -|Technology Preview -|General Availability -|General Availability - -|Dedicated disk for etcd on {azure-full} -|Not Available -|Not Available -|Technology Preview - -|Mount shared entitlements in BuildConfigs in RHEL -|Technology Preview -|Technology Preview -|Technology Preview - -|OpenShift zones support for vSphere host groups -|Not Available -|Technology Preview -|Technology Preview - -|Selectable Cluster Inventory -|Technology Preview -|Technology Preview -|Technology Preview - -|Installing a cluster on {gcp-short} using the Cluster API implementation -|General Availability -|General Availability -|General Availability - -|Enabling a user-provisioned DNS on {gcp-short} -|Not Available -|Technology Preview -|Technology Preview - -|Installing a cluster on {vmw-full} with multiple network interface controllers -|Technology Preview -|Technology Preview -|General Availability - -|Using bare metal as a service -|Not Available -|Technology Preview -|Technology Preview - -|Changing the CVO log level -|Not Available -|Not Available -|Technology Preview -|==== - - -[id="ocp-release-notes-mco-tech-preview_{context}"] -=== Machine Config Operator Technology Preview features - -.Machine Config Operator Technology Preview tracker -[cols="4,1,1,1",options="header"] -|==== -|Feature |4.18 |4.19 |4.20 - -|Improved MCO state reporting (`oc get machineconfignode`) -|Technology Preview -|Technology Preview -|General Availability - -|Image mode for OpenShift/On-cluster RHCOS image layering for {aws-short} and {gcp-short} -|Technology Preview -|General Availability -|General Availability - -|Image mode for OpenShift/On-cluster RHCOS image layering for {vmw-short} -|Not Available -|Not Available -|Technology Preview - -|==== - - -[id="ocp-release-notes-machine-management-tech-preview_{context}"] -=== Machine management Technology Preview features - -.Machine management Technology Preview tracker -[cols="4,1,1,1",options="header"] -|==== -|Feature |4.18 |4.19 |4.20 - -|Managing machines with the Cluster API for {aws-full} -|Technology Preview -|Technology Preview -|Technology Preview - -|Managing machines with the Cluster API for {gcp-full} -|Technology Preview -|Technology Preview -|Technology Preview - -|Managing machines with the Cluster API for {ibm-power-server-name} -|Technology Preview -|Technology Preview -|Technology Preview - -|Managing machines with the Cluster API for {azure-full} -|Technology Preview -|Technology Preview -|Technology Preview - -|Managing machines with the Cluster API for {rh-openstack} -|Technology Preview -|Technology Preview -|Technology Preview - -|Managing machines with the Cluster API for {vmw-full} -|Technology Preview -|Technology Preview -|Technology Preview - -|Managing machines with the Cluster API for bare metal -|Not Available -|Technology Preview -|Technology Preview - -|Cloud controller manager for {ibm-power-server-name} -|Technology Preview -|Technology Preview -|Technology Preview - -|Adding multiple subnets to an existing {vmw-full} cluster by using compute machine sets -|Technology Preview -|Technology Preview -|Technology Preview - -|Configuring Trusted Launch for {azure-full} virtual machines by using machine sets -|Technology Preview -|General Availability -|General Availability - -|Configuring
{azure-short} confidential virtual machines by using machine sets -|Technology Preview -|General Availability -|General Availability -|==== - - -[id="ocp-release-notes-monitoring-tech-preview_{context}"] -=== Monitoring Technology Preview features - -.Monitoring Technology Preview tracker -[cols="4,1,1,1",options="header"] -|==== -|Feature |4.18 |4.19 |4.20 - -|Metrics Collection Profiles -|Technology Preview -|General Availability -|General Availability - -|==== - - -[id="ocp-release-notes-multi-arch-tech-preview_{context}"] -=== Multi-Architecture Technology Preview features - -.Multi-Architecture Technology Preview tracker -[cols="4,1,1,1",options="header"] -|==== -|Feature |4.18 |4.19 |4.20 - -|`kdump` on `arm64` architecture -|Technology Preview -|Technology Preview -|General Availability - -|`kdump` on `s390x` architecture -|Technology Preview -|Technology Preview -|General Availability - -|`kdump` on `ppc64le` architecture -|Technology Preview -|Technology Preview -|General Availability - -|Support for configuring the image stream import mode behavior -|Technology Preview -|Technology Preview -|Technology Preview -|==== - - -[id="ocp-release-notes-networking-tech-preview_{context}"] -=== Networking Technology Preview features - -.Networking Technology Preview tracker -[cols="4,1,1,1",options="header"] -|==== -|Feature |4.18 |4.19 |4.20 - -|eBPF manager Operator -|Technology Preview -|Technology Preview -|Technology Preview - -|Advertise using L2 mode the MetalLB service from a subset of nodes, using a specific pool of IP addresses -|Technology Preview -|Technology Preview -|Technology Preview - -|Updating the interface-specific safe sysctls list -|Technology Preview -|Technology Preview -|Technology Preview - -|Egress service custom resource -|Technology Preview -|Technology Preview -|Technology Preview - -|VRF specification in `BGPPeer` custom resource -|Technology Preview -|Technology Preview -|Technology Preview - -|VRF specification in `NodeNetworkConfigurationPolicy` custom resource -|Technology Preview -|General Availability -|General Availability - -|Host network settings for SR-IOV VFs -|General Availability -|General Availability -|General Availability - -|Integration of MetalLB and FRR-K8s -|General Availability -|General Availability -|General Availability - -|Automatic leap seconds handling for PTP grandmaster clocks -|General Availability -|General Availability -|General Availability - -|PTP events REST API v2 -|General Availability -|General Availability -|General Availability - -|OVN-Kubernetes customized `br-ex` bridge on bare metal -|General Availability -|General Availability -|General Availability - -|OVN-Kubernetes customized `br-ex` bridge on {vmw-short} and {rh-openstack} -|Technology Preview -|Technology Preview -|Technology Preview - -|Live migration to OVN-Kubernetes from OpenShift SDN -|Not Available -|Not Available -|Not Available - -|User-defined network segmentation -|General Availability -|General Availability -|General Availability - -|Dynamic configuration manager -|Technology Preview -|Technology Preview -|Technology Preview - -|SR-IOV Network Operator support for Intel C741 Emmitsburg Chipset -|Technology Preview -|Technology Preview -|Technology Preview - -|SR-IOV Network Operator support on ARM architecture -|General Availability -|General Availability -|General Availability - -|Gateway API and Istio for Ingress management -|Technology Preview -|General Availability -|General Availability - -|Dual-port NIC for PTP ordinary clock -|Not Available 
-|Technology Preview -|Technology Preview - -|DPU Operator -|Not Available -|Technology Preview -|Technology Preview - -|Fast IPAM for the Whereabouts IPAM CNI plugin -|Not Available -|Technology Preview -|Technology Preview - -|Unnumbered BGP peering -|Not Available -|Technology Preview -|General Availability - -|Load balancing across the aggregated bonded interface with xmitHashPolicy -|Not Available -|Not Available -|Technology Preview - -|PF Status Relay Operator for high availability with SR-IOV networks -|Not Available -|Not Available -|Technology Preview - -|Preconfigured user-defined network endpoints using {mtv-short} -|Not Available -|Not Available -|Technology Preview - -|Unassisted holdover for PTP devices -|Not Available -|Not Available -|Technology Preview - -|==== - - -[id="ocp-release-notes-nodes-tech-preview_{context}"] -=== Node Technology Preview features - -.Nodes Technology Preview tracker -[cols="4,1,1,1",options="header"] -|==== -|Feature |4.18 |4.19 |4.20 - -|`MaxUnavailableStatefulSet` featureset -|Technology Preview -|Technology Preview -|Technology Preview - -|sigstore support -|Technology Preview -|Technology Preview -|General Availability - -|Default sigstore `openshift` cluster image policy -|Technology Preview -|Technology Preview -|Technology Preview - -|Linux user namespace support -|Technology Preview -|Technology Preview -|General Availability - -|Attribute-Based GPU Allocation -|Not Available -|Not Available -|Technology Preview -|==== - - -[id="ocp-release-notes-oc-cli-tech-preview_{context}"] -=== OpenShift CLI (oc) Technology Preview features - -.OpenShift CLI (`oc`) Technology Preview tracker -[cols="4,1,1,1",options="header"] -|==== -|Feature |4.18 |4.19 |4.20 - -|oc-mirror plugin v2 -|General Availability -|General Availability -|General Availability - -|oc-mirror plugin v2 enclave support -|General Availability -|General Availability -|General Availability - -|oc-mirror plugin v2 delete functionality -|General Availability -|General Availability -|General Availability -|==== - - -[id="ocp-release-notes-operator-lifecycle-tech-preview_{context}"] -=== Operator lifecycle and development Technology Preview features - -// "Operator lifecycle" refers to OLMv0 and "development" refers to Operator SDK - -.Operator lifecycle and development Technology Preview tracker -[cols="4,1,1,1",options="header"] -|==== -|Feature |4.18 |4.19 |4.20 - -|{olmv1-first} -|General Availability -|General Availability -|General Availability - -|Scaffolding tools for Hybrid Helm-based Operator projects -|Removed -|Removed -|Removed - -|Scaffolding tools for Java-based Operator projects -|Removed -|Removed -|Removed -|==== - - -[id="ocp-release-notes-rhcos-tech-preview_{context}"] -=== {rh-openstack-first} Technology Preview features - -.{rh-openstack} Technology Preview tracker -[cols="4,1,1,1",options="header"] -|==== -|Feature |4.18 |4.19 |4.20 - -|{rh-openstack} integration into the {cluster-capi-operator} -|Technology Preview -|Technology Preview -|Technology Preview - -|Control plane with `rootVolumes` and `etcd` on local disk -|General Availability -|General Availability -|General Availability - -|Hosted control planes on {rh-openstack} 17.1 -|Not Available -|Technology Preview -|Technology Preview -|==== - - -[id="ocp-release-notes-scalability-tech-preview_{context}"] -=== Scalability and performance Technology Preview features - -.Scalability and performance Technology Preview tracker -[cols="4,1,1,1",options="header"] -|==== -|Feature |4.18 |4.19 |4.20 -
-|{factory-prestaging-tool} -|Technology Preview -|Technology Preview -|Technology Preview - -|Hyperthreading-aware CPU manager policy -|Technology Preview -|Technology Preview -|Technology Preview - -|Mount namespace encapsulation -|Technology Preview -|Technology Preview -|Technology Preview - -|Node Observability Operator -|Technology Preview -|Technology Preview -|Technology Preview - -|Increasing the etcd database size -|Technology Preview -|Technology Preview -|Technology Preview - -|Using {rh-rhacm} `PolicyGenerator` resources to manage {ztp} cluster policies -|Technology Preview -|General Availability -|General Availability - -|Pinned Image Sets -|Technology Preview -|Technology Preview -|Technology Preview - -|Configuring NUMA-aware scheduler replicas and high availability -|Not Available -|Not Available -|Technology Preview -|==== - - -//[id="ocp-release-notes-special-hardware-tech-preview_{context}"] -//=== Specialized hardware and driver enablement Technology Preview features - -//.Specialized hardware and driver enablement Technology Preview tracker -//[cols="4,1,1,1",options="header"] -//|==== -//|Feature |4.18 |4.19 |4.20 -//|==== - - -[id="ocp-release-notes-storage-tech-preview_{context}"] -=== Storage Technology Preview features - -.Storage Technology Preview tracker -[cols="4,1,1,1",options="header"] -|==== -|Feature |4.18 |4.19 |4.20 - -|AWS EFS One Zone volume -|Not Available -|Not Available -|General Availability - -|Automatic device discovery and provisioning with Local Storage Operator -|Technology Preview -|Technology Preview -|Technology Preview - -|Azure File CSI snapshot support -|Technology Preview -|Technology Preview -|Technology Preview - -|Azure File cross-subscription support -|Not Available -|General Availability -|General Availability - -|Azure Disk performance plus -|Not Available -|Not Available -|General Availability - -|Configuring fsGroupChangePolicy per namespace -|Not Available -|Not Available -|General Availability - -|Shared Resources CSI Driver in OpenShift Builds -|Technology Preview -|Technology Preview -|Technology Preview - -|{secrets-store-operator} -|General Availability -|General Availability -|General Availability - -|CIFS/SMB CSI Driver Operator -|General Availability -|General Availability -|General Availability - -|VMware vSphere multiple vCenter support -|General Availability -|General Availability -|General Availability - -|Disabling/enabling storage on vSphere -|Technology Preview -|General Availability -|General Availability - -|Increasing max number of volumes per node for vSphere -|Not Available -|Technology Preview -|Technology Preview - -|RWX/RWO SELinux mount option -|Developer Preview -|Developer Preview -|Technology Preview - -|Migrating CNS Volumes Between Datastores -|Developer Preview -|General Availability -|General Availability - -|CSI volume group snapshots -|Technology Preview -|Technology Preview -|Technology Preview - -|GCP PD supports C3/N4 instance types and hyperdisk-balanced disks -|General Availability -|General Availability -|General Availability - -|OpenStack Manila support for CSI resize -|General Availability -|General Availability -|General Availability - -|Volume Attribute Classes -|Not Available -|Technology Preview -|Technology Preview - -|Volume populators -|Technology Preview -|Technology Preview -|General Availability -|==== - - -[id="ocp-release-notes-web-console-tech-preview_{context}"] -=== Web console Technology Preview features - -.Web console Technology Preview tracker
-[cols="4,1,1,1",options="header"] -|==== -|Feature |4.18 |4.19 |4.20 - -|{ols-official} in the {product-title} web console -|Technology Preview -|Technology Preview -|Technology Preview -|==== - -[id="ocp-release-known-issues_{context}"] -== Known issues - -* There is a known issue with Gateway API and {aws-first}, {gcp-first}, and {azure-first} private clusters. The load balancer that is provisioned for a gateway is always configured to be external, which can cause errors or unexpected behavior: -+ --- -** In an {aws-short} private cluster, the load balancer becomes stuck in the `pending` state and reports the error: `Error syncing load balancer: failed to ensure load balancer: could not find any suitable subnets for creating the ELB`. - -** In {gcp-short} and {azure-short} private clusters, the load balancer is provisioned with an external IP address, when it should not have an external IP address. --- -+ -There is no supported workaround for this issue. (link:https://issues.redhat.com/browse/OCPBUGS-57440[OCPBUGS-57440]) - -* When running a pod in an isolated user namespace, the UID/GID inside a pod container no longer matches the UID/GID on the host. For file system ownership to work correctly, the Linux kernel uses ID-mapped mounts, which translate user IDs between the container and the host at the virtual file system (VFS) layer. -+ -However, not all file systems currently support ID-mapped mounts, such as Network File Systems (NFS) and other network or distributed file systems. Because such file systems do not support ID-mapped mounts, pods running within user namespaces can fail to access mounted NFS volumes. This behavior is not specific to {product-title}. It applies to all Kubernetes distributions from Kubernetes v1.33 and later. -+ -When upgrading to {product-title} 4.20, clusters are unaffected until you opt in to user namespaces. After enabling user namespaces, any pod that is using an NFS-backed persistent volume from a vendor that does not support ID-mapped mounts might experience access or permission issues when running in a user namespace. For more information about enabling user namespaces, see xref:../nodes/pods/nodes-pods-user-namespaces.adoc#nodes-pods-user-namespaces-configuring_nodes-pods-user-namespaces[Configuring Linux user namespace support]. -+ -[NOTE] -==== -Existing {product-title} 4.19 clusters are unaffected until you explicitly enable user namespaces, which is a Technology Preview feature in {product-title} 4.19. -==== - -* When installing a cluster on {azure-short}, if you set any of the `compute.platform.azure.identity.type`, `controlplane.platform.azure.identity.type`, or `platform.azure.defaultMachinePlatform.identity.type` field values to `None`, your cluster is unable to pull images from the Azure Container Registry. -You can avoid this issue by providing a user-assigned identity or by leaving the identity field blank. -In both cases, the installation program generates a user-assigned identity. (link:https://issues.redhat.com/browse/OCPBUGS-56008[OCPBUGS-56008]) - -* There is a known issue in the unified software catalog view of the console. When you select *Ecosystem* -> *Software Catalog*, you must enter an existing project name or create a new project to view the software catalog. The project selection field does not effect how catalog content is installed on the cluster. As a workaround, enter any existing project name to view the software catalog. 
(link:https://issues.redhat.com/browse/OCPBUGS-61870[OCPBUGS-61870]) - -* Starting with {product-title} 4.20, there is a decrease in the default maximum open files soft limit for containers. As a consequence, end users might experience application failures. To work around this problem, increase the `ulimit` values in the container runtime (CRI-O) configuration. (link:https://issues.redhat.com/browse/OCPBUGS-62095[OCPBUGS-62095]) - -* Deleting and recreating test workloads with a BlueField-3 NIC causes clock jumps due to inconsistent PTP synchronization. This disrupts time synchronization in test workloads. The time synchronization stabilizes when the workloads are stable. (link:https://issues.redhat.com/browse/RHEL-93579[RHEL-93579]) - -* Event logs for GNR-D interfaces are ambiguous due to identical three-letter prefixes ("eno"). As a consequence, affected interfaces are not clearly identified during state changes. To work around this problem, change the interfaces used by the `ptp-operator` to follow the "path" naming convention, ensuring that per-clock events are identified correctly based on interface names and clearly indicate which clock is affected by state changes. For more information, see link:https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/configuring_and_managing_networking/consistent-network-interface-device-naming_configuring-and-managing-networking#network-interface-naming-policies_consistent-network-interface-device-naming[Network interface naming policies]. (link:https://issues.redhat.com/browse/OCPBUGS-62817[OCPBUGS-62817]) - -[id="ocp-installer-known-issues_{context}"] - -* When you install a cluster on {aws-short}, if you do not configure {aws-short} credentials before running any `openshift-install create` command, the installation program fails. (link:https://issues.redhat.com/browse/OCPBUGS-56658[OCPBUGS-56658]) - -[id="ocp-telco-core-release-known-issues_{context}"] - -* On systems using specific AMD EPYC processors, some low-level system interrupts, for example `AMD-Vi`, might contain CPUs in the CPU mask that overlaps with CPU-pinned workloads. This behavior is because of the hardware design. These specific error-reporting interrupts are generally inactive and there is currently no known performance impact. (link:https://issues.redhat.com/browse/OCPBUGS-57787[OCPBUGS-57787]) - -* Currently, pods that use a `guaranteed` QoS class and request whole CPUs might not restart automatically after a node reboot or kubelet restart. The issue might occur in nodes configured with a static CPU Manager policy and using the `full-pcpus-only` specification, and when most or all CPUs on the node are already allocated by such workloads. As a workaround, manually delete and re-create the affected pods. (link:https://issues.redhat.com/browse/OCPBUGS-43280[*OCPBUGS-43280*]) - -* The Performance Profile Creator tool fails to analyze a `must-gather` archive if the archive contains a custom namespace directory that ends with the suffix `nodes`. The failure occurs because of the tool's search logic, which incorrectly reports an error for multiple matches. As a workaround, rename the custom namespace directory so that it does not end with the `nodes` suffix, and run the tool again. (link:https://issues.redhat.com/browse/OCPBUGS-60218[*OCPBUGS-60218*]) - -* Currently, on clusters with SR-IOV network virtual functions configured, a race condition might occur between system services responsible for network device renaming and the TuneD service managed by the Node Tuning Operator.
-
-* Deleting and recreating test workloads with a BlueField-3 NIC causes clock jumps due to inconsistent PTP synchronization. This disrupts time synchronization in test workloads. Time synchronization stabilizes after the workloads stabilize. (link:https://issues.redhat.com/browse/RHEL-93579[RHEL-93579])
-
-* Event logs for GNR-D interfaces are ambiguous because the interfaces share the identical three-letter prefix `eno`. As a consequence, affected interfaces are not clearly identified during state changes. To work around this problem, change the interfaces that the PTP Operator uses to follow the "path" naming convention, so that per-clock events are identified correctly based on interface names and clearly indicate which clock is affected by state changes. For more information, see link:https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/configuring_and_managing_networking/consistent-network-interface-device-naming_configuring-and-managing-networking#network-interface-naming-policies_consistent-network-interface-device-naming[Network interface naming policies]. (link:https://issues.redhat.com/browse/OCPBUGS-62817[OCPBUGS-62817])
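-+
-The following is a minimal sketch of opting one interface into path-based naming with a systemd `.link` file; the MAC address and file name are illustrative, and on {product-title} nodes such a file is typically delivered with a `MachineConfig`:
-+
-[source,ini]
-----
-# /etc/systemd/network/70-ptp-path-naming.link
-[Match]
-MACAddress=aa:bb:cc:dd:ee:ff
-
-[Link]
-# Name the device from its physical location, for example enp1s0f0,
-# so that per-clock PTP events unambiguously identify the interface.
-NamePolicy=path
-----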
-
-[id="ocp-installer-known-issues_{context}"]
-
-* When you install a cluster on {aws-short}, if you do not configure {aws-short} credentials before running any `openshift-install create` command, the installation program fails. (link:https://issues.redhat.com/browse/OCPBUGS-56658[OCPBUGS-56658])
-
-[id="ocp-telco-core-release-known-issues_{context}"]
-
-* On systems using specific AMD EPYC processors, some low-level system interrupts, for example `AMD-Vi`, might contain CPUs in the CPU mask that overlap with CPU-pinned workloads. This behavior is because of the hardware design. These specific error-reporting interrupts are generally inactive and there is currently no known performance impact. (link:https://issues.redhat.com/browse/OCPBUGS-57787[OCPBUGS-57787])
-
-* Currently, pods that use a `guaranteed` QoS class and request whole CPUs might not restart automatically after a node reboot or kubelet restart. The issue might occur on nodes configured with a static CPU Manager policy and using the `full-pcpus-only` specification, and when most or all CPUs on the node are already allocated by such workloads. As a workaround, manually delete and re-create the affected pods. (link:https://issues.redhat.com/browse/OCPBUGS-43280[OCPBUGS-43280])
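-+
-For reference, the following is a minimal sketch of the pod shape that this issue can affect; the names and sizes are illustrative only. A pod receives the `guaranteed` QoS class when its resource requests equal its limits, and whole-CPU (integer) requests make it subject to exclusive pinning under the static CPU Manager policy:
-+
-[source,yaml]
-----
-apiVersion: v1
-kind: Pod
-metadata:
-  name: pinned-workload
-spec:
-  containers:
-  - name: app
-    image: registry.example.com/app:latest
-    resources:
-      requests:
-        cpu: "4" # whole CPUs, and requests match limits
-        memory: 2Gi
-      limits:
-        cpu: "4"
-        memory: 2Gi
-----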
-
-* The Performance Profile Creator tool fails to analyze a `must-gather` archive if the archive contains a custom namespace directory that ends with the suffix `nodes`. The failure occurs because of the tool's search logic, which incorrectly reports an error for multiple matches. As a workaround, rename the custom namespace directory so that it does not end with the `nodes` suffix, and run the tool again. (link:https://issues.redhat.com/browse/OCPBUGS-60218[OCPBUGS-60218])
-
-* Currently, on clusters with SR-IOV network virtual functions configured, a race condition might occur between system services responsible for network device renaming and the TuneD service managed by the Node Tuning Operator. As a consequence, the TuneD profile might become degraded after the node restarts, leading to performance degradation. As a workaround, restart the TuneD pod to restore the profile state. (link:https://issues.redhat.com/browse/OCPBUGS-41934[OCPBUGS-41934])
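-+
-A possible sequence for restarting the TuneD pod, assuming the default Node Tuning Operator namespace. First, find the TuneD pod that runs on the affected node:
-+
-[source,terminal]
-----
-$ oc get pods -n openshift-cluster-node-tuning-operator -o wide
-----
-+
-Then delete that pod so that it is re-created with a freshly applied profile:
-+
-[source,terminal]
-----
-$ oc delete pod <tuned_pod_name> -n openshift-cluster-node-tuning-operator
-----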
-
-[id="ocp-telco-ran-release-known-issues_{context}"]
-
-* The SuperMicro ARS-111GL-NHR server is unable to access virtual media during boot when the virtual media image is served through an IPv6 address. As a consequence, you cannot use virtual media on the SuperMicro ARS-111GL-NHR server model with an IPv6 network configuration. (link:https://issues.redhat.com/browse/OCPBUGS-60070[OCPBUGS-60070])
-
-* A known latency issue currently affects systems running on 4th Gen Intel Xeon processors. (link:https://issues.redhat.com/browse/OCPBUGS-46528[OCPBUGS-46528])
-
-* When you attempt a simultaneous BIOS and BMC firmware update on a Dell R740 server, the BMC update might fail, leaving the server powered down and unresponsive. This issue occurs when the update process does not complete successfully, causing the system to remain in a non-operational state. (link:https://issues.redhat.com/browse/OCPBUGS-62009[OCPBUGS-62009])
-
-* Updating the BMC firmware might fail if you configure the server with an incorrect network share location or invalid credentials, causing the server to remain powered off and unable to recover. (link:https://issues.redhat.com/browse/OCPBUGS-62010[OCPBUGS-62010])
-
-[id="ocp-storage-core-release-known-issues_{context}"]
-
-[id="ocp-release-asynchronous-errata-updates_{context}"]
-== Asynchronous errata updates
-
-Security, bug fix, and enhancement updates for {product-title} {product-version} are released as asynchronous errata through the Red{nbsp}Hat Network. All {product-title} {product-version} errata is https://access.redhat.com/downloads/content/290/[available on the Red Hat Customer Portal]. See the https://access.redhat.com/support/policy/updates/openshift[{product-title} Life Cycle] for more information about asynchronous errata.
-
-Red{nbsp}Hat Customer Portal users can enable errata notifications in the account settings for Red{nbsp}Hat Subscription Management (RHSM). When errata notifications are enabled, users are notified through email whenever new errata relevant to their registered systems are released.
-
-[NOTE]
-====
-Red{nbsp}Hat Customer Portal user accounts must have systems registered and consuming {product-title} entitlements for {product-title} errata notification emails to generate.
-====
-
-This section will continue to be updated over time to provide notes on enhancements and bug fixes for future asynchronous errata releases of {product-title} {product-version}. Versioned asynchronous releases, for example with the form {product-title} {product-version}.z, will be detailed in subsections. In addition, releases in which the errata text cannot fit in the space provided by the advisory will be detailed in subsections that follow.
-
-[IMPORTANT]
-====
-For any {product-title} release, always review the instructions on xref:../updating/updating_a_cluster/updating-cluster-web-console.adoc#updating-cluster-web-console[updating your cluster] properly.
-====
-
-//4.20.1
-[id="ocp-4-20-1_{context}"]
-=== RHSA-2025:19003 - {product-title} {product-version}.1 image release, bug fix, and security update advisory
-
-Issued: 28 Oct 2025
-
-{product-title} release {product-version}.1, which includes security updates, is now available. The list of bug fixes that are included in the update is documented in the link:https://access.redhat.com/errata/RHSA-2025:19003[RHSA-2025:19003] advisory. The RPM packages that are included in the update are provided by the link:https://access.redhat.com/errata/RHEA-2025:19001[RHEA-2025:19001] advisory.
-
-Space precluded documenting all of the container images for this release in the advisory.
-
-You can view the container images in this release by running the following command:
-
-[source,terminal]
-----
-$ oc adm release info 4.20.1 --pullspecs
-----
-
-[id="ocp-4-20-1-known-issues_{context}"]
-==== Known issues
-
-* Starting with {product-title} 4.20, there is a decrease in the default maximum open files soft limit for containers. As a consequence, end users might experience application failures. To work around this problem, increase the container runtime's (CRI-O) ulimit configuration. (link:https://issues.redhat.com/browse/OCPBUGS-62095[OCPBUGS-62095])
-
-[id="ocp-4-20-1-bug-fixes_{context}"]
-==== Bug fixes
-
-* Before this update, iDRAC10 hardware provisioning failed because of an incorrect data type for the Dell Original Equipment Manufacturer (OEM) `Target` property and the use of an incorrect virtual media slot. As a result, users were unable to provision Dell iDRAC10 servers. With this release, Dell iDRAC10 servers can be provisioned. (link:https://issues.redhat.com/browse/OCPBUGS-52427[OCPBUGS-52427])
-
-* Before this release, two identical copies of the same controller were updating the same certificate authority (CA) bundle in a `ConfigMap` object, causing them to receive different metadata inputs, rewrite each other's changes, and create duplicate events. With this release, the controllers use optimistic updating and server-side apply to avoid update events and handle update conflicts. As a result, metadata updates no longer trigger duplicate events, and the expected metadata is set correctly. (link:https://issues.redhat.com/browse/OCPBUGS-55217[OCPBUGS-55217])
-
-* Before this update, when installing a cluster on {ibm-power-server-title}, you could only specify a name for an existing Transit Gateway or virtual private cloud (VPC). Because the uniqueness of names was not guaranteed, this could cause conflicts and installation failures. With this release, you can use Universally Unique Identifiers (UUIDs) for a Transit Gateway and VPC. By using unique identifiers, the installation program can unambiguously identify the correct Transit Gateway or VPC. This prevents the naming conflicts and the issue is resolved. (link:https://issues.redhat.com/browse/OCPBUGS-59678[OCPBUGS-59678])
-
-* Before this update, the Cloud event proxy for the Precision Time Protocol (PTP) Operator incorrectly parsed BF3 Network Interface Card (NIC) names, causing the interface alias to be formatted incorrectly. As a consequence, the incorrect parsing caused end users to misinterpret cloud events. With this release, the Cloud event proxy has been updated to correctly parse BF3 NIC names in the PTP Operator. As a result, the fix improves parsing of BF3 NIC names, ensuring correct event publication for the PTP Operator. (link:https://issues.redhat.com/browse/OCPBUGS-60466[OCPBUGS-60466])
-
-* Before this update, a pod with a secondary interface in an OVN-Kubernetes localnet network (mapped to the `br-ex` bridge) could communicate with pods on the same node that used the default network for connectivity only if the localnet IP addresses were within the same subnet as the host network. With this release, the localnet IP addresses can be drawn from any subnet; in this generalized case, an external router outside the cluster is expected to connect the localnet subnet to the host network. (link:https://issues.redhat.com/browse/OCPBUGS-61453[OCPBUGS-61453])
-
-* Before this update, the Precision Time Protocol (PTP) Operator wrongly parsed network interface controller (NIC) names. As a result, interface aliases were incorrectly formatted, which impacted identifying a PTP hardware clock (PHC) when using Mellanox cards to send clock state events. With this release, the PTP Operator correctly parses the NIC names so that generated aliases align with Mellanox naming conventions. Mellanox cards can now accurately identify a PHC when sending clock state events. (link:https://issues.redhat.com/browse/OCPBUGS-61581[OCPBUGS-61581])
-
-* Before this update, the `cluster in workload identity mode` warning was missing when only the `token-auth-azure` annotation was set, which could lead to misconfiguration. This update adds a check for the `token-auth-azure` annotation when showing the warning. As a result, clusters that use only Azure Workload Identity now show the `cluster in workload identity mode` warning as expected. (link:https://issues.redhat.com/browse/OCPBUGS-61861[OCPBUGS-61861])
-
-* Before this update, the YAML editor in the web console defaulted to indenting YAML files with 4 spaces. With this release, the default indentation has changed to 2 spaces to align with recommendations. (link:https://issues.redhat.com/browse/OCPBUGS-61990[OCPBUGS-61990])
-
-* Before this update, deploying hosted control planes in version 4.20 or later with user-supplied `ignition-server-serving-cert` and `ignition-server-ca-cert` secrets, along with the `disable-pki-reconciliation` annotation, caused the system to remove the user-supplied ignition secrets and the `ignition-server` pods to fail. With this release, the `ignition-server` secrets are preserved during reconciliation after removing the delete action for the `disable-pki-reconciliation` annotation, ensuring that the `ignition-server` pods start. (link:https://issues.redhat.com/browse/OCPBUGS-62006[OCPBUGS-62006])
-
-* Before this update, if the `ovnkube-controller` on a node failed to process updates and configure its local OVN database, the `ovn-controller` could connect to this stale database. This caused the `ovn-controller` to consume outdated `EgressIP` configurations and send incorrect Gratuitous ARPs (GARPs) for an IP address that might have already moved to a different node. With this release, the `ovn-controller` is blocked from sending these GARPs during the time when the `ovnkube-controller` is not processing updates. As a result, network disruptions are prevented by ensuring GARPs are not sent based on stale database information. (link:https://issues.redhat.com/browse/OCPBUGS-62273[OCPBUGS-62273])
-
-* Before this update, upgrading a `ClusterExtension` could fail when unhandled Custom Resource Definition (CRD) changes produced a large JSON diff for the validation status. This diff often exceeded the Kubernetes 32 KB limit, causing the status update to fail and leaving users with no information about why the upgrade did not occur. With this release, the diff output is truncated and summarized for unhandled scenarios instead of including the full JSON diff. This ensures the status updates remain within size limits, allowing them to post successfully and provide users with clear, actionable error messages. (link:https://issues.redhat.com/browse/OCPBUGS-62722[OCPBUGS-62722])
-
-* Before this update, gRPC connection logs were set at a highly verbose log level. This generated an excessive number of messages, which caused the logs to overflow. With this release, the gRPC connection logs have been moved to the V(4) log level. Consequently, the logs no longer overflow, because these specific messages are now less verbose by default. (link:https://issues.redhat.com/browse/OCPBUGS-62844[OCPBUGS-62844])
-
-* Before this update, `oc-mirror` did not display its version in its output. As a consequence, users could not easily identify which `oc-mirror` version, and therefore which fixes, they were running, which delayed debugging. With this release, `oc-mirror` displays its version in the output, aiding faster debugging and ensuring correct fix application. (link:https://issues.redhat.com/browse/OCPBUGS-62283[OCPBUGS-62283])
-
-* Before this update, a bug occurred when the `cluster-api-operator` kubeconfig controller tried to use a regenerated authentication token secret before the token value was fully populated. This caused users to experience recurring, transient reconciliation errors every 30 minutes, which briefly put the Operator into a degraded state. With this release, the controller waits for the authentication token to be populated within the secret before proceeding, preventing the Operator from going into a degraded state and eliminating the recurring errors. (link:https://issues.redhat.com/browse/OCPBUGS-62755[OCPBUGS-62755])
-
-* Before this update, in {product-title} 4.19.9, the Cluster Version Operator began requiring bearer token authentication in metrics requests. As a consequence, this broke the metrics scraper on hosted control plane clusters because their scrapers provided no client authentication. With this release, the Cluster Version Operator no longer requires client authentication for metrics requests in hosted control plane clusters. (link:https://issues.redhat.com/browse/OCPBUGS-62867[OCPBUGS-62867])
-
-* Before this update, during failover, the system's duplicate address detection (DAD) could incorrectly disable the egress IPv6 address if it was briefly present on both nodes, breaking the connection. With this release, the egress IPv6 address is configured to skip the DAD check during failover, guaranteeing uninterrupted egress IPv6 traffic after an egress IP address successfully moves to a different node and ensuring greater network stability. (link:https://issues.redhat.com/browse/OCPBUGS-62913[OCPBUGS-62913])
-
-[id="ocp-4-20-1-updating_{context}"]
-==== Updating
-To update an {product-title} 4.20 cluster to this latest release, see xref:../updating/updating_a_cluster/updating-cluster-cli.adoc#updating-cluster-cli[Updating a cluster using the CLI].
-
-//Update with relevant advisory information
-[id="ocp-4-20-0-ga_{context}"]
-=== RHSA-2025:9562 - {product-title} {product-version}.0 image release, bug fix, and security update advisory
-
-Issued: 21 Oct 2025
-
-{product-title} release {product-version}.0, which includes security updates, is now available. The list of bug fixes that are included in the update is documented in the link:https://access.redhat.com/errata/RHSA-2025:9562[RHSA-2025:9562] advisory. The RPM packages that are included in the update are provided by the link:https://access.redhat.com/errata/RHEA-2025:4782[RHEA-2025:4782] advisory.
-
-Space precluded documenting all of the container images for this release in the advisory.
- -You can view the container images in this release by running the following command: - -[source,terminal] ----- -$ oc adm release info 4.20.0 --pullspecs ----- - -[id="ocp-4-20-0-updating_{context}"] -==== Updating -To update an {product-title} 4.20 cluster to this latest release, see xref:../updating/updating_a_cluster/updating-cluster-cli.adoc#updating-cluster-cli[Updating a cluster using the CLI]. - -//replace 4.y.z for the correct values for the release. You do not need to update oc to run this command. +// RHSA-2025:9562 - {product-title} {product-version}.0 image release, bug fix, and security update advisory +include::modules/rn-ocp-4-20-0.adoc[leveloffset=+2]