From 83fe84eb54bb3c4922a5f6e2e6db5f494862f41d Mon Sep 17 00:00:00 2001 From: Laura Hinson Date: Tue, 4 Nov 2025 15:34:03 -0500 Subject: [PATCH] [OSDOCS-15069]: HCP 4.20 bug fix text --- .../hosted-control-planes-release-notes.adoc | 127 +++++------------- 1 file changed, 33 insertions(+), 94 deletions(-) diff --git a/hosted_control_planes/hosted-control-planes-release-notes.adoc b/hosted_control_planes/hosted-control-planes-release-notes.adoc index c6d7aa8c5d36..ba0fb3c24c82 100644 --- a/hosted_control_planes/hosted-control-planes-release-notes.adoc +++ b/hosted_control_planes/hosted-control-planes-release-notes.adoc @@ -8,107 +8,61 @@ toc::[] Release notes contain information about new and deprecated features, changes, and known issues. -[id="hcp-4-19-release-notes_{context}"] -== {hcp-capital} release notes for {product-title} 4.19 +[id="hcp-4-20-release-notes_{context}"] +== {hcp-capital} release notes for {product-title} 4.20 -With this release, {hcp} for {product-title} 4.19 is available. {hcp-capital} for {product-title} 4.19 supports {mce} version 2.9. +With this release, {hcp} for {product-title} 4.20 is available. {hcp-capital} for {product-title} 4.20 supports {mce} version 2.10. -[id="hcp-4-19-new-features-and-enhancements_{context}"] +[id="hcp-4-20-new-features-and-enhancements_{context}"] === New features and enhancements -[id="hcp-4-19-custom-dns_{context}"] -==== Defining a custom DNS name -Cluster administrators can now define a custom DNS name for a hosted cluster to provide for more flexibility in how DNS names are managed and used. For more information, see xref:../hosted_control_planes/hcp-deploy/hcp-deploy-bm.adoc#hcp-custom-dns_hcp-deploy-bm[Defining a custom DNS name]. - -[id="hcp-4-19-aws-tags_{context}"] -==== Adding or updating {aws-short} tags for hosted clusters - -Cluster administrators can add or update {aws-first} tags for several different types of resources. For more information, see xref:../hosted_control_planes/hcp-manage/hcp-manage-aws.adoc#hcp-aws-tags_hcp-managing-aws[Adding or updating {aws-short} tags for a hosted cluster]. - -[id="hcp-4-19-auto-dr-oadp_{context}"] -==== Automated disaster recovery for a hosted cluster by using {oadp-short} - -On bare metal or {aws-first} platforms, you can automate disaster recovery for a hosted cluster by using {oadp-first}. For more information, see xref:../hosted_control_planes/hcp_high_availability/hcp-disaster-recovery-oadp-auto.adoc#hcp-disaster-recovery-oadp-auto[Automated disaster recovery for a hosted cluster by using OADP]. - -[id="hcp-4-19-dr-agent_{context}"] -==== Disaster recovery for a hosted cluster on a bare-metal platform - -For hosted clusters on a bare-metal platform, you can complete disaster recovery tasks by using {oadp-short}, including backing up the data plane and control plane workloads and restoring either to the same management cluster or to a new management cluster. For more information, see xref:../hosted_control_planes/hcp_high_availability/hcp-disaster-recovery-oadp.adoc#hcp-disaster-recovery-oadp[Disaster recovery for a hosted cluster by using OADP]. - -[id="hcp-4-19-openstack_{context}"] -==== {hcp-capital} on {rh-openstack-first} 17.1 (Technology Preview) - -{hcp-capital} on {rh-openstack} 17.1 is now supported as a Technology Preview feature. - -For more information, see xref:../hosted_control_planes/hcp-deploy/hcp-deploy-openstack.adoc#hosted-clusters-openstack-prerequisites_hcp-deploy-openstack[Deploying {hcp} on OpenStack]. 
- -[id="hcp-4-19-np-capacity-blocks_{context}"] -==== Configuring node pool capacity blocks on {aws-short} - -You can now configure node pool capacity blocks for {hcp} on {aws-first}. For more information, see xref:../hosted_control_planes/hcp-manage/hcp-manage-aws.adoc#hcp-np-capacity-blocks_hcp-managing-aws[Configuring node pool capacity blocks on {aws-short}]. - -[id="bug-fixes-hcp-rn-4-19_{context}"] +[id="bug-fixes-hcp-rn-4-20_{context}"] === Bug fixes -//FYI - OCPBUGS-56792 is a duplicate of this bug -* Previously, when an IDMS or ICSP in the management OpenShift cluster defined a source that pointed to registry.redhat.io or registry.redhat.io/redhat, and the mirror registry did not contain the required OLM catalog images, provisioning for the `HostedCluster` resource stalled due to unauthorized image pulls. As a consequence, the `HostedCluster` resource was not deployed, and it remained blocked, where it could not pull essential catalog images from the mirrored registry. -+ -With this release, if a required image cannot be pulled due to authorization errors, the provisioning now explicitly fails. The logic for registry override is improved to allow matches on the root of the registry, such as registry.redhat.io, for OLM CatalogSource image resolution. A fallback mechanism is also introduced to use the original `ImageReference` if the registry override does not yield a working image. -+ -As a result, the `HostedCluster` resource can be deployed successfully, even in scenarios where the mirror registry lacks the required OLM catalog images, as the system correctly falls back to pulling from the original source when appropriate. (link:https://issues.redhat.com/browse/OCPBUGS-56492[OCPBUGS-56492]) - -* Previously, the control plane controller did not properly select the correct CVO manifests for a feature set. As a consequence, the incorrect CVO manifests for a feature set might have been deployed for hosted clusters. In practice, CVO manifests never differed between feature sets, so this issue had no actual impact. With this release, the control plane controller properly selects the correct CVO manifests for a feature set. As a result, the correct CVO manifests for a feature set are deployed for the hosted cluster. (link:https://issues.redhat.com/browse/OCPBUGS-44438[OCPBUGS-44438]) - -* Previously, when you set a secure proxy for a `HostedCluster` resource that served a certificate signed by a custom CA, that CA was not included in the initial ignition configuration for the node. As a result, the node did not boot due to failed ignition. This release fixes the issue by including the trusted CA for the proxy in the initial ignition configuration, which results in a successful node boot and ignition. (link:https://issues.redhat.com/browse/OCPBUGS-56896[OCPBUGS-56896]) +* Before this update, the ignition server deployment used a global `mirroredReleaseImage` state that could be modified by concurrent image lookup operations from different reconciliation processes, causing race conditions. As a consequence, the `MIRRORED_RELEASE_IMAGE` environment variable alternated between the original image and its mirror registry, triggering constant deployment regenerations. In this release, the global mirror state is replaced with image-specific lookup logic that deterministically resolves mirrors based on the specific control plane release image and registry policies, with defensive filtering for empty registry entries. 
As a result, the ignition server deployments remain stable with consistent `MIRRORED_RELEASE_IMAGE` values, eliminating unnecessary pod restarts and deployment churn. (link:https://issues.redhat.com/browse/OCPBUGS-61667[OCPBUGS-61667])

-* Previously, the IDMS or ICSP resources from the management cluster were processed without considering that a user might specify only the root registry name as a mirror or source for image replacement. As a consequence, any IDMS or ICSP entries that used only the root registry name did not work as expected. With this release, the mirror replacement logic now correctly handles cases where only the root registry name is provided. As a result, the issue no longer occurs, and the root registry mirror replacements are now supported. (link:https://issues.redhat.com/browse/OCPBUGS-55693[OCPBUGS-55693])

+* Before this update, the SAN validation for custom certificates in `hc.spec.configuration.apiServer.servingCerts.namedCertificates` did not properly handle wildcard DNS patterns, such as `\*.example.com`. As a consequence, wildcard DNS patterns in custom certificates could conflict with internal Kubernetes API server certificate SANs without being detected, leading to certificate validation failures and potential deployment issues. This release enhances DNS SAN conflict detection with RFC-compliant wildcard support, implementing bidirectional conflict validation that correctly handles wildcard patterns such as `\*.example.com` matching `sub.example.com`. As a result, wildcard DNS patterns are now properly validated, preventing certificate conflicts and ensuring more reliable hosted cluster deployments with wildcard certificate support. For a configuration sketch of a wildcard entry in `namedCertificates`, see the example that follows this list. (link:https://issues.redhat.com/browse/OCPBUGS-60381[OCPBUGS-60381])

-* Previously, the OADP plugin looked for the `DataUpload` object in the wrong namespace. As a consequence, the backup process was stalled indefinitely. In this release, the plugin uses the source namespace of the backup object, so this problem no longer occurs. (link:https://issues.redhat.com/browse/OCPBUGS-55469[OCPBUGS-55469])

+* Before this update, the Azure cloud provider did not set the default ping target, `HTTP:10256/healthz`, for the Azure load balancer. Instead, services of the `LoadBalancer` type that ran on Azure had a ping target of `TCP:30810`. As a consequence, the health probes for cluster-wide services were non-functional, and those services experienced downtime during upgrades. With this release, the `ClusterServiceLoadBalancerHealthProbeMode` property of the cloud configuration is set to `shared`. As a result, load balancers in Azure have the correct health check ping target, `HTTP:10256/healthz`, which points to `kube-proxy` health endpoints that run on nodes. (link:https://issues.redhat.com/browse/OCPBUGS-58031[OCPBUGS-58031])

-* Previously, the SAN of the custom certificate that the user added to the `hc.spec.configuration.apiServer.servingCerts.namedCertificates` field conflicted with the hostname that was set in the `hc.spec.services.servicePublishingStrategy` field for the Kubernetes agent server (KAS). As a consequence, the KAS certificate was not added to the set of certificates to generate a new payload, and any new nodes that attempted to join the `HostedCluster` resource had issues with certificate validation. This release adds a validation step to fail earlier and warn the user about the issue, so that the problem no longer occurs.
(link:https://issues.redhat.com/browse/OCPBUGS-53261[OCPBUGS-53261]) +* Before this update, the HyperShift Operator failed to clear the `user-ca-bundle` config map after the removal of the `additionalTrustBundle` parameter from the `HostedCluster` resource. As a consequence, the `user-ca-bundle` config map was not updated, resulting in failure to generate ignition payloads. With this release, the HyperShift Operator actively removes the `user-ca-bundle` config map from the control plane namespace when it is removed from the `HostedCluster` resource. As a result, the `user-ca-bundle` config map is now correctly cleared, enabling the generation of ignition payloads. (link:https://issues.redhat.com/browse/OCPBUGS-57336[OCPBUGS-57336]) -* Previously, when you created a hosted cluster in a shared VPC, the private link controller sometimes failed to assume the shared VPC role to manage the VPC endpoints in the shared VPC. With this release, a client is created for every reconciliation in the private link controller so that you can recover from invalid clients. As a result, the hosted cluster endpoints and the hosted cluster are created successfully. (link:https://issues.redhat.com/browse/OCPBUGS-45184[*OCPBUGS-45184*]) +* Before this update, if you tried to create a hosted cluster on AWS when the Kubernetes API server service publishing strategy was `LoadBalancer` with `PublicAndPrivate` endpoint access, a private router admitted the OAuth route even though the External DNS Operator did not register a DNS record. As a consequence, the private router did not properly resolve the route URL and the OAuth server was inaccessible. The Console Cluster Operator also failed to start, and the hosted cluster installation failed. With this release, a private router admits the OAuth route only when the external DNS is defined. Otherwise, the router admits the route in the management cluster. As a result, the OAuth route is accessible, the Console Cluster Operator properly starts, and the hosted cluster installation succeeds. (link:https://issues.redhat.com/browse/OCPBUGS-56914[OCPBUGS-56914]) -* Previously, ARM64 architecture was not allowed in the `NodePool` API on the Agent platform. As a consequence, you could not deploy heterogeneous clusters on the Agent platform. In this release, the API allows ARM64-based `NodePool` resources on the Agent platform. (link:https://issues.redhat.com/browse/OCPBUGS-46342[OCPBUGS-46342]) +* Before this release, when an IDMS or ICSP in the management OpenShift cluster defined a source that pointed to registry.redhat.io or registry.redhat.io/redhat, and the mirror registry did not contain the required OLM catalog images, provisioning for the `HostedCluster` resource stalled due to unauthorized image pulls. As a consequence, the `HostedCluster` resource was not deployed, and it remained blocked, where it could not pull essential catalog images from the mirrored registry. With this release, if a required image cannot be pulled due to authorization errors, the provisioning now explicitly fails. The logic for registry override is improved to allow matches on the root of the registry, such as registry.redhat.io, for OLM CatalogSource image resolution. A fallback mechanism is also introduced to use the original `ImageReference` if the registry override does not yield a working image. 
As a result, the `HostedCluster` resource can be deployed successfully, even in scenarios where the mirror registry lacks the required OLM catalog images, as the system correctly falls back to pulling from the original source when appropriate. (link:https://issues.redhat.com/browse/OCPBUGS-56492[OCPBUGS-56492]) -* Previously, the HyperShift Operator always validated the subject alternative names (SANs) for the Kubernetes API server. With this release, the Operator validates the SANs only if PKI reconciliation is enabled. (link:https://issues.redhat.com/browse/OCPBUGS-56562[OCPBUGS-56562]) +* Before this update, the AWS Cloud Provider did not set the default ping target, `HTTP:10256/healthz`, for the AWS load balancer. For services of the `LoadBalancer` type that run on AWS, the load balancer object created in AWS had a ping target of `TCP:32518`. As a consequence, the health probes for cluster-wide services were non-functional, and during upgrades, those services were down. With this release, the `ClusterServiceLoadBalancerHealthProbeMode` property of the cloud configuration is set to `Shared`. This cloud configuration is passed to the AWS Cloud Provider. As a result, the load balancers in AWS have the correct health check ping target, `HTTP:10256/healthz`, which points to the `kube-proxy` health endpoints that are running on nodes. (link:https://issues.redhat.com/browse/OCPBUGS-56011[OCPBUGS-56011]) -* Previously, in a hosted cluster that existed for more than 1 year, when the internal serving certificates were renewed, the control plane workloads did not restart to pick up the renewed certificates. As a consequence, the control plane became degraded. With this release, when certificates are renewed, the control plane workloads are automatically restarted. As a result, the control plane remains stable. (link:https://issues.redhat.com/browse/OCPBUGS-52331[OCPBUGS-52331]) +* Before this update, when you disabled the image registry capability by using the `--disable-cluster-capabilities` option, {hcp} still required you to configure a managed identity for the image registry. In this release, when the image registry is disabled, the image registry managed identity configuration is optional. (link:https://issues.redhat.com/browse/OCPBUGS-55892[OCPBUGS-55892]) -* Previously, when you created a validating webhook on a resource that the OpenShift OAuth API server managed, such as a user or a group, the validating webhook was not executed. This release fixes the communication between the OpenShift OAuth API server and the data plane by adding a `Konnectivity` proxy sidecar. As a result, the process to validate webhooks on users and groups works as expected. (link:https://issues.redhat.com/browse/OCPBUGS-52190[OCPBUGS-52190]) +* Before this update, the `ImageDigestMirrorSet` (IDMS) and `ImageContentSourcePolicy` (ICSP) resources from the management cluster were processed without considering that someone might specify only the root registry name as a mirror or source for image replacement. As a consequence, the IDMS and ICSP entries that used only the root registry name did not work as expected. In this release, the mirror replacement logic now correctly handles cases where only the root registry name is provided. As a result, the issue no longer occurs, and root registry mirror replacements are now supported. 
(link:https://issues.redhat.com/browse/OCPBUGS-54483[OCPBUGS-54483]) -* Previously, when the `HostedCluster` resource was not available, the reason was not propagated correctly from `HostedControlPlane` resource in the condition. The `Status` and the `Message` information was propagated for the `Available` condition in the `HostedCluster` custom resource, but the `Resource` value was not propagated. In this release, the reason is also propagated, so you have more information to identify the root cause of unavailability. (link:https://issues.redhat.com/browse/OCPBUGS-50907[OCPBUGS-50907]) +* Before this update, {hcp} did not correctly persist registry metadata and release image provider caches in the `HostedCluster` resource. As a consequence, caches for release and image metadata reset on `HostedCluster` controller reconciliation. This release introduces a common registry provider which is used by the `HostedCluster` resource to fix cache loss. This reduces the number of image pulls and network traffic, thus improving overall performance. (link:https://issues.redhat.com/browse/OCPBUGS-53259[OCPBUGS-53259]) -* Previously, the `managed-trust-bundle` volume mount and the `trusted-ca-bundle-managed` config map were introduced as mandatory components. This requirement caused deployment failures if you used your own Public Key Infrastructure (PKI), because the OpenShift API server expected the presence of the `trusted-ca-bundle-managed` config map. To address this issue, these components are now optional, so that clusters can deploy successfully without the `trusted-ca-bundle-managed` config map when you are using a custom PKI. (link:https://issues.redhat.com/browse/OCPBUGS-52323[OCPBUGS-52323]) +* Before this update, when you configured an OIDC provider for a `HostedCluster` resource with an OIDC client that did not specify a client secret, the system automatically generated a default secret name. As a consequence, you could not configure OIDC public clients, which are not supposed to use secrets. This release fixes the issue. If no client secret is provided, no default secret name is generated, enabling proper support for public clients. (link:https://issues.redhat.com/browse/OCPBUGS-58149[OCPBUGS-58149]) -* Previously, there was no way to verify that an `IBMPowerVSImage` resource was deleted, which led to unnecessary cluster retrieval attempts. As a consequence, hosted clusters on {ibm-power-server-title} were stuck in the destroy state. In this release, you can retrieve and process a cluster that is associated with an image only if the image is not in the process of being deleted. (link:https://issues.redhat.com/browse/OCPBUGS-46037[OCPBUGS-46037]) +* Before this update, multiple mirror images caused a hosted control plane payload error due to failed image lookup. As a consequence, users could not create hosted clusters. With this release, the hosted control plane payload now supports multiple mirrors, avoiding errors when a primary mirror is unavailable. As a result, users can create hosted clusters. (link:https://issues.redhat.com/browse/OCPBUGS-54720[OCPBUGS-54720]) -* Previously, when you created a cluster with secure proxy enabled and set the certificate configuration to `configuration.proxy.trustCA`, the cluster installation failed. In addition, the OpenShift OAuth API server could not use the management cluster proxy to reach cloud APIs. This release introduces fixes to prevent these issues. 
(link:https://issues.redhat.com/browse/OCPBUGS-51050[OCPBUGS-51050]) - -* Previously, both the `NodePool` controller and the cluster API controller set the `updatingConfig` status condition on the `NodePool` custom resource. As a consequence, the `updatingConfig` status was constantly changing. With this release, the logic to update the `updatingConfig` status is consolidated between the two controllers. As a result, the `updatingConfig` status is correctly set. (link:https://issues.redhat.com/browse/OCPBUGS-45322[OCPBUGS-45322]) - -* Previously, the process to validate the container image architecture did not pass through the image metadata provider. As a consequence, image overrides did not take effect, and disconnected deployments failed. In this release, the methods for the image metadata provider are modified to allow multi-architecture validations, and are propagated through all components for image validation. As a result, the issue is resolved. (link:https://issues.redhat.com/browse/OCPBUGS-44655[OCPBUGS-44655]) - -* Previously, the `--goaway-chance` flag for the Kubernetes API Server was not configurable. The default value for the flag was `0`. With this release, you can change the value for the `--goaway-chance` flag by adding an annotation to the `HostedCluster` custom resource. (link:https://issues.redhat.com/browse/OCPBUGS-54863[OCPBUGS-54863]) - -* Previously, on instances of Red{nbsp}Hat OpenShift on {ibm-cloud-title} that are based on {hcp}, in non-OVN clusters, the Cluster Network Operator could not patch service monitors and Prometheus rules in the `monitoring.coreos.com` API group. As a consequence, the Cluster Network Operator logs showed permissions errors and "could not apply" messages. With this release, permissions for service monitors and Prometheus rules are added in the Cluster Network Operator for non-OVN clusters. As a result, the Cluster Network Operator logs no longer show permissions errors. (link:https://issues.redhat.com/browse/OCPBUGS-54178[OCPBUGS-54178]) - -* Previously, if you tried to use the {hcp} command-line interface (CLI) to create a disconnected cluster, the creation failed because the CLI could not access the payload. With this release, the release payload architecture check is skipped in disconnected environments because the registry where it is hosted is not usually accessible from the machine where the CLI runs. As a result, you can now use the CLI to create a disconnected cluster. (link:https://issues.redhat.com/browse/OCPBUGS-47715[OCPBUGS-47715]) - -* Previously, when the Control Plane Operator checked API endpoint availability, the Operator did not honor the `_*PROXY` variables that were set. As a consequence, HTTP requests to validate the Kubernetes API Server failed when egress traffic was blocked, except through a proxy, and the hosted control plane and hosted cluster were not available. With this release, this issue is resolved. (link:https://issues.redhat.com/browse/OCPBUGS-49913[OCPBUGS-49913]) - -* Previously, when you used the {hcp} CLI (`hcp`) to create a hosted cluster, you could not configure the etcd storage size. As a consequence, the disk size was not sufficient for some larger clusters. With this release, you can set the etcd storage size by setting a flag in your `HostedCluster` resource. The flag was initially added to help the OpenShift performance team with testing higher `NodePool` resources on ROSA with HCP. As a result, you can now set the etcd storage size when you create a hosted cluster by using the `hcp` CLI. 
(link:https://issues.redhat.com/browse/OCPBUGS-52655[OCPBUGS-52655]) +* Before this update, when a hosted cluster was upgraded to multiple versions over time, the version history in the `HostedCluster` resource sometimes exceeded 10 entries. However, the API had a strict validation limit of 10 items maximum for the version history field. As a consequence, users could not edit or update their `HostedCluster` resources when the version history exceeded 10 entries. Operations such as adding annotations (for example, for cluster size overrides) or performing maintenance tasks like resizing request serving nodes failed with a validation error: "status.version.history: Too many: 11: must have at most 10 items". This error prevented ROSA SREs from performing critical maintenance operations that might impact customer API access. ++ +With this release, the maximum items validation constraint has been removed from the version history field in the `HostedCluster` API, allowing the history to grow beyond 10 entries without triggering validation errors. As a result, `HostedCluster` resources can now be edited and updated regardless of how many entries exist in the version history, so that administrators can perform necessary maintenance operations on clusters that have undergone multiple version upgrades. (link:https://issues.redhat.com/browse/OCPBUGS-58200[OCPBUGS-58200]) -* Previously, if you tried to update a hosted cluster that used in-place updates, the proxy variables were not honored and the update failed. With this release, the pod that performs in-place upgrades honors the cluster proxy settings. As a result, updates now work for hosted clusters that use in-place updates. (link:https://issues.redhat.com/browse/OCPBUGS-48540[OCPBUGS-48540]) +* Before this update, following a CLI refactoring, the `MarkPersistentFlagRequired` function stopped working correctly. The `--name` and `--pull-secret` flags, which are critical for cluster creation, were marked as required, but the validation was not being enforced. As a consequence, users could run the `hypershift create cluster` command without providing the required `--name` or `--pull-secret` flags, and the CLI would not immediately alert them that these required flags were missing. This could lead to misconfigured deployments and confusing error messages later in the process. ++ +This release adds an explicit validation in the `RawCreateOptions.Validate()` function to check for the presence of the `--name` and `--pull-secret` flags, returning clear error messages when either flag is missing. Additionally, the default "example" value is removed from the name field to ensure proper validation. As a result, when users attempt to create a cluster without the required `--name` or `--pull-secret` flags, they now receive immediate, clear error messages indicating which required flag is missing (for example, "Error: --name is required" or "Error: --pull-secret is required"), preventing misconfigured deployments and improving the user experience. (link:https://issues.redhat.com/browse/OCPBUGS-37323[OCPBUGS-37323]) -* Previously, the liveness and readiness probes that are associated with the OpenShift API server in {hcp} were misaligned with the probes that are used in installer-provisioned infrastructure. This release updates the liveness and readiness probes to use the `/livez` and `/readyz` endpoints instead of the `/healthz` endpoint. 
(link:https://issues.redhat.com/browse/OCPBUGS-54819[OCPBUGS-54819])

-* Previously, the Konnectivity agent on a hosted control plane did not have a readiness check. This release adds a readiness probe to the Konnectivity agent to indicate pod readiness when the connection to Konnectivity server drops. (link:https://issues.redhat.com/browse/OCPBUGS-49611[OCPBUGS-49611])

+* Before this update, a variable shadowing bug in the `GetSupportedOCPVersions()` function caused the `supportedVersions` variable to be incorrectly assigned by using `:=` instead of `=`, creating a local variable that was immediately discarded rather than updating the intended outer-scope variable. As a consequence, when users ran the `hypershift version` command with the HyperShift Operator deployed, the CLI either displayed an empty value for the Server Version or panicked with a "nil pointer dereference" error, preventing users from verifying the deployed HyperShift Operator version.
++
+This release corrects the variable assignment from `supportedVersions :=` to `supportedVersions =` in the `GetSupportedOCPVersions()` function so that the config map is assigned to the outer-scope variable and the supported versions data is correctly populated. As a result, the `hypershift version` command now correctly displays the Server Version (for example, "Server Version: f001510b35842df352d1ab55d961be3fdc2dae32") when the HyperShift Operator is deployed, so that users can verify the running Operator version and supported {product-title} versions. (link:https://issues.redhat.com/browse/OCPBUGS-57316[OCPBUGS-57316])

-* Previously, when the HyperShift Operator was scoped to a subset of hosted clusters and node pools, the Operator did not properly clean up token and user data secrets in control plane namespaces. As a consequence, secrets accumulated. With this release, the Operator properly cleans up secrets. (link:https://issues.redhat.com/browse/OCPBUGS-54272[OCPBUGS-54272])

+* Before this update, the HyperShift Operator validated the Kubernetes API Server subject alternative names (SANs) in all cases. As a consequence, users sometimes experienced invalid API Server SAN errors during public key infrastructure (PKI) reconciliation. With this release, the Kubernetes API Server SANs are validated only when PKI reconciliation is enabled. (link:https://issues.redhat.com/browse/OCPBUGS-56457[OCPBUGS-56457])

+* Before this update, the shared ingress controller did not handle the `HostedCluster.Spec.KubeAPIServerDNSName` field, so custom kube-apiserver DNS names were not added to the router configuration. As a consequence, traffic destined for the kube-apiserver on a hosted control plane that used a custom DNS name was not routed correctly, preventing the `KubeAPIExternalName` feature from working with platforms that use shared ingress.
++
+This release adds handling for `HostedCluster.Spec.KubeAPIServerDNSName` in the shared ingress controller. When a hosted cluster specifies a custom kube-apiserver DNS name, the controller now automatically creates a route that directs traffic to the kube-apiserver service. As a result, traffic destined for custom kube-apiserver DNS names is now correctly routed by the shared ingress controller, enabling the `KubeAPIExternalName` feature to work on platforms that use shared ingress. For a configuration sketch of this field, see the example that follows this list. (link:https://issues.redhat.com/browse/OCPBUGS-57790[OCPBUGS-57790])
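The following is a minimal sketch of the kind of custom certificate entry that the wildcard SAN validation in link:https://issues.redhat.com/browse/OCPBUGS-60381[OCPBUGS-60381] now checks. The field path `spec.configuration.apiServer.servingCerts.namedCertificates` is taken from the release note; the cluster name, namespace, secret name, and wildcard domain are placeholder values for illustration only.

[source,yaml]
----
apiVersion: hypershift.openshift.io/v1beta1
kind: HostedCluster
metadata:
  name: example            # placeholder hosted cluster name
  namespace: clusters      # placeholder namespace
spec:
  configuration:
    apiServer:
      servingCerts:
        namedCertificates:
        - names:
          - "*.example.com"               # wildcard DNS SAN that is now validated against internal API server SANs
          servingCertificate:
            name: wildcard-serving-cert   # placeholder secret that contains the certificate and key
----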
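The following is a minimal sketch of a custom kube-apiserver DNS name of the kind that the shared ingress fix in link:https://issues.redhat.com/browse/OCPBUGS-57790[OCPBUGS-57790] routes traffic for. The Go field `HostedCluster.Spec.KubeAPIServerDNSName` is taken from the release note; the assumption that it maps to `kubeAPIServerDNSName` in YAML, and the example names, are illustrative only.

[source,yaml]
----
apiVersion: hypershift.openshift.io/v1beta1
kind: HostedCluster
metadata:
  name: example        # placeholder hosted cluster name
  namespace: clusters  # placeholder namespace
spec:
  # Assumed YAML form of HostedCluster.Spec.KubeAPIServerDNSName; the shared
  # ingress controller now creates a route for this DNS name.
  kubeAPIServerDNSName: api.hosted.example.com
----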
-[id="known-issues-hcp-rn-4-19_{context}"]
+[id="known-issues-hcp-rn-4-20_{context}"]
=== Known issues

* If the annotation and the `ManagedCluster` resource name do not match, the {mce} console displays the cluster as `Pending import`. The cluster cannot be used by the {mce-short}. The same issue happens when there is no annotation and the `ManagedCluster` name does not match the `Infra-ID` value of the `HostedCluster` resource.
@@ -163,30 +117,15 @@ For {ibm-power-title} and {ibm-z-title}, you must run the control plane on machi
.{hcp-capital} GA and TP tracker
[cols="4,1,1,1",options="header"]
|===
-|Feature |4.17 |4.18 |4.19
+|Feature |4.18 |4.19 |4.20

|{hcp-capital} for {product-title} using non-bare-metal agent machines
|Technology Preview
|Technology Preview
|Technology Preview

-|{hcp-capital} for an ARM64 {product-title} cluster on {aws-full}
-|General Availability
-|General Availability
-|General Availability
-
-|{hcp-capital} for {product-title} on {ibm-power-title}
-|General Availability
-|General Availability
-|General Availability
-
-|{hcp-capital} for {product-title} on {ibm-z-title}
-|General Availability
-|General Availability
-|General Availability
-
|{hcp-capital} for {product-title} on {rh-openstack}
|Developer Preview
-|Developer Preview
+|Technology Preview
|Technology Preview
|===