Configuring a private cluster

After you install an OKD version 4.13 cluster, you can set some of its core components to be private.

About private clusters

By default, OKD is provisioned using publicly accessible DNS and endpoints. You can set the DNS, Ingress Controller, and API server to private after you deploy your cluster.

If the cluster has any public subnets, load balancer services created by administrators might be publicly accessible. To ensure cluster security, verify that these services are explicitly annotated as private.
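
For example, on AWS you can request an internal load balancer for a Service with the AWS-specific internal annotation. The Service name, selector, and ports in this sketch are hypothetical:

  apiVersion: v1
  kind: Service
  metadata:
    name: example-internal-service
    annotations:
      # Request an internal (non-internet-facing) load balancer on AWS
      service.beta.kubernetes.io/aws-load-balancer-internal: "true"
  spec:
    type: LoadBalancer
    selector:
      app: example
    ports:
    - port: 443
      targetPort: 8443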

DNS

If you install OKD on installer-provisioned infrastructure, the installation program creates records in a pre-existing public zone and, where possible, creates a private zone for the cluster’s own DNS resolution. In both the public zone and the private zone, the installation program or cluster creates DNS entries for *.apps (for the Ingress object) and api (for the API server).

The *.apps records in the public and private zones are identical, so when you delete the public zone, the private zone seamlessly provides all DNS resolution for the cluster.
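
One way to inspect the records that the cluster manages is to list the DNSRecord objects that the Ingress Operator maintains; the status of each object reports the zones it was published to:

  $ oc get dnsrecords -n openshift-ingress-operator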

Ingress Controller

Because the default Ingress object is created as public, the load balancer is internet-facing and in the public subnets.

The Ingress Operator generates a default certificate for an Ingress Controller to serve as a placeholder until you configure a custom default certificate. Do not use Operator-generated default certificates in production clusters; the Ingress Operator does not rotate its own signing certificate or the default certificates that it generates.
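
For example, you can configure a custom default certificate by creating a TLS secret in the openshift-ingress namespace and referencing it from the default Ingress Controller. The secret name and certificate file paths in this sketch are placeholders:

  $ oc create secret tls custom-default-cert \
      --cert=tls.crt --key=tls.key \
      -n openshift-ingress

  $ oc patch ingresscontroller/default -n openshift-ingress-operator \
      --type=merge \
      --patch='{"spec":{"defaultCertificate":{"name":"custom-default-cert"}}}'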

API server

By default, the installation program creates appropriate network load balancers for the API server to use for both internal and external traffic.

On Amazon Web Services (AWS), separate public and private load balancers are created. The load balancers are identical except that an additional port is available on the internal one for use within the cluster. Although the installation program automatically creates or destroys the load balancers based on API server requirements, the cluster does not manage or maintain them. As long as you preserve the cluster’s access to the API server, you can manually modify or move the load balancers. For the public load balancer, port 6443 is open and the health check is configured for HTTPS against the /readyz path.
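
For example, you can query the same health check endpoint that the load balancer uses; the cluster domain is a placeholder, and -k skips TLS verification for brevity:

  $ curl -k https://api.<cluster_name>.<base_domain>:6443/readyz
  Example output
  ok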

On Google Cloud Platform, a single load balancer is created to manage both internal and external API traffic, so you do not need to modify the load balancer.

On Microsoft Azure, both public and private load balancers are created. However, because of limitations in the current implementation, you retain both load balancers in a private cluster.

Configuring DNS records to be published in a private zone

For all OKD clusters, whether public or private, DNS records are published in a public zone by default.

You can remove the public zone from the cluster DNS configuration to avoid exposing DNS records to the public. You might want to avoid exposing sensitive information, such as internal domain names, internal IP addresses, or the number of clusters at an organization, or you might simply have no need to publish records publicly. If all the clients that should be able to connect to services within the cluster use a private DNS service that has the DNS records from the private zone, then there is no need to have a public DNS record for the cluster.

After you deploy a cluster, you can modify its DNS to use only a private zone by modifying the DNS custom resource (CR). Modifying the DNS CR in this way means that any DNS records that are subsequently created are not published to public DNS servers, which keeps knowledge of the DNS records isolated to internal users. This can be done when you configure the cluster to be private, or if you never want DNS records to be publicly resolvable.

Alternatively, even in a private cluster, you might keep the public zone for DNS records because it allows clients to resolve DNS names for applications running on that cluster. For example, an organization can have machines that connect to the public internet and then establish VPN connections for certain private IP ranges in order to connect to private IP addresses. The DNS lookups from these machines use the public DNS to determine the private addresses of those services, and then connect to the private addresses over the VPN.
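
For example, a machine with such a VPN connection might resolve an application hostname through public DNS and receive a private address; the hostname and address here are hypothetical:

  $ dig +short myapp.apps.<cluster_name>.<base_domain>
  Example output
  10.0.142.37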

Procedure
  1. Review the DNS CR for your cluster by running the following command and observing the output:

    $ oc get dnses.config.openshift.io/cluster -o yaml
    Example output
    apiVersion: config.openshift.io/v1
    kind: DNS
    metadata:
      creationTimestamp: "2019-10-25T18:27:09Z"
      generation: 2
      name: cluster
      resourceVersion: "37966"
      selfLink: /apis/config.openshift.io/v1/dnses/cluster
      uid: 0e714746-f755-11f9-9cb1-02ff55d8f976
    spec:
      baseDomain: <base_domain>
      privateZone:
        tags:
          Name: <infrastructure_id>-int
          kubernetes.io/cluster/<infrastructure_id>: owned
      publicZone:
        id: Z2XXXXXXXXXXA4
    status: {}

    Note that the spec section contains both a private and a public zone.

  2. Patch the DNS CR to remove the public zone by running the following command:

    $ oc patch dnses.config.openshift.io/cluster --type=merge --patch='{"spec": {"publicZone": null}}'
    Example output
    dns.config.openshift.io/cluster patched

    The Ingress Operator consults the DNS CR definition when it creates DNS records for IngressController objects. If only private zones are specified, only private records are created.

    Existing DNS records are not modified when you remove the public zone. You must manually delete previously published public DNS records if you no longer want them to be published publicly.
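
    For example, on AWS you might delete a previously published wildcard record with the Route 53 CLI. This is a sketch: the hosted zone ID and record values are placeholders, the record might be an alias A record rather than a CNAME on your cluster, and a DELETE action must exactly match the existing record:

    $ aws route53 change-resource-record-sets \
        --hosted-zone-id Z2XXXXXXXXXXA4 \
        --change-batch '{"Changes":[{"Action":"DELETE","ResourceRecordSet":{"Name":"*.apps.<cluster_name>.<base_domain>.","Type":"CNAME","TTL":30,"ResourceRecords":[{"Value":"<load_balancer_dns_name>"}]}}]}'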

Verification
  • Review the DNS CR for your cluster and confirm that the public zone was removed, by running the following command and observing the output:

    $ oc get dnses.config.openshift.io/cluster -o yaml
    Example output
    apiVersion: config.openshift.io/v1
    kind: DNS
    metadata:
      creationTimestamp: "2019-10-25T18:27:09Z"
      generation: 2
      name: cluster
      resourceVersion: "37966"
      selfLink: /apis/config.openshift.io/v1/dnses/cluster
      uid: 0e714746-f755-11f9-9cb1-02ff55d8f976
    spec:
      baseDomain: <base_domain>
      privateZone:
        tags:
          Name: <infrastructure_id>-int
          kubernetes.io/cluster/<infrastructure_id>: owned
    status: {}

Setting the Ingress Controller to private

After you deploy a cluster, you can modify its Ingress Controller to use only a private zone.

Procedure
  1. Modify the default Ingress Controller to use only an internal endpoint:

    $ oc replace --force --wait --filename - <<EOF
    apiVersion: operator.openshift.io/v1
    kind: IngressController
    metadata:
      namespace: openshift-ingress-operator
      name: default
    spec:
      endpointPublishingStrategy:
        type: LoadBalancerService
        loadBalancer:
          scope: Internal
    EOF
    Example output
    ingresscontroller.operator.openshift.io "default" deleted
    ingresscontroller.operator.openshift.io/default replaced

    The public DNS entry is removed, and the private zone entry is updated.
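
    To confirm the change, you can check the scope that the Ingress Controller reports in its status:

    $ oc get ingresscontroller/default -n openshift-ingress-operator \
        -o jsonpath='{.status.endpointPublishingStrategy.loadBalancer.scope}'
    Example output
    Internal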

Restricting the API server to private

After you deploy a cluster to Amazon Web Services (AWS) or Microsoft Azure, you can reconfigure the API server to use only the private zone.

Prerequisites
  • Install the OpenShift CLI (oc).

  • Have access to the web console as a user with admin privileges.

Procedure
  1. In the web portal or console for your cloud provider, take the following actions:

    1. Locate and delete the appropriate load balancer component:

      • For AWS, delete the external load balancer. The API DNS entry in the private zone already points to the internal load balancer, which uses an identical configuration, so you do not need to modify the internal load balancer.

      • For Azure, delete the api-internal rule for the load balancer.

    2. Delete the api.$clustername.$yourdomain DNS entry in the public zone.
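
    For AWS, for example, you might locate and delete the external API network load balancer with the CLI. This is a sketch: confirm that the ARN identifies your cluster's external API load balancer before deleting it:

    $ aws elbv2 describe-load-balancers \
        --query 'LoadBalancers[?Scheme==`internet-facing`].[LoadBalancerName,LoadBalancerArn]' \
        --output text

    $ aws elbv2 delete-load-balancer --load-balancer-arn <load_balancer_arn>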

  2. Remove the external load balancers:

    You can run the following steps only for an installer-provisioned infrastructure (IPI) cluster. For a user-provisioned infrastructure (UPI) cluster, you must manually remove or disable the external load balancers.

    • If your cluster uses a control plane machine set, delete the following lines in the control plane machine set custom resource:

      providerSpec:
        value:
          loadBalancers:
          - name: lk4pj-ext (1)
            type: network (1)
          - name: lk4pj-int
            type: network
      1 Delete this line.
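
      You can open the resource for editing by running the following command; the control plane machine set is a singleton named cluster:

      $ oc edit controlplanemachineset.machine.openshift.io cluster -n openshift-machine-api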
    • If your cluster does not use a control plane machine set, you must delete the external load balancers from each control plane machine.

      1. From your terminal, list the cluster machines by running the following command:

        $ oc get machine -n openshift-machine-api
        Example output
        NAME                            STATE     TYPE        REGION      ZONE         AGE
        lk4pj-master-0                  running   m4.xlarge   us-east-1   us-east-1a   17m
        lk4pj-master-1                  running   m4.xlarge   us-east-1   us-east-1b   17m
        lk4pj-master-2                  running   m4.xlarge   us-east-1   us-east-1a   17m
        lk4pj-worker-us-east-1a-5fzfj   running   m4.xlarge   us-east-1   us-east-1a   15m
        lk4pj-worker-us-east-1a-vbghs   running   m4.xlarge   us-east-1   us-east-1a   15m
        lk4pj-worker-us-east-1b-zgpzg   running   m4.xlarge   us-east-1   us-east-1b   15m

        The control plane machines contain master in the name.

      2. Remove the external load balancer from each control plane machine:

        1. Edit a control plane machine object by running the following command:

          $ oc edit machines -n openshift-machine-api <control_plane_name> (1)
          1 Specify the name of the control plane machine object to modify.
        2. Remove the lines that describe the external load balancer, which are marked in the following example:

          providerSpec:
            value:
              loadBalancers:
              - name: lk4pj-ext (1)
                type: network (1)
              - name: lk4pj-int
                type: network
          1 Delete this line.
        3. Save your changes and exit the object specification.

        4. Repeat this process for each of the control plane machines.
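
Verification
  • Confirm that only the internal load balancer remains on each control plane machine, by running the following command and observing the output. This check is a sketch that assumes an AWS provider spec in which load balancers are listed under providerSpec.value.loadBalancers; the machine and load balancer names reuse the earlier example:

    $ oc get machines -n openshift-machine-api \
        -l machine.openshift.io/cluster-api-machine-role=master \
        -o jsonpath='{range .items[*]}{.metadata.name}{": "}{.spec.providerSpec.value.loadBalancers[*].name}{"\n"}{end}'
    Example output
    lk4pj-master-0: lk4pj-int
    lk4pj-master-1: lk4pj-int
    lk4pj-master-2: lk4pj-int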