The endpointPublishingStrategy is used to publish the Ingress Controller endpoints to other networks, enable load balancer integrations, and provide access to other systems.
On OpenStack, for more information, see the "Setting OpenStack Cloud Controller Manager options" section of the OpenStack installation documentation.
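For reference, the following minimal sketch shows where the endpointPublishingStrategy field sits in an IngressController resource. The LoadBalancerService type and External scope shown here are one possible combination, not a recommendation:
apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  name: default
  namespace: openshift-ingress-operator
spec:
  endpointPublishingStrategy:
    type: LoadBalancerService
    loadBalancer:
      scope: External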
NodePortService endpoint publishing strategy
The NodePortService endpoint publishing strategy publishes the Ingress Controller by using a Kubernetes NodePort service.
In this configuration, the Ingress Controller deployment uses container networking. A NodePortService is created to publish the deployment. The specific node ports are dynamically allocated by OKD; however, to support static port allocations, your changes to the node port field of the managed NodePortService are preserved.
The preceding graphic shows the following concepts pertaining to the OKD Ingress NodePortService endpoint publishing strategy:
All the available nodes in the cluster have their own, externally accessible IP addresses. The service running in the cluster is bound to the unique node port for all the nodes.
When a client connects to a node that is down, for example, by connecting to the 10.0.128.4 IP address in the graphic, the connection fails. As the image shows, the 10.0.128.4 address is down, so the client must connect to a different node's IP address instead. The node port directly connects the client to an available node that is running the service, so no additional load balancing is required.
The Ingress Operator ignores any updates to the .spec.ports[].nodePort fields of the service.
By default, ports are allocated automatically and you can access the port allocations for integrations. However, sometimes static port allocations are necessary to integrate with existing infrastructure which might not be easily reconfigured in response to dynamic ports. To achieve integrations with static node ports, you can update the managed service resource directly.
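As an illustrative sketch of updating the managed service directly, the following patch pins the HTTP node port to a static value. The service name router-nodeport-default and the port value 30080 are assumptions for illustration; a strategic merge patch matches the ports entry by its port value and changes only the nodePort field:
$ oc -n openshift-ingress patch service/router-nodeport-default --type=strategic --patch='{"spec":{"ports":[{"name":"http","port":80,"nodePort":30080}]}}'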
For more information, see the Kubernetes Services documentation on NodePort.
HostNetwork endpoint publishing strategy
The HostNetwork endpoint publishing strategy publishes the Ingress Controller on node ports where the Ingress Controller is deployed.
An Ingress Controller with the HostNetwork endpoint publishing strategy can have only one pod replica per node. If you want n replicas, you must use at least n nodes where those replicas can be scheduled. Because each pod replica requests ports 80 and 443 on the node host where it is scheduled, a replica cannot be scheduled to a node if another pod on the same node is using those ports.
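For example, scaling a HostNetwork Ingress Controller to three replicas succeeds only if at least three schedulable nodes have ports 80 and 443 free. A sketch of such a scale-up, assuming the Ingress Controller is named default:
$ oc -n openshift-ingress-operator patch ingresscontroller/default --type=merge --patch='{"spec":{"replicas":3}}'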
The HostNetwork object has a hostNetwork field with the following default values for the optional binding ports: httpPort: 80, httpsPort: 443, and statsPort: 1936. By specifying different binding ports for your network, you can deploy multiple Ingress Controllers on the same node for the HostNetwork strategy.
apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  name: internal
  namespace: openshift-ingress-operator
spec:
  domain: example.com
  endpointPublishingStrategy:
    type: HostNetwork
    hostNetwork:
      httpPort: 80
      httpsPort: 443
      statsPort: 1936
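Building on the preceding example, the following sketch shows a second Ingress Controller that can run on the same nodes because it binds different ports. The name, domain, and port values are illustrative assumptions:
apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  name: internal-alt
  namespace: openshift-ingress-operator
spec:
  domain: alt.example.com
  endpointPublishingStrategy:
    type: HostNetwork
    hostNetwork:
      httpPort: 8080
      httpsPort: 8443
      statsPort: 8936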
When a cluster administrator installs a new cluster without specifying that the cluster is private, the default Ingress Controller is created with a scope set to External. Cluster administrators can change an External-scoped Ingress Controller to Internal.
You installed the oc CLI.
To change an External-scoped Ingress Controller to Internal, enter the following command:
$ oc -n openshift-ingress-operator patch ingresscontrollers/default --type=merge --patch='{"spec":{"endpointPublishingStrategy":{"type":"LoadBalancerService","loadBalancer":{"scope":"Internal"}}}}'
To check the status of the Ingress Controller, enter the following command:
$ oc -n openshift-ingress-operator get ingresscontrollers/default -o yaml
The Progressing status condition indicates whether you must take further action. For example, the status condition can indicate that you need to delete the service by entering the following command:
$ oc -n openshift-ingress delete services/router-default
If you delete the service, the Ingress Operator recreates it as Internal.
When a cluster administrator installs a new cluster without specifying that the cluster is private, the default Ingress Controller is created with a scope set to External.
The Ingress Controller's scope can be configured to be Internal during or after installation, and cluster administrators can change an Internal Ingress Controller to External.
Changing the scope can cause disruption to Ingress traffic, potentially for several minutes. This applies to platforms where it is necessary to delete and recreate the service, because the procedure can cause OKD to deprovision the existing service load balancer, provision a new one, and update DNS.
You installed the oc CLI.
To change an Internal-scoped Ingress Controller to External, enter the following command:
$ oc -n openshift-ingress-operator patch ingresscontrollers/default --type=merge --patch='{"spec":{"endpointPublishingStrategy":{"type":"LoadBalancerService","loadBalancer":{"scope":"External"}}}}'
To check the status of the Ingress Controller, enter the following command:
$ oc -n openshift-ingress-operator get ingresscontrollers/default -o yaml
The Progressing status condition indicates whether you must take further action. For example, the status condition can indicate that you need to delete the service by entering the following command:
$ oc -n openshift-ingress delete services/router-default
If you delete the service, the Ingress Operator recreates it as External.
Instead of creating a NodePort-type Service for each project, you can create a custom Ingress Controller to use the NodePortService endpoint publishing strategy. To prevent port conflicts, consider this configuration for your Ingress Controller when you want to apply a set of routes, through Ingress sharding, to nodes that might already have a HostNetwork Ingress Controller.
Before you set a NodePort-type Service for each project, read the following considerations:
You must create a wildcard DNS record for the NodePort Ingress Controller domain; an example record is shown after this list. A NodePort Ingress Controller route can be reached from the address of a worker node. For more information about the required DNS records for routes, see "User-provisioned DNS requirements".
You must expose a route for your service and specify the --hostname argument for your custom Ingress Controller domain.
You must append the port that is assigned to the NodePort-type Service in the route so that you can access application pods.
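A sketch of such a wildcard record, using the example domain from later in this procedure and the documentation address 192.0.2.10 as a stand-in for a worker node IP address:
*.nodeportsvc.ipi-cluster.example.com. 300 IN A 192.0.2.10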
You installed the OpenShift CLI (oc).
You are logged in as a user with cluster-admin privileges.
You created a wildcard DNS record.
Create a custom resource (CR) file for the Ingress Controller:
IngressController object
apiVersion: v1
items:
- apiVersion: operator.openshift.io/v1
  kind: IngressController
  metadata:
    name: <custom_ic_name> (1)
    namespace: openshift-ingress-operator
  spec:
    replicas: 1
    domain: <custom_ic_domain_name> (2)
    nodePlacement:
      nodeSelector:
        matchLabels:
          <key>: <value> (3)
    namespaceSelector:
      matchLabels:
        <key>: <value> (4)
    endpointPublishingStrategy:
      type: NodePortService
# ...
1 | Specify a custom name for the IngressController CR.
2 | The DNS name serviced by the Ingress Controller. As an example, the default Ingress Controller domain is apps.ipi-cluster.example.com, so you would specify the <custom_ic_domain_name> as nodeportsvc.ipi-cluster.example.com.
3 | Specify the label for the nodes that include the custom Ingress Controller.
4 | Specify the label for a set of namespaces. Substitute <key>:<value> with a map of key-value pairs where <key> is a unique name for the new label and <value> is its value. For example: ingresscontroller: custom-ic.
Add a label to a node by using the oc label node command:
$ oc label node <node_name> <key>=<value> (1)
1 | Where <key>=<value> must match the key-value pair specified in the nodePlacement section of your IngressController CR.
Create the IngressController object:
$ oc create -f <ingress_controller_cr>.yaml
Find the port for the service created for the IngressController CR:
$ oc get svc -n openshift-ingress
Example output showing port 80:32432/TCP for the router-nodeport-custom-ic3 service
NAME                         TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                                     AGE
router-internal-default      ClusterIP   172.30.195.74    <none>        80/TCP,443/TCP,1936/TCP                     223d
router-nodeport-custom-ic3   NodePort    172.30.109.219   <none>        80:32432/TCP,443:31366/TCP,1936:30499/TCP   155m
To create a new project, enter the following command:
$ oc new-project <project_name>
To label the new namespace, enter the following command:
$ oc label namespace <project_name> <key>=<value> (1)
1 | Where <key>=<value> must match the value in the namespaceSelector section of your Ingress Controller CR.
Create a new application in your cluster:
$ oc new-app --image=<image_name> (1)
1 | An example of <image_name> is quay.io/openshifttest/hello-openshift:multiarch.
Create a Route object for a service, so that the pod can use the service to expose the application outside the cluster:
$ oc expose svc/<service_name> --hostname=<svc_name>-<project_name>.<custom_ic_domain_name> (1)
1 | You must specify the domain name of your custom Ingress Controller in the --hostname argument.
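For example, with the hello-openshift service and the example custom domain used elsewhere in this procedure, the command might look like the following, matching the route host shown in the verification output below:
$ oc expose svc/hello-openshift --hostname=hello-openshift.nodeportsvc.ipi-cluster.example.com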
Check that the route has the Admitted status and that it includes metadata for the custom Ingress Controller:
$ oc get route/hello-openshift -o json | jq '.status.ingress'
[
  {
    "conditions": [
      {
        "lastTransitionTime": "2024-05-17T18:25:41Z",
        "status": "True",
        "type": "Admitted"
      }
    ],
    "host": "hello-openshift.nodeportsvc.ipi-cluster.example.com",
    "routerCanonicalHostname": "router-nodeportsvc.nodeportsvc.ipi-cluster.example.com",
    "routerName": "nodeportsvc",
    "wildcardPolicy": "None"
  }
]
Update the default IngressController CR to prevent the default Ingress Controller from managing the NodePort-type Service. The default Ingress Controller will continue to monitor all other cluster traffic.
$ oc patch --type=merge -n openshift-ingress-operator ingresscontroller/default --patch '{"spec":{"namespaceSelector":{"matchExpressions":[{"key":"<key>","operator":"NotIn","values":["<value>"]}]}}}'
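With the example label ingresscontroller: custom-ic from the earlier callout, the filled-in command would be:
$ oc patch --type=merge -n openshift-ingress-operator ingresscontroller/default --patch '{"spec":{"namespaceSelector":{"matchExpressions":[{"key":"ingresscontroller","operator":"NotIn","values":["custom-ic"]}]}}}'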
Verify that the DNS entry can route inside and outside of your cluster by entering the following command. The command outputs the IP address of the node that you labeled earlier in the procedure by running the oc label node command.
$ dig +short <svc_name>-<project_name>.<custom_ic_domain_name>
To verify that your cluster uses the IP addresses from external DNS servers for DNS resolution, check the connection of your cluster by entering the following command:
$ curl <svc_name>-<project_name>.<custom_ic_domain_name>:<port> (1)
1 | Where <port> is the node port from the NodePort-type Service. Based on the example output from the oc get svc -n openshift-ingress command, the 80:32432/TCP HTTP route means that 32432 is the node port.
Hello OpenShift!