The Ingress Operator implements the ingresscontroller API and is the component responsible for enabling external access to OpenShift Container Platform cluster services. The Operator makes this possible by deploying and managing one or more HAProxy-based Ingress Controllers to handle routing. You can use the Ingress Operator to route traffic by specifying OpenShift Container Platform Route and Kubernetes Ingress resources.
The installation program generates an asset with an Ingress resource in the config.openshift.io API group, cluster-ingress-02-config.yml.

Ingress resource

apiVersion: config.openshift.io/v1
kind: Ingress
metadata:
  name: cluster
spec:
  domain: apps.openshiftdemos.com
The installation program stores this asset in the cluster-ingress-02-config.yml
file in the manifests/
directory. This Ingress
resource defines the cluster-wide configuration for Ingress. This Ingress configuration is used as follows:
The Ingress Operator uses the domain from the cluster Ingress configuration as the domain for the default Ingress Controller.
The OpenShift API Server Operator uses the domain from the cluster Ingress configuration. This domain is also used when generating a default host for a Route
resource that does not specify an explicit host.
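For example, a Route that omits spec.host, such as the following hypothetical sketch, receives a default host of the form <route-name>-<namespace>.<ingress-domain>:

apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: hello        # hypothetical route name
  namespace: demo    # hypothetical namespace
spec:
  to:
    kind: Service
    name: hello      # no spec.host is set, so a default host is generated

With the cluster Ingress domain apps.openshiftdemos.com shown above, the generated host would be hello-demo.apps.openshiftdemos.com.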
The ingresscontrollers.operator.openshift.io
resource offers the following
configuration parameters.
Parameter | Description
---|---
domain | domain is a DNS name serviced by the Ingress Controller and is used to configure multiple features, such as the default host for generated routes. The value cannot be updated after the Ingress Controller is created. If empty, the default value is ingress.config.openshift.io/cluster .spec.domain.
replicas | replicas is the desired number of Ingress Controller replicas. If not set, the default value is 2.
endpointPublishingStrategy | endpointPublishingStrategy is used to publish the Ingress Controller endpoints to other networks, enable load balancer integrations, and provide access to other systems. If not set, the default value is based on infrastructure.config.openshift.io/cluster .status.platform. The endpointPublishingStrategy value cannot be updated after the Ingress Controller is created.
defaultCertificate | The defaultCertificate value is a reference to a secret that contains the default certificate that is served by the Ingress Controller. The secret must contain the following keys and data: tls.crt, the certificate file contents, and tls.key, the key file contents. If not set, a wildcard certificate is automatically generated and used. The certificate is valid for the Ingress Controller domain and subdomains. The in-use certificate, whether generated or user-specified, is automatically integrated with the OpenShift Container Platform built-in OAuth server.
namespaceSelector | namespaceSelector is used to filter the set of namespaces serviced by the Ingress Controller, which is useful for implementing shards.
routeSelector | routeSelector is used to filter the set of routes serviced by the Ingress Controller, which is useful for implementing shards.
nodePlacement | nodePlacement enables explicit control over the scheduling of the Ingress Controller. If not set, the default values are used.
tlsSecurityProfile | tlsSecurityProfile specifies settings for TLS connections for Ingress Controllers. If not set, the default value is based on the apiservers.config.openshift.io/cluster resource. When using the Old, Intermediate, and Modern profile types, the effective profile configuration is subject to change between releases. The minimum TLS version for Ingress Controllers is 1.1.
routeAdmission | routeAdmission defines a policy for handling new route claims, such as allowing or denying claims across namespaces.
logging | logging defines parameters for what is logged and where. If this field is empty, operational logs are enabled but access logs are disabled.

All parameters are optional.
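As a minimal sketch, the following IngressController combines several of these optional parameters; the controller name, shard domain, and label value are hypothetical:

apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  name: example                             # hypothetical controller name
  namespace: openshift-ingress-operator
spec:
  domain: example-apps.openshiftdemos.com   # hypothetical shard domain
  replicas: 2
  nodePlacement:
    nodeSelector:
      matchLabels:
        node-role.kubernetes.io/worker: ""
  routeSelector:
    matchLabels:
      type: example-shard                   # hypothetical route label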
The tlsSecurityProfile
parameter defines the schema for a TLS security profile. This object is used by operators to apply TLS security settings to operands.
There are four TLS security profile types:
Old
Intermediate
Modern
Custom
The Old, Intermediate, and Modern profiles are based on recommended configurations. The Custom profile provides the ability to specify individual TLS security profile parameters.
Old profile configuration

spec:
  tlsSecurityProfile:
    type: Old

Intermediate profile configuration

spec:
  tlsSecurityProfile:
    type: Intermediate

Modern profile configuration

spec:
  tlsSecurityProfile:
    type: Modern
The Custom profile is a user-defined TLS security profile.

You must be careful using a Custom profile, because invalid configurations can cause problems.

Custom profile

spec:
  tlsSecurityProfile:
    type: Custom
    custom:
      ciphers:
      - ECDHE-ECDSA-AES128-GCM-SHA256
      - ECDHE-RSA-AES128-GCM-SHA256
      minTLSVersion: VersionTLS11
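As a sketch, the same Custom profile can be applied to the default Ingress Controller with oc patch; the cipher list is illustrative, not a recommendation:

$ oc -n openshift-ingress-operator patch ingresscontroller/default --type=merge \
  --patch '{"spec":{"tlsSecurityProfile":{"type":"Custom","custom":{"ciphers":["ECDHE-ECDSA-AES128-GCM-SHA256","ECDHE-RSA-AES128-GCM-SHA256"],"minTLSVersion":"VersionTLS11"}}}}'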
NodePortService endpoint publishing strategy
The NodePortService
endpoint publishing strategy publishes the Ingress Controller using a Kubernetes NodePort service.
In this configuration, the Ingress Controller deployment uses container networking. A NodePortService
is created to publish the deployment. The specific node ports are dynamically allocated by OpenShift Container Platform; however, to support static port allocations, your changes to the node port field of the managed NodePortService
are preserved.
The Ingress Operator ignores any updates to the .spec.ports[].nodePort fields of the service.

By default, ports are allocated automatically and you can access the port allocations for integrations. However, sometimes static port allocations are necessary to integrate with existing infrastructure which may not be easily reconfigured in response to dynamic ports. To achieve integrations with static node ports, you can update the managed service resource directly.
For more information, see the Kubernetes Services documentation on NodePort.
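As a sketch of a static allocation, assuming the operator names the managed service router-nodeport-default and that port 30080 is free within the cluster's node port range, you could pin the HTTP node port by patching the service directly (a strategic merge patch updates only the matching port entry):

$ oc -n openshift-ingress patch service/router-nodeport-default \
  --patch '{"spec":{"ports":[{"port":80,"nodePort":30080}]}}'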
HostNetwork endpoint publishing strategy
The HostNetwork
endpoint publishing strategy publishes the Ingress Controller on node ports where the Ingress Controller is deployed.
An Ingress controller with the HostNetwork
endpoint publishing strategy can have only one pod replica per node. If you want n replicas, you must use at least n nodes where those replicas can be scheduled. Because each pod replica requests ports 80
and 443
on the node host where it is scheduled, a replica cannot be scheduled to a node if another pod on the same node is using those ports.
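A minimal sketch of an Ingress Controller that uses this strategy; the controller name and domain are hypothetical:

apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  name: hostnetwork-example          # hypothetical controller name
  namespace: openshift-ingress-operator
spec:
  domain: hn-apps.openshiftdemos.com # hypothetical domain
  replicas: 2
  endpointPublishingStrategy:
    type: HostNetwork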
The Ingress Operator is a core feature of OpenShift Container Platform and is enabled out of the box.
Every new OpenShift Container Platform installation has an ingresscontroller
named default. It
can be supplemented with additional Ingress Controllers. If the default
ingresscontroller
is deleted, the Ingress Operator will automatically recreate it
within a minute.
View the default Ingress Controller:
$ oc describe --namespace=openshift-ingress-operator ingresscontroller/default
You can view and inspect the status of your Ingress Operator.
View your Ingress Operator status:
$ oc describe clusteroperators/ingress
You can view your Ingress Controller logs.
View your Ingress Controller logs:
$ oc logs --namespace=openshift-ingress-operator deployments/ingress-operator
You can view the status of a particular Ingress Controller.
View the status of an Ingress Controller:
$ oc describe --namespace=openshift-ingress-operator ingresscontroller/<name>
As an administrator, you can configure an Ingress Controller to use a custom
certificate by creating a secret resource and editing the IngressController
custom resource (CR).
You must have a certificate/key pair in PEM-encoded files, where the certificate is signed by a trusted certificate authority or by a private trusted certificate authority that you configured in a custom PKI.
Your certificate must meet the following requirements:
The certificate is valid for the ingress domain.
The certificate uses the subjectAltName extension to specify a wildcard domain, such as *.apps.ocp4.example.com.
You must have an IngressController
CR. You may use the default one:
$ oc --namespace openshift-ingress-operator get ingresscontrollers
NAME AGE
default 10m
If you have intermediate certificates, they must be included in the tls.crt file of the secret containing a custom default certificate. Order matters when specifying a certificate; list your intermediate certificate(s) after any server certificate(s).
The following assumes that the custom certificate and key pair are in the
tls.crt
and tls.key
files in the current working directory. Substitute the
actual path names for tls.crt
and tls.key
. You also may substitute another
name for custom-certs-default
when creating the secret resource and
referencing it in the IngressController CR.
This action will cause the Ingress Controller to be redeployed, using a rolling deployment strategy.
Create a secret resource containing the custom certificate in the
openshift-ingress
namespace using the tls.crt
and tls.key
files.
$ oc --namespace openshift-ingress create secret tls custom-certs-default --cert=tls.crt --key=tls.key
Update the IngressController CR to reference the new certificate secret:
$ oc patch --type=merge --namespace openshift-ingress-operator ingresscontrollers/default \
--patch '{"spec":{"defaultCertificate":{"name":"custom-certs-default"}}}'
Verify the update was effective:
$ oc get --namespace openshift-ingress-operator ingresscontrollers/default \
--output jsonpath='{.spec.defaultCertificate}'
map[name:custom-certs-default]
The certificate secret name should match the value used to update the CR.
Once the IngressController CR has been modified, the Ingress Operator updates the Ingress Controller’s deployment to use the custom certificate.
Manually scale an Ingress Controller to meet routing performance or
availability requirements, such as the requirement to increase throughput. oc
commands are used to scale the IngressController
resource. The following
procedure provides an example for scaling up the default IngressController
.
View the current number of available replicas for the default IngressController
:
$ oc get -n openshift-ingress-operator ingresscontrollers/default -o jsonpath='{$.status.availableReplicas}'
2
Scale the default IngressController
to the desired number of replicas using
the oc patch
command. The following example scales the default IngressController
to 3 replicas:
$ oc patch -n openshift-ingress-operator ingresscontroller/default --patch '{"spec":{"replicas": 3}}' --type=merge
ingresscontroller.operator.openshift.io/default patched
Verify that the default IngressController
scaled to the number of replicas
that you specified:
$ oc get -n openshift-ingress-operator ingresscontrollers/default -o jsonpath='{$.status.availableReplicas}'
3
Scaling is not an immediate action, as it takes time to create the desired number of replicas.
You can configure the Ingress Controller to enable access logs. If you have clusters that do not receive much traffic, then you can log to a sidecar. If you have high traffic clusters, to avoid exceeding the capacity of the logging stack or to integrate with a logging infrastructure outside of OpenShift Container Platform, you can forward logs to a custom syslog endpoint. You can also specify the format for access logs.
Container logging is useful to enable access logs on low-traffic clusters when there is no existing Syslog logging infrastructure, or for short-term use while diagnosing problems with the Ingress Controller.
Syslog is needed for high-traffic clusters where access logs could exceed the cluster logging stack’s capacity, or for environments where any logging solution needs to integrate with an existing Syslog logging infrastructure. The Syslog use-cases can overlap.
Log in as a user with cluster-admin
privileges.
Configure Ingress access logging to a sidecar.
To configure Ingress access logging, you must specify a destination using spec.logging.access.destination
. To specify logging to a sidecar container, you must specify Container for
spec.logging.access.destination.type
. The following example is an Ingress Controller definition that logs to a Container
destination:
apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  name: default
  namespace: openshift-ingress-operator
spec:
  replicas: 2
  endpointPublishingStrategy:
    type: NodePortService (1)
  logging:
    access:
      destination:
        type: Container
(1) NodePortService is not required to configure Ingress access logging to a sidecar. Ingress logging is compatible with any endpointPublishingStrategy.
When you configure the Ingress Controller to log to a sidecar, the operator creates a container named logs
inside the Ingress Controller Pod:
$ oc -n openshift-ingress logs deployment.apps/router-default -c logs
2020-05-11T19:11:50.135710+00:00 router-default-57dfc6cd95-bpmk6 router-default-57dfc6cd95-bpmk6 haproxy[108]: 174.19.21.82:39654 [11/May/2020:19:11:50.133] public be_http:hello-openshift:hello-openshift/pod:hello-openshift:hello-openshift:10.128.2.12:8080 0/0/1/0/1 200 142 - - --NI 1/1/0/0/0 0/0 "GET / HTTP/1.1"
Configure Ingress access logging to a Syslog endpoint.
To configure Ingress access logging, you must specify a destination using spec.logging.access.destination
. To specify logging to a Syslog endpoint destination, you must specify Syslog
for spec.logging.access.destination.type
. If the destination type is Syslog
, you must also specify a destination endpoint using spec.logging.access.destination.syslog.endpoint
and you can specify a facility using spec.logging.access.destination.syslog.facility
. The following example is an Ingress Controller definition that logs to a Syslog
destination:
apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  name: default
  namespace: openshift-ingress-operator
spec:
  replicas: 2
  endpointPublishingStrategy:
    type: NodePortService
  logging:
    access:
      destination:
        type: Syslog
        syslog:
          address: 1.2.3.4
          port: 10514
Configure Ingress access logging with a specific log format.
You can specify spec.logging.access.httpLogFormat
to customize the log format. The following example is an Ingress Controller definition that logs to a syslog
endpoint with IP address 1.2.3.4 and port 10514:
apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  name: default
  namespace: openshift-ingress-operator
spec:
  replicas: 2
  endpointPublishingStrategy:
    type: NodePortService
  logging:
    access:
      destination:
        type: Syslog
        syslog:
          address: 1.2.3.4
          port: 10514
      httpLogFormat: '%ci:%cp [%t] %ft %b/%s %B %bq %HM %HU %HV'
Disable Ingress access logging.
To disable Ingress access logging, leave spec.logging
or spec.logging.access
empty:
apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  name: default
  namespace: openshift-ingress-operator
spec:
  replicas: 2
  endpointPublishingStrategy:
    type: NodePortService
  logging:
    access: null
As the primary mechanism for traffic to enter the cluster, the demands on the Ingress Controller, or router, can be significant. As a cluster administrator, you can shard the routes to:
Balance Ingress Controllers, or routers, with several routes to speed up responses to changes.
Allocate certain routes to have different reliability guarantees than other routes.
Allow certain Ingress Controllers to have different policies defined.
Allow only specific routes to use additional features.
Expose different routes on different addresses so that, for example, internal and external users can see different routes.
An Ingress Controller can use either route labels or namespace labels as a sharding method.
Ingress Controller sharding by using route labels means that the Ingress Controller serves any route in any namespace that is selected by the route selector.
Ingress Controller sharding is useful when balancing incoming traffic load among a set of Ingress Controllers and when isolating traffic to a specific Ingress Controller. For example, company A goes to one Ingress Controller and company B to another.
Edit the router-internal.yaml
file:
# cat router-internal.yaml
apiVersion: v1
items:
- apiVersion: operator.openshift.io/v1
  kind: IngressController
  metadata:
    name: sharded
    namespace: openshift-ingress-operator
  spec:
    domain: <apps-sharded.basedomain.example.net>
    nodePlacement:
      nodeSelector:
        matchLabels:
          node-role.kubernetes.io/worker: ""
    routeSelector:
      matchLabels:
        type: sharded
  status: {}
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""
Apply the Ingress Controller router-internal.yaml
file:
# oc apply -f router-internal.yaml
The Ingress Controller selects routes in any namespace that have the label type: sharded.
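For example, to place an existing route onto this shard, add the matching label; the route name and namespace here are hypothetical:

$ oc -n demo label route/hello type=sharded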
Ingress Controller sharding by using namespace labels means that the Ingress Controller serves any route in any namespace that is selected by the namespace selector.
Ingress Controller sharding is useful when balancing incoming traffic load among a set of Ingress Controllers and when isolating traffic to a specific Ingress Controller. For example, company A goes to one Ingress Controller and company B to another.
Edit the router-internal.yaml
file:
# cat router-internal.yaml
apiVersion: v1
items:
- apiVersion: operator.openshift.io/v1
  kind: IngressController
  metadata:
    name: sharded
    namespace: openshift-ingress-operator
  spec:
    domain: <apps-sharded.basedomain.example.net>
    nodePlacement:
      nodeSelector:
        matchLabels:
          node-role.kubernetes.io/worker: ""
    namespaceSelector:
      matchLabels:
        type: sharded
  status: {}
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""
Apply the Ingress Controller router-internal.yaml
file:
# oc apply -f router-internal.yaml
The Ingress Controller selects routes in any namespace that is selected by the namespace selector, that is, any namespace that has the label type: sharded.
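For example, to place every route in a namespace onto this shard, add the matching label to the namespace; the namespace name here is hypothetical:

$ oc label namespace demo type=sharded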
When creating an Ingress Controller on cloud platforms, the Ingress Controller is published by a public cloud load balancer by default. As an administrator, you can create an Ingress Controller that uses an internal cloud load balancer.
If your cloud provider is Microsoft Azure, you must have at least one public load balancer that points to your nodes. If you do not, all of your nodes will lose egress connectivity to the internet.
If you want to change the scope for an IngressController object, you must delete and then recreate that object. You cannot change the .spec.endpointPublishingStrategy.loadBalancer.scope parameter after the custom resource (CR) is created.
See the Kubernetes Services documentation for implementation details.
Install the OpenShift CLI (oc
).
Log in as a user with cluster-admin
privileges.
Create an IngressController
custom resource (CR) in a file named <name>-ingress-controller.yaml
, such as in the following example:
apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  namespace: openshift-ingress-operator
  name: <name> (1)
spec:
  domain: <domain> (2)
  endpointPublishingStrategy:
    type: LoadBalancerService
    loadBalancer:
      scope: Internal (3)
(1) Replace <name> with a name for the IngressController object.
(2) Specify the domain for the application published by the controller.
(3) Specify a value of Internal to use an internal load balancer.
Create the Ingress Controller defined in the previous step by running the following command:
$ oc create -f <name>-ingress-controller.yaml (1)
(1) Replace <name> with the name of the IngressController object.
Optional: Confirm that the Ingress Controller was created by running the following command:
$ oc --namespace openshift-ingress-operator get ingresscontrollers
You can configure the default
Ingress Controller for your cluster to be internal by deleting and recreating it.
If your cloud provider is Microsoft Azure, you must have at least one public load balancer that points to your nodes. If you do not, all of your nodes will lose egress connectivity to the internet.
If you want to change the scope for an IngressController object, you must delete and then recreate that object. You cannot change the .spec.endpointPublishingStrategy.loadBalancer.scope parameter after the custom resource (CR) is created.
Install the OpenShift CLI (oc
).
Log in as a user with cluster-admin
privileges.
Configure the default
Ingress Controller for your cluster to be internal by deleting and recreating it.
$ oc replace --force --wait --filename - <<EOF
apiVersion: operator.openshift.io/v1
kind: IngressController
metadata:
  namespace: openshift-ingress-operator
  name: default
spec:
  endpointPublishingStrategy:
    type: LoadBalancerService
    loadBalancer:
      scope: Internal
EOF
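One way to confirm the change is to read the scope back from the recreated controller, for example:

$ oc -n openshift-ingress-operator get ingresscontrollers/default \
  --output jsonpath='{.spec.endpointPublishingStrategy.loadBalancer.scope}'
Internal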
Administrators and application developers can run applications in multiple namespaces with the same domain name. This is useful for organizations where multiple teams develop microservices that are exposed on the same host name.
Allowing claims across namespaces should only be enabled for clusters with trust between namespaces, otherwise a malicious user could take over a host name. For this reason, the default admission policy disallows host name claims across namespaces.
Cluster administrator privileges.
Edit the .spec.routeAdmission
field of the ingresscontroller
resource variable using the following command:
$ oc -n openshift-ingress-operator patch ingresscontroller/default --patch '{"spec":{"routeAdmission":{"namespaceOwnership":"InterNamespaceAllowed"}}}' --type=merge
spec:
  routeAdmission:
    namespaceOwnership: InterNamespaceAllowed
...
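With InterNamespaceAllowed in effect, routes in different namespaces can claim the same host as long as their paths differ; the following hypothetical sketch shows two such routes owned by different teams:

apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: team-a-app
  namespace: team-a                      # hypothetical namespace
spec:
  host: shared.apps.openshiftdemos.com   # same host as team-b's route
  path: /a                               # distinct path
  to:
    kind: Service
    name: app-a
---
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  name: team-b-app
  namespace: team-b                      # hypothetical namespace
spec:
  host: shared.apps.openshiftdemos.com
  path: /b
  to:
    kind: Service
    name: app-b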
The HAProxy Ingress Controller has support for wildcard routes. The Ingress Operator uses wildcardPolicy
to configure the ROUTER_ALLOW_WILDCARD_ROUTES
environment variable of the Ingress Controller.
The default behavior of the Ingress Controller is to admit routes with a wildcard policy of None
, which is backwards compatible with existing IngressController
resources.
Configure the wildcard policy.
Use the following command to edit the IngressController
resource:
$ oc -n openshift-ingress-operator edit ingresscontroller/<name>
Under spec
, set the wildcardPolicy
field to WildcardsDisallowed
or WildcardsAllowed
:
spec:
  routeAdmission:
    wildcardPolicy: WildcardsDisallowed # or WildcardsAllowed
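Once wildcards are allowed, a route with a Subdomain wildcard policy can be created; the service and host names here are hypothetical:

$ oc expose service/frontend --hostname="www.example.com" --wildcard-policy=Subdomain

A route created this way serves every host in the subdomain, such as one.example.com and two.example.com, rather than only the named host.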
You can enable transparent end-to-end HTTP/2 connectivity in HAProxy. It allows application owners to make use of HTTP/2 protocol capabilities, including single connection, header compression, binary streams, and more.
You can enable HTTP/2 connectivity for an individual Ingress Controller or for the entire cluster.
To enable the use of HTTP/2 for the connection from the client to HAProxy, a route must specify a custom certificate. A route that uses the default certificate cannot use HTTP/2. This restriction is necessary to avoid problems from connection coalescing, where the client re-uses a connection for different routes that use the same certificate.
The connection from HAProxy to the application pod can use HTTP/2 only for re-encrypt routes and not for edge-terminated or insecure routes. This restriction is because HAProxy uses Application-Level Protocol Negotiation (ALPN), which is a TLS extension, to negotiate the use of HTTP/2 with the back-end. The implication is that end-to-end HTTP/2 is possible with passthrough and re-encrypt and not with insecure or edge-terminated routes.
For non-passthrough routes, the Ingress Controller negotiates its connection to the application independently of the connection from the client. This means a client may connect to the Ingress Controller and negotiate HTTP/1.1, and the Ingress Controller may then connect to the application, negotiate HTTP/2, and forward the request from the client HTTP/1.1 connection using the HTTP/2 connection to the application. This poses a problem if the client subsequently tries to upgrade its connection from HTTP/1.1 to the WebSocket protocol, because the Ingress Controller cannot forward WebSocket to HTTP/2 and cannot upgrade its HTTP/2 connection to WebSocket. Consequently, if you have an application that is intended to accept WebSocket connections, it must not allow negotiating the HTTP/2 protocol or else clients will fail to upgrade to the WebSocket protocol.
Enable HTTP/2 on a single Ingress Controller.
To enable HTTP/2 on an Ingress Controller, enter the oc annotate
command:
$ oc -n openshift-ingress-operator annotate ingresscontrollers/<ingresscontroller_name> ingress.operator.openshift.io/default-enable-http2=true
Replace <ingresscontroller_name>
with the name of the Ingress Controller to annotate.
Enable HTTP/2 on the entire cluster.
To enable HTTP/2 for the entire cluster, enter the oc annotate
command:
$ oc annotate ingresses.config/cluster ingress.operator.openshift.io/default-enable-http2=true
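One way to spot-check the result, assuming the operator surfaces the setting on the router deployment as the ROUTER_DISABLE_HTTP2 environment variable (an implementation detail that may change between releases):

$ oc -n openshift-ingress get deployment/router-default \
  --output jsonpath='{.spec.template.spec.containers[0].env[?(@.name=="ROUTER_DISABLE_HTTP2")].value}'
false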