You can assign a consistent IP address for traffic that leaves your cluster, for example to satisfy an external system, such as a security group, that requires an IP-based configuration to meet security standards.
By default, Red Hat OpenShift Service on AWS (ROSA) uses the OVN-Kubernetes container network interface (CNI) and assigns random IP addresses from a pool, which can make configuring security lockdowns unpredictable or force them to be overly open.
This guide shows you how to configure a set of predictable IP addresses for egress cluster traffic.
See Configuring an egress IP address for more information.
A ROSA cluster deployed with OVN-Kubernetes
The OpenShift CLI (oc)
The ROSA CLI (rosa)
Set your environment variables by running the following command:
Replace the value of the ROSA_MACHINE_POOL_NAME variable if you want to target a different machine pool.
$ export ROSA_CLUSTER_NAME=$(oc get infrastructure cluster -o=jsonpath="{.status.infrastructureName}" | sed 's/-[a-z0-9]\{5\}$//')
$ export ROSA_MACHINE_POOL_NAME=worker
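To confirm the variables resolved as expected, you can echo them; the cluster name should generally match the name reported by rosa list clusters:
$ echo "cluster: ${ROSA_CLUSTER_NAME}, machine pool: ${ROSA_MACHINE_POOL_NAME}"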
The number of IP addresses assigned to each node is limited for each public cloud provider.
Verify sufficient capacity by running the following command:
$ oc get node -o json | \
    jq '.items[] |
        {
            "name": .metadata.name,
            "ips": (.status.addresses | map(select(.type == "InternalIP") | .address)),
            "capacity": (.metadata.annotations."cloud.network.openshift.io/egress-ipconfig" | fromjson[] | .capacity.ipv4)
        }'
---
{
  "name": "ip-10-10-145-88.ec2.internal",
  "ips": [
    "10.10.145.88"
  ],
  "capacity": 14
}
{
  "name": "ip-10-10-154-175.ec2.internal",
  "ips": [
    "10.10.154.175"
  ],
  "capacity": 14
}
---
Before creating the egress IP rules, identify which egress IPs you will use.
The egress IPs that you select should exist as a part of the subnets in which the worker nodes are provisioned; one way to check those subnets is shown below.
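As a quick check, you can read each node's subnet from the same cloud.network.openshift.io/egress-ipconfig annotation used earlier for the capacity check. This is a minimal sketch and assumes the annotation is populated on your nodes:
$ oc get node -o json | \
    jq -r '.items[] | .metadata.name + " " + (.metadata.annotations."cloud.network.openshift.io/egress-ipconfig" | fromjson[] | .ifaddr.ipv4)'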
Optional: Reserve the egress IPs that you requested to avoid conflicts with the AWS Virtual Private Cloud (VPC) Dynamic Host Configuration Protocol (DHCP) service.
You can request explicit IP reservations by following the AWS documentation for subnet CIDR reservations.
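As a sketch, assuming you have the AWS CLI configured, you could reserve a small range that contains the egress IPs for one worker subnet; the subnet ID and CIDR below are placeholders for your own values:
$ aws ec2 create-subnet-cidr-reservation \
    --subnet-id subnet-0123456789abcdef0 \
    --cidr 10.10.100.240/28 \
    --reservation-type explicit \
    --description "reserved for OpenShift egress IPs"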
Create a new project by running the following command:
$ oc new-project demo-egress-ns
Create the egress rule for all pods within the namespace by running the following command:
$ cat <<EOF | oc apply -f -
apiVersion: k8s.ovn.org/v1
kind: EgressIP
metadata:
  name: demo-egress-ns
spec:
  # NOTE: these egress IPs are within the subnet range(s) in which my worker nodes
  # are deployed.
  egressIPs:
    - 10.10.100.253
    - 10.10.150.253
    - 10.10.200.253
  namespaceSelector:
    matchLabels:
      kubernetes.io/metadata.name: demo-egress-ns
EOF
Create a new project by running the following command:
$ oc new-project demo-egress-pod
Create the egress rule for the pod by running the following command:
$ cat <<EOF | oc apply -f -
apiVersion: k8s.ovn.org/v1
kind: EgressIP
metadata:
  name: demo-egress-pod
spec:
  # NOTE: these egress IPs are within the subnet range(s) in which my worker nodes
  # are deployed.
  egressIPs:
    - 10.10.100.254
    - 10.10.150.254
    - 10.10.200.254
  namespaceSelector:
    matchLabels:
      kubernetes.io/metadata.name: demo-egress-pod
  podSelector:
    matchLabels:
      run: demo-egress-pod
EOF
Obtain your pending egress IP assignments by running the following command:
$ oc get egressips
NAME              EGRESSIPS       ASSIGNED NODE   ASSIGNED EGRESSIPS
demo-egress-ns    10.10.100.253
demo-egress-pod   10.10.100.254
The egress IP rule that you created only applies to nodes with the k8s.ovn.org/egress-assignable
label. Make sure that the label is only on a specific machine pool.
Assign the label to your machine pool using the following command:
If you rely on node labels for your machine pool, this command will replace those labels. Be sure to include your desired labels in the --labels field so that your node labels persist.
$ rosa update machinepool ${ROSA_MACHINE_POOL_NAME} \
--cluster="${ROSA_CLUSTER_NAME}" \
--labels "k8s.ovn.org/egress-assignable="
Review the egress IP assignments by running the following command:
$ oc get egressips
NAME              EGRESSIPS       ASSIGNED NODE                   ASSIGNED EGRESSIPS
demo-egress-ns    10.10.100.253   ip-10-10-156-122.ec2.internal   10.10.150.253
demo-egress-pod   10.10.100.254   ip-10-10-156-122.ec2.internal   10.10.150.254
To test the egress IP rules, create a service that is restricted to the egress IP addresses that you specified. This simulates an external service that expects connections from only a small, known set of IP addresses.
Run a pod that uses the echoserver image, which echoes requests back to the client:
$ oc -n default run demo-service --image=gcr.io/google_containers/echoserver:1.4
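Before exposing the pod, you can optionally confirm that it is running:
$ oc get pod demo-service -n default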
Expose the pod as a service and limit the ingress to the egress IP addresses you specified by running the following command:
$ cat <<EOF | oc apply -f -
apiVersion: v1
kind: Service
metadata:
  name: demo-service
  namespace: default
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-scheme: "internal"
    service.beta.kubernetes.io/aws-load-balancer-internal: "true"
spec:
  selector:
    run: demo-service
  ports:
    - port: 80
      targetPort: 8080
  type: LoadBalancer
  externalTrafficPolicy: Local
  # NOTE: this limits the source IPs that are allowed to connect to our service. It
  # is being used as part of this demo, restricting connectivity to our egress
  # IP addresses only.
  # NOTE: these egress IPs are within the subnet range(s) in which my worker nodes
  # are deployed.
  loadBalancerSourceRanges:
    - 10.10.100.254/32
    - 10.10.150.254/32
    - 10.10.200.254/32
    - 10.10.100.253/32
    - 10.10.150.253/32
    - 10.10.200.253/32
EOF
Retrieve the load balancer hostname and save it as an environment variable by running the following command:
$ export LOAD_BALANCER_HOSTNAME=$(oc get svc -n default demo-service -o json | jq -r '.status.loadBalancer.ingress[].hostname')
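The AWS load balancer can take a few minutes to provision, so the hostname may initially be empty. If so, you can poll until it is populated; a minimal sketch:
$ while [ -z "${LOAD_BALANCER_HOSTNAME}" ]; do
    sleep 10
    export LOAD_BALANCER_HOSTNAME=$(oc get svc -n default demo-service \
      -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
  done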
Start an interactive shell to test the namespace egress rule:
$ oc run \
demo-egress-ns \
-it \
--namespace=demo-egress-ns \
--env=LOAD_BALANCER_HOSTNAME=$LOAD_BALANCER_HOSTNAME \
--image=registry.access.redhat.com/ubi9/ubi -- \
bash
Send a request to the load balancer and ensure that you can successfully connect:
$ curl -s http://$LOAD_BALANCER_HOSTNAME
Check the output for a successful connection. Because the service only accepts traffic from the egress IP addresses you configured, a response similar to the following confirms that the request left the cluster through one of them:
CLIENT VALUES:
client_address=10.10.207.247
command=GET
real path=/
query=nil
request_version=1.1
request_uri=http://internal-a3e61de18bfca4a53a94a208752b7263-148284314.us-east-1.elb.amazonaws.com:8080/
SERVER VALUES:
server_version=nginx: 1.10.0 - lua: 10001
HEADERS RECEIVED:
accept=*/*
host=internal-a3e61de18bfca4a53a94a208752b7263-148284314.us-east-1.elb.amazonaws.com
user-agent=curl/7.76.1
BODY:
-no body in request-
Exit the pod by running the following command:
$ exit
Start an interactive shell to test the pod egress rule:
$ oc run \
demo-egress-pod \
-it \
--namespace=demo-egress-pod \
--env=LOAD_BALANCER_HOSTNAME=$LOAD_BALANCER_HOSTNAME \
--image=registry.access.redhat.com/ubi9/ubi -- \
bash
Send a request to the load balancer by running the following command:
$ curl -s http://$LOAD_BALANCER_HOSTNAME
Check the output for a successful connection. As before, a response similar to the following confirms that the request was routed through one of your permitted egress IP addresses:
CLIENT VALUES:
client_address=10.10.207.247
command=GET
real path=/
query=nil
request_version=1.1
request_uri=http://internal-a3e61de18bfca4a53a94a208752b7263-148284314.us-east-1.elb.amazonaws.com:8080/
SERVER VALUES:
server_version=nginx: 1.10.0 - lua: 10001
HEADERS RECEIVED:
accept=*/*
host=internal-a3e61de18bfca4a53a94a208752b7263-148284314.us-east-1.elb.amazonaws.com
user-agent=curl/7.76.1
BODY:
-no body in request-
Exit the pod by running the following command:
$ exit
Optional: Test that the traffic is successfully blocked when the egress rules do not apply by running the following command:
$ oc run \
demo-egress-pod-fail \
-it \
--namespace=demo-egress-pod \
--env=LOAD_BALANCER_HOSTNAME=$LOAD_BALANCER_HOSTNAME \
--image=registry.access.redhat.com/ubi9/ubi -- \
bash
Send a request to the load balancer by running the following command:
$ curl -s http://$LOAD_BALANCER_HOSTNAME
If the command does not receive a response, egress is being successfully blocked: this pod's label does not match the egress rule's pod selector, so its traffic does not leave through one of the permitted egress IP addresses.
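Because a blocked request typically hangs until curl gives up, you may want to bound the wait, for example:
$ curl -s --connect-timeout 10 http://$LOAD_BALANCER_HOSTNAME || echo "connection blocked or timed out"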
Exit the pod by running the following command:
$ exit
Clean up your cluster by running the following commands:
$ oc delete svc demo-service -n default; \
oc delete pod demo-service -n default; \
oc delete project demo-egress-ns; \
oc delete project demo-egress-pod; \
oc delete egressip demo-egress-ns; \
oc delete egressip demo-egress-pod
Clean up the assigned node labels by running the following command:
If you rely on node labels for your machine pool, this command replaces those labels. Include your desired labels in the --labels field so that your node labels persist.
$ rosa update machinepool ${ROSA_MACHINE_POOL_NAME} \
--cluster="${ROSA_CLUSTER_NAME}" \
--labels ""