OpenShift Container Platform provides methods for communicating from outside the cluster with services running in the cluster. One such method uses a service external IP.
One method to expose a service is to assign an external IP address directly to the service you want to make accessible from outside the cluster.
The external IP address that you use must be provisioned on your infrastructure platform and attached to a cluster node.
With an external IP on the service, OpenShift Container Platform sets up NAT rules to allow traffic arriving at any cluster node attached to that IP address to be sent to one of the internal pods. This is similar to the internal service IP addresses, but the external IP tells OpenShift Container Platform that this service should also be exposed externally at the given IP. The administrator must assign the IP address to a host (node) interface on one of the nodes in the cluster. Alternatively, the address can be used as a virtual IP (VIP).
These IPs are not managed by OpenShift Container Platform and administrators are responsible for ensuring that traffic arrives at a node with this IP.
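For illustration only, a Service object that requests an external IP might look like the following minimal sketch. The service name, selector, port, and the 192.168.132.253 address are placeholder assumptions; the address must be one that your administrator has provisioned and routed to a cluster node.

apiVersion: v1
kind: Service
metadata:
  name: mysql-80-rhel7          # example service name
spec:
  selector:
    app: mysql-80-rhel7         # example selector; must match the labels on your pods
  ports:
  - port: 3306                  # port the service exposes
    targetPort: 3306            # port the pods listen on
  externalIPs:
  - 192.168.132.253             # example address; traffic to it must reach a cluster node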
The procedures in this section require prerequisites performed by the cluster administrator.
Before starting the following procedures, the administrator must:
Set up the external port to the cluster networking environment so that requests can reach the cluster.
Make sure there is at least one user with the cluster admin role. To add this role to a user, run the following command:
$ oc adm policy add-cluster-role-to-user cluster-admin <username>
Have an OpenShift Container Platform cluster with at least one master and at least one node, and a system outside the cluster that has network access to the cluster. This procedure assumes that the external system is on the same subnet as the cluster. The additional networking required for external systems on a different subnet is out of scope for this topic.
If the project and service that you want to expose do not exist, first create the project, then the service.
If the project and service already exist, skip to the procedure on exposing the service to create a route.
Install the oc CLI and log in as a cluster administrator.
Create a new project for your service:
$ oc new-project <project_name>
For example:
$ oc new-project myproject
Use the oc new-app command to create a service. For example:
$ oc new-app \
    -e MYSQL_USER=admin \
    -e MYSQL_PASSWORD=redhat \
    -e MYSQL_DATABASE=mysqldb \
    registry.redhat.io/rhscl/mysql-80-rhel7
Run the following command to see that the new service is created:
$ oc get svc -n myproject

NAME             TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)    AGE
mysql-80-rhel7   ClusterIP   172.30.63.31   <none>        3306/TCP   4m55s
By default, the new service does not have an external IP address.
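If you want to attach an external IP address directly to this service rather than expose it as a route, one option is to patch the service spec, as sketched below. The 192.174.120.10 address is an example; use an address that is provisioned on your infrastructure and routed to a cluster node, and note that cluster configuration can restrict which external IPs are allowed.

$ oc patch svc mysql-80-rhel7 -p '{"spec":{"externalIPs":["192.174.120.10"]}}'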
You can expose the service as a route by using the oc expose command.
To expose the service:
Log in to OpenShift Container Platform.
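For example, assuming placeholder values for your API server URL and user name:

$ oc login <api_server_url> -u <username>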
Log in to the project where the service you want to expose is located:
$ oc project project1
Run the following command to expose the service as a route:
$ oc expose service <service_name>
For example:
$ oc expose service mysql-80-rhel7

route "mysql-80-rhel7" exposed
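Optionally, verify that the route exists and note its host name, which depends on your cluster's routing configuration:

$ oc get route mysql-80-rhel7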
Use a tool, such as cURL, to make sure you can reach the service using the cluster IP address for the service:
$ curl <cluster_ip>:<port>
For example:
$ curl 172.30.131.89:3306
The examples in this section use a MySQL service, which requires a client application. If you get a string of characters with the Got packets out of order message, you are connected to the service.
If you have a MySQL client, log in with the standard CLI command:
$ mysql -h 172.30.131.89 -u admin -p

Enter password:
Welcome to the MariaDB monitor.  Commands end with ; or \g.

MySQL [(none)]>