kind: "service"
apiVersion: "v1"
metadata:
name: "external-mysql-service"
spec:
ports:
-
name: "mysql"
protocol: "TCP"
port: 3306
targetPort: 3306 (1)
nodePort: 0
selector: {} (2)
Azure Red Hat OpenShift 3.11 will be retired on 30 June 2022. Support for creating new Azure Red Hat OpenShift 3.11 clusters continues through 30 November 2020. Following retirement, remaining Azure Red Hat OpenShift 3.11 clusters will be shut down to prevent security vulnerabilities.
Follow this guide to create an Azure Red Hat OpenShift 4 cluster. If you have specific questions, please contact us.
Many Azure Red Hat OpenShift applications use external resources, such as external databases, or an external SaaS endpoint. These external resources can be modeled as native Azure Red Hat OpenShift services, so that applications can work with them as they would any other internal service.
One of the most common types of external services is an external database. To support an external database, an application needs:
An endpoint to communicate with.
A set of credentials and coordinates, including:
A user name
A passphrase
A database name
The solution for integrating with an external database includes:
A Service object to represent the external database as an Azure Red Hat OpenShift service.
One or more Endpoints for the service.
Environment variables in the appropriate pods containing the credentials.
The following steps outline a scenario for integrating with an external MySQL database:
You can define a service either by providing an IP address and endpoints, or by providing a fully qualified domain name (FQDN).
Create an Azure Red Hat OpenShift service to represent your external database. This is similar to creating an internal service; the difference is in the service’s Selector field.

Internal Azure Red Hat OpenShift services use the Selector field to associate pods with services using labels. The EndpointsController system component synchronizes the endpoints for services that specify selectors with the pods that match the selector. The service proxy and Azure Red Hat OpenShift router load-balance requests to the service among the service’s endpoints.

Services that represent an external resource do not require associated pods. Instead, leave the Selector field unset. This represents the external service, making the EndpointsController ignore the service and allowing you to specify endpoints manually:
kind: "service"
apiVersion: "v1"
metadata:
name: "external-mysql-service"
spec:
ports:
-
name: "mysql"
protocol: "TCP"
port: 3306
targetPort: 3306 (1)
nodePort: 0
selector: {} (2)
1 | Optional: The port on the backing pods to which the service forwards connections. |
2 | Leave the selector field blank. |
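If you save the definition above to a file, you can create the service with the oc CLI. This is a minimal sketch; the file name external-mysql-service.yaml is only an assumed name for the definition above:

oc create -f external-mysql-service.yaml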
Next, create the required endpoints for the service. This gives the service proxy and router the location to send traffic directed to the service:
kind: "Endpoints"
apiVersion: "v1"
metadata:
name: "external-mysql-service" (1)
subsets: (2)
-
addresses:
-
ip: "10.0.0.0" (3)
ports:
-
port: 3306 (4)
name: "mysql"
1 | The name of the service instance, as defined in the previous step. |
2 | Traffic to the service is load-balanced between the supplied endpoints, if more than one is supplied. |
3 | Endpoint IPs cannot be loopback (127.0.0.0/8), link-local (169.254.0.0/16), or link-local multicast (224.0.0.0/24). |
4 | The port and name definitions must match the port and name values in the service defined in the previous step. |
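As with the service, you can create the Endpoints object from a file and then check that the service picked it up. A minimal sketch, assuming the definition above is saved as external-mysql-endpoints.yaml (an assumed file name):

oc create -f external-mysql-endpoints.yaml
oc get endpoints external-mysql-service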
Using external domain names makes it easier to manage an external service linkage, because you do not have to worry about the external service’s IP addresses changing.
ExternalName services do not have selectors, or any defined ports or endpoints; therefore, you can use an ExternalName service to direct traffic to an external service.
kind: "service"
apiVersion: "v1"
metadata:
name: "external-mysql-service"
spec:
type: ExternalName
externalName: example.domain.name
selector: {} (1)
1 | Leave the selector field blank. |
Using an external domain name service tells the system that the DNS name in the externalName field (example.domain.name in the previous example) is the location of the resource that backs the service. When a DNS request is made against the Kubernetes DNS server, it returns the externalName in a CNAME record telling the client to look up the returned name to get the IP address.
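To see this behavior, you can resolve the service name from inside a running pod. A minimal sketch, assuming a pod named my-app-pod exists and its image provides nslookup; the lookup should return a CNAME pointing at example.domain.name:

oc rsh my-app-pod nslookup external-mysql-service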
Now that the service and endpoints are defined, give the appropriate pods access to the credentials to use the service by setting environment variables in the appropriate containers:
kind: "DeploymentConfig"
apiVersion: "v1"
metadata:
name: "my-app-deployment"
spec: (1)
strategy:
type: "Rolling"
rollingParams:
updatePeriodSeconds: 1 (2)
intervalSeconds: 1 (3)
timeoutSeconds: 120
replicas: 2
selector:
name: "frontend"
template:
metadata:
labels:
name: "frontend"
spec:
containers:
-
name: "helloworld"
image: "origin-ruby-sample"
ports:
-
containerPort: 3306
protocol: "TCP"
env:
-
name: "MYSQL_USER"
value: "${MYSQL_USER}" (4)
-
name: "MYSQL_PASSWORD"
value: "${MYSQL_PASSWORD}" (5)
-
name: "MYSQL_DATABASE"
value: "${MYSQL_DATABASE}" (6)
1 | Other fields on the DeploymentConfig are omitted. |
2 | The time to wait between individual pod updates. |
3 | The time to wait between polling the deployment status after update. |
4 | The user name to use with the service. |
5 | The passphrase to use with the service. |
6 | The database name. |
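Rather than editing the deployment configuration by hand, you can also set or update these variables with the oc CLI. A minimal sketch, assuming the DeploymentConfig above already exists and using placeholder values:

oc set env dc/my-app-deployment MYSQL_USER=<user> MYSQL_PASSWORD=<password> MYSQL_DATABASE=<database>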
External Database Environment Variables
Using an external service in your application is similar to using an internal service. Your application will be assigned environment variables for the service and the additional environment variables with the credentials described in the previous step. For example, a MySQL container receives the following environment variables:
EXTERNAL_MYSQL_SERVICE_SERVICE_HOST=<ip_address>
EXTERNAL_MYSQL_SERVICE_SERVICE_PORT=<port_number>
MYSQL_USER=<mysql_username>
MYSQL_PASSWORD=<mysql_password>
MYSQL_DATABASE=<mysql_database>
The application is responsible for reading the coordinates and credentials for the service from the environment and establishing a connection with the database via the service.
A common type of external service is an external SaaS endpoint. To support an external SaaS provider, an application needs:
An endpoint to communicate with
A set of credentials, such as:
An API key
A user name
A passphrase
The following steps outline a scenario for integrating with an external SaaS provider:
Create an Azure Red Hat OpenShift service to represent the external service. This is similar to creating an internal service; however, the difference is in the service’s Selector field.

Internal Azure Red Hat OpenShift services use the Selector field to associate pods with services using labels. A system component called EndpointsController synchronizes the endpoints for services that specify selectors with the pods that match the selector. The service proxy and Azure Red Hat OpenShift router load-balance requests to the service among the service’s endpoints.

Services that represent an external resource do not require that pods be associated with them. Instead, leave the Selector field unset. This makes the EndpointsController ignore the service and allows you to specify endpoints manually:
kind: "service"
apiVersion: "v1"
metadata:
name: "example-external-service"
spec:
ports:
-
name: "mysql"
protocol: "TCP"
port: 3306
targetPort: 3306 (1)
nodePort: 0
selector: {} (2)
1 | Optional: The port on the backing pods to which the service forwards connections. |
2 | Leave the selector field blank. |
Next, create endpoints for the service. These give the service proxy and router the location to send traffic directed to the service:
kind: "Endpoints"
apiVersion: "v1"
metadata:
name: "example-external-service" (1)
subsets: (2)
- addresses:
- ip: "10.10.1.1"
ports:
- name: "mysql"
port: 3306
1 | The name of the service instance. |
2 | Traffic to the service is load-balanced between the subsets supplied here. |
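To confirm that the service and endpoints are wired together before deploying the application, you can inspect the service. A minimal sketch, assuming the objects above were created as shown; the Endpoints line in the output should list 10.10.1.1:3306:

oc describe svc example-external-service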
Now that the service and endpoints are defined, give pods the credentials to use the service by setting environment variables in the appropriate containers:
kind: "DeploymentConfig"
apiVersion: "v1"
metadata:
name: "my-app-deployment"
spec: (1)
strategy:
type: "Rolling"
rollingParams:
timeoutSeconds: 120
replicas: 1
selector:
name: "frontend"
template:
metadata:
labels:
name: "frontend"
spec:
containers:
-
name: "helloworld"
image: "openshift/openshift/origin-ruby-sample"
ports:
-
containerPort: 3306
protocol: "TCP"
env:
-
name: "SAAS_API_KEY" (2)
value: "<SaaS service API key>"
-
name: "SAAS_USERNAME" (3)
value: "<SaaS service user>"
-
name: "SAAS_PASSPHRASE" (4)
value: "<SaaS service passphrase>"
1 | Other fields on the DeploymentConfig are omitted. |
2 | SAAS_API_KEY : The API key to use with the service. |
3 | SAAS_USERNAME : The user name to use with the service. |
4 | SAAS_PASSPHRASE : The passphrase to use with the service. |
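After the deployment configuration exists, you can list the environment variables it defines to verify that the credentials were set as expected. A minimal sketch, assuming the DeploymentConfig above has been created:

oc set env dc/my-app-deployment --list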
These variables are added to the containers as environment variables. Using environment variables enables service-to-service communication; depending on the service, it may require additional parameters such as API keys, user name and password authentication, or certificates.
External SaaS Provider Environment Variables
As when using an internal service, your application is assigned environment variables for the service, along with the additional environment variables containing the credentials described in the previous steps. In the previous example, the container receives the following environment variables:
EXAMPLE_EXTERNAL_SERVICE_SERVICE_HOST=<ip_address>
EXAMPLE_EXTERNAL_SERVICE_SERVICE_PORT=<port_number>
SAAS_API_KEY=<saas_api_key>
SAAS_USERNAME=<saas_username>
SAAS_PASSPHRASE=<saas_passphrase>
The application reads the coordinates and credentials for the service from the environment and establishes a connection with the service.
ExternalName services do not have selectors, or any defined ports or endpoints. You can use an ExternalName service to assign traffic to an external service outside the cluster.
kind: "service"
apiVersion: "v1"
metadata:
name: "external-mysql-service"
spec:
type: ExternalName
externalName: example.domain.name
selector: {} (1)
1 | Leave the selector field blank. |
Using an ExternalName service maps the service to the value of the externalName field (example.domain.name in the previous example) by automatically injecting a CNAME record, mapping the service name directly to an outside DNS address, and bypassing the need for endpoint records.