Integrating External Services

Overview

Many OpenShift Enterprise applications use external resources, such as external databases, or an external SaaS endpoint. These external resources can be modeled as native OpenShift Enterprise services, so that applications can work with them as they would any other internal service.

External MySQL Database

One of the most common types of external services is an external database. To support an external database, an application needs:

  1. An endpoint to communicate with.

  2. A set of credentials and coordinates, including:

    1. A user name

    2. A passphrase

    3. A database name

The solution for integrating with an external database includes:

  • A Service object to represent the external database as an OpenShift Enterprise service.

  • One or more Endpoints for the service.

  • Environment variables in the appropriate pods containing the credentials.

The following steps outline a scenario for integrating with an external MySQL database:

  1. Create an OpenShift Enterprise service to represent your external database. This is similar to creating an internal service; the difference is in the service’s Selector field.

    Internal OpenShift Enterprise services use the Selector field to associate pods with services using labels. The EndpointsController system component synchronizes the endpoints for services that specify selectors with the pods that match the selector. The service proxy and OpenShift Enterprise router load-balance requests to the service amongst the service’s endpoints.

    Services that represent an external resource do not require associated pods. Instead, leave the Selector field unset. This makes the EndpointsController ignore the service and allows you to specify endpoints manually:

      kind: "Service"
      apiVersion: "v1"
      metadata:
        name: "external-mysql-service"
      spec:
        ports:
          -
            name: "mysql"
            protocol: "TCP"
            port: 3306
            targetPort: 3306
            nodePort: 0
        selector: {} (1)
    1 Leave the selector field blank.
  2. Next, create the required endpoints for the service. This gives the service proxy and router the location to send traffic directed to the service:

      kind: "Endpoints"
      apiVersion: "v1"
      metadata:
        name: "external-mysql-service" (1)
      subsets: (2)
        -
          addresses:
            -
              ip: "10.0.0.0" (3)
          ports:
            -
              port: 3306 (4)
              name: "mysql"
    1 The name of the service instance, as defined in the previous step.
    2 If more than one endpoint is supplied, traffic to the service is load-balanced between them.
    3 Endpoints IPs cannot be loopback (127.0.0.0/8), link-local (169.254.0.0/16), or link-local multicast (224.0.0.0/24).
    4 The port and name definition must match the port and name value in the service defined in the previous step.
  3. Now that the service and endpoints are defined, give the appropriate pods access to the credentials to use the service by setting environment variables in the appropriate containers:

      kind: "DeploymentConfig"
      apiVersion: "v1"
      metadata:
        name: "my-app-deployment"
      spec: (1)
        strategy:
          type: "Rolling"
          rollingParams:
            updatePeriodSeconds: 1
            intervalSeconds: 1
            timeoutSeconds: 120
        replicas: 2
        selector:
          name: "frontend"
        template:
          metadata:
            labels:
              name: "frontend"
          spec:
            containers:
              -
                name: "helloworld"
                image: "origin-ruby-sample"
                ports:
                  -
                    containerPort: 3306
                    protocol: "TCP"
                env:
                  -
                    name: "MYSQL_USER"
                    value: "${MYSQL_USER}" (2)
                  -
                    name: "MYSQL_PASSWORD"
                    value: "${MYSQL_PASSWORD}" (3)
                  -
                    name: "MYSQL_DATABASE"
                    value: "${MYSQL_DATABASE}" (4)
    1 Other fields on the DeploymentConfig are omitted.
    2 The user name to use with the service.
    3 The passphrase to use with the service.
    4 The database name.

External Database Environment Variables

Using an external service in your application is similar to using an internal service. Your application is assigned environment variables for the service, as well as the additional environment variables containing the credentials described in the previous step. For example, a MySQL container receives the following environment variables:

  • EXTERNAL_MYSQL_SERVICE_SERVICE_HOST=<ip_address>

  • EXTERNAL_MYSQL_SERVICE_SERVICE_PORT=<port_number>

  • MYSQL_USER=<mysql_username>

  • MYSQL_PASSWORD=<mysql_password>

  • MYSQL_DATABASE=<mysql_database>

The application is responsible for reading the coordinates and credentials for the service from the environment and establishing a connection with the database via the service.
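
For illustration, the following is a minimal sketch of an application reading these variables and opening a connection. It is written in Python and assumes the PyMySQL client library is present in the application image, which is not something OpenShift Enterprise provides or requires; any MySQL client that accepts a host, port, user name, password, and database name is used the same way.

  # Minimal sketch: read the injected coordinates and credentials and connect.
  # PyMySQL is an assumption about the application image, not an OpenShift requirement.
  import os

  import pymysql

  connection = pymysql.connect(
      host=os.environ["EXTERNAL_MYSQL_SERVICE_SERVICE_HOST"],
      port=int(os.environ["EXTERNAL_MYSQL_SERVICE_SERVICE_PORT"]),
      user=os.environ["MYSQL_USER"],
      password=os.environ["MYSQL_PASSWORD"],
      database=os.environ["MYSQL_DATABASE"],
  )

  with connection.cursor() as cursor:
      # Simple round trip to confirm the external database is reachable.
      cursor.execute("SELECT VERSION()")
      print(cursor.fetchone())

  connection.close()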

External SaaS Provider

A common type of external service is an external SaaS endpoint. To support an external SaaS provider, an application needs:

  1. An endpoint to communicate with

  2. A set of credentials, such as:

    1. An API key

    2. A user name

    3. A passphrase

The following steps outline a scenario for integrating with an external SaaS provider:

  1. Create an OpenShift Enterprise service to represent the external service. This is similar to creating an internal service; the difference is in the service’s Selector field.

    Internal OpenShift Enterprise services use the Selector field to associate pods with services using labels. A system component called EndpointsController synchronizes the endpoints for services that specify selectors with the pods that match the selector. The service proxy and OpenShift Enterprise router load-balance requests to the service amongst the service’s endpoints.

    Services that represent an external resource do not require associated pods. Instead, leave the Selector field unset. This makes the EndpointsController ignore the service and allows you to specify endpoints manually:

      kind: "Service"
      apiVersion: "v1"
      metadata:
        name: "example-external-service"
      spec:
        ports:
          -
            name: "mysql"
            protocol: "TCP"
            port: 3306
            targetPort: 3306
            nodePort: 0
        selector: {} (1)
    1 Leave the selector field blank.
  2. Next, create the required endpoints for the service. This gives the service proxy and router the location to send traffic directed to the service:

      kind: "Endpoints"
      apiVersion: "v1"
      metadata:
        name: "example-external-service" (1)
      subsets: (2)
        -
          addresses:
            -
              ip: "10.10.1.1"
          ports:
            -
              name: "mysql"
              port: 3306
    1 The name of the service instance, as defined in the previous step.
    2 If more than one endpoint is supplied, traffic to the service is load-balanced between them.
  3. Now that the service and endpoints are defined, give pods the credentials to use the service by setting environment variables in the appropriate containers:

      kind: "DeploymentConfig"
      apiVersion: "v1"
      metadata:
        name: "my-app-deployment"
      spec:  (1)
        strategy:
          type: "Rolling"
          rollingParams:
            updatePeriodSeconds: 1
            intervalSeconds: 1
            timeoutSeconds: 120
        replicas: 1
        selector:
          name: "frontend"
        template:
          metadata:
            labels:
              name: "frontend"
          spec:
            containers:
              -
                name: "helloworld"
                image: "openshift/origin-ruby-sample"
                ports:
                  -
                    containerPort: 3306
                    protocol: "TCP"
                env:
                  -
                    name: "SAAS_API_KEY" (2)
                    value: "<SaaS service API key>"
                  -
                    name: "SAAS_USERNAME" (3)
                    value: "<SaaS service user>"
                  -
                    name: "SAAS_PASSPHRASE" (4)
                    value: "<SaaS service passphrase>"
    1 Other fields on the DeploymentConfig are omitted.
    2 SAAS_API_KEY: The API key to use with the service.
    3 SAAS_USERNAME: The user name to use with the service.
    4 SAAS_PASSPHRASE: The passphrase to use with the service.

External SaaS Provider Environment Variables

As with an internal service, your application is assigned environment variables for the service, as well as the additional environment variables containing the credentials described in the previous steps. In the above example, the container receives the following environment variables:

  • EXAMPLE_EXTERNAL_SERVICE_SERVICE_HOST=<ip_address>

  • EXAMPLE_EXTERNAL_SERVICE_SERVICE_PORT=<port_number>

  • SAAS_API_KEY=<saas_api_key>

  • SAAS_USERNAME=<saas_username>

  • SAAS_PASSPHRASE=<saas_passphrase>

The application reads the coordinates and credentials for the service from the environment and establishes a connection with the service.
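
For illustration, the following is a minimal sketch of an application reading these variables and calling the provider over HTTPS. It is written in Python and uses the requests library; the /status path and the X-API-Key header are illustrative assumptions, not part of any particular provider’s API.

  # Minimal sketch: read the injected coordinates and credentials and call the
  # external SaaS endpoint. The /status path and X-API-Key header are
  # illustrative assumptions, not part of any particular provider's API.
  import os

  import requests

  host = os.environ["EXAMPLE_EXTERNAL_SERVICE_SERVICE_HOST"]
  port = os.environ["EXAMPLE_EXTERNAL_SERVICE_SERVICE_PORT"]

  response = requests.get(
      "https://%s:%s/status" % (host, port),
      headers={"X-API-Key": os.environ["SAAS_API_KEY"]},
      auth=(os.environ["SAAS_USERNAME"], os.environ["SAAS_PASSPHRASE"]),
  )
  response.raise_for_status()
  print(response.json())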