This is a cache of https://docs.openshift.com/gitops/1.12/securing_openshift_gitops/managing-secrets-securely-using-sscsid-with-gitops.html. It is a snapshot of the page at 2024-11-24T01:02:30.294+0000.
Managing secrets securely using Secrets Store CSI driver with GitOps | Security | Red Hat OpenShift GitOps 1.12

Overview of managing secrets using Secrets Store CSI driver with GitOps

Some applications need sensitive information, such as passwords and usernames, which must be concealed as a good security practice. If role-based access control (RBAC) is not configured properly on your cluster, anyone with API or etcd access can retrieve or modify a secret.

Anyone who is authorized to create a pod in a namespace can use that access to read any secret in that namespace. With the Secrets Store CSI (SSCSI) Driver Operator, you can use an external secrets store to store and provide sensitive information to pods securely.

The process of integrating the OpenShift Container Platform SSCSI driver with the GitOps Operator consists of the following procedures:

  • Storing AWS Secrets Manager resources in a GitOps repository

  • Configuring the SSCSI driver to mount secrets from AWS Secrets Manager

  • Configuring GitOps managed resources to use the mounted secrets

Benefits

Integrating the SSCSI driver with the GitOps Operator provides the following benefits:

  • Enhance the security and efficiency of your GitOps workflows

  • Facilitate the secure attachment of secrets into deployment pods as a volume

  • Ensure that sensitive information is accessed securely and efficiently

Secrets store providers

The following secrets store providers are available for use with the Secrets Store CSI Driver Operator:

  • AWS Secrets Manager

  • AWS Systems Manager Parameter Store

  • Microsoft Azure Key Vault

As an example, consider that you are using AWS Secrets Manager as your secrets store provider with the SSCSI Driver Operator. The following example shows the directory structure in a GitOps repository that is ready to use the secrets from AWS Secrets Manager:

Example directory structure in a GitOps repository
├── config
│   ├── argocd
│   │   ├── argo-app.yaml
│   │   ├── secret-provider-app.yaml (3)
│   │   ├── ...
│   └── sscsid (1)
│       └── aws-provider.yaml (2)
├── environments
│   ├── dev (4)
│   │   ├── apps
│   │   │   └── app-taxi (5)
│   │   │       ├── ...
│   │   ├── credentialsrequest-dir-aws (6)
│   │   └── env
│   │       ├── ...
│   ├── new-env
│   │   ├── ...
1 Directory that stores the aws-provider.yaml file.
2 Configuration file that installs the AWS Secrets Manager provider and deploys resources for it.
3 Configuration file that creates an application and deploys resources for AWS Secrets Manager.
4 Directory that stores the deployment pod and credential requests.
5 Directory that stores the SecretProviderClass resources to define your secrets store provider.
6 Folder that stores the credentialsrequest.yaml file. This file contains the configuration for the credentials request to mount a secret to the deployment pod.
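
A layout like this can be scaffolded locally before you start committing manifests. The following sketch uses only the directory and file names from the example above; the function and root directory name are hypothetical helpers, not part of the product:

```python
from pathlib import Path

# Skeleton of the example GitOps repository layout shown above.
# Entries ending in "/" are directories; the rest are placeholder files.
LAYOUT = [
    "config/argocd/argo-app.yaml",
    "config/argocd/secret-provider-app.yaml",
    "config/sscsid/aws-provider.yaml",
    "environments/dev/apps/app-taxi/",
    "environments/dev/credentialsrequest-dir-aws/",
    "environments/dev/env/",
]

def scaffold(root: Path) -> None:
    """Create the directories and empty placeholder files for the layout."""
    for entry in LAYOUT:
        path = root / entry
        if entry.endswith("/"):
            path.mkdir(parents=True, exist_ok=True)
        else:
            path.parent.mkdir(parents=True, exist_ok=True)
            path.touch()

scaffold(Path("gitops-repo"))
```

Adjust the entries to match your own repository structure; only the files referenced by your Argo CD applications need to exist.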

Prerequisites

  • You have access to the cluster with cluster-admin privileges.

  • You have access to the OpenShift Container Platform web console.

  • You have extracted and prepared the ccoctl binary.

  • You have installed the jq CLI tool.

  • Your cluster is installed on AWS and uses AWS Security Token Service (STS).

  • You have configured AWS Secrets Manager to store the required secrets.

  • The SSCSI Driver Operator is installed on your cluster.

  • Red Hat OpenShift GitOps Operator is installed on your cluster.

  • You have a GitOps repository ready to use the secrets.

  • You are logged in to the Argo CD instance by using the Argo CD admin account.

Storing AWS Secrets Manager resources in a GitOps repository

This guide provides instructions with examples to help you use GitOps workflows with the Secrets Store Container Storage Interface (SSCSI) Driver Operator to mount secrets from AWS Secrets Manager to a CSI volume in OpenShift Container Platform.

Using the SSCSI Driver Operator with AWS Secrets Manager is not supported in a hosted control plane cluster.

Prerequisites
  • You have access to the cluster with cluster-admin privileges.

  • You have access to the OpenShift Container Platform web console.

  • You have extracted and prepared the ccoctl binary.

  • You have installed the jq CLI tool.

  • Your cluster is installed on AWS and uses AWS Security Token Service (STS).

  • You have configured AWS Secrets Manager to store the required secrets.

  • The SSCSI Driver Operator is installed on your cluster.

  • Red Hat OpenShift GitOps Operator is installed on your cluster.

  • You have a GitOps repository ready to use the secrets.

  • You are logged in to the Argo CD instance by using the Argo CD admin account.

Procedure
  1. Install the AWS Secrets Manager provider and add resources:

    1. In your GitOps repository, create a directory and add an aws-provider.yaml file to it with the following configuration to deploy resources for the AWS Secrets Manager provider:

      The AWS Secrets Manager provider for the SSCSI driver is an upstream provider.

      This configuration is modified from the configuration provided in the upstream AWS documentation so that it works properly with OpenShift Container Platform. Changes to this configuration might impact functionality.

      Example aws-provider.yaml file
      apiVersion: v1
      kind: ServiceAccount
      metadata:
        name: csi-secrets-store-provider-aws
        namespace: openshift-cluster-csi-drivers
      ---
      apiVersion: rbac.authorization.k8s.io/v1
      kind: ClusterRole
      metadata:
        name: csi-secrets-store-provider-aws-cluster-role
      rules:
      - apiGroups: [""]
        resources: ["serviceaccounts/token"]
        verbs: ["create"]
      - apiGroups: [""]
        resources: ["serviceaccounts"]
        verbs: ["get"]
      - apiGroups: [""]
        resources: ["pods"]
        verbs: ["get"]
      - apiGroups: [""]
        resources: ["nodes"]
        verbs: ["get"]
      ---
      apiVersion: rbac.authorization.k8s.io/v1
      kind: ClusterRoleBinding
      metadata:
        name: csi-secrets-store-provider-aws-cluster-rolebinding
      roleRef:
        apiGroup: rbac.authorization.k8s.io
        kind: ClusterRole
        name: csi-secrets-store-provider-aws-cluster-role
      subjects:
      - kind: ServiceAccount
        name: csi-secrets-store-provider-aws
        namespace: openshift-cluster-csi-drivers
      ---
      apiVersion: apps/v1
      kind: DaemonSet
      metadata:
        namespace: openshift-cluster-csi-drivers
        name: csi-secrets-store-provider-aws
        labels:
          app: csi-secrets-store-provider-aws
      spec:
        updateStrategy:
          type: RollingUpdate
        selector:
          matchLabels:
            app: csi-secrets-store-provider-aws
        template:
          metadata:
            labels:
              app: csi-secrets-store-provider-aws
          spec:
            serviceAccountName: csi-secrets-store-provider-aws
            hostNetwork: false
            containers:
              - name: provider-aws-installer
                image: public.ecr.aws/aws-secrets-manager/secrets-store-csi-driver-provider-aws:1.0.r2-50-g5b4aca1-2023.06.09.21.19
                imagePullPolicy: Always
                args:
                    - --provider-volume=/etc/kubernetes/secrets-store-csi-providers
                resources:
                  requests:
                    cpu: 50m
                    memory: 100Mi
                  limits:
                    cpu: 50m
                    memory: 100Mi
                securityContext:
                  privileged: true
                volumeMounts:
                  - mountPath: "/etc/kubernetes/secrets-store-csi-providers"
                    name: providervol
                  - name: mountpoint-dir
                    mountPath: /var/lib/kubelet/pods
                    mountPropagation: HostToContainer
            tolerations:
            - operator: Exists
            volumes:
              - name: providervol
                hostPath:
                  path: "/etc/kubernetes/secrets-store-csi-providers"
              - name: mountpoint-dir
                hostPath:
                  path: /var/lib/kubelet/pods
                  type: DirectoryOrCreate
            nodeSelector:
              kubernetes.io/os: linux
    2. Add a secret-provider-app.yaml file to your GitOps repository to create an application and deploy resources for AWS Secrets Manager:

      Example secret-provider-app.yaml file
      apiVersion: argoproj.io/v1alpha1
      kind: Application
      metadata:
        name: secret-provider-app
        namespace: openshift-gitops
      spec:
        destination:
          namespace: openshift-cluster-csi-drivers
          server: https://kubernetes.default.svc
        project: default
        source:
          path: path/to/aws-provider/resources
          repoURL: https://github.com/<my-domain>/<gitops>.git (1)
        syncPolicy:
          automated:
            prune: true
            selfHeal: true
      1 Update the value of the repoURL field to point to your GitOps repository.
  2. Synchronize resources with the default Argo CD instance to deploy them in the cluster:

    1. Add a label to the openshift-cluster-csi-drivers namespace your application is deployed in so that the Argo CD instance in the openshift-gitops namespace can manage it:

      $ oc label namespace openshift-cluster-csi-drivers argocd.argoproj.io/managed-by=openshift-gitops
    2. Apply the resources in your GitOps repository to your cluster, including the aws-provider.yaml file you just pushed:

      Example output
      application.argoproj.io/argo-app created
      application.argoproj.io/secret-provider-app created
      ...

In the Argo CD UI, you can observe that the csi-secrets-store-provider-aws daemonset continues to synchronize resources. To resolve this issue, you must configure the SSCSI driver to mount secrets from AWS Secrets Manager.

Configuring the SSCSI driver to mount secrets from AWS Secrets Manager

To store and manage your secrets securely, use GitOps workflows and configure the Secrets Store Container Storage Interface (SSCSI) Driver Operator to mount secrets from AWS Secrets Manager to a CSI volume in OpenShift Container Platform. For example, consider that you want to mount a secret to a deployment pod in the dev namespace, which corresponds to the /environments/dev/ directory.

Prerequisites
  • You have the AWS Secrets Manager resources stored in your GitOps repository.

Procedure
  1. Grant privileged access to the csi-secrets-store-provider-aws service account by running the following command:

    $ oc adm policy add-scc-to-user privileged -z csi-secrets-store-provider-aws -n openshift-cluster-csi-drivers
    Example output
    clusterrole.rbac.authorization.k8s.io/system:openshift:scc:privileged added: "csi-secrets-store-provider-aws"
  2. Grant permission to allow the service account to read the AWS secret object:

    1. Create a credentialsrequest-dir-aws directory under a namespace-scoped directory in your GitOps repository, because the credentials request is namespace-scoped. For example, create a credentialsrequest-dir-aws directory under the /environments/dev/ directory for the dev namespace by running the following command:

      $ mkdir credentialsrequest-dir-aws
    2. Create a YAML file with the following configuration for the credentials request in the /environments/dev/credentialsrequest-dir-aws/ path to mount a secret to the deployment pod in the dev namespace:

      Example credentialsrequest.yaml file
      apiVersion: cloudcredential.openshift.io/v1
      kind: CredentialsRequest
      metadata:
        name: aws-provider-test
        namespace: openshift-cloud-credential-operator
      spec:
        providerSpec:
          apiVersion: cloudcredential.openshift.io/v1
          kind: AWSProviderSpec
          statementEntries:
          - action:
            - "secretsmanager:GetSecretValue"
            - "secretsmanager:DescribeSecret"
            effect: Allow
            resource: "<aws_secret_arn>" (2)
        secretRef:
          name: aws-creds
          namespace: dev (1)
        serviceAccountNames:
        - default
      1 The namespace for the secret reference. Update the value of this namespace field according to your project deployment setup.
      2 The ARN of your secret in the region where your cluster is located. The <aws_region> of <aws_secret_arn> must match the cluster region. If it does not match, replicate your secret to the region where your cluster is located.

      To find your cluster region, run the following command:

      $ oc get infrastructure cluster -o jsonpath='{.status.platformStatus.aws.region}'
      Example output
      us-west-2
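
The region of the secret is embedded in its ARN as the fourth colon-separated field, so the mismatch check can be automated. A minimal sketch (the account ID and secret suffix in the example ARN are placeholders):

```python
def arn_region(secret_arn: str) -> str:
    """Return the region field of an AWS ARN (arn:partition:service:region:account:...)."""
    parts = secret_arn.split(":")
    if len(parts) < 6 or parts[0] != "arn":
        raise ValueError(f"not a valid ARN: {secret_arn!r}")
    return parts[3]

# Placeholder account ID and secret name suffix:
arn = "arn:aws:secretsmanager:us-west-2:123456789012:secret:my-secret-a1b2c3"
print(arn_region(arn) == "us-west-2")  # True: the secret is in the cluster region
print(arn_region(arn) == "eu-west-1")  # False: the secret would need to be replicated
```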
    3. Retrieve the OIDC provider by running the following command:

      $ oc get --raw=/.well-known/openid-configuration | jq -r '.issuer'
      Example output
      https://<oidc_provider_name>

      Copy the OIDC provider name <oidc_provider_name> from the output to use in the next step.
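
The --identity-provider-arn value passed to ccoctl in the next step is built from your AWS account ID and the issuer URL with its https:// scheme removed. A small sketch of that transformation (the issuer URL and account ID shown are hypothetical):

```python
def oidc_provider_name(issuer_url: str) -> str:
    """Strip the scheme from the OIDC issuer URL; the remainder is <oidc_provider_name>."""
    return issuer_url.removeprefix("https://").removeprefix("http://")

def identity_provider_arn(aws_account: str, issuer_url: str) -> str:
    """Assemble the --identity-provider-arn argument for ccoctl."""
    return f"arn:aws:iam::{aws_account}:oidc-provider/{oidc_provider_name(issuer_url)}"

print(identity_provider_arn("123456789012", "https://oidc.example.com/cluster"))
# arn:aws:iam::123456789012:oidc-provider/oidc.example.com/cluster
```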

    4. Use the ccoctl tool to process the credentials request by running the following command:

      $ ccoctl aws create-iam-roles \
          --name my-role --region=<aws_region> \
          --credentials-requests-dir=credentialsrequest-dir-aws \
          --identity-provider-arn arn:aws:iam::<aws_account>:oidc-provider/<oidc_provider_name> --output-dir=credrequests-ccoctl-output
      Example output
      2023/05/15 18:10:34 Role arn:aws:iam::<aws_account_id>:role/my-role-my-namespace-aws-creds created
      2023/05/15 18:10:34 Saved credentials configuration to: credrequests-ccoctl-output/manifests/my-namespace-aws-creds-credentials.yaml
      2023/05/15 18:10:35 Updated Role policy for Role my-role-my-namespace-aws-creds

      Copy the <aws_role_arn> from the output to use in the next step. For example, arn:aws:iam::<aws_account_id>:role/my-role-my-namespace-aws-creds.

    5. Check the role policy on AWS to confirm that the <aws_region> of "Resource" in the role policy matches the cluster region:

      Example role policy
      {
      	"Version": "2012-10-17",
      	"Statement": [
      		{
      			"Effect": "Allow",
      			"Action": [
      				"secretsmanager:GetSecretValue",
      				"secretsmanager:DescribeSecret"
      			],
      			"Resource": "arn:aws:secretsmanager:<aws_region>:<aws_account_id>:secret:my-secret-xxxxxx"
      		}
      	]
      }
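
The role policy that ccoctl writes mirrors the statementEntries of the CredentialsRequest. The mapping can be sketched as follows; this is a simplified illustration of the correspondence, not ccoctl's actual implementation, and the ARN is a placeholder:

```python
import json

def policy_from_statement_entries(entries: list[dict]) -> dict:
    """Render CredentialsRequest statementEntries as an IAM policy document."""
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": entry["effect"],
                "Action": entry["action"],
                "Resource": entry["resource"],
            }
            for entry in entries
        ],
    }

# The statement entry from the example CredentialsRequest above:
entries = [{
    "effect": "Allow",
    "action": ["secretsmanager:GetSecretValue", "secretsmanager:DescribeSecret"],
    "resource": "arn:aws:secretsmanager:us-west-2:123456789012:secret:my-secret-a1b2c3",
}]
print(json.dumps(policy_from_statement_entries(entries), indent=2))
```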
    6. Bind the service account with the role ARN by running the following command:

      $ oc annotate -n <namespace> sa/<app_service_account> eks.amazonaws.com/role-arn="<aws_role_arn>"
      Example command
      $ oc annotate -n dev sa/default eks.amazonaws.com/role-arn="<aws_role_arn>"
      Example output
      serviceaccount/default annotated
  3. Create a namespace-scoped SecretProviderClass resource to define your secrets store provider. For example, create a SecretProviderClass resource in the /environments/dev/apps/app-taxi/services/taxi/base/config directory of your GitOps repository.

    1. Create a secret-provider-class-aws.yaml file in the same directory where the target deployment is located in your GitOps repository:

      Example secret-provider-class-aws.yaml
      apiVersion: secrets-store.csi.x-k8s.io/v1
      kind: SecretProviderClass
      metadata:
        name: my-aws-provider (1)
        namespace: dev (2)
      spec:
        provider: aws (3)
        parameters: (4)
          objects: |
            - objectName: "testsecret" (5)
              objectType: "secretsmanager"
      1 Name of the secret provider class.
      2 Namespace for the secret provider class. The namespace must match the namespace of the resource that will use the secret.
      3 Name of the secrets store provider.
      4 The provider-specific configuration parameters.
      5 The name of the secret you created in AWS Secrets Manager.
    2. Verify that after pushing this YAML file to your GitOps repository, the namespace-scoped SecretProviderClass resource is populated in the target application page in the Argo CD UI.

      If the Sync Policy of your application is not set to Auto, you can manually sync the SecretProviderClass resource by clicking Sync in the Argo CD UI.

Configuring GitOps managed resources to use mounted secrets

You must configure the GitOps managed resources by adding a volume mount configuration to a deployment and configuring the container pod to use the mounted secret.

Prerequisites
  • You have the AWS Secrets Manager resources stored in your GitOps repository.

  • You have the Secrets Store Container Storage Interface (SSCSI) driver configured to mount secrets from AWS Secrets Manager.

Procedure
  1. Configure the GitOps managed resources. For example, consider that you want to add a volume mount configuration to the deployment of the app-taxi application, where the 100-deployment.yaml file is in the /environments/dev/apps/app-taxi/services/taxi/base/config/ directory.

    1. Add the volume mount to the deployment YAML file and configure the container pod to use the SecretProviderClass resource and the mounted secret:

      Example YAML file
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: taxi
        namespace: dev (1)
      spec:
        replicas: 1
        template:
          metadata:
      # ...
          spec:
            containers:
              - image: nginxinc/nginx-unprivileged:latest
                imagePullPolicy: Always
                name: taxi
                ports:
                  - containerPort: 8080
                volumeMounts:
                  - name: secrets-store-inline
                    mountPath: "/mnt/secrets-store" (2)
                    readOnly: true
                resources: {}
            serviceAccountName: default
            volumes:
              - name: secrets-store-inline
                csi:
                  driver: secrets-store.csi.k8s.io
                  readOnly: true
                  volumeAttributes:
                    secretProviderClass: "my-aws-provider" (3)
      status: {}
      # ...
      1 Namespace for the deployment. This must be the same namespace as the secret provider class.
      2 The path to mount secrets in the volume mount.
      3 Name of the secret provider class.
    2. Push the updated resource YAML file to your GitOps repository.

  2. In the Argo CD UI, click REFRESH on the target application page to apply the updated deployment manifest.

  3. Verify that all the resources are successfully synchronized on the target application page.

  4. Verify that you can access the secrets from AWS Secrets Manager in the pod volume mount:

    1. List the secrets in the pod mount:

      $ oc exec <deployment_name>-<hash> -n <namespace> -- ls /mnt/secrets-store/
      Example command
      $ oc exec taxi-5959644f9-t847m -n dev -- ls /mnt/secrets-store/
      Example output
      <secret_name>
    2. View a secret in the pod mount:

      $ oc exec <deployment_name>-<hash> -n <namespace> -- cat /mnt/secrets-store/<secret_name>
      Example command
      $ oc exec taxi-5959644f9-t847m -n dev -- cat /mnt/secrets-store/testsecret
      Example output
      <secret_value>
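
Inside the pod, each object listed in the SecretProviderClass appears as a plain file named after its objectName under the volume's mountPath, so the application reads secrets with ordinary file I/O. A minimal sketch, assuming the mount path used above (the helper functions are illustrative, not part of the driver):

```python
from pathlib import Path

def read_mounted_secret(name: str, secrets_dir: Path = Path("/mnt/secrets-store")) -> str:
    """Read a secret that the SSCSI driver mounted as a file, e.g. 'testsecret'."""
    return (secrets_dir / name).read_text()

def list_mounted_secrets(secrets_dir: Path = Path("/mnt/secrets-store")) -> list[str]:
    """Equivalent of running `ls /mnt/secrets-store/` from inside the application."""
    return sorted(p.name for p in secrets_dir.iterdir())
```

In the taxi example, read_mounted_secret("testsecret") would return the value stored in AWS Secrets Manager; because the volume is mounted readOnly, writes to these files fail.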