AWS Elastic File Service CSI Driver Operator

Overview

OKD is capable of provisioning persistent volumes (PVs) using the Container Storage Interface (CSI) driver for AWS Elastic File Service (EFS).

Familiarity with persistent storage and configuring CSI volumes is recommended when working with a CSI Operator and driver.

After installing the AWS EFS CSI Driver Operator, OKD installs the AWS EFS CSI Operator and the AWS EFS CSI driver by default in the openshift-cluster-csi-drivers namespace. This allows the AWS EFS CSI Driver Operator to create CSI-provisioned PVs that mount to AWS EFS assets.

  • The AWS EFS CSI Driver Operator, after being installed, does not create a storage class by default for creating persistent volume claims (PVCs). However, you can manually create the AWS EFS StorageClass. The AWS EFS CSI Driver Operator supports dynamic volume provisioning by allowing storage volumes to be created on demand. This eliminates the need for cluster administrators to pre-provision storage.

  • The AWS EFS CSI driver enables you to create and mount AWS EFS PVs.

AWS EFS only supports regional volumes, not zonal volumes.

About CSI

Storage vendors have traditionally provided storage drivers as part of Kubernetes. With the implementation of the Container Storage Interface (CSI), third-party providers can instead deliver storage plugins using a standard interface without ever having to change the core Kubernetes code.

CSI Operators give OKD users storage options, such as volume snapshots, that are not possible with in-tree volume plugins.

Setting up the AWS EFS CSI Driver Operator

  1. If you are using AWS EFS with AWS Secure Token Service (STS), obtain a role Amazon Resource Name (ARN) for STS. This is required for installing the AWS EFS CSI Driver Operator.

  2. Install the AWS EFS CSI Driver Operator.

  3. Install the AWS EFS CSI Driver.

Obtaining a role Amazon Resource Name for Security Token Service

This procedure explains how to obtain a role Amazon Resource Name (ARN) to configure the AWS EFS CSI Driver Operator with OKD on AWS Security Token Service (STS).

Perform this procedure before you install the AWS EFS CSI Driver Operator (see the "Installing the AWS EFS CSI Driver Operator" procedure).

Prerequisites
  • Access to the cluster as a user with the cluster-admin role.

  • AWS account credentials

Procedure

You can obtain the role ARN in multiple ways. The following procedure shows one method that uses the same Cloud Credential Operator (CCO) utility (ccoctl) binary tool that is used during cluster installation.

To obtain a role ARN for configuring AWS EFS CSI Driver Operator using STS:

  1. Extract the ccoctl from the OKD release image, which you used to install the cluster with STS. For more information, see "Configuring the Cloud Credential Operator utility".
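
    For example, a minimal sketch of one way to extract the ccoctl binary (the pull secret path is a placeholder):

    $ RELEASE_IMAGE=$(oc get clusterversion version -o jsonpath='{.status.desired.image}')
    $ CCO_IMAGE=$(oc adm release info --image-for='cloud-credential-operator' ${RELEASE_IMAGE})
    $ oc image extract ${CCO_IMAGE} --file='/usr/bin/ccoctl' -a <path_to_pull_secret>
    $ chmod 775 ccoctl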

  2. Create and save an EFS CredentialsRequest YAML file, as shown in the following example, and then place it in the credrequests directory:

    Example
    apiVersion: cloudcredential.openshift.io/v1
    kind: CredentialsRequest
    metadata:
      name: openshift-aws-efs-csi-driver
      namespace: openshift-cloud-credential-operator
    spec:
      providerSpec:
        apiVersion: cloudcredential.openshift.io/v1
        kind: AWSProviderSpec
        statementEntries:
        - action:
          - elasticfilesystem:*
          effect: Allow
          resource: '*'
      secretRef:
        name: aws-efs-cloud-credentials
        namespace: openshift-cluster-csi-drivers
      serviceAccountNames:
      - aws-efs-csi-driver-operator
      - aws-efs-csi-driver-controller-sa
  3. Run the ccoctl tool to generate a new IAM role in AWS, and create a YAML file for it in the local file system (<path_to_ccoctl_output_dir>/manifests/openshift-cluster-csi-drivers-aws-efs-cloud-credentials-credentials.yaml).

    $ ccoctl aws create-iam-roles --name=<name> --region=<aws_region> --credentials-requests-dir=<path_to_directory_with_list_of_credentials_requests>/credrequests --identity-provider-arn=arn:aws:iam::<aws_account_id>:oidc-provider/<name>-oidc.s3.<aws_region>.amazonaws.com
    • name=<name> is the name used to tag any cloud resources that are created for tracking.

    • region=<aws_region> is the AWS region where cloud resources are created.

    • credentials-requests-dir=<path_to_directory_with_list_of_credentials_requests>/credrequests is the directory containing the EFS CredentialsRequest file created in the previous step.

    • <aws_account_id> is the AWS account ID.

      Example
      $ ccoctl aws create-iam-roles --name my-aws-efs --credentials-requests-dir credrequests --identity-provider-arn arn:aws:iam::123456789012:oidc-provider/my-aws-efs-oidc.s3.us-east-2.amazonaws.com
      Example output
      2022/03/21 06:24:44 Role arn:aws:iam::123456789012:role/my-aws-efs-openshift-cluster-csi-drivers-aws-efs-cloud- created
      2022/03/21 06:24:44 Saved credentials configuration to: /manifests/openshift-cluster-csi-drivers-aws-efs-cloud-credentials-credentials.yaml
      2022/03/21 06:24:45 Updated Role policy for Role my-aws-efs-openshift-cluster-csi-drivers-aws-efs-cloud-
  4. Copy the role ARN from the first line of the Example output in the preceding step. The role ARN is between "Role" and "created". In this example, the role ARN is "arn:aws:iam::123456789012:role/my-aws-efs-openshift-cluster-csi-drivers-aws-efs-cloud-".

    You will need the role ARN when you install the AWS EFS CSI Driver Operator.

Installing the AWS EFS CSI Driver Operator

The AWS EFS CSI Driver Operator (a Red Hat Operator) is not installed in OKD by default. Use the following procedure to install and configure the AWS EFS CSI Driver Operator in your cluster.

Prerequisites
  • Access to the OKD web console.

Procedure

To install the AWS EFS CSI Driver Operator from the web console:

  1. Log in to the web console.

  2. Install the AWS EFS CSI Operator:

    1. Click Operators > OperatorHub.

    2. Locate the AWS EFS CSI Operator by typing AWS EFS CSI in the filter box.

    3. Click the AWS EFS CSI Driver Operator button.

    Be sure to select the AWS EFS CSI Driver Operator and not the AWS EFS Operator. The AWS EFS Operator is a community Operator and is not supported by Red Hat.

    4. On the AWS EFS CSI Driver Operator page, click Install.

    5. On the Install Operator page, ensure that:

      • All namespaces on the cluster (default) is selected.

      • Installed Namespace is set to openshift-cluster-csi-drivers.

    6. Click Install.

      After the installation finishes, the AWS EFS CSI Operator is listed in the Installed Operators section of the web console.

Installing the AWS EFS CSI Driver

After installing the AWS EFS CSI Driver Operator (a Red Hat operator), you install the AWS EFS CSI driver.

Prerequisites
  • Access to the OKD web console.

Procedure
  1. Click Administration > CustomResourceDefinitions > ClusterCSIDriver.

  2. On the Instances tab, click Create ClusterCSIDriver.

  3. Use the following YAML file:

    apiVersion: operator.openshift.io/v1
    kind: ClusterCSIDriver
    metadata:
      name: efs.csi.aws.com
    spec:
      managementState: Managed
  4. Click Create.

  5. Wait for the following Conditions to change to a "True" status:

    • AWSEFSDriverNodeServiceControllerAvailable

    • AWSEFSDriverControllerServiceControllerAvailable
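
    You can also check these conditions from the CLI, for example:

    $ oc get clustercsidriver efs.csi.aws.com -o jsonpath='{range .status.conditions[*]}{.type}{"="}{.status}{"\n"}{end}'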

Creating the AWS EFS storage class

Storage classes are used to differentiate and delineate storage levels and usages. By defining a storage class, users can obtain dynamically provisioned persistent volumes.

The AWS EFS CSI Driver Operator (a Red Hat operator), after being installed, does not create a storage class by default. However, you can manually create the AWS EFS storage class.

Creating the AWS EFS storage class using the console

Procedure
  1. In the OKD console, click Storage > StorageClasses.

  2. On the StorageClasses page, click Create StorageClass.

  3. On the StorageClass page, perform the following steps:

    1. Enter a name to reference the storage class.

    2. Optional: Enter the description.

    3. Select the reclaim policy.

    4. Select efs.csi.aws.com from the Provisioner drop-down list.

    5. Optional: Set the configuration parameters for the selected provisioner.

  4. Click Create.

Creating the AWS EFS storage class using the CLI

Procedure
  • Create a StorageClass object:

    kind: StorageClass
    apiVersion: storage.k8s.io/v1
    metadata:
      name: efs-sc
    provisioner: efs.csi.aws.com
    parameters:
      provisioningMode: efs-ap (1)
      fileSystemId: fs-a5324911 (2)
      directoryPerms: "700" (3)
      gidRangeStart: "1000" (4)
      gidRangeEnd: "2000" (4)
      basePath: "/dynamic_provisioning" (5)
    1 provisioningMode must be efs-ap to enable dynamic provisioning.
    2 fileSystemId must be the ID of the EFS volume created manually.
    3 directoryPerms is the default permission of the root directory of the volume. In this example, the volume is accessible only by the owner.
    4 gidRangeStart and gidRangeEnd set the range of POSIX Group IDs (GIDs) that are used to set the GID of the AWS access point. If not specified, the default range is 50000-7000000. Each provisioned volume, and thus AWS access point, is assigned a unique GID from this range.
    5 basePath is the directory on the EFS volume that is used to create dynamically provisioned volumes. In this case, a PV is provisioned as “/dynamic_provisioning/<random uuid>” on the EFS volume. Only the subdirectory is mounted to pods that use the PV.

    A cluster admin can create several StorageClass objects, each using a different EFS volume.
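
    For example, assuming the YAML above is saved as efs-sc.yaml (a hypothetical file name), you can create and verify the storage class as follows:

    $ oc create -f efs-sc.yaml
    $ oc get storageclass efs-sc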

AWS EFS CSI cross account support

Cross account support allows you to have an OKD cluster in one AWS account and mount your file system in another AWS account by using the AWS Elastic File System (EFS) Container Storage Interface (CSI) driver.

Prerequisites
  • Access to an OKD cluster with administrator rights

  • Two valid AWS accounts

  • The EFS CSI Operator has been installed. For information about installing the EFS CSI Operator, see the Installing the AWS EFS CSI Driver Operator section.

  • Both the OKD cluster and EFS file system must be located in the same AWS region.

  • Ensure that the two virtual private clouds (VPCs) used in the following procedure use different network Classless Inter-Domain Routing (CIDR) ranges.

  • Access to OKD CLI (oc).

  • Access to AWS CLI.

  • Access to jq command-line JSON processor.

Procedure

The following procedure explains how to set up:

  • OKD AWS Account A: Contains an OKD cluster, version 4.16 or later, deployed within a VPC

  • AWS Account B: Contains a VPC (including subnets, route tables, and network connectivity). The EFS filesystem will be created in this VPC.

To use AWS EFS across accounts:

  1. Set up the environment:

    1. Configure environment variables by running the following commands:

      export CLUSTER_NAME="<CLUSTER_NAME>" (1)
      export AWS_REGION="<AWS_REGION>" (2)
      export AWS_ACCOUNT_A_ID="<ACCOUNT_A_ID>" (3)
      export AWS_ACCOUNT_B_ID="<ACCOUNT_B_ID>" (4)
      export AWS_ACCOUNT_A_VPC_CIDR="<VPC_A_CIDR>" (5)
      export AWS_ACCOUNT_B_VPC_CIDR="<VPC_B_CIDR>" (6)
      export AWS_ACCOUNT_A_VPC_ID="<VPC_A_ID>" (7)
      export AWS_ACCOUNT_B_VPC_ID="<VPC_B_ID>" (8)
      export SCRATCH_DIR="<WORKING_DIRECTORY>" (9)
      export CSI_DRIVER_NAMESPACE="openshift-cluster-csi-drivers" (10)
      export AWS_PAGER="" (11)
      
      1 Cluster name of choice.
      2 AWS region of choice.
      3 AWS Account A ID.
      4 AWS Account B ID.
      5 CIDR range of VPC in Account A.
      6 CIDR range of VPC in Account B.
      7 VPC ID in Account A (cluster).
      8 VPC ID in Account B (EFS cross account).
      9 Any writable directory of your choice for storing temporary files.
      10 If your driver is installed in a non-default namespace, change this value.
      11 Disables the AWS CLI pager so that output goes directly to stdout.
    2. Create the working directory by running the following command:

      mkdir -p $SCRATCH_DIR
    3. Verify cluster connectivity by running the following command in the OKD CLI:

      $ oc whoami
    4. Determine the OKD cluster type and set node selector:

      The EFS cross account feature requires assigning AWS IAM policies to the nodes that run the EFS CSI controller pods. Which nodes these are depends on the OKD deployment type.

      • If your cluster is deployed as a Hosted Control Plane (HyperShift), set the NODE_SELECTOR environment variable to hold the worker node label by running the following command:

        export NODE_SELECTOR=node-role.kubernetes.io/worker
      • For all other OKD types, set the NODE_SELECTOR environment variable to hold the master node label by running the following command:

        export NODE_SELECTOR=node-role.kubernetes.io/master
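
      You can confirm that the selector matches the expected nodes, for example:

        oc get nodes --selector="${NODE_SELECTOR}"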
    5. Configure AWS CLI profiles as environment variables for account switching by running the following commands:

      export AWS_ACCOUNT_A="<ACCOUNT_A_NAME>"
      export AWS_ACCOUNT_B="<ACCOUNT_B_NAME>"
    6. Ensure that your AWS CLI is configured with JSON output format as the default for both accounts by running the following commands:

      export AWS_DEFAULT_PROFILE=${AWS_ACCOUNT_A}
      aws configure get output
      export AWS_DEFAULT_PROFILE=${AWS_ACCOUNT_B}
      aws configure get output

      If the preceding commands return:

      • No value: The default output format is already set to JSON and no changes are required.

      • Any value: Reconfigure your AWS CLI to use JSON format. For information about changing output formats, see Setting the output format in the AWS CLI in the AWS documentation.

    7. Unset AWS_PROFILE in your shell to prevent conflicts with AWS_DEFAULT_PROFILE by running the following command:

      unset AWS_PROFILE
  2. Configure the AWS Account B IAM roles and policies:

    1. Switch to your Account B profile by running the following command:

      export AWS_DEFAULT_PROFILE=${AWS_ACCOUNT_B}
    2. Define the IAM role name for the EFS CSI Driver Operator by running the following command:

      export ACCOUNT_B_ROLE_NAME=${CLUSTER_NAME}-cross-account-aws-efs-csi-operator
    3. Create the IAM trust policy file by running the following command:

      cat <<EOF > $SCRATCH_DIR/AssumeRolePolicyInAccountB.json
      {
          "Version": "2012-10-17",
          "Statement": [
              {
                  "Effect": "Allow",
                  "Principal": {
                      "AWS": "arn:aws:iam::${AWS_ACCOUNT_A_ID}:root"
                  },
                  "Action": "sts:AssumeRole",
                  "Condition": {}
              }
          ]
      }
      EOF
    4. Create the IAM role for the EFS CSI Driver Operator by running the following command:

      ACCOUNT_B_ROLE_ARN=$(aws iam create-role \
        --role-name "${ACCOUNT_B_ROLE_NAME}" \
        --assume-role-policy-document file://$SCRATCH_DIR/AssumeRolePolicyInAccountB.json \
        --query "Role.Arn" --output text) \
      && echo $ACCOUNT_B_ROLE_ARN
    5. Create the IAM policy file by running the following command:

      cat << EOF > $SCRATCH_DIR/EfsPolicyInAccountB.json
      {
          "Version": "2012-10-17",
          "Statement": [
              {
                  "Sid": "VisualEditor0",
                  "Effect": "Allow",
                  "Action": [
                      "ec2:DescribeNetworkInterfaces",
                      "ec2:DescribeSubnets"
                  ],
                  "Resource": "*"
              },
              {
                  "Sid": "VisualEditor1",
                  "Effect": "Allow",
                  "Action": [
                      "elasticfilesystem:DescribeMountTargets",
                      "elasticfilesystem:DeleteAccessPoint",
                      "elasticfilesystem:ClientMount",
                      "elasticfilesystem:DescribeAccessPoints",
                      "elasticfilesystem:ClientWrite",
                      "elasticfilesystem:ClientRootAccess",
                      "elasticfilesystem:DescribeFileSystems",
                      "elasticfilesystem:CreateAccessPoint",
                      "elasticfilesystem:TagResource"
                  ],
                  "Resource": "*"
              }
          ]
      }
      EOF
    6. Create the IAM policy by running the following command:

      ACCOUNT_B_POLICY_ARN=$(aws iam create-policy --policy-name "${CLUSTER_NAME}-efs-csi-policy" \
         --policy-document file://$SCRATCH_DIR/EfsPolicyInAccountB.json \
         --query 'Policy.Arn' --output text) \
      && echo ${ACCOUNT_B_POLICY_ARN}
    7. Attach the policy to the role by running the following command:

      aws iam attach-role-policy \
         --role-name "${ACCOUNT_B_ROLE_NAME}" \
         --policy-arn "${ACCOUNT_B_POLICY_ARN}"
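
      Optionally, verify the attachment, for example:

      aws iam list-attached-role-policies --role-name "${ACCOUNT_B_ROLE_NAME}"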
  3. Configure the AWS Account A IAM roles and policies:

    1. Switch to your Account A profile by running the following command:

      export AWS_DEFAULT_PROFILE=${AWS_ACCOUNT_A}
    2. Create the IAM policy document by running the following command:

      cat << EOF > $SCRATCH_DIR/AssumeRoleInlinePolicyPolicyInAccountA.json
      {
        "Version": "2012-10-17",
        "Statement": [
          {
            "Effect": "Allow",
            "Action": "sts:AssumeRole",
            "Resource": "${ACCOUNT_B_ROLE_ARN}"
          }
        ]
      }
      EOF
    3. In AWS Account A, attach the AWS-managed policy "AmazonElasticFileSystemClientFullAccess" to the OKD cluster master role by running the following command:

      EFS_CLIENT_FULL_ACCESS_BUILTIN_POLICY_ARN=arn:aws:iam::aws:policy/AmazonElasticFileSystemClientFullAccess
      declare -A ROLE_SEEN
      for NODE in $(oc get nodes --selector="${NODE_SELECTOR}" -o jsonpath='{.items[*].metadata.name}'); do
          INSTANCE_PROFILE=$(aws ec2 describe-instances \
              --filters "Name=private-dns-name,Values=${NODE}" \
              --query 'Reservations[].Instances[].IamInstanceProfile.Arn' \
              --output text | awk -F'/' '{print $NF}' | xargs)
          MASTER_ROLE_ARN=$(aws iam get-instance-profile \
              --instance-profile-name "${INSTANCE_PROFILE}" \
              --query 'InstanceProfile.Roles[0].Arn' \
              --output text | xargs)
          MASTER_ROLE_NAME=$(echo "${MASTER_ROLE_ARN}" | awk -F'/' '{print $NF}' | xargs)
          echo "Checking role: '${MASTER_ROLE_NAME}'"
          if [[ -n "${ROLE_SEEN[$MASTER_ROLE_NAME]:-}" ]]; then
              echo "Already processed role: '${MASTER_ROLE_NAME}', skipping."
              continue
          fi
          ROLE_SEEN["$MASTER_ROLE_NAME"]=1
          echo "Assigning policy ${EFS_CLIENT_FULL_ACCESS_BUILTIN_POLICY_ARN} to role ${MASTER_ROLE_NAME}"
          aws iam attach-role-policy --role-name "${MASTER_ROLE_NAME}" --policy-arn "${EFS_CLIENT_FULL_ACCESS_BUILTIN_POLICY_ARN}"
      done
  4. Attach the policy to the IAM entity to allow role assumption:

    This step depends on your cluster configuration. In both of the following scenarios, the EFS CSI Driver Operator uses an entity to authenticate to AWS, and this entity must be granted permission to assume roles in Account B.

    If your cluster:

    • Does not have STS enabled: The EFS CSI Driver Operator uses an IAM user entity for AWS authentication. Continue with the step "Attach the policy to the IAM user to allow role assumption".

    • Has STS enabled: The EFS CSI Driver Operator uses an IAM role entity for AWS authentication. Continue with the step "Attach the policy to the IAM role to allow role assumption".

  5. Attach the policy to the IAM user to allow role assumption:

    1. Identify the IAM User used by the EFS CSI Driver Operator by running the following command:

      EFS_CSI_DRIVER_OPERATOR_USER=$(oc -n openshift-cloud-credential-operator get credentialsrequest/openshift-aws-efs-csi-driver -o json | jq -r '.status.providerStatus.user')
    2. Attach the policy to the IAM user by running the following command:

      aws iam put-user-policy \
          --user-name "${EFS_CSI_DRIVER_OPERATOR_USER}"  \
          --policy-name efs-cross-account-inline-policy \
          --policy-document file://$SCRATCH_DIR/AssumeRoleInlinePolicyPolicyInAccountA.json
  6. Attach the policy to the IAM role to allow role assumption:

    1. Identify the IAM role name currently used by the EFS CSI Driver Operator by running the following command:

      EFS_CSI_DRIVER_OPERATOR_ROLE=$(oc -n ${CSI_DRIVER_NAMESPACE} get secret/aws-efs-cloud-credentials -o jsonpath='{.data.credentials}' | base64 -d | grep role_arn | cut -d'/' -f2) && echo ${EFS_CSI_DRIVER_OPERATOR_ROLE}
    2. Attach the policy to the IAM role used by the EFS CSI Driver Operator by running the following command:

       aws iam put-role-policy \
          --role-name "${EFS_CSI_DRIVER_OPERATOR_ROLE}"  \
          --policy-name efs-cross-account-inline-policy \
          --policy-document file://$SCRATCH_DIR/AssumeRoleInlinePolicyPolicyInAccountA.json
  7. Configure VPC peering:

    1. Initiate a peering request from Account A to Account B by running the following command:

      export AWS_DEFAULT_PROFILE=${AWS_ACCOUNT_A}
      PEER_REQUEST_ID=$(aws ec2 create-vpc-peering-connection --vpc-id "${AWS_ACCOUNT_A_VPC_ID}" --peer-vpc-id "${AWS_ACCOUNT_B_VPC_ID}" --peer-owner-id "${AWS_ACCOUNT_B_ID}" --query VpcPeeringConnection.VpcPeeringConnectionId --output text)
    2. Accept the peering request from Account B by running the following command:

      export AWS_DEFAULT_PROFILE=${AWS_ACCOUNT_B}
      aws ec2 accept-vpc-peering-connection --vpc-peering-connection-id "${PEER_REQUEST_ID}"
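
      Optionally, confirm that the peering connection is active, for example:

      aws ec2 describe-vpc-peering-connections --vpc-peering-connection-ids "${PEER_REQUEST_ID}" --query 'VpcPeeringConnections[].Status.Code' --output text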
    3. Retrieve the route table IDs for Account A and add routes to the Account B VPC by running the following command:

      export AWS_DEFAULT_PROFILE=${AWS_ACCOUNT_A}
      for NODE in $(oc get nodes --selector=node-role.kubernetes.io/worker | tail -n +2 | awk '{print $1}')
      do
          SUBNET=$(aws ec2 describe-instances --filters "Name=private-dns-name,Values=$NODE" --query 'Reservations[*].Instances[*].NetworkInterfaces[*].SubnetId' | jq -r '.[0][0][0]')
          echo SUBNET is ${SUBNET}
          ROUTE_TABLE_ID=$(aws ec2 describe-route-tables --filters "Name=association.subnet-id,Values=${SUBNET}" --query 'RouteTables[*].RouteTableId' | jq -r '.[0]')
          echo Route table ID is $ROUTE_TABLE_ID
          aws ec2 create-route --route-table-id ${ROUTE_TABLE_ID} --destination-cidr-block ${AWS_ACCOUNT_B_VPC_CIDR} --vpc-peering-connection-id ${PEER_REQUEST_ID}
      done
    4. Retrieve the route table IDs for Account B and add routes to the Account A VPC by running the following command:

      export AWS_DEFAULT_PROFILE=${AWS_ACCOUNT_B}
      for ROUTE_TABLE_ID in $(aws ec2 describe-route-tables   --filters "Name=vpc-id,Values=${AWS_ACCOUNT_B_VPC_ID}"   --query "RouteTables[].RouteTableId" | jq -r '.[]')
      do
          echo Route table ID is $ROUTE_TABLE_ID
          aws ec2 create-route --route-table-id ${ROUTE_TABLE_ID} --destination-cidr-block ${AWS_ACCOUNT_A_VPC_CIDR} --vpc-peering-connection-id ${PEER_REQUEST_ID}
      done
  8. Configure security groups in Account B to allow NFS traffic from Account A to EFS:

    1. Switch to your Account B profile by running the following command:

      export AWS_DEFAULT_PROFILE=${AWS_ACCOUNT_B}
    2. Configure the VPC security groups for EFS access by running the following command:

      SECURITY_GROUP_ID=$(aws ec2 describe-security-groups --filters Name=vpc-id,Values="${AWS_ACCOUNT_B_VPC_ID}" | jq -r '.SecurityGroups[].GroupId')
      aws ec2 authorize-security-group-ingress \
       --group-id "${SECURITY_GROUP_ID}" \
       --protocol tcp \
       --port 2049 \
       --cidr "${AWS_ACCOUNT_A_VPC_CIDR}" | jq .
  9. Create a region-wide EFS filesystem in Account B:

    1. Switch to your Account B profile by running the following command:

      export AWS_DEFAULT_PROFILE=${AWS_ACCOUNT_B}
    2. Create a region-wide EFS file system by running the following command:

      CROSS_ACCOUNT_FS_ID=$(aws efs create-file-system --creation-token efs-token-1 \
      --region ${AWS_REGION} \
      --encrypted | jq -r '.FileSystemId') \
      && echo $CROSS_ACCOUNT_FS_ID
    3. Configure region-wide mount targets for EFS by running the following command:

      for SUBNET in $(aws ec2 describe-subnets \
        --filters "Name=vpc-id,Values=${AWS_ACCOUNT_B_VPC_ID}" \
        --region ${AWS_REGION} \
        | jq -r '.Subnets[].SubnetId'); do \
          MOUNT_TARGET=$(aws efs create-mount-target --file-system-id ${CROSS_ACCOUNT_FS_ID} \
          --subnet-id ${SUBNET} \
          --region ${AWS_REGION} \
          | jq -r '.MountTargetId'); \
          echo ${MOUNT_TARGET}; \
      done

      This creates a mount target in each subnet of your VPC.
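
      It can take a minute or two for the mount targets to become available. You can check their state, for example:

      aws efs describe-mount-targets --file-system-id ${CROSS_ACCOUNT_FS_ID} --query 'MountTargets[].LifeCycleState' --output text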

  10. Configure the EFS Operator for cross-account access:

    1. Define custom names for the secret and storage class that you will create in subsequent steps by running the following commands:

      export SECRET_NAME=my-efs-cross-account
      export STORAGE_CLASS_NAME=efs-sc-cross
    2. Create a secret that references the role ARN in Account B by running the following command in the OKD CLI:

      oc create secret generic ${SECRET_NAME} -n ${CSI_DRIVER_NAMESPACE} --from-literal=awsRoleArn="${ACCOUNT_B_ROLE_ARN}"
    3. Grant the CSI driver controller access to the newly created secret by running the following commands in the OKD CLI:

      oc -n ${CSI_DRIVER_NAMESPACE} create role access-secrets --verb=get,list,watch --resource=secrets
      oc -n ${CSI_DRIVER_NAMESPACE} create rolebinding --role=access-secrets default-to-secrets --serviceaccount=${CSI_DRIVER_NAMESPACE}:aws-efs-csi-driver-controller-sa
    4. Create a new storage class that references the EFS ID from Account B and the secret created previously by running the following command in the OKD CLI:

      cat << EOF | oc apply -f -
      kind: StorageClass
      apiVersion: storage.k8s.io/v1
      metadata:
        name: ${STORAGE_CLASS_NAME}
      provisioner: efs.csi.aws.com
      parameters:
        provisioningMode: efs-ap
        fileSystemId: ${CROSS_ACCOUNT_FS_ID}
        directoryPerms: "700"
        gidRangeStart: "1000"
        gidRangeEnd: "2000"
        basePath: "/dynamic_provisioning"
        csi.storage.k8s.io/provisioner-secret-name: ${SECRET_NAME}
        csi.storage.k8s.io/provisioner-secret-namespace: ${CSI_DRIVER_NAMESPACE}
      EOF
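
    5. Optional: To verify the configuration, create a test PVC that uses the new storage class, for example (the claim name is arbitrary):

      cat << EOF | oc apply -f -
      apiVersion: v1
      kind: PersistentVolumeClaim
      metadata:
        name: efs-cross-account-test
      spec:
        storageClassName: ${STORAGE_CLASS_NAME}
        accessModes:
          - ReadWriteMany
        resources:
          requests:
            storage: 5Gi
      EOF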

Creating and configuring access to EFS volumes in AWS

This procedure explains how to create and configure EFS volumes in AWS so that you can use them in OKD.

Prerequisites
  • AWS account credentials

Procedure

To create and configure access to an EFS volume in AWS:

  1. On the AWS console, open https://console.aws.amazon.com/efs.

  2. Click Create file system:

    • Enter a name for the file system.

    • For Virtual Private Cloud (VPC), select your OKD cluster's virtual private cloud (VPC).

    • Accept default settings for all other selections.

  3. Wait for the volume and mount targets to be fully created:

    1. Go to https://console.aws.amazon.com/efs#/file-systems.

    2. Click your volume, and on the Network tab wait for all mount targets to become available (~1-2 minutes).

  4. On the Network tab, copy the Security Group ID (you will need this in the next step).

  5. Go to https://console.aws.amazon.com/ec2/v2/home#SecurityGroups, and find the Security Group used by the EFS volume.

  6. On the Inbound rules tab, click Edit inbound rules, and then add a new rule with the following settings to allow OKD nodes to access EFS volumes:

    • Type: NFS

    • Protocol: TCP

    • Port range: 2049

    • Source: Custom/IP address range of your nodes (for example: “10.0.0.0/16”)

      This rule allows nodes in the OKD cluster to reach the EFS volume over NFS.

  7. Save the rule.
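
Alternatively, you can add the same inbound rule with the AWS CLI, for example (the security group ID and CIDR are placeholders):

    $ aws ec2 authorize-security-group-ingress \
        --group-id <security_group_id> \
        --protocol tcp \
        --port 2049 \
        --cidr 10.0.0.0/16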

Dynamic provisioning for Amazon Elastic File Storage

The AWS EFS CSI driver supports a different form of dynamic provisioning than other CSI drivers. It provisions new PVs as subdirectories of a pre-existing EFS volume. The PVs are independent of each other. However, they all share the same EFS volume. When the volume is deleted, all PVs provisioned out of it are deleted too. The EFS CSI driver creates an AWS Access Point for each such subdirectory. Due to AWS AccessPoint limits, you can only dynamically provision 1000 PVs from a single StorageClass/EFS volume.

Note that PVC.spec.resources is not enforced by EFS.

In the example below, you request 5 GiB of space. However, the created PV is limitless and can store any amount of data (petabytes, for example). A broken application, or even a rogue application, can cause significant expenses when it stores too much data on the volume.

Monitoring EFS volume sizes in AWS is strongly recommended.

Prerequisites
  • You have created Amazon Elastic File Storage (Amazon EFS) volumes.

  • You have created the AWS EFS storage class.

Procedure

To enable dynamic provisioning:

  • Create a PVC (or StatefulSet or Template) as usual, referring to the StorageClass created previously.

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: test
    spec:
      storageClassName: efs-sc
      accessModes:
        - ReadWriteMany
      resources:
        requests:
          storage: 5Gi
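
    A pod can then mount the claim like any other PVC. The following is a minimal sketch; the pod name, image, and mount path are illustrative:

    apiVersion: v1
    kind: Pod
    metadata:
      name: efs-app
    spec:
      containers:
        - name: app
          image: registry.access.redhat.com/ubi9/ubi-minimal
          command: ["sleep", "infinity"]
          volumeMounts:
            - name: persistent-storage
              mountPath: /data
      volumes:
        - name: persistent-storage
          persistentVolumeClaim:
            claimName: test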

If you have problems setting up dynamic provisioning, see AWS EFS troubleshooting.

Creating static PVs with Amazon Elastic File Storage

It is possible to use an Amazon Elastic File Storage (Amazon EFS) volume as a single PV without any dynamic provisioning. The whole volume is mounted to pods.

Prerequisites
  • You have created Amazon EFS volumes.

Procedure
  • Create the PV using the following YAML file:

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: efs-pv
    spec:
      capacity: (1)
        storage: 5Gi
      volumeMode: Filesystem
      accessModes:
        - ReadWriteMany
        - ReadWriteOnce
      persistentVolumeReclaimPolicy: Retain
      csi:
        driver: efs.csi.aws.com
        volumeHandle: fs-ae66151a (2)
        volumeAttributes:
          encryptInTransit: "false" (3)
    1 spec.capacity does not have any meaning and is ignored by the CSI driver. It is used only when binding to a PVC. Applications can store any amount of data to the volume.
    2 volumeHandle must be the same ID as the EFS volume you created in AWS. If you are providing your own access point, volumeHandle should be <EFS volume ID>::<access point ID>. For example: fs-6e633ada::fsap-081a1d293f0004630.
    3 If desired, you can disable encryption in transit. Encryption is enabled by default.
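
    To use the PV, create a PVC that binds to it by name, for example (a sketch; the claim name is arbitrary):

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: efs-claim
    spec:
      storageClassName: ""
      volumeName: efs-pv
      accessModes:
        - ReadWriteMany
      resources:
        requests:
          storage: 5Gi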

If you have problems setting up static PVs, see AWS EFS troubleshooting.

Amazon Elastic File Storage security

The following information is important for Amazon Elastic File Storage (Amazon EFS) security.

When using access points, for example, by using dynamic provisioning as described earlier, Amazon automatically replaces GIDs on files with the GID of the access point. In addition, EFS considers the user ID, group ID, and secondary group IDs of the access point when evaluating file system permissions. EFS ignores the NFS client’s IDs. For more information about access points, see https://docs.aws.amazon.com/efs/latest/ug/efs-access-points.html.

As a consequence, EFS volumes silently ignore FSGroup; OKD is not able to replace the GIDs of files on the volume with FSGroup. Any pod that can access a mounted EFS access point can access any file on it.

Unrelated to this, encryption in transit is enabled by default. For more information, see https://docs.aws.amazon.com/efs/latest/ug/encryption-in-transit.html.

AWS EFS storage CSI usage metrics

Usage metrics overview

Amazon Web Services (AWS) Elastic File Service (EFS) storage Container Storage Interface (CSI) usage metrics allow you to monitor how much space is used by either dynamically or statically provisioned EFS volumes.

This feature is disabled by default, because turning on metrics can lead to performance degradation.

The AWS EFS usage metrics feature collects volume metrics in the AWS EFS CSI Driver by recursively walking through the files in the volume. Because this effort can degrade performance, administrators must explicitly enable this feature.

Enabling usage metrics using the web console

To enable Amazon Web Services (AWS) Elastic File Service (EFS) Storage Container Storage Interface (CSI) usage metrics using the web console:

  1. Click Administration > CustomResourceDefinitions.

  2. On the CustomResourceDefinitions page next to the Name dropdown box, type clustercsidriver.

  3. Click CRD ClusterCSIDriver.

  4. Click the YAML tab.

  5. Under spec.driverConfig.aws.efsVolumeMetrics.state, set the value to RecursiveWalk.

    RecursiveWalk indicates that volume metrics collection in the AWS EFS CSI Driver is performed by recursively walking through the files in the volume.

    Example ClusterCSIDriver efs.csi.aws.com YAML file
    spec:
        driverConfig:
            driverType: AWS
            aws:
                efsVolumeMetrics:
                  state: RecursiveWalk
                  recursiveWalk:
                    refreshPeriodMinutes: 100
                    fsRateLimit: 10
  6. Optional: To define how the recursive walk operates, you can also set the following fields:

    • refreshPeriodMinutes: Specifies the refresh frequency for volume metrics in minutes. If this field is left blank, a reasonable default is chosen, which is subject to change over time. The current default is 240 minutes. The valid range is 1 to 43,200 minutes.

    • fsRateLimit: Defines the rate limit for processing volume metrics in goroutines per file system. If this field is left blank, a reasonable default is chosen, which is subject to change over time. The current default is 5 goroutines. The valid range is 1 to 100 goroutines.

  7. Click Save.

To disable AWS EFS CSI usage metrics, use the preceding procedure, but for spec.driverConfig.aws.efsVolumeMetrics.state, change the value from RecursiveWalk to Disabled.

Enabling usage metrics using the CLI

To enable Amazon Web Services (AWS) Elastic File Service (EFS) storage Container Storage Interface (CSI) usage metrics using the CLI:

  1. Edit ClusterCSIDriver by running the following command:

    $ oc edit clustercsidriver efs.csi.aws.com
  2. Under spec.driverConfig.aws.efsVolumeMetrics.state, set the value to RecursiveWalk.

    RecursiveWalk indicates that volume metrics collection in the AWS EFS CSI Driver is performed by recursively walking through the files in the volume.

    Example ClusterCSIDriver efs.csi.aws.com YAML file
    spec:
        driverConfig:
            driverType: AWS
            aws:
                efsVolumeMetrics:
                  state: RecursiveWalk
                  recursiveWalk:
                    refreshPeriodMinutes: 100
                    fsRateLimit: 10
  3. Optional: To define how the recursive walk operates, you can also set the following fields:

    • refreshPeriodMinutes: Specifies the refresh frequency for volume metrics in minutes. If this field is left blank, a reasonable default is chosen, which is subject to change over time. The current default is 240 minutes. The valid range is 1 to 43,200 minutes.

    • fsRateLimit: Defines the rate limit for processing volume metrics in goroutines per file system. If this field is left blank, a reasonable default is chosen, which is subject to change over time. The current default is 5 goroutines. The valid range is 1 to 100 goroutines.

  4. Save the changes to the efs.csi.aws.com object.

To disable AWS EFS CSI usage metrics, use the preceding procedure, but for spec.driverConfig.aws.efsVolumeMetrics.state, change the value from RecursiveWalk to Disabled.
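
For example, instead of editing the object interactively, you can apply the same change with a merge patch (a sketch; set state to Disabled to turn the feature off):

    $ oc patch clustercsidriver efs.csi.aws.com --type=merge \
        -p '{"spec":{"driverConfig":{"driverType":"AWS","aws":{"efsVolumeMetrics":{"state":"RecursiveWalk"}}}}}'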

Amazon Elastic File Storage troubleshooting

The following information provides guidance on how to troubleshoot issues with Amazon Elastic File Storage (Amazon EFS):

  • The AWS EFS Operator and CSI driver run in namespace openshift-cluster-csi-drivers.

  • To initiate gathering of logs of the AWS EFS Operator and CSI driver, run the following command:

    $ oc adm must-gather
    [must-gather      ] OUT Using must-gather plug-in image: quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:125f183d13601537ff15b3239df95d47f0a604da2847b561151fedd699f5e3a5
    [must-gather      ] OUT namespace/openshift-must-gather-xm4wq created
    [must-gather      ] OUT clusterrolebinding.rbac.authorization.k8s.io/must-gather-2bd8x created
    [must-gather      ] OUT pod for plug-in image quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:125f183d13601537ff15b3239df95d47f0a604da2847b561151fedd699f5e3a5 created
  • To show AWS EFS Operator errors, view the ClusterCSIDriver status:

    $ oc get clustercsidriver efs.csi.aws.com -o yaml
  • If a volume cannot be mounted to a pod (as shown in the output of the following command):

    $ oc describe pod
    ...
      Type     Reason       Age    From               Message
      ----     ------       ----   ----               -------
      Normal   Scheduled    2m13s  default-scheduler  Successfully assigned default/efs-app to ip-10-0-135-94.ec2.internal
      Warning  FailedMount  13s    kubelet            MountVolume.SetUp failed for volume "pvc-d7c097e6-67ec-4fae-b968-7e7056796449" : rpc error: code = DeadlineExceeded desc = context deadline exceeded (1)
      Warning  FailedMount  10s    kubelet            Unable to attach or mount volumes: unmounted volumes=[persistent-storage], unattached volumes=[persistent-storage kube-api-access-9j477]: timed out waiting for the condition
    1 Warning message indicating volume not mounted.

    This error is frequently caused by AWS dropping packets between an OKD node and Amazon EFS.

    Check that the following are correct:

    • AWS firewall and Security Groups

    • Networking: port number and IP addresses

Uninstalling the AWS EFS CSI Driver Operator

All EFS PVs are inaccessible after uninstalling the AWS EFS CSI Driver Operator (a Red Hat operator).

Prerequisites
  • Access to the OKD web console.

Procedure

To uninstall the AWS EFS CSI Driver Operator from the web console:

  1. Log in to the web console.

  2. Stop all applications that use AWS EFS PVs.

  3. Delete all AWS EFS PVs:

    1. Click Storage > PersistentVolumeClaims.

    2. Select each PVC that is in use by the AWS EFS CSI Driver Operator, click the drop-down menu on the far right of the PVC, and then click Delete PersistentVolumeClaims.

  4. Uninstall the AWS EFS CSI driver:

    Before you can uninstall the Operator, you must remove the CSI driver.

    1. Click Administration > CustomResourceDefinitions > ClusterCSIDriver.

    2. On the Instances tab, for efs.csi.aws.com, on the far left side, click the drop-down menu, and then click Delete ClusterCSIDriver.

    3. When prompted, click Delete.

  5. Uninstall the AWS EFS CSI Operator:

    1. Click Operators > Installed Operators.

    2. On the Installed Operators page, scroll or type AWS EFS CSI into the Search by name box to find the Operator, and then click it.

    3. On the upper right of the Installed Operators > Operator details page, click Actions > Uninstall Operator.

    4. When prompted on the Uninstall Operator window, click the Uninstall button to remove the Operator from the namespace. Any applications deployed by the Operator on the cluster need to be cleaned up manually.

      After uninstalling, the AWS EFS CSI Driver Operator is no longer listed in the Installed Operators section of the web console.

Before you can destroy a cluster (openshift-install destroy cluster), you must delete the EFS volume in AWS. An OKD cluster cannot be destroyed when there is an EFS volume that uses the cluster’s VPC. Amazon does not allow deletion of such a VPC.
