Uninstalling a cluster on OpenStack from your own infrastructure

You can remove a cluster that you deployed to Red Hat OpenStack Platform (RHOSP) on user-provisioned infrastructure.

Prerequisites

  • You have the following on your machine:

    • A single directory in which you can create files to help you with the removal process

    • Python 3

Downloading playbook dependencies

The Ansible playbooks that simplify the removal process on user-provisioned infrastructure require several Python modules. On the machine where you will run the process, add the repositories that provide the modules, and then download the modules.

These instructions assume that you are using Red Hat Enterprise Linux (RHEL) 8.
Prerequisites
  • Python 3 is installed on your machine

Procedure
  1. On a command line, add the repositories:

    $ sudo subscription-manager register # If not done already
    $ sudo subscription-manager attach --pool=$YOUR_POOLID # If not done already
    $ sudo subscription-manager repos --disable=* # If not done already
    
    $ sudo subscription-manager repos \
      --enable=rhel-8-for-x86_64-baseos-rpms \
      --enable=openstack-16-tools-for-rhel-8-x86_64-rpms \
      --enable=ansible-2.9-for-rhel-8-x86_64-rpms \
      --enable=rhel-8-for-x86_64-appstream-rpms
  2. Install the modules:

    $ sudo yum install python3-openstackclient ansible python3-openstacksdk
  3. Ensure that the python command points to python3:

    $ sudo alternatives --set python /usr/bin/python3
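
    To confirm that the dependencies resolve, you can run a quick, optional check. This is a sanity check rather than part of the official procedure, and it assumes that the packages from the previous steps installed cleanly:

    $ python -c 'import ansible, openstack; print("dependencies OK")'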

Removing a cluster on RHOSP that uses your own infrastructure

You can remove an OpenShift Container Platform cluster on Red Hat OpenStack Platform (RHOSP) that uses your own infrastructure. To complete the removal process quickly, create and run several Ansible playbooks.

Prerequisites
  • Python 3 is installed on your machine

  • You downloaded the modules in "Downloading playbook dependencies"

Procedure
You might still have the common.yaml and inventory.yaml files left over from when you installed OpenShift Container Platform. If you do, you can skip the first two steps of the procedure.
  1. Insert the following content into a local file called common.yaml:

    common.yaml Ansible playbook
    - hosts: localhost
      gather_facts: no
    
      vars_files:
      - metadata.json
    
      tasks:
      - name: 'Compute resource names'
        set_fact:
          cluster_id_tag: "openshiftClusterID={{ infraID }}"
          os_network: "{{ infraID }}-network"
          os_subnet: "{{ infraID }}-nodes"
          os_router: "{{ infraID }}-external-router"
          # Port names
          os_port_api: "{{ infraID }}-api-port"
          os_port_ingress: "{{ infraID }}-ingress-port"
          os_port_bootstrap: "{{ infraID }}-bootstrap-port"
          os_port_master: "{{ infraID }}-master-port"
          os_port_worker: "{{ infraID }}-worker-port"
          # Security group names
          os_sg_master: "{{ infraID }}-master"
          os_sg_worker: "{{ infraID }}-worker"
          # Server names
          os_bootstrap_server_name: "{{ infraID }}-bootstrap"
          os_cp_server_name: "{{ infraID }}-master"
          os_compute_server_name: "{{ infraID }}-worker"
          # Trunk names
          os_cp_trunk_name: "{{ infraID }}-master-trunk"
          os_compute_trunk_name: "{{ infraID }}-worker-trunk"
          # Subnet pool name
          subnet_pool: "{{ infraID }}-kuryr-pod-subnetpool"
          # Service network name
          os_svc_network: "{{ infraID }}-kuryr-service-network"
          # Service subnet name
          os_svc_subnet: "{{ infraID }}-kuryr-service-subnet"
          # Ignition files
          os_bootstrap_ignition: "{{ infraID }}-bootstrap-ignition.json"
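
    The playbook reads infraID from metadata.json, the metadata file that the installation program created in your assets directory, and derives every resource name from that value. To confirm which cluster the playbooks will act on, you can print the value, assuming that metadata.json is in the directory where you run the playbooks:

    $ python -c 'import json; print(json.load(open("metadata.json"))["infraID"])'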
  2. Insert the following content into a local file called inventory.yaml, and edit the values to match your own:

    inventory.yaml Ansible inventory
    all:
      hosts:
        localhost:
          ansible_connection: local
          ansible_python_interpreter: "{{ansible_playbook_python}}"
    
          # User-provided values
          os_subnet_range: '10.0.0.0/16'
          os_flavor_master: 'm1.xlarge'
          os_flavor_worker: 'm1.large'
          os_image_rhcos: 'rhcos'
          os_external_network: 'external'
          # OpenShift API floating IP address
          os_api_fip: '203.0.113.23'
          # OpenShift Ingress floating IP address
          os_ingress_fip: '203.0.113.19'
          # Service subnet cidr
          svc_subnet_range: '172.30.0.0/16'
          os_svc_network_range: '172.30.0.0/15'
          # Subnet pool prefixes
          cluster_network_cidrs: '10.128.0.0/14'
          # Subnet pool prefix length
          host_prefix: '23'
          # Name of the SDN.
          # Possible values are OpenshiftSDN or Kuryr.
          os_networking_type: 'OpenshiftSDN'
    
          # Number of provisioned Control Plane nodes
          # 3 is the minimum number for a fully-functional cluster.
          os_cp_nodes_number: 3
    
          # Number of provisioned Compute nodes.
          # 3 is the minimum number for a fully-functional cluster.
          os_compute_nodes_number: 3
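
    The removal playbooks in the following steps do not appear to release the API and Ingress floating IP addresses that are recorded here. If you no longer need them after the cluster is removed, you can delete them manually. The addresses below are the sample values from the inventory file above; substitute your own:

    $ openstack floating ip list
    $ openstack floating ip delete 203.0.113.23 203.0.113.19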
  3. Optional: If your cluster uses Kuryr, insert the following content into a local file called down-06_load-balancers.yaml:

    down-06_load-balancers.yaml
    # Required Python packages:
    #
    # ansible
    # openstackclient
    # openstacksdk
    
    - import_playbook: common.yaml
    
    - hosts: all
      gather_facts: no
    
      tasks:
      - name: 'Get an authentication token'
        os_auth:
        register: cloud
        when: os_networking_type == "Kuryr"
    
      - name: 'List octavia versions'
        uri:
          method: GET
          headers:
            X-Auth-Token: "{{ cloud.ansible_facts.auth_token }}"
            Content-Type: 'application/json'
          url: "{{ cloud.ansible_facts.service_catalog | selectattr('name', 'match', 'octavia') | first | json_query('endpoints') | selectattr('interface', 'match', 'public') | first | json_query('url') }}/"
        register: octavia_versions
        when: os_networking_type == "Kuryr"
    
      - set_fact:
          versions: "{{ octavia_versions.json.versions | selectattr('id', 'match', 'v2.5') | map(attribute='id') | list }}"
        when: os_networking_type == "Kuryr"
    
      - name: 'List tagged loadbalancers'
        uri:
          method: GET
          headers:
            X-Auth-Token: "{{ cloud.ansible_facts.auth_token }}"
          url: "{{ cloud.ansible_facts.service_catalog | selectattr('name', 'match', 'octavia') | first | json_query('endpoints') | selectattr('interface', 'match', 'public') | first | json_query('url') }}/v2.0/lbaas/loadbalancers?tags={{cluster_id_tag}}"
        when:
        - os_networking_type == "Kuryr"
        - versions | length > 0
        register: lbs_tagged
    
      # NOTE: Kuryr creates an Octavia load balancer
      # for each service present on the cluster. Let's make
      # sure to remove the resources generated.
      - name: 'Remove the cluster load balancers'
        command:
          cmd: "openstack loadbalancer delete --cascade {{ item.id }}"
        with_items: "{{ lbs_tagged.json.loadbalancers }}"
        when:
        - os_networking_type == "Kuryr"
        - versions | length > 0
        - '"PENDING" not in item.provisioning_status'
    
      - name: 'List loadbalancers tagged on description'
        uri:
          method: GET
          headers:
            X-Auth-Token: "{{ cloud.ansible_facts.auth_token }}"
          url: "{{ cloud.ansible_facts.service_catalog | selectattr('name', 'match', 'octavia') | first | json_query('endpoints') | selectattr('interface', 'match', 'public') | first | json_query('url') }}/v2.0/lbaas/loadbalancers?description={{cluster_id_tag}}"
        when:
        - os_networking_type == "Kuryr"
        - versions | length == 0
        register: lbs_description
    
      # NOTE: Kuryr creates an Octavia load balancer
      # for each service present on the cluster. Let's make
      # sure to remove the resources generated.
      - name: 'Remove the cluster load balancers'
        command:
          cmd: "openstack loadbalancer delete --cascade {{ item.id }}"
        with_items: "{{ lbs_description.json.loadbalancers }}"
        when:
        - os_networking_type == "Kuryr"
        - versions | length == 0
        - '"PENDING" not in item.provisioning_status'
  4. Insert the following content into a local file called down-05_compute-nodes.yaml:

    down-05_compute-nodes.yaml
    # Required Python packages:
    #
    # ansible
    # openstackclient
    # openstacksdk
    
    - import_playbook: common.yaml
    
    - hosts: all
      gather_facts: no
    
      tasks:
      - name: 'Remove the Compute servers'
        os_server:
          name: "{{ item.1 }}-{{ item.0 }}"
          state: absent
        with_indexed_items: "{{ [os_compute_server_name] * os_compute_nodes_number }}"
    
      - name: 'List the Compute trunks'
        command:
          cmd: "openstack network trunk list -c Name -f value"
        when: os_networking_type == "Kuryr"
        register: trunks
    
      - name: 'Remove the Compute trunks'
        command:
          cmd: "openstack network trunk delete {{ item.1 }}-{{ item.0 }}"
        when:
        - os_networking_type == "Kuryr"
        - (item.1|string + '-' + item.0|string) in trunks.stdout_lines|list
        with_indexed_items: "{{ [os_compute_trunk_name] * os_compute_nodes_number }}"
    
      - name: 'Remove the Compute ports'
        os_port:
          name: "{{ item.1 }}-{{ item.0 }}"
          state: absent
        with_indexed_items: "{{ [os_port_worker] * os_compute_nodes_number }}"
  5. Insert the following content into a local file called down-04_control-plane.yaml:

    down-04_control-plane.yaml
    # Required Python packages:
    #
    # ansible
    # openstackclient
    # openstacksdk
    
    - import_playbook: common.yaml
    
    - hosts: all
      gather_facts: no
    
      tasks:
      - name: 'Remove the Control Plane servers'
        os_server:
          name: "{{ item.1 }}-{{ item.0 }}"
          state: absent
        with_indexed_items: "{{ [os_cp_server_name] * os_cp_nodes_number }}"
    
      - name: 'List the Control Plane trunks'
        command:
          cmd: "openstack network trunk list -c Name -f value"
        when: os_networking_type == "Kuryr"
        register: trunks
    
      - name: 'Remove the Control Plane trunks'
        command:
          cmd: "openstack network trunk delete {{ item.1 }}-{{ item.0 }}"
        when:
        - os_networking_type == "Kuryr"
        - (item.1|string + '-' + item.0|string) in trunks.stdout_lines|list
        with_indexed_items: "{{ [os_cp_trunk_name] * os_cp_nodes_number }}"
    
      - name: 'Remove the Control Plane ports'
        os_port:
          name: "{{ item.1 }}-{{ item.0 }}"
          state: absent
        with_indexed_items: "{{ [os_port_master] * os_cp_nodes_number }}"
  6. Insert the following content into a local file called down-03_bootstrap.yaml:

    down-03_bootstrap.yaml
    # Required Python packages:
    #
    # ansible
    # openstacksdk
    
    - import_playbook: common.yaml
    
    - hosts: all
      gather_facts: no
    
      tasks:
      - name: 'Remove the bootstrap server'
        os_server:
          name: "{{ os_bootstrap_server_name }}"
          state: absent
          delete_fip: yes
    
      - name: 'Remove the bootstrap server port'
        os_port:
          name: "{{ os_port_bootstrap }}"
          state: absent
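
    The delete_fip: yes option also releases any floating IP address that is still attached to the bootstrap server. After you run the playbooks, looking the server up should fail because it no longer exists. Replace the example infrastructure ID with your own:

    $ openstack server show <infraID>-bootstrap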
  7. Insert the following content into a local file called down-02_network.yaml:

    down-02_network.yaml
    # Required Python packages:
    #
    # ansible
    # openstackclient
    # openstacksdk
    
    - import_playbook: common.yaml
    
    - hosts: all
      gather_facts: no
    
      tasks:
      - name: 'List ports attached to the router'
        command:
          cmd: "openstack port list --device-owner=network:router_interface --tags {{ cluster_id_tag }} -f value -c id"
        register: router_ports
    
      - name: 'Remove the ports from router'
        command:
          cmd: "openstack router remove port {{ os_router }} {{ item.1}}"
        with_indexed_items: "{{ router_ports.stdout_lines }}"
    
      - name: 'List ha ports attached to router'
        command:
          cmd: "openstack port list --device-owner=network:ha_router_replicated_interface --tags {{ cluster_id_tag }} -f value -c id"
        register: ha_router_ports
    
      - name: 'Remove the ha ports from router'
        command:
          cmd: "openstack router remove port {{ os_router }} {{ item.1}}"
        with_indexed_items: "{{ ha_router_ports.stdout_lines }}"
    
      - name: 'List ports'
        command:
          cmd: "openstack port list --tags {{ cluster_id_tag }} -f value -c id "
        register: ports
    
      - name: 'Remove the cluster ports'
        command:
          cmd: "openstack port delete {{ item.1}}"
        with_indexed_items: "{{ ports.stdout_lines }}"
    
      - name: 'Remove the cluster router'
        os_router:
          name: "{{ os_router }}"
          state: absent
    
      - name: 'List cluster networks'
        command:
          cmd: "openstack network list --tags {{ cluster_id_tag }} -f value -c Name"
        register: networks
    
      - name: 'Remove the cluster networks'
        os_network:
          name: "{{ item.1}}"
          state: absent
        with_indexed_items: "{{ networks.stdout_lines }}"
    
      - name: 'List the cluster subnet pool'
        command:
          cmd: "openstack subnet pool list --name {{ subnet_pool }}"
        when: os_networking_type == "Kuryr"
        register: pods_subnet_pool
    
      - name: 'Remove the cluster subnet pool'
        command:
          cmd: "openstack subnet pool delete {{ subnet_pool }}"
        when:
        - os_networking_type == "Kuryr"
        - pods_subnet_pool.stdout != ""
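
    This playbook locates the cluster's ports, router, and networks by their openshiftClusterID tag before removing them. After you run the playbooks, the same filter should return nothing, and the router should no longer exist. Replace the example tag and router name with values based on your own infrastructure ID:

    $ openstack network list --tags openshiftClusterID=<infraID> -f value -c Name
    $ openstack router show <infraID>-external-router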
  8. Insert the following content into a local file called down-01_security-groups.yaml:

    down-01_security-groups.yaml
    # Required Python packages:
    #
    # ansible
    # openstackclient
    # openstacksdk
    
    - import_playbook: common.yaml
    
    - hosts: all
      gather_facts: no
    
      tasks:
      - name: 'List security groups'
        command:
          cmd: "openstack security group list --tags {{ cluster_id_tag }} -f value -c ID"
        register: security_groups
    
      - name: 'Remove the cluster security groups'
        command:
          cmd: "openstack security group delete {{ item.1 }}"
        with_indexed_items: "{{ security_groups.stdout_lines }}"
  9. On a command line, run the playbooks you created:

    $ ansible-playbook -i inventory.yaml \
        down-03_bootstrap.yaml \
        down-04_control-plane.yaml \
        down-05_compute-nodes.yaml \
        down-06_load-balancers.yaml \
        down-02_network.yaml \
        down-01_security-groups.yaml
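
    Because the tasks either set state: absent or delete resources that they find by the cluster ID tag, the playbooks can generally be rerun if a run is interrupted. For example, the Kuryr load-balancer playbook skips any load balancer that is still in a PENDING provisioning state, so you can run it again later to remove those:

    $ ansible-playbook -i inventory.yaml down-06_load-balancers.yaml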
  10. Remove any DNS record changes you made for the OpenShift Container Platform installation.

OpenShift Container Platform is removed from your infrastructure.