You can configure OKD for Red Hat Virtualization by creating a bastion virtual machine and using it to install OKD.
Create a bastion virtual machine in Red Hat Virtualization to install OKD.
Log in to the Manager machine by using SSH.
Create a temporary bastion installation directory, for example, /bastion_installation, for the installation files.
Create an encrypted /bastion_installation/secure_vars.yaml file by using ansible-vault, and record the password:
# ansible-vault create secure_vars.yaml
Add the following parameter values to the secure_vars.yaml file:
engine_password: <Manager_password> (1)
bastion_root_password: <bastion_root_password> (2)
rhsub_user: <Red_Hat_Subscription_Manager_username> (3)
rhsub_pass: <Red_Hat_Subscription_Manager_password>
rhsub_pool: <Red_Hat_Subscription_Manager_pool_id> (4)
root_password: <OpenShift_node_root_password> (5)
engine_cafile: <RHVM_CA_certificate> (6)
oreg_auth_user: <image_registry_authentication_username> (7)
oreg_auth_password: <image_registry_authentication_password>
1 | Password for logging in to the Administration Portal. |
2 | Root password for the bastion virtual machine. |
3 | Red Hat Subscription Manager credentials. |
4 | Pool ID of the Red Hat Virtualization Manager subscription pool. |
5 | OKD root password. |
6 | Red Hat Virtualization Manager CA certificate. The engine_cafile value is required if you are not running the playbook from the Manager machine. The Manager CA certificate’s default location is /etc/pki/ovirt-engine/ca.pem. |
7 | If you are using an image registry that requires authentication, add the credentials. |
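For orientation, a filled-in secure_vars.yaml might look like the following. Every value shown here is fictitious; replace each one with your own credentials:

```yaml
engine_password: 'ExampleManagerPass1'
bastion_root_password: 'ExampleBastionRoot1'
rhsub_user: rhsm-user@example.com
rhsub_pass: 'ExampleRhsmPass1'
rhsub_pool: 8a85f98765abcdef0165abcdef012345
root_password: 'ExampleNodeRoot1'
engine_cafile: /etc/pki/ovirt-engine/ca.pem
oreg_auth_user: registry-user
oreg_auth_password: 'ExampleRegistryPass1'
```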
Save the file.
Obtain the Red Hat Enterprise Linux KVM Guest Image download link:
Navigate to Red Hat Customer Portal: Download Red Hat Enterprise Linux.
In the Product Software tab, locate the Red Hat Enterprise Linux KVM Guest Image.
Right-click Download Now, copy the link, and save it.
The link is time-sensitive and must be copied just before you create the bastion virtual machine.
Create the /bastion_installation/create-bastion-machine-playbook.yaml file with the following content and update its parameter values:
---
- name: Create a bastion machine
  hosts: localhost
  connection: local
  gather_facts: false
  no_log: true
  vars:
    engine_url: https://<Manager_FQDN>/ovirt-engine/api (1)
    engine_user: admin@internal
    engine_password: "{{ engine_password }}"
    engine_cafile: /etc/pki/ovirt-engine/ca.pem
    qcow_url: <RHEL_KVM_guest_image_download_link> (2)
    template_cluster: Default
    template_name: rhelguest7
    template_memory: 4GiB
    template_cpu: 2
    wait_for_ip: true
    debug_vm_create: false
    vms:
      - name: rhel-bastion
        cluster: "{{ template_cluster }}"
        profile:
          cores: 2
          template: "{{ template_name }}"
          root_password: "{{ root_password }}"
          ssh_key: "{{ lookup('file', '/root/.ssh/id_rsa_ssh_ocp_admin.pub') }}"
          state: running
          cloud_init:
            custom_script: |
              rh_subscription:
                username: "{{ rhsub_user }}"
                password: "{{ rhsub_pass }}"
                auto-attach: true
                disable-repo: ['*']
                # 'rhel-7-server-rhv-4.2-manager-rpms' supports RHV 4.2 and 4.3
                enable-repo: ['rhel-7-server-rpms', 'rhel-7-server-extras-rpms', 'rhel-7-server-ansible-2.7-rpms', 'rhel-7-server-ose-3.11-rpms', 'rhel-7-server-supplementary-rpms', 'rhel-7-server-rhv-4.2-manager-rpms']
              packages:
                - ansible
                - ovirt-ansible-roles
                - openshift-ansible
                - python-ovirt-engine-sdk4
  pre_tasks:
    - name: Create an ssh key-pair for OpenShift admin
      user:
        name: root
        generate_ssh_key: yes
        ssh_key_file: .ssh/id_rsa_ssh_ocp_admin
  roles:
    - oVirt.image-template
    - oVirt.vm-infra

- name: post installation tasks on the bastion machine
  hosts: rhel-bastion
  tasks:
    - name: create ovirt-engine PKI dir
      file:
        state: directory
        dest: /etc/pki/ovirt-engine/
    - name: Copy the engine ca cert to the bastion machine
      copy:
        src: "{{ engine_cafile }}"
        dest: "{{ engine_cafile }}"
    - name: Copy the secured vars to the bastion machine
      copy:
        src: secure_vars.yaml
        dest: secure_vars.yaml
        decrypt: false
    - file:
        state: directory
        path: /root/.ssh
    - name: copy the OpenShift_admin keypair to the bastion machine
      copy:
        src: "{{ item }}"
        dest: "{{ item }}"
        mode: 0600
      with_items:
        - /root/.ssh/id_rsa_ssh_ocp_admin
        - /root/.ssh/id_rsa_ssh_ocp_admin.pub
1 | FQDN of the Manager machine. |
2 | <qcow_url> is the download link of the Red Hat Enterprise Linux KVM Guest Image. The Red Hat Enterprise Linux KVM Guest Image includes the cloud-init package, which is required by this playbook. If you are not using Red Hat Enterprise Linux, download the cloud-init package and install it manually before running this playbook. |
Create the bastion virtual machine:
# ansible-playbook -i localhost create-bastion-machine-playbook.yaml -e @secure_vars.yaml --ask-vault-pass
Log in to the Administration Portal.
Verify that the rhel-bastion virtual machine was created successfully.
Install OKD by using the bastion virtual machine in Red Hat Virtualization.
Log in to rhel-bastion.
Create an install_ocp.yaml file that contains the following content:
---
- name: Openshift on RHV
  hosts: localhost
  connection: local
  gather_facts: false
  vars_files:
    - vars.yaml
    - secure_vars.yaml
  pre_tasks:
    - ovirt_auth:
        url: "{{ engine_url }}"
        username: "{{ engine_user }}"
        password: "{{ engine_password }}"
        insecure: "{{ engine_insecure }}"
        ca_file: "{{ engine_cafile | default(omit) }}"
  roles:
    - role: openshift_ovirt
- import_playbook: setup_dns.yaml
- import_playbook: /usr/share/ansible/openshift-ansible/playbooks/prerequisites.yml
- import_playbook: /usr/share/ansible/openshift-ansible/playbooks/openshift-node/network_manager.yml
- import_playbook: /usr/share/ansible/openshift-ansible/playbooks/deploy_cluster.yml
Create a setup_dns.yaml file that contains the following content:
- hosts: masters
  strategy: free
  tasks:
    - shell: "echo {{ ansible_default_ipv4.address }} {{ inventory_hostname }} etcd.{{ inventory_hostname.split('.', 1)[1] }} openshift-master.{{ inventory_hostname.split('.', 1)[1] }} openshift-public-master.{{ inventory_hostname.split('.', 1)[1] }} docker-registry-default.apps.{{ inventory_hostname.split('.', 1)[1] }} webconsole.openshift-web-console.svc registry-console-default.apps.{{ inventory_hostname.split('.', 1)[1] }} >> /etc/hosts"
      when: openshift_ovirt_all_in_one is defined | ternary((openshift_ovirt_all_in_one | bool), false)
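The shell task above appends one long line to /etc/hosts on each master. As an illustration only, the same name expansion can be sketched in Python; it mirrors the Jinja2 expressions in the task (`etc_hosts_line` is a hypothetical name):

```python
def etc_hosts_line(ip: str, fqdn: str) -> str:
    """Build the /etc/hosts line that the playbook's shell task appends."""
    # The task derives the DNS zone from everything after the first dot,
    # mirroring inventory_hostname.split('.', 1)[1] in the Jinja2 expression.
    zone = fqdn.split(".", 1)[1]
    names = [
        fqdn,
        f"etcd.{zone}",
        f"openshift-master.{zone}",
        f"openshift-public-master.{zone}",
        f"docker-registry-default.apps.{zone}",
        "webconsole.openshift-web-console.svc",
        f"registry-console-default.apps.{zone}",
    ]
    return f"{ip} {' '.join(names)}"

print(etc_hosts_line("192.0.2.10", "master0.example.com"))
```

So a master named master0.example.com resolves etcd, the public and internal master names, the registry, and the console names to its own address, which is what makes the all-in-one setup self-contained.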
Create an /etc/ansible/openshift_3_11.hosts Ansible inventory file that contains the following content:
[workstation]
localhost ansible_connection=local
[all:vars]
openshift_ovirt_dns_zone="{{ public_hosted_zone }}"
openshift_web_console_install=true
openshift_master_overwrite_named_certificates=true
openshift_master_cluster_hostname="openshift-master.{{ public_hosted_zone }}"
openshift_master_cluster_public_hostname="openshift-public-master.{{ public_hosted_zone }}"
openshift_master_default_subdomain="{{ public_hosted_zone }}"
openshift_public_hostname="{{openshift_master_cluster_public_hostname}}"
openshift_deployment_type=openshift-enterprise
openshift_service_catalog_image_version="{{ openshift_image_tag }}"
[OSev3:vars]
# General variables
debug_level=1
containerized=false
ansible_ssh_user=root
os_firewall_use_firewalld=true
openshift_enable_excluders=false
openshift_install_examples=false
openshift_clock_enabled=true
openshift_debug_level="{{ debug_level }}"
openshift_node_debug_level="{{ node_debug_level | default(debug_level,true) }}"
osn_storage_plugin_deps=[]
openshift_master_bootstrap_auto_approve=true
openshift_master_bootstrap_auto_approver_node_selector={"node-role.kubernetes.io/master":"true"}
osm_controller_args={"experimental-cluster-signing-duration": ["20m"]}
osm_default_node_selector="node-role.kubernetes.io/compute=true"
openshift_enable_service_catalog=false
# Docker
container_runtime_docker_storage_type=overlay2
openshift_docker_use_system_container=false
[OSev3:children]
nodes
masters
etcd
lb
[masters]
[nodes]
[etcd]
[lb]
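The installer expects every group section above to exist even when it is empty. As an optional aside, a rough stdlib-only sanity check can confirm that before a run; this assumes the inventory stays close enough to plain INI for Python's configparser, which is not a full Ansible inventory parser (`missing_groups` is a hypothetical helper):

```python
import configparser

# Group sections the 3.11 playbooks expect to find in the inventory.
REQUIRED_GROUPS = {"masters", "nodes", "etcd", "lb", "OSev3:vars", "OSev3:children"}

def missing_groups(inventory_text: str) -> set:
    # allow_no_value handles bare host/group lines; interpolation is disabled
    # so that literal values are left untouched.
    parser = configparser.ConfigParser(
        allow_no_value=True, delimiters=("=",), interpolation=None
    )
    parser.read_string(inventory_text)
    return REQUIRED_GROUPS - set(parser.sections())

sample = """
[workstation]
localhost ansible_connection=local
[OSev3:vars]
debug_level=1
[OSev3:children]
nodes
masters
etcd
lb
[masters]
[nodes]
[etcd]
[lb]
"""
print(missing_groups(sample))  # an empty set when every required group is defined
```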
Obtain the Red Hat Enterprise Linux KVM Guest Image download link:
Navigate to Red Hat Customer Portal: Download Red Hat Enterprise Linux.
In the Product Software tab, locate the Red Hat Enterprise Linux KVM Guest Image.
Right-click Download Now, copy the link, and save it.
Do not use the link that you copied when you created the bastion virtual machine. The download link is time-sensitive and must be copied just before you run the installation playbook.
Create the vars.yaml file with the following content and update its parameter values:
---
# For detailed documentation of variables, see
# openshift_ovirt: https://github.com/openshift/openshift-ansible/tree/master/roles/openshift_ovirt#role-variables
# openshift installation: https://github.com/openshift/openshift-ansible/tree/master/inventory
engine_url: https://<Manager_FQDN>/ovirt-engine/api (1)
engine_user: admin@internal
engine_password: "{{ engine_password }}"
engine_insecure: false
engine_cafile: /etc/pki/ovirt-engine/ca.pem
openshift_ovirt_vm_manifest:
  - name: 'master'
    count: 1
    profile: 'master_vm'
  - name: 'compute'
    count: 0
    profile: 'node_vm'
  - name: 'lb'
    count: 0
    profile: 'node_vm'
  - name: 'etcd'
    count: 0
    profile: 'node_vm'
  - name: infra
    count: 0
    profile: node_vm
# Currently, only all-in-one installation (`openshift_ovirt_all_in_one: true`) is supported.
# Multi-node installation (master and node VMs installed separately) will be supported in a future release.
openshift_ovirt_all_in_one: true
openshift_ovirt_cluster: Default
openshift_ovirt_data_store: data
openshift_ovirt_ssh_key: "{{ lookup('file', '/root/.ssh/id_rsa_ssh_ocp_admin.pub') }}"
public_hosted_zone:
# Uncomment to disable install-time checks, for smaller scale installations
#openshift_disable_check: memory_availability,disk_availability,docker_image_availability
qcow_url: <RHEL_KVM_guest_image_download_link> (2)
image_path: /var/tmp
template_name: rhelguest7
template_cluster: "{{ openshift_ovirt_cluster }}"
template_memory: 4GiB
template_cpu: 1
template_disk_storage: "{{ openshift_ovirt_data_store }}"
template_disk_size: 100GiB
template_nics:
  - name: nic1
    profile_name: ovirtmgmt
    interface: virtio
debug_vm_create: false
wait_for_ip: true
vm_infra_wait_for_ip_retries: 30
vm_infra_wait_for_ip_delay: 20
node_item: &node_item
  cluster: "{{ openshift_ovirt_cluster }}"
  template: "{{ template_name }}"
  memory: "8GiB"
  cores: "2"
  high_availability: true
  disks:
    - name: docker
      size: 15GiB
      interface: virtio
      storage_domain: "{{ openshift_ovirt_data_store }}"
    - name: openshift
      size: 30GiB
      interface: virtio
      storage_domain: "{{ openshift_ovirt_data_store }}"
  state: running
  cloud_init:
    root_password: "{{ root_password }}"
    authorized_ssh_keys: "{{ openshift_ovirt_ssh_key }}"
    custom_script: "{{ cloud_init_script_node | to_nice_yaml }}"
openshift_ovirt_vm_profile:
  master_vm:
    <<: *node_item
    memory: 16GiB
    cores: "{{ vm_cores | default(4) }}"
    disks:
      - name: docker
        size: 15GiB
        interface: virtio
        storage_domain: "{{ openshift_ovirt_data_store }}"
      - name: openshift_local
        size: 30GiB
        interface: virtio
        storage_domain: "{{ openshift_ovirt_data_store }}"
      - name: etcd
        size: 25GiB
        interface: virtio
        storage_domain: "{{ openshift_ovirt_data_store }}"
    cloud_init:
      root_password: "{{ root_password }}"
      authorized_ssh_keys: "{{ openshift_ovirt_ssh_key }}"
      custom_script: "{{ cloud_init_script_master | to_nice_yaml }}"
  node_vm:
    <<: *node_item
  etcd_vm:
    <<: *node_item
  lb_vm:
    <<: *node_item
cloud_init_script_node: &cloud_init_script_node
  packages:
    - ovirt-guest-agent
  runcmd:
    - sed -i 's/# ignored_nics =.*/ignored_nics = docker0 tun0 /' /etc/ovirt-guest-agent.conf
    - systemctl enable ovirt-guest-agent
    - systemctl start ovirt-guest-agent
    - mkdir -p /var/lib/docker
    - mkdir -p /var/lib/origin/openshift.local.volumes
    - /usr/sbin/mkfs.xfs -L dockerlv /dev/vdb
    - /usr/sbin/mkfs.xfs -L ocplv /dev/vdc
  mounts:
    - [ '/dev/vdb', '/var/lib/docker', 'xfs', 'defaults,gquota' ]
    - [ '/dev/vdc', '/var/lib/origin/openshift.local.volumes', 'xfs', 'defaults,gquota' ]
  power_state:
    mode: reboot
    message: cloud init finished - boot and install openshift
    condition: True
cloud_init_script_master:
  <<: *cloud_init_script_node
  runcmd:
    - sed -i 's/# ignored_nics =.*/ignored_nics = docker0 tun0 /' /etc/ovirt-guest-agent.conf
    - systemctl enable ovirt-guest-agent
    - systemctl start ovirt-guest-agent
    - mkdir -p /var/lib/docker
    - mkdir -p /var/lib/origin/openshift.local.volumes
    - mkdir -p /var/lib/etcd
    - /usr/sbin/mkfs.xfs -L dockerlv /dev/vdb
    - /usr/sbin/mkfs.xfs -L ocplv /dev/vdc
    - /usr/sbin/mkfs.xfs -L etcdlv /dev/vdd
  mounts:
    - [ '/dev/vdb', '/var/lib/docker', 'xfs', 'defaults,gquota' ]
    - [ '/dev/vdc', '/var/lib/origin/openshift.local.volumes', 'xfs', 'defaults,gquota' ]
    - [ '/dev/vdd', '/var/lib/etcd', 'xfs', 'defaults,gquota' ]
1 | FQDN of the Manager machine. |
2 | <qcow_url> is the download link of the Red Hat Enterprise Linux KVM Guest Image. The Red Hat Enterprise Linux KVM Guest Image includes the cloud-init package, which is required by this playbook. If you are not using Red Hat Enterprise Linux, download the cloud-init package and install it manually before running this playbook. |
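The vars.yaml file leans on YAML anchors (`&node_item`) and merge keys (`<<:`). A merge key performs a shallow merge: keys defined locally win over the anchor's keys, and an overridden list such as disks replaces the inherited list wholesale rather than appending to it. A minimal Python model of that behavior, with simplified hypothetical values:

```python
# Stand-in for the &node_item anchor (values abbreviated for illustration).
node_item = {
    "memory": "8GiB",
    "cores": "2",
    "high_availability": True,
    "disks": [{"name": "docker"}, {"name": "openshift"}],
}

# master_vm uses `<<: *node_item` and then overrides memory, cores, and disks.
# Python's dict-unpacking gives the same shallow-merge semantics.
master_vm = {
    **node_item,
    "memory": "16GiB",
    "cores": "4",
    "disks": [{"name": "docker"}, {"name": "openshift_local"}, {"name": "etcd"}],
}

# node_vm, etcd_vm, and lb_vm add nothing, so they are exact copies of the anchor.
node_vm = {**node_item}

print(master_vm["memory"])             # 16GiB (overridden)
print(len(master_vm["disks"]))         # 3 (list replaced, not appended)
print(master_vm["high_availability"])  # True (inherited from node_item)
```

This is why master_vm must restate its full three-disk list: the anchor's two disks are not carried over once disks is overridden.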
Install OKD:
# export ANSIBLE_ROLES_PATH="/usr/share/ansible/roles/:/usr/share/ansible/openshift-ansible/roles"
# export ANSIBLE_JINJA2_EXTENSIONS="jinja2.ext.do"
# ansible-playbook -i /etc/ansible/openshift_3_11.hosts install_ocp.yaml -e @vars.yaml -e @secure_vars.yaml --ask-vault-pass
Create DNS entries for the routers, one for each infrastructure instance.
Configure round-robin routing so that the router can pass traffic to the applications.
Create a DNS entry for the OKD web console.
Specify the IP address of the load balancer node.
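As a sketch of these DNS steps, assuming a BIND-style zone file, a hypothetical example.com zone, router addresses 192.0.2.21 and 192.0.2.22, and a load balancer at 192.0.2.30 (all fictitious), the records might look like:

```
; Round-robin A records for the routers: repeated records for the same
; wildcard name let the resolver rotate traffic across the routers.
*.apps.example.com.                   300 IN A 192.0.2.21
*.apps.example.com.                   300 IN A 192.0.2.22
; Web console entry pointing at the load balancer node.
openshift-public-master.example.com.  300 IN A 192.0.2.30
```

Adjust the names to your public_hosted_zone and subdomain layout; the exact records depend on how your zone is delegated.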