The DNS Operator deploys and manages CoreDNS to provide a name resolution service to pods, enabling DNS-based Kubernetes Service discovery in OKD.
The DNS Operator implements the dns API from the operator.openshift.io API
group. The Operator deploys CoreDNS using a daemon set, creates a service for
the daemon set, and configures the kubelet to instruct pods to use the CoreDNS
service IP address for name resolution.
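You can inspect these resources directly. Both the daemon set and the service are named dns-default and run in the openshift-dns namespace; the output varies by cluster:
$ oc get -n openshift-dns daemonset/dns-default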
The DNS Operator is deployed during installation with a Deployment object.
Use the oc get command to view the deployment status:
$ oc get -n openshift-dns-operator deployment/dns-operator
NAME           READY     UP-TO-DATE   AVAILABLE   AGE
dns-operator   1/1       1            1           23h
Use the oc get command to view the state of the DNS Operator:
$ oc get clusteroperator/dns
NAME      VERSION     AVAILABLE   PROGRESSING   DEGRADED   SINCE
dns       4.1.0-0.11  True        False         False      92m
AVAILABLE, PROGRESSING, and DEGRADED provide information about the status of the Operator. AVAILABLE is True when at least one pod from the CoreDNS daemon set reports an Available status condition.
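To inspect the underlying status conditions, you can query the cluster Operator with a JSONPath expression. This is a sketch; the exact conditions in the output vary by cluster:
$ oc get clusteroperator/dns -o jsonpath='{.status.conditions}'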
The DNS Operator has two daemon sets: one for CoreDNS and one for managing the /etc/hosts file. The daemon set for /etc/hosts must run on every node host to add an entry for the cluster image registry to support pulling images. Security policies can prohibit communication between pairs of nodes, which prevents the daemon set for CoreDNS from running on every node.
As a cluster administrator, you can use a custom node selector to configure the daemon set for CoreDNS to run or not run on certain nodes.
You installed the OpenShift CLI (oc).
You are logged in to the cluster as a user with cluster-admin privileges.
To prevent communication between certain nodes, configure the spec.nodePlacement.nodeSelector API field:
Modify the DNS Operator object named default:
$ oc edit dns.operator/default
Specify a node selector that includes only control plane nodes in the spec.nodePlacement.nodeSelector API field:
 spec:
   nodePlacement:
     nodeSelector:
       node-role.kubernetes.io/master: ""
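After saving the change, you can verify which nodes the CoreDNS pods are scheduled on. The pod and node names in the output are cluster-specific:
$ oc get pods -n openshift-dns -o wide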
To allow the daemon set for CoreDNS to run on nodes that carry a specific taint, configure a taint and toleration:
Modify the DNS Operator object named default:
$ oc edit dns.operator/default
Specify a taint key and a toleration for the taint:
 spec:
   nodePlacement:
     tolerations:
     - effect: NoExecute
       key: "dns-only"
       operator: Equal
       value: abc
       tolerationSeconds: 3600 (1)
| 1 | If the taint is dns-only, it can be tolerated indefinitely. You can omit tolerationSeconds. | 
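The toleration takes effect only on nodes that carry a matching taint. As a sketch, assuming a node named <node-name> that you want to reserve for DNS, apply the matching taint with the oc adm taint command:
$ oc adm taint nodes <node-name> dns-only=abc:NoExecute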
Every new OKD installation has a dns.operator named default.
Use the oc describe command to view the default dns:
$ oc describe dns.operator/default
Name:         default
Namespace:
Labels:       <none>
Annotations:  <none>
API Version:  operator.openshift.io/v1
Kind:         DNS
...
Status:
  Cluster Domain:  cluster.local (1)
  Cluster IP:      172.30.0.10 (2)
...
| 1 | The Cluster Domain field is the base DNS domain used to construct fully qualified pod and service domain names. | 
| 2 | The Cluster IP is the address pods query for name resolution. The IP is defined as the 10th address in the service CIDR range. | 
To find the service CIDR of your cluster,
use the oc get command:
$ oc get networks.config/cluster -o jsonpath='{$.status.serviceNetwork}'
[172.30.0.0/16]
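For example, with a service CIDR of 172.30.0.0/16, the 10th address is 172.30.0.10, which matches the Cluster IP in the status output. You can confirm this against the dns-default service; the address shown is illustrative:
$ oc get -n openshift-dns service/dns-default -o jsonpath='{.spec.clusterIP}'
172.30.0.10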
You can use DNS forwarding to override the forwarding configuration identified in /etc/resolv.conf on a per-zone basis by specifying which name server should be used for a given zone. If the forwarded zone is the Ingress domain managed by OKD, then the upstream name server must be authorized for the domain.
Modify the DNS Operator object named default:
$ oc edit dns.operator/default
This allows the Operator to create and update the ConfigMap named dns-default with additional server configuration blocks based on the servers that you define. If no server has a zone that matches the query, name resolution falls back to the name servers that are specified in /etc/resolv.conf.
apiVersion: operator.openshift.io/v1
kind: DNS
metadata:
  name: default
spec:
  servers:
  - name: foo-server (1)
    zones: (2)
      - example.com
    forwardPlugin:
      upstreams: (3)
        - 1.1.1.1
        - 2.2.2.2:5353
  - name: bar-server
    zones:
      - bar.com
      - example.com
    forwardPlugin:
      upstreams:
        - 3.3.3.3
        - 4.4.4.4:5454
| 1 | name must comply with the rfc6335 service name syntax. | 
| 2 | zones must conform to the definition of a subdomain in rfc1123. The cluster domain, cluster.local, is an invalid subdomain for zones. | 
| 3 | A maximum of 15 upstreams is allowed per forwardPlugin. | 
If servers is undefined or invalid, the ConfigMap only contains the default server.
View the ConfigMap:
$ oc get configmap/dns-default -n openshift-dns -o yaml
apiVersion: v1
data:
  Corefile: |
    example.com:5353 {
        forward . 1.1.1.1 2.2.2.2:5353
    }
    bar.com:5353 example.com:5353 {
        forward . 3.3.3.3 4.4.4.4:5454 (1)
    }
    .:5353 {
        errors
        health
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure
            upstream
            fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        forward . /etc/resolv.conf {
            policy sequential
        }
        cache 30
        reload
    }
kind: ConfigMap
metadata:
  labels:
    dns.operator.openshift.io/owning-dns: default
  name: dns-default
  namespace: openshift-dns
| 1 | Changes to the forwardPlugin trigger a rolling update of the CoreDNS daemon set. | 
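Because a change to forwardPlugin rolls out across the daemon set, you can watch the update progress. This is a sketch using the standard rollout status command:
$ oc rollout status -n openshift-dns daemonset/dns-default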
For more information on DNS forwarding, see the CoreDNS forward documentation.