You can configure linuxptp services and use PTP-capable hardware in OKD cluster nodes.
You can use the OKD console or OpenShift CLI (oc) to install PTP by deploying the PTP Operator. The PTP Operator creates and manages the linuxptp services and provides the following features:
Discovery of the PTP-capable devices in the cluster.
Management of the configuration of linuxptp services.
Notification of PTP clock events that negatively affect the performance and reliability of your application with the PTP Operator cloud-event-proxy sidecar.
The PTP Operator works with PTP-capable devices on clusters provisioned only on bare-metal infrastructure.
Precision Time Protocol (PTP) is used to synchronize clocks in a network. When used in conjunction with hardware support, PTP is capable of sub-microsecond accuracy, and is more accurate than Network Time Protocol (NTP).
The linuxptp package includes the ptp4l and phc2sys programs for clock synchronization. ptp4l implements the PTP boundary clock and ordinary clock. ptp4l synchronizes the PTP hardware clock to the source clock with hardware time stamping and synchronizes the system clock to the source clock with software time stamping. phc2sys is used for hardware time stamping to synchronize the system clock to the PTP hardware clock on the network interface controller (NIC).
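For illustration only, the following sketch shows how the two programs are typically invoked directly on a Linux host; the PTP Operator runs equivalent processes for you inside the linuxptp-daemon pod, and the interface name ens1f0 is an assumption for your hardware:
# Synchronize the NIC PTP hardware clock (PHC) to the upstream source:
# -2 selects IEEE 802.3 transport, -s is client-only mode, -m logs to stdout
$ sudo ptp4l -i ens1f0 -2 -s -m
# Synchronize the system clock to the PHC:
# -a autoconfigures from the running ptp4l instance, -r includes CLOCK_REALTIME
$ sudo phc2sys -a -r -m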
PTP is used to synchronize multiple nodes connected in a network, with clocks for each node. The clocks synchronized by PTP are organized in a source-destination hierarchy. The hierarchy is created and updated automatically by the best master clock (BMC) algorithm, which runs on every clock. Destination clocks are synchronized to source clocks, and destination clocks can themselves be the source for other downstream clocks. The following types of clocks can be included in configurations:
The grandmaster clock provides standard time information to other clocks across the network and ensures accurate and stable synchronization. It writes time stamps and responds to time requests from other clocks. Grandmaster clocks can be synchronized to a Global Positioning System (GPS) time source.
The ordinary clock has a single port connection that can play the role of source or destination clock, depending on its position in the network. The ordinary clock can read and write time stamps.
The boundary clock has ports in two or more communication paths and can be a source and a destination to other destination clocks at the same time. The boundary clock works as a destination clock upstream. The destination clock receives the timing message, adjusts for delay, and then creates a new source time signal to pass down the network. The boundary clock produces a new timing packet that is still correctly synced with the source clock and can reduce the number of connected devices reporting directly to the source clock.
One of the main advantages that PTP has over NTP is the hardware support present in various network interface controllers (NICs) and network switches. The specialized hardware allows PTP to account for delays in message transfer and improves the accuracy of time synchronization. To achieve the best possible accuracy, it is recommended that all networking components between PTP clocks are PTP hardware enabled.
Hardware-based PTP provides optimal accuracy, since the NIC can time stamp the PTP packets at the exact moment they are sent and received. Compare this to software-based PTP, which requires additional processing of the PTP packets by the operating system.
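To confirm that a NIC supports hardware time stamping, you can query its capabilities with ethtool; the interface name ens1f0 is an assumption. Look for hardware-transmit, hardware-receive, and a valid PTP Hardware Clock index in the output:
$ ethtool -T ens1f0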
Before enabling PTP, ensure that NTP is disabled for the required nodes. You can disable the chrony time service (chronyd) by using a MachineConfig custom resource.
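A minimal sketch of such a MachineConfig CR, assuming you want to disable chronyd on worker nodes; adapt the role label and name to your cluster:
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker
  name: disable-chronyd
spec:
  config:
    ignition:
      version: 3.2.0
    systemd:
      units:
      # Disable the chronyd systemd unit so it does not compete with PTP
      - enabled: false
        name: chronyd.service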
OKD supports single and dual NIC hardware for precision PTP timing in the cluster.
For 5G telco networks that deliver mid-band spectrum coverage, each virtual distributed unit (vDU) requires connections to 6 radio units (RUs). To make these connections, each vDU host requires 2 NICs configured as boundary clocks.
Dual NIC hardware allows you to connect each NIC to the same upstream leader clock with separate ptp4l instances for each NIC feeding the downstream clocks.
As a cluster administrator, you can install the Operator by using the CLI.
A cluster installed on bare-metal infrastructure with nodes that have PTP-capable hardware.
Install the OpenShift CLI (oc).
Log in as a user with cluster-admin privileges.
Create a namespace for the PTP Operator.
Save the following YAML in the ptp-namespace.yaml file:
apiVersion: v1
kind: Namespace
metadata:
name: openshift-ptp
annotations:
workload.openshift.io/allowed: management
labels:
name: openshift-ptp
openshift.io/cluster-monitoring: "true"
Create the Namespace CR:
$ oc create -f ptp-namespace.yaml
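Optionally confirm that the namespace was created:
$ oc get namespace openshift-ptp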
Create an Operator group for the PTP Operator.
Save the following YAML in the ptp-operatorgroup.yaml file:
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
name: ptp-operators
namespace: openshift-ptp
spec:
targetNamespaces:
- openshift-ptp
Create the OperatorGroup CR:
$ oc create -f ptp-operatorgroup.yaml
Subscribe to the PTP Operator.
Save the following YAML in the ptp-sub.yaml file:
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
name: ptp-operator-subscription
namespace: openshift-ptp
spec:
channel: "stable"
name: ptp-operator
source: redhat-operators
sourceNamespace: openshift-marketplace
Create the Subscription CR:
$ oc create -f ptp-sub.yaml
To verify that the Operator is installed, enter the following command:
$ oc get csv -n openshift-ptp -o custom-columns=Name:.metadata.name,Phase:.status.phase
Name Phase
4.12.0-202301261535 Succeeded
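If the CSV does not reach the Succeeded phase, you can inspect the Subscription and InstallPlan resources for errors; a sketch:
$ oc get subscription,installplan -n openshift-ptp
$ oc describe subscription ptp-operator-subscription -n openshift-ptp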
As a cluster administrator, you can install the PTP Operator using the web console.
You must create the namespace and Operator group as described in the previous section.
Install the PTP Operator using the OKD web console:
In the OKD web console, click Operators → OperatorHub.
Choose PTP Operator from the list of available Operators, and then click Install.
On the Install Operator page, under A specific namespace on the cluster select openshift-ptp. Then, click Install.
Optional: Verify that the PTP Operator installed successfully:
Switch to the Operators → Installed Operators page.
Ensure that PTP Operator is listed in the openshift-ptp project with a Status of InstallSucceeded.
During installation an Operator might display a Failed status. If the installation later succeeds with an InstallSucceeded message, you can ignore the Failed message.
If the Operator does not appear as installed, troubleshoot further:
Go to the Operators → Installed Operators page and inspect the Operator Subscriptions and Install Plans tabs for any failure or errors under Status.
Go to the Workloads → Pods page and check the logs for pods in the openshift-ptp project.
The PTP Operator adds the NodePtpDevice.ptp.openshift.io custom resource definition (CRD) to OKD.
When installed, the PTP Operator searches your cluster for PTP-capable network devices on each node. It creates and updates a NodePtpDevice custom resource (CR) object for each node that provides a compatible PTP-capable network device.
To return a complete list of PTP-capable network devices in your cluster, run the following command:
$ oc get NodePtpDevice -n openshift-ptp -o yaml
apiVersion: v1
items:
- apiVersion: ptp.openshift.io/v1
kind: NodePtpDevice
metadata:
creationTimestamp: "2022-01-27T15:16:28Z"
generation: 1
name: dev-worker-0 (1)
namespace: openshift-ptp
resourceVersion: "6538103"
uid: d42fc9ad-bcbf-4590-b6d8-b676c642781a
spec: {}
status:
devices: (2)
- name: eno1
- name: eno2
- name: eno3
- name: eno4
- name: enp5s0f0
- name: enp5s0f1
...
1 | The value for the name parameter is the same as the name of the parent node. |
2 | The devices collection includes a list of the PTP-capable devices that the PTP Operator discovers for the node. |
You can configure the linuxptp services (ptp4l, phc2sys, ts2phc) as a grandmaster clock by creating a PtpConfig custom resource (CR) that configures the host NIC.
The ts2phc utility allows you to synchronize the system clock with the PTP grandmaster clock so that the node can stream a precision clock signal to downstream PTP ordinary clocks and boundary clocks.
Use the following example PtpConfig CR as the basis to configure linuxptp services as the grandmaster clock for your particular hardware and environment.
Install an Intel Westport Channel network interface in the bare-metal cluster host.
Install the OpenShift CLI (oc).
Log in as a user with cluster-admin privileges.
Install the PTP Operator.
Create the PtpConfig resource. For example:
Save the following YAML in the grandmaster-clock-ptp-config.yaml file:
apiVersion: ptp.openshift.io/v1
kind: PtpConfig
metadata:
name: grandmaster-clock
namespace: openshift-ptp
annotations: {}
spec:
profile:
- name: grandmaster-clock
# The interface name is hardware-specific
interface: $interface
ptp4lOpts: "-2"
phc2sysOpts: "-a -r -r -n 24"
ptpSchedulingPolicy: SCHED_FIFO
ptpSchedulingPriority: 10
ptpSettings:
logReduce: "true"
ptp4lConf: |
[global]
#
# Default Data Set
#
twoStepFlag 1
slaveOnly 0
priority1 128
priority2 128
domainNumber 24
#utc_offset 37
clockClass 255
clockAccuracy 0xFE
offsetScaledLogVariance 0xFFFF
free_running 0
freq_est_interval 1
dscp_event 0
dscp_general 0
dataset_comparison G.8275.x
G.8275.defaultDS.localPriority 128
#
# Port Data Set
#
logAnnounceInterval -3
logSyncInterval -4
logMinDelayReqInterval -4
logMinPdelayReqInterval -4
announceReceiptTimeout 3
syncReceiptTimeout 0
delayAsymmetry 0
fault_reset_interval -4
neighborPropDelayThresh 20000000
masterOnly 0
G.8275.portDS.localPriority 128
#
# Run time options
#
assume_two_step 0
logging_level 6
path_trace_enabled 0
follow_up_info 0
hybrid_e2e 0
inhibit_multicast_service 0
net_sync_monitor 0
tc_spanning_tree 0
tx_timestamp_timeout 50
unicast_listen 0
unicast_master_table 0
unicast_req_duration 3600
use_syslog 1
verbose 0
summary_interval 0
kernel_leap 1
check_fup_sync 0
clock_class_threshold 7
#
# Servo Options
#
pi_proportional_const 0.0
pi_integral_const 0.0
pi_proportional_scale 0.0
pi_proportional_exponent -0.3
pi_proportional_norm_max 0.7
pi_integral_scale 0.0
pi_integral_exponent 0.4
pi_integral_norm_max 0.3
step_threshold 2.0
first_step_threshold 0.00002
max_frequency 900000000
clock_servo pi
sanity_freq_limit 200000000
ntpshm_segment 0
#
# Transport options
#
transportSpecific 0x0
ptp_dst_mac 01:1B:19:00:00:00
p2p_dst_mac 01:80:C2:00:00:0E
udp_ttl 1
udp6_scope 0x0E
uds_address /var/run/ptp4l
#
# Default interface options
#
clock_type OC
network_transport L2
delay_mechanism E2E
time_stamping hardware
tsproc_mode filter
delay_filter moving_median
delay_filter_length 10
egressLatency 0
ingressLatency 0
boundary_clock_jbod 0
#
# Clock description
#
productDescription ;;
revisionData ;;
manufacturerIdentity 00:00:00
userDescription ;
timeSource 0xA0
recommend:
- profile: grandmaster-clock
priority: 4
match:
- nodeLabel: "node-role.kubernetes.io/$mcp"
Create the CR by running the following command:
$ oc create -f grandmaster-clock-ptp-config.yaml
Check that the PtpConfig profile is applied to the node.
Get the list of pods in the openshift-ptp namespace by running the following command:
$ oc get pods -n openshift-ptp -o wide
NAME READY STATUS RESTARTS AGE IP NODE
linuxptp-daemon-74m2g 3/3 Running 3 4d15h 10.16.230.7 compute-1.example.com
ptp-operator-5f4f48d7c-x7zkf 1/1 Running 1 4d15h 10.128.1.145 compute-1.example.com
Check that the profile is correct. Examine the logs of the linuxptp daemon that corresponds to the node you specified in the PtpConfig profile.
Run the following command:
$ oc logs linuxptp-daemon-74m2g -n openshift-ptp -c linuxptp-daemon-container
ts2phc[94980.334]: [ts2phc.0.config] nmea delay: 98690975 ns
ts2phc[94980.334]: [ts2phc.0.config] ens3f0 extts index 0 at 1676577329.999999999 corr 0 src 1676577330.901342528 diff -1
ts2phc[94980.334]: [ts2phc.0.config] ens3f0 master offset -1 s2 freq -1
ts2phc[94980.441]: [ts2phc.0.config] nmea sentence: GNRMC,195453.00,A,4233.24427,N,07126.64420,W,0.008,,160223,,,A,V
phc2sys[94980.450]: [ptp4l.0.config] CLOCK_REALTIME phc offset 943 s2 freq -89604 delay 504
phc2sys[94980.512]: [ptp4l.0.config] CLOCK_REALTIME phc offset 1000 s2 freq -89264 delay 474
You can configure linuxptp services (ptp4l, phc2sys) as an ordinary clock by creating a PtpConfig custom resource (CR) object.
Use the following example PtpConfig CR as the basis to configure linuxptp services as an ordinary clock for your particular hardware and environment.
Install the OpenShift CLI (oc).
Log in as a user with cluster-admin privileges.
Install the PTP Operator.
Create the following PtpConfig CR, and then save the YAML in the ordinary-clock-ptp-config.yaml file.
apiVersion: ptp.openshift.io/v1
kind: PtpConfig
metadata:
name: ordinary-clock
namespace: openshift-ptp
annotations: {}
spec:
profile:
- name: ordinary-clock
# The interface name is hardware-specific
interface: $interface
ptp4lOpts: "-2 -s"
phc2sysOpts: "-a -r -n 24"
ptpSchedulingPolicy: SCHED_FIFO
ptpSchedulingPriority: 10
ptpSettings:
logReduce: "true"
ptp4lConf: |
[global]
#
# Default Data Set
#
twoStepFlag 1
slaveOnly 1
priority1 128
priority2 128
domainNumber 24
#utc_offset 37
clockClass 255
clockAccuracy 0xFE
offsetScaledLogVariance 0xFFFF
free_running 0
freq_est_interval 1
dscp_event 0
dscp_general 0
dataset_comparison G.8275.x
G.8275.defaultDS.localPriority 128
#
# Port Data Set
#
logAnnounceInterval -3
logSyncInterval -4
logMinDelayReqInterval -4
logMinPdelayReqInterval -4
announceReceiptTimeout 3
syncReceiptTimeout 0
delayAsymmetry 0
fault_reset_interval -4
neighborPropDelayThresh 20000000
masterOnly 0
G.8275.portDS.localPriority 128
#
# Run time options
#
assume_two_step 0
logging_level 6
path_trace_enabled 0
follow_up_info 0
hybrid_e2e 0
inhibit_multicast_service 0
net_sync_monitor 0
tc_spanning_tree 0
tx_timestamp_timeout 50
unicast_listen 0
unicast_master_table 0
unicast_req_duration 3600
use_syslog 1
verbose 0
summary_interval 0
kernel_leap 1
check_fup_sync 0
clock_class_threshold 7
#
# Servo Options
#
pi_proportional_const 0.0
pi_integral_const 0.0
pi_proportional_scale 0.0
pi_proportional_exponent -0.3
pi_proportional_norm_max 0.7
pi_integral_scale 0.0
pi_integral_exponent 0.4
pi_integral_norm_max 0.3
step_threshold 2.0
first_step_threshold 0.00002
max_frequency 900000000
clock_servo pi
sanity_freq_limit 200000000
ntpshm_segment 0
#
# Transport options
#
transportSpecific 0x0
ptp_dst_mac 01:1B:19:00:00:00
p2p_dst_mac 01:80:C2:00:00:0E
udp_ttl 1
udp6_scope 0x0E
uds_address /var/run/ptp4l
#
# Default interface options
#
clock_type OC
network_transport L2
delay_mechanism E2E
time_stamping hardware
tsproc_mode filter
delay_filter moving_median
delay_filter_length 10
egressLatency 0
ingressLatency 0
boundary_clock_jbod 0
#
# Clock description
#
productDescription ;;
revisionData ;;
manufacturerIdentity 00:00:00
userDescription ;
timeSource 0xA0
recommend:
- profile: ordinary-clock
priority: 4
match:
- nodeLabel: "node-role.kubernetes.io/$mcp"
Custom resource field | Description |
---|---|
name | The name of the PtpConfig CR. |
profile | Specify an array of one or more profile objects. Each profile must be uniquely named. |
interface | Specify the network interface to be used by the ptp4l service, for example ens787f1. |
ptp4lOpts | Specify system config options for the ptp4l service, for example -2 to select the IEEE 802.3 network transport. The options should not include the network interface name (-i <interface>) or the service config file (-f /etc/ptp4l.conf), because these are automatically appended. |
phc2sysOpts | Specify system config options for the phc2sys service. If this field is empty, the PTP Operator does not start the phc2sys service. |
ptp4lConf | Specify a string that contains the configuration to replace the default /etc/ptp4l.conf file. To use the default configuration, leave the field empty. |
tx_timestamp_timeout | For Intel Columbiaville 800 Series NICs, set tx_timestamp_timeout to 50. |
boundary_clock_jbod | For Intel Columbiaville 800 Series NICs, set boundary_clock_jbod to 0. |
ptpSchedulingPolicy | Scheduling policy for the ptp4l and phc2sys processes. Default value is SCHED_OTHER. Use SCHED_FIFO on systems that support FIFO scheduling. |
ptpSchedulingPriority | Integer value from 1-65 used to set FIFO priority for the ptp4l and phc2sys processes when ptpSchedulingPolicy is set to SCHED_FIFO. The ptpSchedulingPriority field is not used when ptpSchedulingPolicy is set to SCHED_OTHER. |
ptpClockThreshold | Optional. If ptpClockThreshold is not present, default values are used for the ptpClockThreshold fields. ptpClockThreshold configures how long after the PTP master clock is disconnected before PTP events are triggered. |
recommend | Specify an array of one or more recommend objects that define rules on how the profile should be applied to nodes. |
.recommend.profile | Specify the .recommend.profile object name defined in the profile section. |
.recommend.priority | Set .recommend.priority to 0 for an ordinary clock. |
.recommend.match | Specify .recommend.match rules with nodeLabel or nodeName. |
nodeLabel | Set nodeLabel with the key of node.Labels from the node object, returned by the oc describe node <node_name> command. |
nodeName | Set nodeName with the value of node.Name from the node object, for example compute-0.example.com. |
Create the PtpConfig CR by running the following command:
$ oc create -f ordinary-clock-ptp-config.yaml
Check that the PtpConfig profile is applied to the node.
Get the list of pods in the openshift-ptp namespace by running the following command:
$ oc get pods -n openshift-ptp -o wide
NAME READY STATUS RESTARTS AGE IP NODE
linuxptp-daemon-4xkbb 1/1 Running 0 43m 10.1.196.24 compute-0.example.com
linuxptp-daemon-tdspf 1/1 Running 0 43m 10.1.196.25 compute-1.example.com
ptp-operator-657bbb64c8-2f8sj 1/1 Running 0 43m 10.129.0.61 control-plane-1.example.com
Check that the profile is correct. Examine the logs of the linuxptp daemon that corresponds to the node you specified in the PtpConfig profile. Run the following command:
$ oc logs linuxptp-daemon-4xkbb -n openshift-ptp -c linuxptp-daemon-container
I1115 09:41:17.117596 4143292 daemon.go:107] in applyNodePTPProfile
I1115 09:41:17.117604 4143292 daemon.go:109] updating NodePTPProfile to:
I1115 09:41:17.117607 4143292 daemon.go:110] ------------------------------------
I1115 09:41:17.117612 4143292 daemon.go:102] Profile Name: profile1
I1115 09:41:17.117616 4143292 daemon.go:102] Interface: ens787f1
I1115 09:41:17.117620 4143292 daemon.go:102] Ptp4lOpts: -2 -s
I1115 09:41:17.117623 4143292 daemon.go:102] Phc2sysOpts: -a -r -n 24
I1115 09:41:17.117626 4143292 daemon.go:116] ------------------------------------
For more information about FIFO priority scheduling on PTP hardware, see Configuring FIFO priority scheduling for PTP hardware.
For more information about configuring PTP fast events, see Configuring the PTP fast event notifications publisher.
You can configure the linuxptp services (ptp4l, phc2sys) as a boundary clock by creating a PtpConfig custom resource (CR) object.
Use the following example PtpConfig CR as the basis to configure linuxptp services as the boundary clock for your particular hardware and environment.
Install the OpenShift CLI (oc).
Log in as a user with cluster-admin privileges.
Install the PTP Operator.
Create the following PtpConfig CR, and then save the YAML in the boundary-clock-ptp-config.yaml file.
apiVersion: ptp.openshift.io/v1
kind: PtpConfig
metadata:
name: boundary-clock
namespace: openshift-ptp
annotations: {}
spec:
profile:
- name: boundary-clock
ptp4lOpts: "-2"
phc2sysOpts: "-a -r -n 24"
ptpSchedulingPolicy: SCHED_FIFO
ptpSchedulingPriority: 10
ptpSettings:
logReduce: "true"
ptp4lConf: |
# The interface name is hardware-specific
[$iface_slave]
masterOnly 0
[$iface_master_1]
masterOnly 1
[$iface_master_2]
masterOnly 1
[$iface_master_3]
masterOnly 1
[global]
#
# Default Data Set
#
twoStepFlag 1
slaveOnly 0
priority1 128
priority2 128
domainNumber 24
#utc_offset 37
clockClass 248
clockAccuracy 0xFE
offsetScaledLogVariance 0xFFFF
free_running 0
freq_est_interval 1
dscp_event 0
dscp_general 0
dataset_comparison G.8275.x
G.8275.defaultDS.localPriority 128
#
# Port Data Set
#
logAnnounceInterval -3
logSyncInterval -4
logMinDelayReqInterval -4
logMinPdelayReqInterval -4
announceReceiptTimeout 3
syncReceiptTimeout 0
delayAsymmetry 0
fault_reset_interval -4
neighborPropDelayThresh 20000000
masterOnly 0
G.8275.portDS.localPriority 128
#
# Run time options
#
assume_two_step 0
logging_level 6
path_trace_enabled 0
follow_up_info 0
hybrid_e2e 0
inhibit_multicast_service 0
net_sync_monitor 0
tc_spanning_tree 0
tx_timestamp_timeout 50
unicast_listen 0
unicast_master_table 0
unicast_req_duration 3600
use_syslog 1
verbose 0
summary_interval 0
kernel_leap 1
check_fup_sync 0
clock_class_threshold 135
#
# Servo Options
#
pi_proportional_const 0.0
pi_integral_const 0.0
pi_proportional_scale 0.0
pi_proportional_exponent -0.3
pi_proportional_norm_max 0.7
pi_integral_scale 0.0
pi_integral_exponent 0.4
pi_integral_norm_max 0.3
step_threshold 2.0
first_step_threshold 0.00002
max_frequency 900000000
clock_servo pi
sanity_freq_limit 200000000
ntpshm_segment 0
#
# Transport options
#
transportSpecific 0x0
ptp_dst_mac 01:1B:19:00:00:00
p2p_dst_mac 01:80:C2:00:00:0E
udp_ttl 1
udp6_scope 0x0E
uds_address /var/run/ptp4l
#
# Default interface options
#
clock_type BC
network_transport L2
delay_mechanism E2E
time_stamping hardware
tsproc_mode filter
delay_filter moving_median
delay_filter_length 10
egressLatency 0
ingressLatency 0
boundary_clock_jbod 0
#
# Clock description
#
productDescription ;;
revisionData ;;
manufacturerIdentity 00:00:00
userDescription ;
timeSource 0xA0
recommend:
- profile: boundary-clock
priority: 4
match:
- nodeLabel: "node-role.kubernetes.io/$mcp"
Custom resource field | Description |
---|---|
name | The name of the PtpConfig CR. |
profile | Specify an array of one or more profile objects. |
name | Specify the name of a profile object which uniquely identifies a profile object. |
ptp4lOpts | Specify system config options for the ptp4l service. The options should not include the network interface name (-i <interface>) or the service config file (-f /etc/ptp4l.conf), because these are automatically appended. |
ptp4lConf | Specify the required configuration to start ptp4l as a boundary clock. For example, $iface_slave synchronizes from an upstream source clock and the $iface_master interfaces synchronize downstream connected devices. |
$iface_slave | The interface that receives the synchronization clock. |
$iface_master_1 | The interface that sends the synchronization clock. |
tx_timestamp_timeout | For Intel Columbiaville 800 Series NICs, set tx_timestamp_timeout to 50. |
boundary_clock_jbod | For Intel Columbiaville 800 Series NICs, ensure boundary_clock_jbod is set to 0. |
phc2sysOpts | Specify system config options for the phc2sys service. If this field is empty, the PTP Operator does not start the phc2sys service. |
ptpSchedulingPolicy | Scheduling policy for ptp4l and phc2sys processes. Default value is SCHED_OTHER. Use SCHED_FIFO on systems that support FIFO scheduling. |
ptpSchedulingPriority | Integer value from 1-65 used to set FIFO priority for the ptp4l and phc2sys processes when ptpSchedulingPolicy is set to SCHED_FIFO. The ptpSchedulingPriority field is not used when ptpSchedulingPolicy is set to SCHED_OTHER. |
ptpClockThreshold | Optional. If ptpClockThreshold is not present, default values are used for the ptpClockThreshold fields. ptpClockThreshold configures how long after the PTP master clock is disconnected before PTP events are triggered. |
recommend | Specify an array of one or more recommend objects that define rules on how the profile should be applied to nodes. |
.recommend.profile | Specify the .recommend.profile object name defined in the profile section. |
.recommend.priority | Specify the .recommend.priority integer value from 0 to 99. A larger number gets lower priority, so a priority of 99 is lower than a priority of 10. If a node can be matched with multiple profiles according to rules defined in the match field, the profile with the higher priority is applied to that node. |
.recommend.match | Specify .recommend.match rules with nodeLabel or nodeName. |
nodeLabel | Set nodeLabel with the key of node.Labels from the node object, returned by the oc describe node <node_name> command. |
nodeName | Set nodeName with the value of node.Name from the node object, for example compute-0.example.com. |
Create the CR by running the following command:
$ oc create -f boundary-clock-ptp-config.yaml
Check that the PtpConfig profile is applied to the node.
Get the list of pods in the openshift-ptp namespace by running the following command:
$ oc get pods -n openshift-ptp -o wide
NAME READY STATUS RESTARTS AGE IP NODE
linuxptp-daemon-4xkbb 1/1 Running 0 43m 10.1.196.24 compute-0.example.com
linuxptp-daemon-tdspf 1/1 Running 0 43m 10.1.196.25 compute-1.example.com
ptp-operator-657bbb64c8-2f8sj 1/1 Running 0 43m 10.129.0.61 control-plane-1.example.com
Check that the profile is correct. Examine the logs of the linuxptp daemon that corresponds to the node you specified in the PtpConfig profile. Run the following command:
$ oc logs linuxptp-daemon-4xkbb -n openshift-ptp -c linuxptp-daemon-container
I1115 09:41:17.117596 4143292 daemon.go:107] in applyNodePTPProfile
I1115 09:41:17.117604 4143292 daemon.go:109] updating NodePTPProfile to:
I1115 09:41:17.117607 4143292 daemon.go:110] ------------------------------------
I1115 09:41:17.117612 4143292 daemon.go:102] Profile Name: profile1
I1115 09:41:17.117616 4143292 daemon.go:102] Interface:
I1115 09:41:17.117620 4143292 daemon.go:102] Ptp4lOpts: -2
I1115 09:41:17.117623 4143292 daemon.go:102] Phc2sysOpts: -a -r -n 24
I1115 09:41:17.117626 4143292 daemon.go:116] ------------------------------------
For more information about FIFO priority scheduling on PTP hardware, see Configuring FIFO priority scheduling for PTP hardware.
For more information about configuring PTP fast events, see Configuring the PTP fast event notifications publisher.
Precision Time Protocol (PTP) hardware with dual NIC configured as boundary clocks is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.
You can configure the linuxptp services (ptp4l, phc2sys) as boundary clocks for dual NIC hardware by creating a PtpConfig custom resource (CR) object for each NIC.
Dual NIC hardware allows you to connect each NIC to the same upstream leader clock with separate ptp4l instances for each NIC feeding the downstream clocks.
Install the OpenShift CLI (oc).
Log in as a user with cluster-admin privileges.
Install the PTP Operator.
Create two separate PtpConfig CRs, one for each NIC, using the reference CR in "Configuring linuxptp services as a boundary clock" as the basis for each CR. For example:
Create boundary-clock-ptp-config-nic1.yaml, specifying values for phc2sysOpts:
apiVersion: ptp.openshift.io/v1
kind: PtpConfig
metadata:
name: boundary-clock-ptp-config-nic1
namespace: openshift-ptp
spec:
profile:
- name: "profile1"
ptp4lOpts: "-2 --summary_interval -4"
ptp4lConf: | (1)
[ens5f1]
masterOnly 1
[ens5f0]
masterOnly 0
...
phc2sysOpts: "-a -r -m -n 24 -N 8 -R 16" (2)
1 | Specify the required interfaces to start ptp4l as a boundary clock. For example, ens5f0 synchronizes from a grandmaster clock and ens5f1 synchronizes connected devices. |
2 | Required phc2sysOpts values. -m prints messages to stdout . The linuxptp-daemon DaemonSet parses the logs and generates Prometheus metrics. |
Create boundary-clock-ptp-config-nic2.yaml, removing the phc2sysOpts field altogether to disable the phc2sys service for the second NIC:
apiVersion: ptp.openshift.io/v1
kind: PtpConfig
metadata:
name: boundary-clock-ptp-config-nic2
namespace: openshift-ptp
spec:
profile:
- name: "profile2"
ptp4lOpts: "-2 --summary_interval -4"
ptp4lConf: | (1)
[ens7f1]
masterOnly 1
[ens7f0]
masterOnly 0
...
1 | Specify the required interfaces to start ptp4l as a boundary clock on the second NIC. |
You must completely remove the phc2sysOpts field from the second PtpConfig CR to disable the phc2sys service on the second NIC.
Create the dual NIC PtpConfig CRs by running the following commands:
Create the CR that configures PTP for the first NIC:
$ oc create -f boundary-clock-ptp-config-nic1.yaml
Create the CR that configures PTP for the second NIC:
$ oc create -f boundary-clock-ptp-config-nic2.yaml
Check that the PTP Operator has applied the PtpConfig CRs for both NICs. Examine the logs for the linuxptp daemon corresponding to the node that has the dual NIC hardware installed. For example, run the following command:
$ oc logs linuxptp-daemon-cvgr6 -n openshift-ptp -c linuxptp-daemon-container
ptp4l[80828.335]: [ptp4l.1.config] master offset 5 s2 freq -5727 path delay 519
ptp4l[80828.343]: [ptp4l.0.config] master offset -5 s2 freq -10607 path delay 533
phc2sys[80828.390]: [ptp4l.0.config] CLOCK_REALTIME phc offset 1 s2 freq -87239 delay 539
The following table describes the changes that you must make to the reference PTP configuration in order to use Intel Columbiaville E800 series NICs as ordinary clocks. Make the changes in a PtpConfig custom resource (CR) that you apply to the cluster.
PTP configuration | Recommended setting |
---|---|
phc2sysOpts | -a -r -m -n 24 -N 8 -R 16 |
tx_timestamp_timeout | 50 |
boundary_clock_jbod | 0 |
For phc2sysOpts, -m prints messages to stdout. The linuxptp-daemon DaemonSet parses the logs and generates Prometheus metrics.
For a complete example CR that configures linuxptp services as an ordinary clock with PTP fast events, see Configuring linuxptp services as ordinary clock.
In telco or other deployment configurations that require low latency performance, PTP daemon threads run in a constrained CPU footprint alongside the rest of the infrastructure components. By default, PTP threads run with the SCHED_OTHER policy. Under high load, these threads might not get the scheduling latency they require for error-free operation.
To mitigate against potential scheduling latency errors, you can configure the PTP Operator linuxptp services to allow threads to run with a SCHED_FIFO policy. If SCHED_FIFO is set for a PtpConfig CR, then ptp4l and phc2sys will run in the parent container under chrt with a priority set by the ptpSchedulingPriority field of the PtpConfig CR.
Setting ptpSchedulingPolicy is optional, and is only required if you are experiencing latency errors.
Edit the PtpConfig CR profile:
$ oc edit PtpConfig -n openshift-ptp
Change the ptpSchedulingPolicy and ptpSchedulingPriority fields:
apiVersion: ptp.openshift.io/v1
kind: PtpConfig
metadata:
name: <ptp_config_name>
namespace: openshift-ptp
...
spec:
profile:
- name: "profile1"
...
ptpSchedulingPolicy: SCHED_FIFO (1)
ptpSchedulingPriority: 10 (2)
1 | Scheduling policy for ptp4l and phc2sys processes. Use SCHED_FIFO on systems that support FIFO scheduling. |
2 | Required. Sets the integer value 1-65 used to configure FIFO priority for ptp4l and phc2sys processes. |
Save and exit to apply the changes to the PtpConfig CR.
Get the name of the linuxptp-daemon pod and corresponding node where the PtpConfig CR has been applied:
$ oc get pods -n openshift-ptp -o wide
NAME READY STATUS RESTARTS AGE IP NODE
linuxptp-daemon-gmv2n 3/3 Running 0 1d17h 10.1.196.24 compute-0.example.com
linuxptp-daemon-lgm55 3/3 Running 0 1d17h 10.1.196.25 compute-1.example.com
ptp-operator-3r4dcvf7f4-zndk7 1/1 Running 0 1d7h 10.129.0.61 control-plane-1.example.com
Check that the ptp4l process is running with the updated chrt FIFO priority:
$ oc -n openshift-ptp logs linuxptp-daemon-lgm55 -c linuxptp-daemon-container|grep chrt
I1216 19:24:57.091872 1600715 daemon.go:285] /bin/chrt -f 65 /usr/sbin/ptp4l -f /var/run/ptp4l.0.config -2 --summary_interval -4 -m
The linuxptp daemon generates logs that you can use for debugging purposes. In telco or other deployment configurations that feature a limited storage capacity, these logs can add to the storage demand.
To reduce the number of log messages, you can configure the PtpConfig custom resource (CR) to exclude log messages that report the master offset value. The master offset log message reports the difference between the current node's clock and the master clock in nanoseconds.
Install the OpenShift CLI (oc).
Log in as a user with cluster-admin privileges.
Install the PTP Operator.
Edit the PtpConfig CR:
$ oc edit PtpConfig -n openshift-ptp
In spec.profile, add the ptpSettings.logReduce specification and set the value to true:
apiVersion: ptp.openshift.io/v1
kind: PtpConfig
metadata:
name: <ptp_config_name>
namespace: openshift-ptp
...
spec:
profile:
- name: "profile1"
...
ptpSettings:
logReduce: "true"
For debugging purposes, you can revert this specification to false to include the master offset messages.
Save and exit to apply the changes to the PtpConfig CR.
Get the name of the linuxptp-daemon pod and corresponding node where the PtpConfig CR has been applied:
$ oc get pods -n openshift-ptp -o wide
NAME READY STATUS RESTARTS AGE IP NODE
linuxptp-daemon-gmv2n 3/3 Running 0 1d17h 10.1.196.24 compute-0.example.com
linuxptp-daemon-lgm55 3/3 Running 0 1d17h 10.1.196.25 compute-1.example.com
ptp-operator-3r4dcvf7f4-zndk7 1/1 Running 0 1d7h 10.129.0.61 control-plane-1.example.com
Verify that master offset messages are excluded from the logs by running the following command:
$ oc -n openshift-ptp logs <linux_daemon_container> -c linuxptp-daemon-container | grep "master offset" (1)
1 | <linux_daemon_container> is the name of the linuxptp-daemon pod, for example linuxptp-daemon-gmv2n . |
When you configure the logReduce specification, this command does not report any instances of master offset in the logs of the linuxptp daemon.
Troubleshoot common problems with the PTP Operator by performing the following steps.
Install the OKD CLI (oc).
Log in as a user with cluster-admin privileges.
Install the PTP Operator on a bare-metal cluster with hosts that support PTP.
Check that the Operator and operands are successfully deployed in the cluster for the configured nodes.
$ oc get pods -n openshift-ptp -o wide
NAME READY STATUS RESTARTS AGE IP NODE
linuxptp-daemon-lmvgn 3/3 Running 0 4d17h 10.1.196.24 compute-0.example.com
linuxptp-daemon-qhfg7 3/3 Running 0 4d17h 10.1.196.25 compute-1.example.com
ptp-operator-6b8dcbf7f4-zndk7 1/1 Running 0 5d7h 10.129.0.61 control-plane-1.example.com
When the PTP fast event bus is enabled, the number of ready containers in the linuxptp-daemon pods is 3/3.
Check that supported hardware is found in the cluster.
$ oc -n openshift-ptp get nodeptpdevices.ptp.openshift.io
NAME AGE
control-plane-0.example.com 10d
control-plane-1.example.com 10d
compute-0.example.com 10d
compute-1.example.com 10d
compute-2.example.com 10d
Check the available PTP network interfaces for a node:
$ oc -n openshift-ptp get nodeptpdevices.ptp.openshift.io <node_name> -o yaml
where:
<node_name> specifies the node you want to query, for example, compute-0.example.com.
apiVersion: ptp.openshift.io/v1
kind: NodePtpDevice
metadata:
creationTimestamp: "2021-09-14T16:52:33Z"
generation: 1
name: compute-0.example.com
namespace: openshift-ptp
resourceVersion: "177400"
uid: 30413db0-4d8d-46da-9bef-737bacd548fd
spec: {}
status:
devices:
- name: eno1
- name: eno2
- name: eno3
- name: eno4
- name: enp5s0f0
- name: enp5s0f1
Check that the PTP interface is successfully synchronized to the primary clock by accessing the linuxptp-daemon pod for the corresponding node.
Get the name of the linuxptp-daemon pod and corresponding node you want to troubleshoot by running the following command:
$ oc get pods -n openshift-ptp -o wide
NAME READY STATUS RESTARTS AGE IP NODE
linuxptp-daemon-lmvgn 3/3 Running 0 4d17h 10.1.196.24 compute-0.example.com
linuxptp-daemon-qhfg7 3/3 Running 0 4d17h 10.1.196.25 compute-1.example.com
ptp-operator-6b8dcbf7f4-zndk7 1/1 Running 0 5d7h 10.129.0.61 control-plane-1.example.com
Remote shell into the required linuxptp-daemon container:
$ oc rsh -n openshift-ptp -c linuxptp-daemon-container <linux_daemon_container>
where:
<linux_daemon_container> is the container you want to diagnose, for example linuxptp-daemon-lmvgn.
In the remote shell connection to the linuxptp-daemon container, use the PTP Management Client (pmc) tool to diagnose the network interface. Run the following pmc command to check the sync status of the PTP device, for example ptp4l.
# pmc -u -f /var/run/ptp4l.0.config -b 0 'GET PORT_DATA_SET'
sending: GET PORT_DATA_SET
40a6b7.fffe.166ef0-1 seq 0 RESPONSE MANAGEMENT PORT_DATA_SET
portIdentity 40a6b7.fffe.166ef0-1
portState SLAVE
logMinDelayReqInterval -4
peerMeanPathDelay 0
logAnnounceInterval -3
announceReceiptTimeout 3
logSyncInterval -4
delayMechanism 1
logMinPdelayReqInterval -4
versionNumber 2
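You can also query the clock offset and grandmaster identity with the linuxptp-specific TIME_STATUS_NP management TLV; the exact output fields vary with the linuxptp version:
# pmc -u -f /var/run/ptp4l.0.config -b 0 'GET TIME_STATUS_NP'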
You can use the oc adm must-gather CLI command to collect information about your cluster, including features and objects associated with the Precision Time Protocol (PTP) Operator.
You have access to the cluster as a user with the cluster-admin role.
You have installed the OpenShift CLI (oc).
You have installed the PTP Operator.
To collect PTP Operator data with must-gather, you must specify the PTP Operator must-gather image.
$ oc adm must-gather --image=registry.redhat.io/openshift4/ptp-must-gather-rhel8:v4.12
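To write the collected data to a local directory of your choice, you can also pass the --dest-dir flag; the path shown is an assumption:
$ oc adm must-gather --dest-dir=/tmp/ptp-must-gather --image=registry.redhat.io/openshift4/ptp-must-gather-rhel8:v4.12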
Cloud native applications such as virtual RAN (vRAN) require access to notifications about hardware timing events that are critical to the functioning of the overall network. PTP clock synchronization errors can negatively affect the performance and reliability of your low-latency application, for example, a vRAN application running in a distributed unit (DU).
Loss of PTP synchronization is a critical error for a RAN network. If synchronization is lost on a node, the radio might be shut down and the network Over the Air (OTA) traffic might be shifted to another node in the wireless network. Fast event notifications mitigate against workload errors by allowing cluster nodes to communicate PTP clock sync status to the vRAN application running in the DU.
Event notifications are available to vRAN applications running on the same DU node. A publish-subscribe REST API passes event notifications to the messaging bus. Publish-subscribe messaging, or pub-sub messaging, is an asynchronous service-to-service communication architecture where any message published to a topic is immediately received by all of the subscribers to the topic.
The PTP Operator generates fast event notifications for every PTP-capable network interface. You can access the events by using a cloud-event-proxy sidecar container over an HTTP or Advanced Message Queuing Protocol (AMQP) message bus.
PTP fast event notifications are available for network interfaces configured to use PTP ordinary clocks or PTP boundary clocks.
HTTP transport is the default transport for PTP and bare-metal events. Use HTTP transport instead of AMQP for PTP and bare-metal events where possible. AMQ Interconnect is EOL from 30 June 2024. Extended life cycle support (ELS) for AMQ Interconnect ends 29 November 2029. For more information, see Red Hat AMQ Interconnect support status.
Use the Precision Time Protocol (PTP) fast event notifications framework to subscribe cluster applications to PTP events that the bare-metal cluster node generates.
The fast event notifications framework uses a REST API for communication. The REST API is based on the O-RAN O-Cloud Notification API Specification for Event Consumers 3.0 that is available from O-RAN ALLIANCE Specifications.
The framework consists of a publisher, subscriber, and an AMQ or HTTP messaging protocol to handle communications between the publisher and subscriber applications.
Applications run the cloud-event-proxy container in a sidecar pattern to subscribe to PTP events. The cloud-event-proxy sidecar container can access the same resources as the primary application container without using any of the resources of the primary application and with no significant latency.
HTTP transport is the default transport for PTP and bare-metal events. Use HTTP transport instead of AMQP for PTP and bare-metal events where possible. AMQ Interconnect is EOL from 30 June 2024. Extended life cycle support (ELS) for AMQ Interconnect ends 29 November 2029. For more information, see Red Hat AMQ Interconnect support status.
linuxptp-daemon in the PTP Operator-managed pod runs as a Kubernetes DaemonSet and manages the various linuxptp processes (ptp4l, phc2sys, and optionally for grandmaster clocks, ts2phc).
The linuxptp-daemon passes the event to the UNIX domain socket.
The PTP plugin reads the event from the UNIX domain socket and passes it to the cloud-event-proxy sidecar in the PTP Operator-managed pod.
cloud-event-proxy delivers the event from the Kubernetes infrastructure to Cloud-Native Network Functions (CNFs) with low latency.
The cloud-event-proxy sidecar in the PTP Operator-managed pod processes the event and publishes the cloud-native event by using a REST API.
The message transporter transports the event to the cloud-event-proxy sidecar in the application pod over HTTP or AMQP 1.0 QPID.
The cloud-event-proxy sidecar in the application pod processes the event and makes it available by using the REST API.
The consumer application sends an API request to the cloud-event-proxy sidecar in the application pod to create a PTP events subscription.
The cloud-event-proxy sidecar creates an AMQ or HTTP messaging listener protocol for the resource specified in the subscription.
The cloud-event-proxy sidecar in the application pod receives the event from the PTP Operator-managed pod, unwraps the cloud events object to retrieve the data, and posts the event to the consumer application.
The consumer application listens to the address specified in the resource qualifier and receives and processes the PTP event.
To start using PTP fast event notifications for a network interface in your cluster, you must enable the fast event publisher in the PTP Operator PtpOperatorConfig custom resource (CR) and configure ptpClockThreshold values in a PtpConfig CR that you create.
You have installed the OKD CLI (oc).
You have logged in as a user with cluster-admin privileges.
You have installed the PTP Operator.
Modify the default PTP Operator config to enable PTP fast events.
Save the following YAML in the ptp-operatorconfig.yaml file:
apiVersion: ptp.openshift.io/v1
kind: PtpOperatorConfig
metadata:
name: default
namespace: openshift-ptp
spec:
daemonNodeSelector:
node-role.kubernetes.io/worker: ""
ptpEventConfig:
enableEventpublisher: true (1)
1 | Set enableEventpublisher to true to enable PTP fast event notifications. |
In OKD 4.12 or later, you do not need to set the spec.ptpEventConfig.transportHost field in the PtpOperatorConfig resource when you use HTTP transport for PTP events.
Update the PtpOperatorConfig CR:
$ oc apply -f ptp-operatorconfig.yaml
Create a PtpConfig custom resource (CR) for the PTP enabled interface, and set the required values for ptpClockThreshold and ptp4lOpts. The following YAML illustrates the required values that you must set in the PtpConfig CR:
spec:
profile:
- name: "profile1"
interface: "enp5s0f0"
ptp4lOpts: "-2 -s --summary_interval -4" (1)
phc2sysOpts: "-a -r -m -n 24 -N 8 -R 16" (2)
ptp4lConf: "" (3)
ptpClockThreshold: (4)
holdOverTimeout: 5
maxOffsetThreshold: 100
minOffsetThreshold: -100
1 | Append --summary_interval -4 to use PTP fast events. |
2 | Required phc2sysOpts values. -m prints messages to stdout . The linuxptp-daemon DaemonSet parses the logs and generates Prometheus metrics. |
3 | Specify a string that contains the configuration to replace the default /etc/ptp4l.conf file. To use the default configuration, leave the field empty. |
4 | Optional. If the ptpClockThreshold stanza is not present, default values are used for the ptpClockThreshold fields. The stanza shows default ptpClockThreshold values. The ptpClockThreshold values configure how long after the PTP master clock is disconnected before PTP events are triggered. holdOverTimeout is the time value in seconds before the PTP clock event state changes to FREERUN when the PTP master clock is disconnected. The maxOffsetThreshold and minOffsetThreshold settings configure offset values in nanoseconds that compare against the values for CLOCK_REALTIME (phc2sys ) or master offset (ptp4l ). When the ptp4l or phc2sys offset value is outside this range, the PTP clock state is set to FREERUN . When the offset value is within this range, the PTP clock state is set to LOCKED . |
For a complete example CR that configures linuxptp services as an ordinary clock with PTP fast events, see Configuring linuxptp services as ordinary clock.
If you have previously deployed PTP or bare-metal events consumer applications, you need to update the applications to use HTTP message transport.
You have installed the OpenShift CLI (oc).
You have logged in as a user with cluster-admin privileges.
You have updated the PTP Operator or Bare Metal Event Relay to version 4.12 or later which uses HTTP transport by default.
Update your events consumer application to use HTTP transport.
Set the http-event-publishers variable for the cloud event sidecar deployment.
For example, in a cluster with PTP events configured, the following YAML snippet illustrates a cloud event sidecar deployment:
containers:
- name: cloud-event-sidecar
image: cloud-event-sidecar
args:
- "--metrics-addr=127.0.0.1:9091"
- "--store-path=/store"
- "--transport-host=consumer-events-subscription-service.cloud-events.svc.cluster.local:9043"
- "--http-event-publishers=ptp-event-publisher-service-NODE_NAME.openshift-ptp.svc.cluster.local:9043" (1)
- "--api-port=8089"
1 | The PTP Operator automatically resolves NODE_NAME to the host that is generating the PTP events. For example, compute-1.example.com. |
In a cluster with bare-metal events configured, set the http-event-publishers field to hw-event-publisher-service.openshift-bare-metal-events.svc.cluster.local:9043 in the cloud event sidecar deployment CR.
Deploy the consumer-events-subscription-service service alongside the events consumer application.
For example:
apiVersion: v1
kind: Service
metadata:
annotations:
prometheus.io/scrape: "true"
service.alpha.openshift.io/serving-cert-secret-name: sidecar-consumer-secret
name: consumer-events-subscription-service
namespace: cloud-events
labels:
app: consumer-service
spec:
ports:
- name: sub-port
port: 9043
selector:
app: consumer
clusterIP: None
sessionAffinity: None
type: ClusterIP
To pass PTP fast event notifications between publisher and subscriber on a node, you can install and configure an AMQ messaging bus to run locally on the node. To use AMQ messaging, you must install the AMQ Interconnect Operator.
HTTP transport is the default transport for PTP and bare-metal events. Use HTTP transport instead of AMQP for PTP and bare-metal events where possible. AMQ Interconnect is EOL from 30 June 2024. Extended life cycle support (ELS) for AMQ Interconnect ends 29 November 2029. For more information, see Red Hat AMQ Interconnect support status.
Install the OKD CLI (oc).
Log in as a user with cluster-admin privileges.
Install the AMQ Interconnect Operator to its own amq-interconnect namespace. See Adding the Red Hat Integration - AMQ Interconnect Operator.
Check that the AMQ Interconnect Operator is available and the required pods are running:
$ oc get pods -n amq-interconnect
NAME READY STATUS RESTARTS AGE
amq-interconnect-645db76c76-k8ghs 1/1 Running 0 23h
interconnect-operator-5cb5fc7cc-4v7qm 1/1 Running 0 23h
Check that the required linuxptp-daemon PTP event producer pods are running in the openshift-ptp namespace.
$ oc get pods -n openshift-ptp
NAME READY STATUS RESTARTS AGE
linuxptp-daemon-2t78p 3/3 Running 0 12h
linuxptp-daemon-k8n88 3/3 Running 0 12h
Use the PTP event notifications REST API to subscribe a distributed unit (DU) application to the PTP events that are generated on the parent node.
Subscribe applications to PTP events by using the resource address /cluster/node/<node_name>/ptp, where <node_name> is the cluster node running the DU application.
Deploy your cloud-event-consumer DU application container and cloud-event-proxy sidecar container in a separate DU application pod. The cloud-event-consumer DU application subscribes to the cloud-event-proxy container in the application pod.
Use the following API endpoints to subscribe the cloud-event-consumer DU application to PTP events posted by the cloud-event-proxy container at http://localhost:8089/api/ocloudNotifications/v1/ in the DU application pod:
/api/ocloudNotifications/v1/subscriptions
POST: Creates a new subscription
GET: Retrieves a list of subscriptions
DELETE: Deletes all subscriptions
/api/ocloudNotifications/v1/subscriptions/<subscription_id>
GET: Returns details for the specified subscription ID
DELETE: Deletes the subscription associated with the specified subscription ID
/api/ocloudNotifications/v1/health
GET: Returns the health status of the ocloudNotifications API
api/ocloudNotifications/v1/publishers
GET: Returns an array of os-clock-sync-state, ptp-clock-class-change, and lock-state messages for the cluster node
/api/ocloudnotifications/v1/{resource_address}/CurrentState
GET: Returns the current state of one of the following event types: os-clock-sync-state, ptp-clock-class-change, or lock-state events
GET api/ocloudNotifications/v1/subscriptions
Returns a list of subscriptions. If subscriptions exist, a 200 OK status code is returned along with the list of subscriptions.
[
{
"id": "75b1ad8f-c807-4c23-acf5-56f4b7ee3826",
"endpointUri": "http://localhost:9089/event",
"uriLocation": "http://localhost:8089/api/ocloudNotifications/v1/subscriptions/75b1ad8f-c807-4c23-acf5-56f4b7ee3826",
"resource": "/cluster/node/compute-1.example.com/ptp"
}
]
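From inside the application pod, you could retrieve this list with a plain HTTP GET; a sketch:
$ curl http://localhost:8089/api/ocloudNotifications/v1/subscriptions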
POST api/ocloudNotifications/v1/subscriptions
Creates a new subscription. If a subscription is successfully created, or if it already exists, a 201 Created status code is returned.
Parameter | Type |
---|---|
subscription | data |
{
"uriLocation": "http://localhost:8089/api/ocloudNotifications/v1/subscriptions",
"resource": "/cluster/node/compute-1.example.com/ptp"
}
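A sketch of creating the subscription with curl from inside the application pod, using the payload shown above:
$ curl -X POST http://localhost:8089/api/ocloudNotifications/v1/subscriptions \
  -H "Content-Type: application/json" \
  -d '{"uriLocation": "http://localhost:8089/api/ocloudNotifications/v1/subscriptions", "resource": "/cluster/node/compute-1.example.com/ptp"}'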
DELETE api/ocloudNotifications/v1/subscriptions
Deletes all subscriptions.
{
"status": "deleted all subscriptions"
}
GET api/ocloudNotifications/v1/subscriptions/{subscription_id}
Returns details for the subscription with ID subscription_id.
Parameter | Type |
---|---|
subscription_id | string |
{
"id":"48210fb3-45be-4ce0-aa9b-41a0e58730ab",
"endpointUri": "http://localhost:9089/event",
"uriLocation":"http://localhost:8089/api/ocloudNotifications/v1/subscriptions/48210fb3-45be-4ce0-aa9b-41a0e58730ab",
"resource":"/cluster/node/compute-1.example.com/ptp"
}
DELETE api/ocloudNotifications/v1/subscriptions/{subscription_id}
Deletes the subscription with ID subscription_id.
Parameter | Type |
---|---|
subscription_id | string |
{
"status": "OK"
}
GET api/ocloudNotifications/v1/health/
Returns the health status for the ocloudNotifications REST API.
OK
GET api/ocloudNotifications/v1/publishers
Returns an array of os-clock-sync-state, ptp-clock-class-change, and lock-state details for the cluster node. The system generates notifications when the relevant equipment state changes.
os-clock-sync-state notifications describe the host operating system clock synchronization state. Can be in LOCKED or FREERUN state.
ptp-clock-class-change notifications describe the current state of the PTP clock class.
lock-state notifications describe the current status of the PTP equipment lock state. Can be in LOCKED, HOLDOVER or FREERUN state.
[
{
"id": "0fa415ae-a3cf-4299-876a-589438bacf75",
"endpointUri": "http://localhost:9085/api/ocloudNotifications/v1/dummy",
"uriLocation": "http://localhost:9085/api/ocloudNotifications/v1/publishers/0fa415ae-a3cf-4299-876a-589438bacf75",
"resource": "/cluster/node/compute-1.example.com/sync/sync-status/os-clock-sync-state"
},
{
"id": "28cd82df-8436-4f50-bbd9-7a9742828a71",
"endpointUri": "http://localhost:9085/api/ocloudNotifications/v1/dummy",
"uriLocation": "http://localhost:9085/api/ocloudNotifications/v1/publishers/28cd82df-8436-4f50-bbd9-7a9742828a71",
"resource": "/cluster/node/compute-1.example.com/sync/ptp-status/ptp-clock-class-change"
},
{
"id": "44aa480d-7347-48b0-a5b0-e0af01fa9677",
"endpointUri": "http://localhost:9085/api/ocloudNotifications/v1/dummy",
"uriLocation": "http://localhost:9085/api/ocloudNotifications/v1/publishers/44aa480d-7347-48b0-a5b0-e0af01fa9677",
"resource": "/cluster/node/compute-1.example.com/sync/ptp-status/lock-state"
}
]
You can find os-clock-sync-state, ptp-clock-class-change and lock-state events in the logs for the cloud-event-proxy container. For example:
$ oc logs -f linuxptp-daemon-cvgr6 -n openshift-ptp -c cloud-event-proxy
{
"id":"c8a784d1-5f4a-4c16-9a81-a3b4313affe5",
"type":"event.sync.sync-status.os-clock-sync-state-change",
"source":"/cluster/compute-1.example.com/ptp/CLOCK_REALTIME",
"dataContentType":"application/json",
"time":"2022-05-06T15:31:23.906277159Z",
"data":{
"version":"v1",
"values":[
{
"resource":"/sync/sync-status/os-clock-sync-state",
"dataType":"notification",
"valueType":"enumeration",
"value":"LOCKED"
},
{
"resource":"/sync/sync-status/os-clock-sync-state",
"dataType":"metric",
"valueType":"decimal64.3",
"value":"-53"
}
]
}
}
{
"id":"69eddb52-1650-4e56-b325-86d44688d02b",
"type":"event.sync.ptp-status.ptp-clock-class-change",
"source":"/cluster/compute-1.example.com/ptp/ens2fx/master",
"dataContentType":"application/json",
"time":"2022-05-06T15:31:23.147100033Z",
"data":{
"version":"v1",
"values":[
{
"resource":"/sync/ptp-status/ptp-clock-class-change",
"dataType":"metric",
"valueType":"decimal64.3",
"value":"135"
}
]
}
}
{
"id":"305ec18b-1472-47b3-aadd-8f37933249a9",
"type":"event.sync.ptp-status.ptp-state-change",
"source":"/cluster/compute-1.example.com/ptp/ens2fx/master",
"dataContentType":"application/json",
"time":"2022-05-06T15:31:23.467684081Z",
"data":{
"version":"v1",
"values":[
{
"resource":"/sync/ptp-status/lock-state",
"dataType":"notification",
"valueType":"enumeration",
"value":"LOCKED"
},
{
"resource":"/sync/ptp-status/lock-state",
"dataType":"metric",
"valueType":"decimal64.3",
"value":"62"
}
]
}
}
GET api/ocloudNotifications/v1/cluster/node/<node_name>/sync/ptp-status/lock-state/CurrentState
GET api/ocloudNotifications/v1/cluster/node/<node_name>/sync/sync-status/os-clock-sync-state/CurrentState
GET api/ocloudNotifications/v1/cluster/node/<node_name>/sync/ptp-status/ptp-clock-class-change/CurrentState
Configure the CurrentState API endpoint to return the current state of the os-clock-sync-state, ptp-clock-class-change, or lock-state events for the cluster node.
os-clock-sync-state notifications describe the host operating system clock synchronization state. Can be in LOCKED or FREERUN state.
ptp-clock-class-change notifications describe the current state of the PTP clock class.
lock-state notifications describe the current status of the PTP equipment lock state. Can be in LOCKED, HOLDOVER or FREERUN state.
Parameter | Type |
---|---|
resource_address | string |
{
"id": "c1ac3aa5-1195-4786-84f8-da0ea4462921",
"type": "event.sync.ptp-status.ptp-state-change",
"source": "/cluster/node/compute-1.example.com/sync/ptp-status/lock-state",
"dataContentType": "application/json",
"time": "2023-01-10T02:41:57.094981478Z",
"data": {
"version": "v1",
"values": [
{
"resource": "/cluster/node/compute-1.example.com/ens5fx/master",
"dataType": "notification",
"valueType": "enumeration",
"value": "LOCKED"
},
{
"resource": "/cluster/node/compute-1.example.com/ens5fx/master",
"dataType": "metric",
"valueType": "decimal64.3",
"value": "29"
}
]
}
}
{
"specversion": "0.3",
"id": "4f51fe99-feaa-4e66-9112-66c5c9b9afcb",
"source": "/cluster/node/compute-1.example.com/sync/sync-status/os-clock-sync-state",
"type": "event.sync.sync-status.os-clock-sync-state-change",
"subject": "/cluster/node/compute-1.example.com/sync/sync-status/os-clock-sync-state",
"datacontenttype": "application/json",
"time": "2022-11-29T17:44:22.202Z",
"data": {
"version": "v1",
"values": [
{
"resource": "/cluster/node/compute-1.example.com/CLOCK_REALTIME",
"dataType": "notification",
"valueType": "enumeration",
"value": "LOCKED"
},
{
"resource": "/cluster/node/compute-1.example.com/CLOCK_REALTIME",
"dataType": "metric",
"valueType": "decimal64.3",
"value": "27"
}
]
}
}
{
"id": "064c9e67-5ad4-4afb-98ff-189c6aa9c205",
"type": "event.sync.ptp-status.ptp-clock-class-change",
"source": "/cluster/node/compute-1.example.com/sync/ptp-status/ptp-clock-class-change",
"dataContentType": "application/json",
"time": "2023-01-10T02:41:56.785673989Z",
"data": {
"version": "v1",
"values": [
{
"resource": "/cluster/node/compute-1.example.com/ens5fx/master",
"dataType": "metric",
"valueType": "decimal64.3",
"value": "165"
}
]
}
}
You can monitor PTP fast event metrics from cluster nodes where the linuxptp-daemon is running.
You can also monitor PTP fast event metrics in the OKD web console by using the preconfigured and self-updating Prometheus monitoring stack.
Install the OKD CLI (oc).
Log in as a user with cluster-admin privileges.
Install and configure the PTP Operator on a node with PTP-capable hardware.
Check for exposed PTP metrics on any node where the linuxptp-daemon is running. For example, run the following command:
$ curl http://<node_name>:9091/metrics
# HELP openshift_ptp_clock_state 0 = FREERUN, 1 = LOCKED, 2 = HOLDOVER
# TYPE openshift_ptp_clock_state gauge
openshift_ptp_clock_state{iface="ens1fx",node="compute-1.example.com",process="ptp4l"} 1
openshift_ptp_clock_state{iface="ens3fx",node="compute-1.example.com",process="ptp4l"} 1
openshift_ptp_clock_state{iface="ens5fx",node="compute-1.example.com",process="ptp4l"} 1
openshift_ptp_clock_state{iface="ens7fx",node="compute-1.example.com",process="ptp4l"} 1
# HELP openshift_ptp_delay_ns
# TYPE openshift_ptp_delay_ns gauge
openshift_ptp_delay_ns{from="master",iface="ens1fx",node="compute-1.example.com",process="ptp4l"} 842
openshift_ptp_delay_ns{from="master",iface="ens3fx",node="compute-1.example.com",process="ptp4l"} 480
openshift_ptp_delay_ns{from="master",iface="ens5fx",node="compute-1.example.com",process="ptp4l"} 584
openshift_ptp_delay_ns{from="master",iface="ens7fx",node="compute-1.example.com",process="ptp4l"} 482
openshift_ptp_delay_ns{from="phc",iface="CLOCK_REALTIME",node="compute-1.example.com",process="phc2sys"} 547
# HELP openshift_ptp_offset_ns
# TYPE openshift_ptp_offset_ns gauge
openshift_ptp_offset_ns{from="master",iface="ens1fx",node="compute-1.example.com",process="ptp4l"} -2
openshift_ptp_offset_ns{from="master",iface="ens3fx",node="compute-1.example.com",process="ptp4l"} -44
openshift_ptp_offset_ns{from="master",iface="ens5fx",node="compute-1.example.com",process="ptp4l"} -8
openshift_ptp_offset_ns{from="master",iface="ens7fx",node="compute-1.example.com",process="ptp4l"} 3
openshift_ptp_offset_ns{from="phc",iface="CLOCK_REALTIME",node="compute-1.example.com",process="phc2sys"} 12
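If the node's metrics port is not directly reachable from your workstation, you can run the same query from inside the daemon pod instead; the pod name is an assumption, and this assumes curl is available in the container image:
$ oc exec -n openshift-ptp linuxptp-daemon-lmvgn -c linuxptp-daemon-container -- curl -s http://localhost:9091/metrics | grep openshift_ptp_offset_ns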
To view the PTP event in the OKD web console, copy the name of the PTP metric you want to query, for example, openshift_ptp_offset_ns.
In the OKD web console, click Observe → Metrics.
Paste the PTP metric name into the Expression field, and click Run queries.
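For example, to watch the master offset reported by ptp4l for a single node, a query like the following might be used; the node name is an assumption:
openshift_ptp_offset_ns{node="compute-1.example.com",process="ptp4l"}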