When an etcd cluster is not healthy, it needs to be rebuilt. During that process, the pods that rely on etcd clusters lose data. That data needs to be repopulated in order for the cluster to go back to a healthy state.
The following services, covered in the sections below, need their data repopulated in the etcd cluster.

Prerequisite: An etcd cluster was rebuilt. See Rebuild Unhealthy etcd Clusters.
Reconstruct boot session templates for impacted product streams to repopulate data.
Boot preparation information for other product streams can be found in the documentation for those product streams.
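To see which boot session templates survived the rebuild, and therefore which ones still need to be recreated, the BOS CLI can be queried. This is a minimal sketch that assumes a CSM release with the BOS v2 CLI available:

```bash
# List the boot session templates currently stored in BOS
cray bos v2 sessiontemplates list --format toml
```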
Data is repopulated in BSS when the REDS init job is run.
(ncn-mw#) Get the current REDS job.

```bash
# Save the job definition, stripping the fields that would prevent re-creating it
kubectl get -o json -n services job/cray-reds-init |
    jq 'del(.spec.template.metadata.labels["controller-uid"], .spec.selector)' > cray-reds-init.json
```
(ncn-mw#) Delete the cray-reds-init job.

```bash
kubectl delete -n services -f cray-reds-init.json
```
(ncn-mw#) Recreate the cray-reds-init job to restart it.

```bash
kubectl apply -n services -f cray-reds-init.json
```
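To confirm that the recreated job ran to completion before moving on, one option is to wait on its completion condition. This assumes the job keeps the cray-reds-init name from the saved manifest:

```bash
# Wait up to five minutes for the recreated job to report completion
kubectl wait -n services --for=condition=complete --timeout=300s job/cray-reds-init
```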
Repopulate cluster data for the Content Projection Service (CPS).
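As a quick sanity check after repopulating CPS data, the CPS pods in the services namespace can be inspected; the cray-cps pod name prefix used here is an assumption:

```bash
# Confirm that the CPS pods are present and Running
kubectl get pods -n services | grep cray-cps
```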
NOTE: CRUS was deprecated in CSM 1.2.0 and will be removed in CSM 1.5.0.
(ncn-mw#) View the progress of existing CRUS sessions.

List the existing CRUS sessions to find the upgrade_id for the desired session.

```bash
cray crus session list --format toml
```
Example output:
```toml
[[results]]
api_version = "1.0.0"
completed = false
failed_label = "failed-nodes"
kind = "ComputeUpgradeSession"
messages = [ "Quiesce requested in step 0: moving to QUIESCING", "All nodes quiesced in step 0: moving to QUIESCED", "Began the boot session for step 0: moving to BOOTING",]
starting_label = "slurm-nodes"
state = "UPDATING"
upgrade_id = "e0131663-dbee-47c2-aa5c-13fe9b110242" <<-- Note this value
upgrade_step_size = 50
upgrade_template_id = "boot-template"
upgrading_label = "upgrading-nodes"
workload_manager_type = "slurm"
```
Describe the CRUS session to see if the session failed or is stuck. If the session continued and appears to be in a healthy state, proceed to the BSS section.

```bash
cray crus session describe CRUS_UPGRADE_ID --format toml
```
Example output:
```toml
api_version = "1.0.0"
completed = false
failed_label = "failed-nodes"
kind = "ComputeUpgradeSession"
messages = [ "Quiesce requested in step 0: moving to QUIESCING", "All nodes quiesced in step 0: moving to QUIESCED", "Began the boot session for step 0: moving to BOOTING",]
starting_label = "slurm-nodes"
state = "UPDATING"
upgrade_id = "e0131663-dbee-47c2-aa5c-13fe9b110242"
upgrade_step_size = 50
upgrade_template_id = "boot-template"
upgrading_label = "upgrading-nodes"
workload_manager_type = "slurm"
```
(ncn-mw#) Find the name of the running CRUS pod.

```bash
kubectl get pods -n services | grep cray-crus
```
Example output:
```text
cray-crus-549cb9cb5d-jtpqg    3/4    Running    528    25h
```
(ncn-mw#) Restart the CRUS pod.

Deleting the pod will restart CRUS and start the discovery process for any data recovered in etcd.

```bash
kubectl delete pods -n services POD_NAME
```
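To verify that Kubernetes replaced the deleted pod, repeat the earlier listing command; a new cray-crus pod with a different suffix should appear and reach the Running state:

```bash
# The replacement cray-crus pod should show a fresh name and a low restart count
kubectl get pods -n services | grep cray-crus
```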
Reload the firmware images from Nexus.
Refer to the Load Firmware from Nexus section in FAS Admin Procedures for more information.
When the etcd cluster is rebuilt, all historic data for firmware actions and all recorded snapshots will be lost.
Image data will be reloaded from Nexus.
Any images that were loaded into FAS outside of Nexus will need to be reloaded using the Load Firmware from RPM or ZIP file section in FAS Admin Procedures.
After images are reloaded, any actions that were running at the time of the failure will need to be recreated.
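One way to confirm that the firmware image records were repopulated from Nexus is to list the images known to FAS. This is a sketch; the exact fields in the output vary by CSM release:

```bash
# List the firmware image records that FAS currently knows about
cray fas images list --format json
```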
Resubscribe the compute nodes and any NCNs that use the ORCA daemon for their State Change Notifications (SCN).
(ncn-m#) Resubscribe all compute nodes.

```bash
# Build a list of Ready compute node NMN hostnames, then restart cray-orca on each via pdsh
TMPFILE=$(mktemp)
sat status --no-borders --no-headings | grep Ready | grep Compute | awk '{printf("nid%06d-nmn\n",$4);}' > "${TMPFILE}"
pdsh -w ^"${TMPFILE}" "systemctl restart cray-orca"
rm -rf "${TMPFILE}"
```
(ncn-m#) Resubscribe all worker nodes.

NOTE: Modify the -w arguments in the following commands to reflect the number of worker nodes in the system.

```bash
pdsh -w ncn-w00[1-4]-can.local "systemctl restart cray-orca"
```
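To confirm that the ORCA daemon came back up after the restarts, its service state can be queried with the same pdsh node patterns used above (adjust the -w argument for the system):

```bash
# Each worker node should report "active" once cray-orca has restarted
pdsh -w ncn-w00[1-4]-can.local "systemctl is-active cray-orca"
```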
(ncn-mw#) Restart MEDS.

```bash
kubectl -n services delete pods --selector='app.kubernetes.io/name=cray-meds'
```
(ncn-mw#) Restart REDS.

```bash
kubectl -n services delete pods --selector='app.kubernetes.io/name=cray-reds'
```
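As a final check, confirm that replacement MEDS and REDS pods were scheduled and reach the Running state. This sketch reuses the label selectors from the restart commands above in a single set-based selector:

```bash
# New cray-meds and cray-reds pods should appear and reach Running
kubectl get pods -n services --selector='app.kubernetes.io/name in (cray-meds, cray-reds)'
```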