This page details how to wipe disks on NCNs installed with the current version of CSM.
Everything in this section should be considered DESTRUCTIVE.
NOTE: All types of disk wipe can be run from Linux or from an emergency shell.
The basic wipe erases the magic bits (filesystem and partition table signatures) on each disk so that it is no longer recognized, and removes the common volume groups.
(ncn#) List the disks for verification.
Load a helper library; this defines the metal_transports variable used when listing the disks.
From the Linux command line, run the following command:
source /usr/lib/dracut/modules.d/90metalmdsquash/metal-lib.sh
From the emergency shell, run the following command:
. /lib/metal-lib.sh
List the disks.
disks_to_wipe=$(lsblk -l -o NAME,TYPE,TRAN | grep -E "[[:space:]].*(raid|${metal_transports})" |
awk '{ print "/dev/"$1 }' | sort -u | tr '\n' ' ')
echo "${disks_to_wipe}"
(ncn#) Wipe the disks and the RAIDs.
for disk in $disks_to_wipe; do
wipefs --all --force ${disk}* 2> /dev/null
done
If any disks had labels present, then the output looks similar to the following:
/dev/sda: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54
/dev/sda: 8 bytes were erased at offset 0x6fc86d5e00 (gpt): 45 46 49 20 50 41 52 54
/dev/sda: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa
/dev/sdb: 6 bytes were erased at offset 0x00000000 (crypto_LUKS): 4c 55 4b 53 ba be
/dev/sdb: 6 bytes were erased at offset 0x00004000 (crypto_LUKS): 53 4b 55 4c ba be
/dev/sdc: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54
/dev/sdc: 8 bytes were erased at offset 0x6fc86d5e00 (gpt): 45 46 49 20 50 41 52 54
/dev/sdc: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa
Verify that there are no error messages in the output.
The wipefs command may fail if no labeled disks are found, which is an indication of a larger problem.
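To confirm that no signatures remain, wipefs can be run again without options; it only prints signatures it still finds, so empty output indicates the disks are clean. A minimal sketch using the same disk list:
# Empty output means no filesystem or partition table signatures remain
for disk in $disks_to_wipe; do
    wipefs "${disk}"* 2>/dev/null
done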
(ncn#) Remove the volume groups on all NCNs.
NOTE: The Ceph volume group only exists on storage nodes, but this code snippet works on all NCNs.
ceph_vgs='vg_name=~ceph*'
metal_vgs='vg_name=~metal*'
for volume_group in ${ceph_vgs} ${metal_vgs}; do
vgremove -f -v --select "${volume_group}" -y >/dev/null 2>&1
done
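To verify that the volume groups were removed, the same selection criteria can be reused with vgs; no matching volume groups should be reported. A minimal sketch:
# Should report no volume groups matching the ceph or metal patterns
for volume_group in ${ceph_vgs} ${metal_vgs}; do
    vgs --select "${volume_group}"
done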
An advanced wipe handles storage-node-specific items before running the Basic wipe.
(ncn-s#) Stop Ceph on all of the storage nodes.
CSM 0.9 or earlier
systemctl stop ceph-osd.target
CSM 1.0 or later
cephadm rm-cluster --fsid $(cephadm ls|jq -r '.[0].fsid') --force
(ncn-s#) Make sure the OSDs (if any) are not running on the storage nodes.
CSM 0.9 or earlier
ps -ef|grep ceph-osd
CSM 1.0 or later
podman ps
Examine the output. There should be no running ceph-osd processes or containers.
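If a scripted check is preferred, the following sketch prints any remaining OSD processes or containers; the podman name filter is an assumption and may need adjusting to the local container names:
# CSM 0.9 or earlier: no output means no ceph-osd processes are running
pgrep -af ceph-osd
# CSM 1.0 or later: no output rows means no OSD containers are running (name filter is an assumption)
podman ps --filter name=osd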
Perform the Basic wipe procedure.
A full wipe cleanly stops all running services that require partitions, and removes the node from the Ceph or Kubernetes cluster (as appropriate for the node type).
A full wipe does not zero the disks; it ensures that all disks look raw on the next reboot.
IMPORTANT: For each step, pay attention to whether the command is to be run on a master node, storage node, or worker node. If wiping a different type of node than what a step specifies, then skip that step.
(ncn-w#) Reset Kubernetes on worker nodes ONLY.
This will stop kubelet, stop underlying containers, and remove the contents of /var/lib/kubelet.
Reset Kubernetes.
kubeadm reset --force
List any containers running in containerd.
crictl ps
Example output:
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
66a78adf6b4c2 18b6035f5a9ce About a minute ago Running spire-bundle 1212 6d89f7dee8ab6
7680e4050386d c8344c866fa55 24 hours ago Running speaker 0 5460d2bffb4d7
b6467c907f063 8e6730a2b718c 3 days ago Running request-ncn-join-token 0 a3a9ca9e1ca78
e8ce2d1a8379f 64d4c06dc3fb4 3 days ago Running istio-proxy 0 6d89f7dee8ab6
c3d4811fc3cd0 0215a709bdd9b 3 days ago Running weave-npc 0 f5e25c12e617e
If there are any running containers from the output of the crictl ps command, then stop them.
crictl stop <container id from the CONTAINER column>
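If several containers are running, they can be stopped in one pass instead of individually. A minimal sketch using the quiet output of crictl ps:
# Stop every container currently listed by crictl (a no-op if none are running)
crictl ps -q | xargs -r -n 1 crictl stop
The same one-liner applies to the identical step on master nodes below.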
(ncn-m#) Reset Kubernetes on master nodes ONLY.
This will stop kubelet, stop underlying containers, and remove the contents of /var/lib/kubelet.
Reset Kubernetes.
kubeadm reset --force
List any containers running in containerd.
crictl ps
Example output:
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID
66a78adf6b4c2 18b6035f5a9ce About a minute ago Running spire-bundle 1212 6d89f7dee8ab6
7680e4050386d c8344c866fa55 24 hours ago Running speaker 0 5460d2bffb4d7
b6467c907f063 8e6730a2b718c 3 days ago Running request-ncn-join-token 0 a3a9ca9e1ca78
e8ce2d1a8379f 64d4c06dc3fb4 3 days ago Running istio-proxy 0 6d89f7dee8ab6
c3d4811fc3cd0 0215a709bdd9b 3 days ago Running weave-npc 0 f5e25c12e617e
If there are any running containers from the output of the crictl ps command, then stop them.
crictl stop <container id from the CONTAINER column>
(ncn-s#) Run the Advanced wipe, but stop when it mentions the “basic wipe”. Then return here.
Unmount volumes.
NOTE: Some of the following umount commands may fail or produce warnings depending on the state of the NCN. Failures in this section can be ignored and will not inhibit the wipe process.
NOTE: There is an edge case where the overlay may keep the drive from being unmounted. If this is a rebuild, ignore this.
The exact commands depend on the node type:
Master nodes
Stop the etcd service on the master node before unmounting /var/lib/etcd and other mounts.
systemctl stop etcd.service
umount -v /run/lib-etcd /var/lib/etcd /var/lib/sdu /var/opt/cray/sdu/collection-mount /var/lib/admin-tools /var/lib/s3fs_cache /var/lib/containerd
Storage nodes
umount -vf /var/lib/ceph /var/lib/containers /etc/ceph /var/opt/cray/sdu/collection-mount /var/lib/admin-tools /var/lib/s3fs_cache /var/lib/containerd
If the umount command outputs target is busy on the storage node, then try the following:
Look for mounts under /var/lib/containers:
mount | grep containers
Example output:
/dev/mapper/metalvg0-CONTAIN on /var/lib/containers type xfs (rw,noatime,swalloc,attr2,largeio,inode64,allocsize=131072k,logbufs=8,logbsize=32k,noquota)
/dev/mapper/metalvg0-CONTAIN on /var/lib/containers/storage/overlay type xfs (rw,noatime,swalloc,attr2,largeio,inode64,allocsize=131072k,logbufs=8,logbsize=32k,noquota)
Unmount /var/lib/containers/storage/overlay.
umount -v /var/lib/containers/storage/overlay
Example output:
umount: /var/lib/containers/storage/overlay unmounted
Unmount /var/lib/containers.
umount -v /var/lib/containers
Example output:
umount: /var/lib/containers unmounted
Worker nodes
umount -v /var/lib/kubelet /var/lib/sdu /run/containerd /var/lib/containerd /run/lib-containerd /var/opt/cray/sdu/collection-mount /var/lib/admin-tools /var/lib/s3fs_cache
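Regardless of node type, a quick way to confirm the unmounts took effect is to search the mount table for the affected paths; no output means nothing relevant remains mounted. A minimal sketch:
# No output means none of the wiped mount points are still mounted
mount | grep -E '/var/lib/(etcd|ceph|containers|containerd|kubelet)'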
(ncn#) Stop cray-sdu-rda on all node types (master, storage, or worker).
See if any cray-sdu-rda containers are running.
podman ps
Example output:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
7741d5096625 registry.local/sdu-docker-stable-local/cray-sdu-rda:1.1.1 /bin/sh -c /usr/s... 6 weeks ago Up 6 weeks ago cray-sdu-rda
If there is a running cray-sdu-rda container in the above output, then stop it using the container ID:
podman stop 7741d5096625
Example output:
7741d50966259410298bb4c3210e6665cdbd57a82e34e467d239f519ae3f17d4
(ncn-m#) Remove the etcd device on master nodes ONLY.
Determine whether or not an etcd volume is present.
dmsetup ls
If the etcd volume is present, then the expected output will show ETCDLVM, though the numbers might differ.
ETCDLVM (254:1)
Remove the etcd device mapper.
dmsetup remove $(dmsetup ls | grep -i etcd | awk '{print $1}')
NOTE: The following output means that the etcd volume mapper is not present. This is okay.
No device specified.
Command failed.
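To confirm the device mapper entry was removed, dmsetup can be queried again; empty output means the ETCDLVM mapping is gone. A minimal check:
# Empty output confirms the etcd device mapper entry is no longer present
dmsetup ls | grep -i etcd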
(ncn-m#) Remove etcd volumes on master nodes ONLY.
vgremove etcdvg0
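To confirm the removal, vgs can be queried for the volume group; once it has been removed, vgs reports that etcdvg0 is not found. A minimal check:
# Expected to report that volume group "etcdvg0" is not found after a successful removal
vgs etcdvg0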
Perform the Basic wipe procedure.