This procedure will install CSM applications and services into the CSM Kubernetes cluster.
NOTE: Check the information in Known issues before starting this procedure to be warned about possible problems.
NOTE: During this step, on systems with only four worker nodes (typically Testing and Development Systems (TDS)), the `customizations.yaml` file will be automatically edited to lower pod CPU requests for some services, in order to better facilitate scheduling on smaller systems. See the file `/usr/share/doc/csm/upgrade/scripts/upgrade/tds_cpu_requests.yaml` for these settings. This file can be modified with different values (prior to executing the `yapl` command below) if other settings are desired in the `customizations.yaml` file for this system. For more information about modifying `customizations.yaml` and tuning for specific systems, see Post-Install Customizations.
(pit#) Install YAPL.
rpm -Uvh "${CSM_PATH}"/rpm/cray/csm/sle-15sp2/x86_64/yapl-*.x86_64.rpm
(pit#) Install CSM services using YAPL.
pushd /usr/share/doc/csm/install/scripts/csm_services
yapl -f install.yaml execute
popd
NOTE:

- This command may take up to 90 minutes to complete.
- If any errors are encountered, then potential fixes should be displayed where the error occurred.
- If prompted for a password, enter the password for the PIT node (`ncn-m001`) to continue.
- Output is redirected to `/usr/share/doc/csm/install/scripts/csm_services/yapl.log`. To show the output in the terminal, append the `--console-output` argument to the `yapl` command.
- The `yapl` command can safely be rerun. By default, it will skip any steps which were previously completed successfully. To force it to rerun all steps regardless of what was previously completed, append the `--no-cache` argument to the `yapl` command.
- The order of the `yapl` command arguments is important. The syntax is `yapl -f install.yaml [--console-output] execute [--no-cache]`.
(pit#) Wait for BSS to be ready.
kubectl -n services rollout status deployment cray-bss
(pit#) Retrieve an API token.
export TOKEN=$(curl -k -s -S -d grant_type=client_credentials \
-d client_id=admin-client \
    -d client_secret="$(kubectl get secrets admin-client-auth -o jsonpath='{.data.client-secret}' | base64 -d)" \
https://api-gw-service-nmn.local/keycloak/realms/shasta/protocol/openid-connect/token | jq -r '.access_token')
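The token-extraction step above relies on `jq` pulling the `access_token` field out of Keycloak's JSON response. The parsing can be sketched in isolation, using a hypothetical response body rather than a real token:

```shell
# Hypothetical Keycloak token response; the real one comes from the curl call above.
RESPONSE='{"access_token":"example-token","token_type":"Bearer","expires_in":300}'

# Same jq filter as in the TOKEN export: -r emits the raw string without quotes.
TOKEN=$(printf '%s' "$RESPONSE" | jq -r '.access_token')
echo "$TOKEN"
# prints: example-token
```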
(pit#) Create empty boot parameters:
curl -i -k -H "Authorization: Bearer ${TOKEN}" -X PUT \
https://api-gw-service-nmn.local/apis/bss/boot/v1/bootparameters \
--data '{"hosts":["Global"]}'
Example of successful output:
HTTP/2 200
content-type: application/json; charset=UTF-8
date: Mon, 27 Jun 2022 17:08:55 GMT
content-length: 0
x-envoy-upstream-service-time: 7
server: istio-envoy
(pit#) Restart the cray-spire-update-bss job.
SPIRE_JOB=$(kubectl -n spire get jobs -l app.kubernetes.io/name=cray-spire-update-bss -o name)
kubectl -n spire get "${SPIRE_JOB}" -o json | jq 'del(.spec.selector)' \
| jq 'del(.spec.template.metadata.labels."controller-uid")' \
| kubectl replace --force -f -
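The `jq` filters above strip the job's immutable `selector` and the controller-generated `controller-uid` template label, which would otherwise cause `kubectl replace --force` to recreate a conflicting job. What the filters do can be sketched locally against a hypothetical, trimmed-down Job JSON (no cluster needed):

```shell
# Hypothetical Job JSON resembling what `kubectl get job -o json` would emit.
JOB='{"spec":{"selector":{"matchLabels":{"controller-uid":"abc"}},"template":{"metadata":{"labels":{"controller-uid":"abc","job-name":"cray-spire-update-bss"}}}}}'

# Same filters as above: drop the selector and the controller-uid template label,
# leaving the rest of the spec (such as the job-name label) intact.
printf '%s' "$JOB" | jq 'del(.spec.selector)' \
    | jq 'del(.spec.template.metadata.labels."controller-uid")'
```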
(pit#) Wait for the cray-spire-update-bss job to complete.
kubectl -n spire wait "${SPIRE_JOB}" --for=condition=complete --timeout=5m
If CSM has been installed and Vault is running, add the switch credentials into Vault. Certain
tests, including `goss-switch-bgp-neighbor-aruba-or-mellanox`, use these credentials to test the
state of the switch. This step is not required to configure the management network. If Vault is
unavailable, this step can be temporarily skipped, but any automated tests that depend on the
switch credentials being in Vault will fail until they are added.
First, write the switch admin password to the `SW_ADMIN_PASSWORD` variable if it is not already set.
read -s SW_ADMIN_PASSWORD
Once the `SW_ADMIN_PASSWORD` variable is set, run the following commands to add the switch admin
password to Vault.
VAULT_PASSWD=$(kubectl -n vault get secrets cray-vault-unseal-keys -o json | jq -r '.data["vault-root"]' | base64 -d)
alias vault='kubectl -n vault exec -i cray-vault-0 -c vault -- env VAULT_TOKEN="$VAULT_PASSWD" VAULT_ADDR=http://127.0.0.1:8200 VAULT_FORMAT=json vault'
vault kv put secret/net-creds/switch_admin admin="$SW_ADMIN_PASSWORD"
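The `VAULT_PASSWD` line above recovers the Vault root token by base64-decoding the value stored in the `cray-vault-unseal-keys` Kubernetes secret (secret values are always stored base64-encoded). The decode step can be sketched in isolation with a hypothetical value, not a real token:

```shell
# Kubernetes secrets hold base64-encoded values; "aHVudGVyMg==" is a stand-in here.
ENCODED='aHVudGVyMg=='
printf '%s' "$ENCODED" | base64 -d
# prints: hunter2
```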
Note: The use of `read -s` is a convention used throughout this documentation; it allows secrets to be
entered without echoing them to the terminal or saving them in shell history.
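As a quick illustration of that convention, `read -s` suppresses terminal echo but otherwise behaves like a normal `read`: the value still lands in the variable. The sketch below feeds the value from a pipe purely to make the example non-interactive:

```shell
# -s suppresses echo at a terminal; the pipe here just makes the sketch non-interactive.
printf 'example-secret\n' | bash -c 'read -s SW_ADMIN_PASSWORD; echo "length: ${#SW_ADMIN_PASSWORD}"'
# prints: length: 14
```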
Wait at least 15 minutes to let the various Kubernetes resources initialize and start before proceeding with the rest of the install. Because there are a number of dependencies between them, some services are not expected to work immediately after the install script completes.
After waiting until services are healthy (run `kubectl get po -A | grep -v 'Completed\|Running'` to
see which pods may still be `Pending`), take a manual backup of all etcd clusters.
These clusters are automatically backed up every 24 hours, but not until the clusters have been
up that long.
Taking a manual backup enables restoring from backup later in this install process if needed.
/usr/share/doc/csm/scripts/operations/etcd/take-etcd-manual-backups.sh post_install
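The `grep -v 'Completed\|Running'` filter used in the health check above simply drops healthy pod lines so that only problem pods remain. A sketch against hypothetical `kubectl get po -A` output:

```shell
# Hypothetical pod listing; the real input comes from `kubectl get po -A`.
printf '%s\n' \
    'services   cray-bss-6d8f   1/1   Running     0   5m' \
    'services   job-abc         0/1   Completed   0   9m' \
    'services   cray-sls-7xyz   0/1   Pending     0   2m' \
    | grep -v 'Completed\|Running'
# only the Pending pod line remains
```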
After CSM services are installed, the next step is to validate CSM health before redeploying the final NCN. See Validate CSM health before final NCN deployment.
Deploy CSM Applications and Services known issues

The following error may occur during the Deploy CSM Applications and Services step:
+ csi upload-sls-file --sls-file /var/www/ephemeral/prep/eniac/sls_input_file.json
2021/10/05 18:42:58 Retrieving S3 credentials ( sls-s3-credentials ) for SLS
2021/10/05 18:42:58 Unable to SLS S3 secret from k8s:secrets "sls-s3-credentials" not found
(pit#) Verify that the sls-s3-credentials secret exists in the default namespace:
kubectl get secret sls-s3-credentials
Example output:
NAME TYPE DATA AGE
sls-s3-credentials Opaque 7 28d
(pit#) Check for running `sonar-sync` jobs. If there are no running `sonar-sync` jobs, then wait for
one to complete. The `sonar-sync` CronJob is responsible for copying the `sls-s3-credentials` secret
from the `default` namespace to the `services` namespace.
kubectl -n services get pods -l cronjob-name=sonar-sync
Example output:
NAME READY STATUS RESTARTS AGE
sonar-sync-1634322840-4fckz 0/1 Completed 0 73s
sonar-sync-1634322900-pnvl6 1/1 Running 0 13s
(pit#) Verify that the sls-s3-credentials secret now exists in the services namespace.
kubectl -n services get secret sls-s3-credentials
Example output:
NAME TYPE DATA AGE
sls-s3-credentials Opaque 7 20s
Running the `yapl` command again is expected to succeed.
The following error may occur during the Create base BSS global boot parameters step:
kubectl -n spire get "${SPIRE_JOB}" -o json | jq 'del(.spec.selector)' \
> | jq 'del(.spec.template.metadata.labels."controller-uid")' \
> | kubectl replace --force -f -
error: the server doesn't have a resource type ""
No resources found
error: no objects passed to replace
(pit#) Verify there is no cray-spire-update-bss pod in the spire namespace.
kubectl get pods -n spire | grep cray-spire-update-bss
(pit#) Get the cray-spire-update-bss job from the helm chart and apply it.
helm get manifest -n spire cray-spire | grep -A 68 'Source: cray-spire/templates/update-bss/job.yaml' \
| kubectl apply -n spire -f -
Wait for the job to complete.
SPIRE_JOB=$(kubectl -n spire get jobs -l app.kubernetes.io/name=cray-spire-update-bss -o name)
kubectl -n spire wait "${SPIRE_JOB}" --for=condition=complete --timeout=5m
Restart the Create base BSS global boot parameters step from the beginning.
Setup Nexus known issues

Known potential issues along with suggested fixes are listed in Troubleshoot Nexus.