Session templates in the Boot Orchestration Service (BOS) are reusable collections of boot, configuration, and component information.
After creation they can be combined with a boot operation to create a BOS session that will apply the desired changes to the specified components.
Session templates can be created via the API by providing JSON data or via the CLI by writing the JSON data to a file, which can then be referenced using the --file parameter.
The following is an example BOS session template:
{
"boot_sets": {
"arm_boot_set": {
"arch": "ARM",
"etag": "foo",
"kernel_parameters": "console=ttyS0,115200 bad_page=panic crashkernel=360M hugepagelist=2m-2g intel_iommu=off intel_pstate=disable iommu=pt ip=dhcp numa_interleave_omit=headless numa_zonelist_order=node oops=panic pageblock_order=14 pcie_ports=native printk.synchronous=y rd.neednet=1 rd.retry=10 rd.shell k8s_gw=api-gw-service-nmn.local quiet turbo_boost_limit=999",
"node_roles_groups": [
"Compute"
],
"path": "s3://boot-images/e06530f1-fde2-4ca5-9148-7e84f4857d17/manifest.json",
"rootfs_provider": "sbps",
"rootfs_provider_passthrough": "sbps:v1:iqn.2023-06.csm.iscsi:_sbps-hsn._tcp.my-system.my-site-domain:300",
"type": "s3"
},
"x86_boot_set": {
"arch": "X86",
"etag": "bar",
"kernel_parameters": "console=ttyS0,115200 bad_page=panic crashkernel=360M hugepagelist=2m-2g intel_iommu=off intel_pstate=disable iommu=pt ip=dhcp numa_interleave_omit=headless numa_zonelist_order=node oops=panic pageblock_order=14 pcie_ports=native printk.synchronous=y rd.neednet=1 rd.retry=10 rd.shell k8s_gw=api-gw-service-nmn.local quiet turbo_boost_limit=999",
"node_roles_groups": [
"Compute"
],
"path": "s3://boot-images/f17631a1-fed1-5cb5-0aa8-7aaaf4123411/manifest.json",
"rootfs_provider": "sbps",
"rootfs_provider_passthrough": "sbps:v1:iqn.2023-06.csm.iscsi:_sbps-hsn._tcp.my-system.my-site-domain:300",
"type": "s3"
}
},
"cfs": {
"configuration": "example-configuration"
},
"description": "session template example",
"enable_cfs": true,
"name": "session-template-example",
"tenant": ""
}
- `description`: An optional text description of the template.
- `node_list` (under `boot_sets`): A list of individual node component names (xnames).
- `etag`: Identifies the version of the `manifest.json` file in S3.
- `path`: The path to the `manifest.json` file in S3.
- `type`: The type of storage where the boot image resides.
- `configuration` (under `cfs`): The name of the Configuration Framework Service (CFS) configuration to apply.
- `enable_cfs`: Indicates whether or not CFS should be invoked.
- `tenant`: Indicates which tenant owns this session template.
- `boot_sets`: Discussed in the following section.

A boot set in a BOS session template contains information on the boot artifacts and kernel parameters that nodes should boot with, as well as information on the nodes the boot set should apply to. Optionally, configuration information can also be overridden on a per boot set basis.
Every BOS session template is required to include at least one boot set entry. As the example in the Session template structure section shows, it is legal to have multiple boot set entries in a single session template; however, many session templates only have a single boot set.
Boot artifacts allow a node to boot. They consist of a kernel, an initrd, and a root file system (rootfs). These three artifacts are
listed in a manifest.json file.
Boot sets specify a set of parameters that point to a manifest.json file stored in the
Simple Storage Service (S3).
This file is created by the Image Management Service (IMS)
and contains links to all of the boot artifacts.
The following S3 parameters are used to specify this file:
- `type`: The type of storage; set to `s3`.
- `path`: The path to the `manifest.json` file in S3. The path follows the `s3://<BUCKET_NAME>/<KEY_NAME>` format, where:
  - `<BUCKET_NAME>` is set to `boot-images`.
  - `<KEY_NAME>` is set to the image ID that the Image Management Service (IMS) created when it generated the boot artifacts.
- `etag`: This entity tag helps identify the version of the `manifest.json` file. Its value can be an empty string, but cannot be left blank. However, the `etag` line can be omitted entirely.

This boot artifact information from the files stored in S3 is then written to the Boot Script Service (BSS), where it is retrieved when these nodes boot.
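As an illustration of the `s3://<BUCKET_NAME>/<KEY_NAME>` format, the path can be split into its bucket and key parts with a small helper. This is a hypothetical sketch for clarity only; it is not part of BOS:

```python
def split_s3_path(path):
    """Split an s3://<BUCKET_NAME>/<KEY_NAME> path into (bucket, key)."""
    prefix = "s3://"
    if not path.startswith(prefix):
        raise ValueError(f"not an S3 path: {path}")
    # Everything up to the first "/" is the bucket; the rest is the key.
    bucket, _, key = path[len(prefix):].partition("/")
    return bucket, key

bucket, key = split_s3_path(
    "s3://boot-images/e06530f1-fde2-4ca5-9148-7e84f4857d17/manifest.json")
# bucket -> "boot-images"
# key    -> "e06530f1-fde2-4ca5-9148-7e84f4857d17/manifest.json"
```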
Also see the Architecture section for information on how these fields relate to the boot artifacts.
Each boot set also specifies a set of nodes that are the targets of the boot set.
There are three different fields used to specify the nodes: node_list, node_groups, and node_roles_groups.
These are called the hardware-specifier fields of the boot set.
The total set of nodes targeted by the boot set is the union of the nodes specified by these fields.
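Conceptually, the target set is a simple union of the three hardware-specifier fields. The sketch below is illustrative only; the `groups` and `roles` mappings are hypothetical stand-ins for the lookups BOS performs against HSM:

```python
def boot_set_targets(boot_set, groups, roles):
    """Union the three hardware-specifier fields of a boot set.

    `groups` maps HSM group label -> xnames; `roles` maps HSM
    role -> xnames. Both are hypothetical stand-ins for HSM queries.
    """
    targets = set(boot_set.get("node_list", []))
    for group in boot_set.get("node_groups", []):
        targets |= set(groups.get(group, []))
    for role in boot_set.get("node_roles_groups", []):
        targets |= set(roles.get(role, []))
    return targets

groups = {"green": ["x3000c0s19b1n0"]}
roles = {"Compute": ["x3000c0s19b1n1", "x3000c0s19b2n0"]}
bs = {"node_list": ["x3000c0s19b1n0"], "node_groups": ["green"],
      "node_roles_groups": ["Compute"]}
# boot_set_targets(bs, groups, roles) ->
#   {"x3000c0s19b1n0", "x3000c0s19b1n1", "x3000c0s19b2n0"}
```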
Related:
- See the Architecture section for information on how that field relates to specifying nodes.
- See Session templates with tenancy for details on how specifying nodes behaves in a multi-tenancy environment.
node_list maps to a list of nodes identified by component names (xnames).
For example:
"node_list": ["x3000c0s19b1n0", "x3000c0s19b1n1", "x3000c0s19b2n0"]
NIDs are not supported.
The reject_nids option can be enabled in order to prevent accidental creation of session templates that reference NIDs.
If the session template belongs to a tenant, any nodes listed in this field should belong to that tenant in TAPMS. For more information, see Session templates with tenancy.
node_groups maps to a list of component groups defined by the
Hardware State Manager (HSM).
Each group may contain zero or more nodes. Groups can be arbitrarily defined by users.
For example:
"node_groups": ["green", "white", "pink"]
(ncn-mw#) To retrieve the current list of HSM groups, run the following command:
cray hsm groups list --format json | jq .[].label
For more information on HSM groups, see Manage Component Groups.
node_roles_groups is a list of HSM roles and sub-roles.
Each node’s role and sub-role is specified in the HSM database.
An entry in this list may be just a role (for example, Compute)
or it may be a role and sub-role joined by an underscore character (for example, Application_UAN).
For example:
"node_roles_groups": ["Compute"]
Consult the cray-hms-base-config Kubernetes ConfigMap in the services namespace for a listing of the available roles and sub-roles on the system.
See HSM Roles and Subroles for more information.
The arch field is the only boot set field which plays a role in both the boot artifacts
and specifying nodes. It specifies the hardware architecture both of the target nodes
and of the boot artifacts. Supported values are X86 and ARM.
When a boot set is validated, it will contact IMS to make sure that the boot image being used has an architecture matching
what is specified in the boot set. Boot set validation happens when creating a session template, validating a session template, or
creating a session. In cases where BOS is unable to perform this validation, the behavior of BOS is controlled by
the ims_errors_fatal option and ims_images_must_exist option.
Unlike the fields discussed in the Specifying nodes section, the arch field is not used to specify additional
nodes. Instead, it acts as a filter, removing any specified nodes that do not have a matching architecture.
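The filtering behavior can be sketched as follows. The `node_arch` mapping is a hypothetical stand-in for the architecture information BOS obtains from the system, and the `X86` default is an assumption for illustration:

```python
def filter_by_arch(targets, boot_set, node_arch):
    """Remove target nodes whose architecture does not match the boot set.

    `node_arch` maps xname -> "X86" or "ARM"; a hypothetical stand-in
    for the node inventory BOS queries.
    """
    arch = boot_set.get("arch", "X86")  # assumed default for illustration
    return {node for node in targets if node_arch.get(node) == arch}

node_arch = {"x3000c0s19b1n0": "X86", "x3000c0s19b1n1": "ARM"}
targets = {"x3000c0s19b1n0", "x3000c0s19b1n1"}
# filter_by_arch(targets, {"arch": "X86"}, node_arch) -> {"x3000c0s19b1n0"}
```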
The rootfs is the root file system.
rootfs_provider identifies the mechanism that provides the root file system for the node.
In the case of the User Services Software (USS) image, the rootfs_provider is HPE’s
iSCSI SBPS (Scalable Boot Content Projection Service).
SBPS projects the root file system onto the nodes as a SquashFS image. This is provided via an overlay file system which is set up in dracut.
rootfs_provider_passthrough is a string that is passed through to the provider of the rootfs. This string can contain additional information that the provider will act upon.
Both the rootfs_provider and rootfs_provider_passthrough parameters are used to construct the value of the kernel boot parameter root that BOS sends to the node.
BOS constructs the kernel boot parameter root per the following syntax.
root=<Protocol>:<Root FS location>:<Etag>:<RootFS-provider-passthrough parameters>
BOS fills in the protocol based on the value provided in rootfs_provider. If BOS does not know the rootfs_provider, then it omits the protocol field.
BOS finds the root file system location and etag values in the manifest file referenced by the boot set of the session template.
The rootfs_provider_passthrough parameters are appended to the root parameter without modification. They are “passed through”, as the name implies.
Currently, the only rootfs provider that BOS recognizes is sbps.
For more information on sbps, see Create a Session Template to Boot Compute Nodes with SBPS.
The following is an example `root` kernel parameter:

root=sbps-s3:s3://boot-images/4fab0408-0bfe-4668-b957-964f8ff0e4e9/rootfs:b6ea7a2314d54dead0c94223863b3488-1977:sbps:v1:iqn.2023-06.csm.iscsi:_sbps-hsn._tcp.my-system.my-site-domain:300
The following table explains the different pieces in the preceding example.
| Field | Example Value | Explanation |
|---|---|---|
| Protocol | `sbps-s3` | The protocol used to mount the root file system, using SBPS in this example. |
| Root FS location | `s3://boot-images/4fab0408-0bfe-4668-b957-964f8ff0e4e9/rootfs` | The Root FS location is a SquashFS image stored in S3. |
| Etag | `b6ea7a2314d54dead0c94223863b3488-1977` | The Etag (entity tag) is the identifier of the SquashFS image in S3. |
| `rootfs_provider_passthrough` parameters | `sbps:v1:iqn.2023-06.csm.iscsi:_sbps-hsn._tcp.my-system.my-site-domain:300` | These are additional parameters passed through to SBPS in this example. |
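The construction of the `root` parameter described above can be sketched as follows. This is a simplified illustration of the `root=<Protocol>:<Root FS location>:<Etag>:<passthrough>` syntax, not BOS's actual implementation:

```python
def build_root_param(protocol, rootfs_location, etag, passthrough):
    """Join the pieces of the root kernel parameter:
    root=<Protocol>:<Root FS location>:<Etag>:<passthrough>."""
    return f"root={protocol}:{rootfs_location}:{etag}:{passthrough}"

root = build_root_param(
    "sbps-s3",
    "s3://boot-images/4fab0408-0bfe-4668-b957-964f8ff0e4e9/rootfs",
    "b6ea7a2314d54dead0c94223863b3488-1977",
    "sbps:v1:iqn.2023-06.csm.iscsi:_sbps-hsn._tcp.my-system.my-site-domain:300",
)
```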
It is also possible to specify CFS configuration in the boot set. This is done by setting the cfs field inside the boot set.
It follows the same format as the cfs field at the top level of the session template.
If specified, this will override (for that boot set entry) whatever value is set in the base session template.
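The override behavior amounts to preferring the boot set's `cfs` value when one is present. The sketch below is illustrative only, and the `arm-configuration` name is a hypothetical example:

```python
def effective_configuration(template, boot_set_name):
    """Return the CFS configuration name in effect for a boot set,
    preferring a per-boot-set `cfs` entry over the template-level one."""
    boot_set = template["boot_sets"][boot_set_name]
    cfs = boot_set.get("cfs") or template.get("cfs", {})
    return cfs.get("configuration")

template = {
    "cfs": {"configuration": "example-configuration"},
    "boot_sets": {
        "x86_boot_set": {},
        "arm_boot_set": {"cfs": {"configuration": "arm-configuration"}},
    },
}
# effective_configuration(template, "x86_boot_set") -> "example-configuration"
# effective_configuration(template, "arm_boot_set") -> "arm-configuration"
```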
Boot set validation is performed by the BOS API server in the following operations:

- Creating a session template
- Validating a session template
- Creating a session

If a boot set fails validation, then the associated operation also fails. The following things are checked when a boot set is validated:

- The `rootfs_provider` field is either unset or is set to a supported value.
- If the `ims_images_must_exist` option is enabled, verify that the boot image exists in IMS. If the `ims_errors_fatal` option is also enabled, then the boot set validation will fail if BOS is unable to contact IMS to perform this check.
- The architecture of the boot image matches the `arch` field of the boot set. If the `ims_errors_fatal` option is enabled, then the boot set validation will fail if BOS is unable to contact IMS to perform this check.
- If the `reject_nids` option is enabled, verify that the `node_list` field is either unset or does not contain any NIDs.
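A highly simplified sketch of two of the checks above follows. The NID detection shown is a guess based on the `nid` name prefix convention, and the supported-provider set is taken from the text; BOS's actual validation logic may differ:

```python
def validate_boot_set(boot_set, reject_nids=False):
    """Return a list of validation error strings (empty if valid).

    Illustrative only: checks the rootfs_provider value and,
    when reject_nids is enabled, scans node_list for NIDs.
    """
    errors = []
    provider = boot_set.get("rootfs_provider")
    # Per the text, "sbps" is currently the only recognized provider.
    if provider is not None and provider not in {"sbps"}:
        errors.append(f"unsupported rootfs_provider: {provider}")
    if reject_nids:
        for node in boot_set.get("node_list", []):
            if node.startswith("nid"):  # e.g. "nid000001"; assumed convention
                errors.append(f"NID found in node_list: {node}")
    return errors
```

For example, a boot set with `"rootfs_provider": "sbps"` passes, while a `node_list` containing `nid000001` fails when `reject_nids` is enabled.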