**IMPORTANT:** This document describes how to bring OSDs that are unintentionally unmanaged back under the control of the Ceph orchestrator. DO NOT use this procedure for OSDs that are intentionally unmanaged.
(`ncn-s#`) Check for unmanaged OSDs.

```bash
ceph orch ls | awk 'NR==1 || /osd/'
```
Example output (the following example shows eight unmanaged OSDs with the `osd` service name):

```text
NAME                       PORTS  RUNNING  REFRESHED  AGE  PLACEMENT
osd                               8        5m ago     -    <unmanaged>
osd.all-available-devices         16       5m ago     6d   *
```
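As an aside, `ceph orch ls` also accepts a service type argument, which avoids the `awk` filter; the output columns are the same as above:

```bash
# Restrict the listing to OSD services only; equivalent to the awk filter above.
ceph orch ls osd
```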
In addition, the following command shows that the `osd` service has `unmanaged` set to `true`:

```bash
ceph orch ls --service_name osd --export
```
Example output:

```yaml
service_type: osd
service_name: osd
unmanaged: true  # <-----
spec:
  filter_logic: AND
  objectstore: bluestore
```
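The same flag can be read programmatically. A minimal sketch, assuming `jq` is installed (it is not part of Ceph) and that the JSON layout matches recent Ceph releases, where `unmanaged` appears at the top level of each service entry:

```bash
# Print the unmanaged flag for the osd service; prints "false" if the field is absent.
# The exact JSON field layout can differ between Ceph releases.
ceph orch ls --service_name osd --format json | jq '.[0].unmanaged // false'
```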
This procedure requires administrative privileges and assumes `osd` as the service name for the unintentionally unmanaged OSDs, as shown in the example output above. Perform the following steps on only one storage node.
1. (`ncn-s#`) Create a service specification YAML file with the following content. Replace the `service_name` field with the actual service name, but do not change the `service_type` field; it must remain `osd` regardless of the service name. A shell sketch for writing this file follows the specification.
    ```yaml
    service_type: osd
    service_name: osd
    unmanaged: false
    placement:
      host_pattern: '*'
    spec:
      data_devices:
        all: true
      filter_logic: AND
      objectstore: bluestore
    ```
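    A minimal sketch for writing this file from the shell; the filename `osd-spec.yaml` is an arbitrary choice for illustration, not one mandated by the procedure:

    ```bash
    # Write the service specification to a file (the name osd-spec.yaml is an assumption).
    cat > osd-spec.yaml <<'EOF'
    service_type: osd
    service_name: osd
    unmanaged: false
    placement:
      host_pattern: '*'
    spec:
      data_devices:
        all: true
      filter_logic: AND
      objectstore: bluestore
    EOF
    ```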
2. (`ncn-s#`) Apply the service specification defined in the YAML file above.

    ```bash
    ceph orch apply -i <path_to_yaml_file>
    ```
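    To preview what the orchestrator would change before committing, recent Ceph releases support a dry run of `ceph orch apply` (verify availability on the installed version):

    ```bash
    # Show the orchestrator's planned actions without applying the spec.
    # The filename osd-spec.yaml is an assumption carried over from the sketch above.
    ceph orch apply -i osd-spec.yaml --dry-run
    ```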
3. (`ncn-s#`) Verify that the `osd` service no longer has `unmanaged` set to `true`.

    ```bash
    ceph orch ls --service_name osd --export
    ```
    Example output (no `unmanaged` line):

    ```yaml
    service_type: osd
    service_name: osd
    placement:
      host_pattern: '*'
    spec:
      data_devices:
        all: true
      filter_logic: AND
      objectstore: bluestore
    ```
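    A scripted version of this check, intended for automation rather than an interactive shell; it greps the exported spec for a leftover `unmanaged: true` line:

    ```bash
    # Fail loudly if the exported osd spec still carries `unmanaged: true`.
    if ceph orch ls --service_name osd --export | grep -q '^unmanaged: true'; then
      echo "ERROR: osd service is still unmanaged" >&2
      exit 1
    fi
    echo "osd service is managed"
    ```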
4. (`ncn-s#`) Verify that there are no unmanaged OSDs for the `osd` service name.

    ```bash
    ceph orch ls | awk 'NR==1 || /osd/'
    ```
    Example output (the `osd` service no longer shows `<unmanaged>` placement):

    ```text
    NAME                       PORTS  RUNNING  REFRESHED  AGE  PLACEMENT
    osd                               8        10m ago    -    *    <--- no longer <unmanaged>
    osd.all-available-devices         16       10m ago    7d   *
    ```
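    As a further sanity check, the individual OSD daemons can be listed as well; `ceph orch ps` accepts a daemon type filter:

    ```bash
    # Confirm the OSD daemons themselves are reported as running by the orchestrator.
    ceph orch ps --daemon_type osd
    ```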