Deploying a Ceph cluster with cephadm

Introduction

Documentation:

access.redhat.com/documentati…

docs.ceph.org.cn/

Storage types Ceph can provide:

Block storage: presents storage that behaves like an ordinary hard disk, giving users a "virtual disk".

File system storage: an NFS-like shared-access model that gives users shared directories.

Object storage: similar to a cloud drive such as Baidu Netdisk; it requires a dedicated client.

Ceph is also a distributed storage system and is very flexible. To scale out, you simply add servers to the Ceph cluster. Ceph stores data as multiple replicas; in production an object should be kept in at least 3 copies, and Ceph defaults to 3-way replication.
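As a quick sanity check once the cluster is up, the default replica count and the size of an individual pool can be read back from the CLI; a minimal sketch (cephfs_data refers to a pool created later in this guide):

# Show the cluster-wide default replica count (expected to be 3)
ceph config get mon osd_pool_default_size
# Show the replica count of a specific pool, e.g. cephfs_data created later in this guide
ceph osd pool get cephfs_data size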

Ceph components

Ceph OSD daemon: Ceph OSDs store the data. In addition, a Ceph OSD uses the CPU, memory and network of its Ceph node to perform data replication, erasure coding, rebalancing, recovery, monitoring and reporting. A storage node runs one OSD daemon per disk that is used for storage.

Ceph Monitor (MON): the Ceph Monitor maintains the master copy of the Ceph storage cluster maps and the current state of the cluster. Monitors require strong consistency to ensure agreement on the state of the Ceph storage cluster. They maintain the various maps that describe cluster state, including the monitor map, OSD map, placement group (PG) map and CRUSH map.

MDS: the Ceph Metadata Server (MDS) stores metadata for the Ceph file system (CephFS).

RGW: the object storage gateway. It mainly provides the API endpoints through which client software accesses Ceph.

Installation

Configure IP addresses

# Configure IP addresses
ssh root@192.168.1.154 "nmcli con mod ens18 ipv4.addresses 192.168.1.25/24; nmcli con mod ens18 ipv4.gateway 192.168.1.1; nmcli con mod ens18 ipv4.method manual; nmcli con mod ens18 ipv4.dns '8.8.8.8'; nmcli con up ens18"
ssh root@192.168.1.179 "nmcli con mod ens18 ipv4.addresses 192.168.1.26/24; nmcli con mod ens18 ipv4.gateway 192.168.1.1; nmcli con mod ens18 ipv4.method manual; nmcli con mod ens18 ipv4.dns '8.8.8.8'; nmcli con up ens18"
ssh root@192.168.1.181 "nmcli con mod ens18 ipv4.addresses 192.168.1.27/24; nmcli con mod ens18 ipv4.gateway 192.168.1.1; nmcli con mod ens18 ipv4.method manual; nmcli con mod ens18 ipv4.dns '8.8.8.8'; nmcli con up ens18"
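To confirm that the static addresses took effect, a quick check like the following can be run on each node (interface name ens18 as above):

# Verify the IP configuration on each node
ip -4 addr show ens18
nmcli con show ens18 | grep ipv4
ping -c 3 192.168.1.1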

Configure the base environment

# Set the hostname (run the matching command on each node)
hostnamectl set-hostname ceph-1
hostnamectl set-hostname ceph-2
hostnamectl set-hostname ceph-3

# Update to the latest packages
yum update -y

# Disable SELinux
setenforce 0
sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/selinux/config

# Disable the firewall
systemctl disable --now firewalld

# Configure passwordless SSH
ssh-keygen -f /root/.ssh/id_rsa -P ''
ssh-copy-id -o StrictHostKeyChecking=no 192.168.1.25
ssh-copy-id -o StrictHostKeyChecking=no 192.168.1.26
ssh-copy-id -o StrictHostKeyChecking=no 192.168.1.27
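A small loop makes it easy to confirm that passwordless SSH now works from ceph-1 to every node; a sketch:

# Verify passwordless SSH to all three nodes
for ip in 192.168.1.25 192.168.1.26 192.168.1.27; do
  ssh root@$ip hostname
done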

# Check the disks
[root@ceph-1 ~]# lsblk
NAME        MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda           8:0    0  100G  0 disk 
├─sda1        8:1    0    1G  0 part /boot
└─sda2        8:2    0   99G  0 part 
  ├─cs-root 253:0    0 61.2G  0 lvm  /
  ├─cs-swap 253:1    0  7.9G  0 lvm  [SWAP]
  └─cs-home 253:2    0 29.9G  0 lvm  /home
sdb           8:16   0  100G  0 disk 
[root@ceph-1 ~]# 

# Configure /etc/hosts
cat > /etc/hosts <<EOF
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.25 ceph-1
192.168.1.26 ceph-2
192.168.1.27 ceph-3
EOF
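All three nodes need the same /etc/hosts content; assuming the file above was written on ceph-1, it can simply be copied to the other two nodes:

# Distribute /etc/hosts to the other nodes
scp /etc/hosts 192.168.1.26:/etc/hosts
scp /etc/hosts 192.168.1.27:/etc/hosts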

Install time synchronization and Docker

# Install the required packages
yum install epel* -y 
yum install -y ceph-mon ceph-osd ceph-mds ceph-radosgw

# NTP server (ceph-1)
yum install chrony -y
cat > /etc/chrony.conf << EOF 
pool ntp.aliyun.com iburst
driftfile /var/lib/chrony/drift
makestep 1.0 3
rtcsync
allow 192.168.1.0/24
local stratum 10
keyfile /etc/chrony.keys
leapsectz right/UTC
logdir /var/log/chrony
EOF
systemctl restart chronyd ; systemctl enable chronyd

# Clients (ceph-2 and ceph-3)
yum install chrony -y
cat > /etc/chrony.conf << EOF 
pool 192.168.1.25 iburst
driftfile /var/lib/chrony/drift
makestep 1.0 3
rtcsync
keyfile /etc/chrony.keys
leapsectz right/UTC
logdir /var/log/chrony
EOF
systemctl restart chronyd ; systemctl enable chronyd

# Verify on the clients
chronyc sources -v

# Install Docker
curl -sSL https://get.daocloud.io/docker | sh
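The install script does not necessarily enable Docker to start on boot, so it is worth enabling and verifying it explicitly on every node; a sketch:

# Make sure Docker is running and starts on boot
systemctl enable --now docker
docker version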

Install the cluster


# Install the cluster
yum install -y python3
# Download the cephadm tool
curl --silent --remote-name --location https://mirrors.chenby.cn/https://github.com/ceph/ceph/raw/quincy/src/cephadm/cephadm

# Create the repository configuration
./cephadm add-repo --release 17.2.5
sed -i 's#download.ceph.com#mirrors.ustc.edu.cn/ceph#' /etc/yum.repos.d/ceph.repo 
./cephadm install
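Before bootstrapping, it is worth confirming that cephadm landed in the PATH and reports the expected release; for example:

# Verify the cephadm installation
which cephadm
cephadm version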

# Bootstrap a new cluster
[root@ceph-1 ~]# cephadm bootstrap --mon-ip 192.168.1.25
Verifying podman|docker is present...
Verifying lvm2 is present...
Verifying time synchronization is in place...
Unit chronyd.service is enabled and running
Repeating the final host check...
docker (/usr/bin/docker) is present
systemctl is present
lvcreate is present
Unit chronyd.service is enabled and running
Host looks OK
Cluster fsid: 976e04fe-9315-11ed-a275-e29e49e9189c
Verifying IP 192.168.1.25 port 3300 ...
Verifying IP 192.168.1.25 port 6789 ...
Mon IP `192.168.1.25` is in CIDR network `192.168.1.0/24`
Mon IP `192.168.1.25` is in CIDR network `192.168.1.0/24`
Internal network (--cluster-network) has not been provided, OSD replication will default to the public_network
Pulling container image quay.io/ceph/ceph:v17...
Ceph version: ceph version 17.2.5 (98318ae89f1a893a6ded3a640405cdbb33e08757) quincy (stable)
Extracting ceph user uid/gid from container image...
Creating initial keys...
Creating initial monmap...
Creating mon...
Waiting for mon to start...
Waiting for mon...
mon is available
Assimilating anything we can from ceph.conf...
Generating new minimal ceph.conf...
Restarting the monitor...
Setting mon public_network to 192.168.1.0/24
Wrote config to /etc/ceph/ceph.conf
Wrote keyring to /etc/ceph/ceph.client.admin.keyring
Creating mgr...
Verifying port 9283 ...
Waiting for mgr to start...
Waiting for mgr...
mgr not available, waiting (1/15)...
mgr not available, waiting (2/15)...
mgr not available, waiting (3/15)...
mgr not available, waiting (4/15)...
mgr is available
Enabling cephadm module...
Waiting for the mgr to restart...
Waiting for mgr epoch 4...
mgr epoch 4 is available
Setting orchestrator backend to cephadm...
Generating ssh key...
Wrote public SSH key to /etc/ceph/ceph.pub
Adding key to root@localhost authorized_keys...
Adding host ceph-1...
Deploying mon service with default placement...
Deploying mgr service with default placement...
Deploying crash service with default placement...
Deploying prometheus service with default placement...
Deploying grafana service with default placement...
Deploying node-exporter service with default placement...
Deploying alertmanager service with default placement...
Enabling the dashboard module...
Waiting for the mgr to restart...
Waiting for mgr epoch 8...
mgr epoch 8 is available
Generating a dashboard self-signed certificate...
Creating initial admin user...
Fetching dashboard port number...
Ceph Dashboard is now available at:
             URL: https://ceph-1:8443/
            User: admin
        Password: dsvi6yiat7
Enabling client.admin keyring and conf on hosts with "admin" label
Saving cluster configuration to /var/lib/ceph/976e04fe-9315-11ed-a275-e29e49e9189c/config directory
Enabling autotune for osd_memory_target
You can access the Ceph CLI as following in case of multi-cluster or non-default config:
        sudo /usr/sbin/cephadm shell --fsid 976e04fe-9315-11ed-a275-e29e49e9189c -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring
Or, if you are only running a single cluster on this host:
        sudo /usr/sbin/cephadm shell 
Please consider enabling telemetry to help improve Ceph:
        ceph telemetry on
For more information see:
        https://docs.ceph.com/docs/master/mgr/telemetry/
Bootstrap complete.
[root@ceph-1 ~]# 

Check the containers


[root@ceph-1 ~]# docker images
REPOSITORY                         TAG       IMAGE ID       CREATED         SIZE
quay.io/ceph/ceph                  v17       cc65afd6173a   2 months ago    1.36GB
quay.io/ceph/ceph-grafana          8.3.5     dad864ee21e9   9 months ago    558MB
quay.io/prometheus/prometheus      v2.33.4   514e6a882f6e   10 months ago   204MB
quay.io/prometheus/node-exporter   v1.3.1    1dbe0e931976   13 months ago   20.9MB
quay.io/prometheus/alertmanager    v0.23.0   ba2b418f427c   16 months ago   57.5MB
[root@ceph-1 ~]#
[root@ceph-1 ~]# docker ps
CONTAINER ID   IMAGE                                     COMMAND                  CREATED              STATUS              PORTS     NAMES
41a980ad57b6   quay.io/ceph/ceph-grafana:8.3.5           "/bin/sh -c 'grafana…"   32 seconds ago       Up 31 seconds                 ceph-976e04fe-9315-11ed-a275-e29e49e9189c-grafana-ceph-1
c1d92377e2f2   quay.io/prometheus/alertmanager:v0.23.0   "/bin/alertmanager -…"   33 seconds ago       Up 32 seconds                 ceph-976e04fe-9315-11ed-a275-e29e49e9189c-alertmanager-ceph-1
9262faff37be   quay.io/prometheus/prometheus:v2.33.4     "/bin/prometheus --c…"   42 seconds ago       Up 41 seconds                 ceph-976e04fe-9315-11ed-a275-e29e49e9189c-prometheus-ceph-1
2601411f95a6   quay.io/prometheus/node-exporter:v1.3.1   "/bin/node_exporter …"   About a minute ago   Up About a minute             ceph-976e04fe-9315-11ed-a275-e29e49e9189c-node-exporter-ceph-1
a6ca018a7620   quay.io/ceph/ceph                         "/usr/bin/ceph-crash…"   2 minutes ago        Up 2 minutes                  ceph-976e04fe-9315-11ed-a275-e29e49e9189c-crash-ceph-1
f9e9de110612   quay.io/ceph/ceph:v17                     "/usr/bin/ceph-mgr -…"   3 minutes ago        Up 3 minutes                  ceph-976e04fe-9315-11ed-a275-e29e49e9189c-mgr-ceph-1-svfnsm
cac707c88b83   quay.io/ceph/ceph:v17                     "/usr/bin/ceph-mon -…"   3 minutes ago        Up 3 minutes                  ceph-976e04fe-9315-11ed-a275-e29e49e9189c-mon-ceph-1
[root@ceph-1 ~]# 

Using the shell command


[root@ceph-1 ~]# cephadm shell   # enter the cephadm shell
Inferring fsid 976e04fe-9315-11ed-a275-e29e49e9189c
Inferring config /var/lib/ceph/976e04fe-9315-11ed-a275-e29e49e9189c/mon.ceph-1/config
Using ceph image with id 'cc65afd6173a' and tag 'v17' created on 2022-10-18 07:41:41 +0800 CST
quay.io/ceph/ceph@sha256:0560b16bec6e84345f29fb6693cd2430884e6efff16a95d5bdd0bb06d7661c45
[ceph: root@ceph-1 /]# 
[ceph: root@ceph-1 /]# 
[ceph: root@ceph-1 /]# 
[ceph: root@ceph-1 /]# ceph -s  
  cluster:
    id:     976e04fe-9315-11ed-a275-e29e49e9189c
    health: HEALTH_WARN
            OSD count 0 < osd_pool_default_size 3
  services:
    mon: 1 daemons, quorum ceph-1 (age 4m)
    mgr: ceph-1.svfnsm(active, since 2m)
    osd: 0 osds: 0 up, 0 in
  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:     
[ceph: root@ceph-1 /]# 
[ceph: root@ceph-1 /]# ceph orch ps  # list the daemons currently running in the cluster (including other nodes)
NAME                  HOST    PORTS        STATUS        REFRESHED  AGE  MEM USE  MEM LIM  VERSION  IMAGE ID      CONTAINER ID  
alertmanager.ceph-1   ceph-1  *:9093,9094  running (2m)     2m ago   4m    15.1M        -           ba2b418f427c  c1d92377e2f2  
crash.ceph-1          ceph-1               running (4m)     2m ago   4m    6676k        -  17.2.5   cc65afd6173a  a6ca018a7620  
grafana.ceph-1        ceph-1  *:3000       running (2m)     2m ago   3m    39.1M        -  8.3.5    dad864ee21e9  41a980ad57b6  
mgr.ceph-1.svfnsm     ceph-1  *:9283       running (5m)     2m ago   5m     426M        -  17.2.5   cc65afd6173a  f9e9de110612  
mon.ceph-1            ceph-1               running (5m)     2m ago   5m    29.0M    2048M  17.2.5   cc65afd6173a  cac707c88b83  
node-exporter.ceph-1  ceph-1  *:9100       running (3m)     2m ago   3m    13.2M        -           1dbe0e931976  2601411f95a6  
prometheus.ceph-1     ceph-1  *:9095       running (3m)     2m ago   3m    34.4M        -           514e6a882f6e  9262faff37be  
[ceph: root@ceph-1 /]# 
[ceph: root@ceph-1 /]# 
[ceph: root@ceph-1 /]# ceph orch ps --daemon-type mon  # check the status of a specific daemon type
NAME        HOST    PORTS  STATUS        REFRESHED  AGE  MEM USE  MEM LIM  VERSION  IMAGE ID      CONTAINER ID  
mon.ceph-1  ceph-1         running (5m)     2m ago   5m    29.0M    2048M  17.2.5   cc65afd6173a  cac707c88b83  
[ceph: root@ceph-1 /]# 
[ceph: root@ceph-1 /]# exit   # leave the cephadm shell
exit
[root@ceph-1 ~]#

# A second way to run ceph commands
[root@ceph-1 ~]# cephadm shell -- ceph -s
Inferring fsid 976e04fe-9315-11ed-a275-e29e49e9189c
Inferring config /var/lib/ceph/976e04fe-9315-11ed-a275-e29e49e9189c/mon.ceph-1/config
Using ceph image with id 'cc65afd6173a' and tag 'v17' created on 2022-10-18 07:41:41 +0800 CST
quay.io/ceph/ceph@sha256:0560b16bec6e84345f29fb6693cd2430884e6efff16a95d5bdd0bb06d7661c45
  cluster:
    id:     976e04fe-9315-11ed-a275-e29e49e9189c
    health: HEALTH_WARN
            OSD count 0 < osd_pool_default_size 3
  services:
    mon: 1 daemons, quorum ceph-1 (age 6m)
    mgr: ceph-1.svfnsm(active, since 4m)
    osd: 0 osds: 0 up, 0 in
  data:
    pools:   0 pools, 0 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:     
[root@ceph-1 ~]# 

Install the ceph-common package

# Install the ceph-common package
[root@ceph-1 ~]# cephadm install ceph-common
Installing packages ['ceph-common']...
[root@ceph-1 ~]# 
[root@ceph-1 ~]# ceph -v 
ceph version 17.2.5 (98318ae89f1a893a6ded3a640405cdbb33e08757) quincy (stable)
[root@ceph-1 ~]#

# Distribute the cluster's public SSH key to the other nodes
ssh-copy-id -f -i /etc/ceph/ceph.pub root@ceph-2
ssh-copy-id -f -i /etc/ceph/ceph.pub root@ceph-3

Create MONs and MGRs


# Create MONs and MGRs: first add the other hosts to the cluster
ceph orch host add ceph-2
ceph orch host add ceph-3

# List the hosts currently managed by the cluster
[root@ceph-1 ~]# ceph orch host ls 
HOST    ADDR          LABELS  STATUS  
ceph-1  192.168.1.25  _admin          
ceph-2  192.168.1.26                  
ceph-3  192.168.1.27                  
3 hosts in cluster
[root@ceph-1 ~]#

# By default a Ceph cluster deploys up to 5 MONs and 2 MGRs; this can be adjusted manually with ceph orch apply mon --placement="3 node1 node2 node3"
[root@ceph-1 ~]# ceph orch apply mon --placement="3 ceph-1 ceph-2 ceph-3"
Scheduled mon update...
[root@ceph-1 ~]# 
[root@ceph-1 ~]# ceph orch apply mgr --placement="3 ceph-1 ceph-2 ceph-3"
Scheduled mgr update...
[root@ceph-1 ~]# 
[root@ceph-1 ~]# ceph orch ls 
NAME           PORTS        RUNNING  REFRESHED  AGE   PLACEMENT                     
alertmanager   ?:9093,9094      1/1  30s ago    17m   count:1                       
crash                           3/3  4m ago     17m   *                             
grafana        ?:3000           1/1  30s ago    17m   count:1                       
mgr                             3/3  4m ago     46s   ceph-1;ceph-2;ceph-3;count:3  
mon                             3/3  4m ago     118s  ceph-1;ceph-2;ceph-3;count:3  
node-exporter  ?:9100           3/3  4m ago     17m   *                             
prometheus     ?:9095           1/1  30s ago    17m   count:1                       
[root@ceph-1 ~]# 

Create OSDs

# Create OSDs
[root@ceph-1 ~]# ceph orch daemon add osd ceph-1:/dev/sdb
Created osd(s) 0 on host 'ceph-1'
[root@ceph-1 ~]# ceph orch daemon add osd ceph-2:/dev/sdb
Created osd(s) 1 on host 'ceph-2'
[root@ceph-1 ~]# ceph orch daemon add osd ceph-3:/dev/sdb
Created osd(s) 2 on host 'ceph-3'
[root@ceph-1 ~]# 
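Adding each disk by hand keeps things explicit; alternatively, cephadm can consume every eligible unused device automatically. Either way, the resulting layout can be verified afterwards. A sketch (the apply command is commented out because the disks above were already added individually):

# Alternative: let cephadm create OSDs on all available, unused devices
# ceph orch apply osd --all-available-devices

# Verify the OSDs and remaining devices
ceph orch device ls
ceph osd tree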

Create the MDS

# Create the MDS

# First create the CephFS pools; if the PG count is not specified, it is auto-scaled by default
[root@ceph-1 ~]# ceph osd pool create cephfs_data
pool 'cephfs_data' created
[root@ceph-1 ~]# ceph osd pool create cephfs_metadata
pool 'cephfs_metadata' created
[root@ceph-1 ~]# ceph fs new cephfs cephfs_metadata cephfs_data
new fs with metadata pool 3 and data pool 2
[root@ceph-1 ~]# 

# Enable the MDS component; cephfs: the file system name; --placement: how many MDS daemons to run in the cluster, followed by the host names
[root@ceph-1 ~]# ceph orch apply mds cephfs --placement="3 ceph-1 ceph-2 ceph-3"
Scheduled mds.cephfs update...
[root@ceph-1 ~]# 

# Check whether the MDS containers have started on each node; ceph orch ps can also be used to check the containers running on a given node
[root@ceph-1 ~]# ceph orch ps --daemon-type mds
NAME                      HOST    PORTS  STATUS         REFRESHED  AGE  MEM USE  MEM LIM  VERSION  IMAGE ID      CONTAINER ID  
mds.cephfs.ceph-1.zgcgrw  ceph-1         running (52s)    44s ago  52s    17.0M        -  17.2.5   cc65afd6173a  aba28ef97b9a  
mds.cephfs.ceph-2.vvpuyk  ceph-2         running (51s)    45s ago  51s    14.1M        -  17.2.5   cc65afd6173a  940a019d4c75  
mds.cephfs.ceph-3.afnozf  ceph-3         running (54s)    45s ago  54s    14.2M        -  17.2.5   cc65afd6173a  bd17d6414aa9  
[root@ceph-1 ~]# 
[root@ceph-1 ~]# 
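With the MDS daemons running, CephFS can be mounted from any node that has /etc/ceph populated (ceph-2 and ceph-3 get theirs later in this guide); a minimal kernel-client sketch, with /mnt/cephfs as an example mount point:

# Mount CephFS with the kernel client (example mount point)
mkdir -p /mnt/cephfs
mount -t ceph 192.168.1.25:6789:/ /mnt/cephfs -o name=admin,secret=$(ceph auth get-key client.admin)
df -h /mnt/cephfs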

Create the RGW


# Create the RGW

# First create a realm
[root@ceph-1 ~]# radosgw-admin realm create --rgw-realm=myorg --default
{
    "id": "a6607d08-ac44-45f0-95b0-5435acddfba2",
    "name": "myorg",
    "current_period": "16769237-0ed5-4fad-8822-abc444292d0b",
    "epoch": 1
}
[root@ceph-1 ~]# 

# Create a zonegroup
[root@ceph-1 ~]# radosgw-admin zonegroup create --rgw-zonegroup=default --master --default
{
    "id": "4d978fe1-b158-4b3a-93f7-87fbb31f6e7a",
    "name": "default",
    "api_name": "default",
    "is_master": "true",
    "endpoints": [],
    "hostnames": [],
    "hostnames_s3website": [],
    "master_zone": "",
    "zones": [],
    "placement_targets": [],
    "default_placement": "",
    "realm_id": "a6607d08-ac44-45f0-95b0-5435acddfba2",
    "sync_policy": {
        "groups": []
    }
}
[root@ceph-1 ~]# 

# Create a zone
[root@ceph-1 ~]# radosgw-admin zone create --rgw-zonegroup=default --rgw-zone=cn-east-1 --master --default
{
    "id": "5ac7f118-a69c-4dec-b174-f8432e7115b7",
    "name": "cn-east-1",
    "domain_root": "cn-east-1.rgw.meta:root",
    "control_pool": "cn-east-1.rgw.control",
    "gc_pool": "cn-east-1.rgw.log:gc",
    "lc_pool": "cn-east-1.rgw.log:lc",
    "log_pool": "cn-east-1.rgw.log",
    "intent_log_pool": "cn-east-1.rgw.log:intent",
    "usage_log_pool": "cn-east-1.rgw.log:usage",
    "roles_pool": "cn-east-1.rgw.meta:roles",
    "reshard_pool": "cn-east-1.rgw.log:reshard",
    "user_keys_pool": "cn-east-1.rgw.meta:users.keys",
    "user_email_pool": "cn-east-1.rgw.meta:users.email",
    "user_swift_pool": "cn-east-1.rgw.meta:users.swift",
    "user_uid_pool": "cn-east-1.rgw.meta:users.uid",
    "otp_pool": "cn-east-1.rgw.otp",
    "system_key": {
        "access_key": "",
        "secret_key": ""
    },
    "placement_pools": [
        {
            "key": "default-placement",
            "val": {
                "index_pool": "cn-east-1.rgw.buckets.index",
                "storage_classes": {
                    "STANDARD": {
                        "data_pool": "cn-east-1.rgw.buckets.data"
                    }
                },
                "data_extra_pool": "cn-east-1.rgw.buckets.non-ec",
                "index_type": 0
            }
        }
    ],
    "realm_id": "a6607d08-ac44-45f0-95b0-5435acddfba2",
    "notif_pool": "cn-east-1.rgw.log:notif"
}
[root@ceph-1 ~]# 

# Deploy radosgw daemons for the specific realm and zone
[root@ceph-1 ~]# ceph orch apply rgw myorg cn-east-1 --placement="3 ceph-1 ceph-2 ceph-3"
Scheduled rgw.myorg update...
[root@ceph-1 ~]# 

# Verify that the RGW containers have started on each node
[root@ceph-1 ~]#  ceph orch ps --daemon-type rgw
NAME                     HOST    PORTS  STATUS         REFRESHED  AGE  MEM USE  MEM LIM  VERSION  IMAGE ID      CONTAINER ID  
rgw.myorg.ceph-1.tzzauo  ceph-1  *:80   running (60s)    50s ago  60s    18.6M        -  17.2.5   cc65afd6173a  2ce31e5c9d35  
rgw.myorg.ceph-2.zxwpfj  ceph-2  *:80   running (61s)    51s ago  61s    20.0M        -  17.2.5   cc65afd6173a  a334e346ae5c  
rgw.myorg.ceph-3.bvsydw  ceph-3  *:80   running (58s)    51s ago  58s    18.6M        -  17.2.5   cc65afd6173a  97b09ba01821  
[root@ceph-1 ~]# 
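To actually use the gateway over S3, an RGW user is needed; the sketch below creates an example user (the uid and display name are placeholders) and checks that the gateway answers on port 80:

# Create an example S3 user; note the access_key/secret_key in the output
radosgw-admin user create --uid=testuser --display-name="Test User"

# The gateway should respond on port 80 on each node
curl http://ceph-1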

Install ceph-common on all nodes


# Install the ceph-common package on all nodes
scp /etc/yum.repos.d/ceph.repo ceph-2:/etc/yum.repos.d/    # sync the Ceph repo from the primary node to the other nodes
scp /etc/yum.repos.d/ceph.repo ceph-3:/etc/yum.repos.d/    # sync the Ceph repo from the primary node to the other nodes
yum -y install ceph-common    # run on each node; ceph-common provides the ceph command and creates the /etc/ceph directory
scp /etc/ceph/ceph.conf ceph-2:/etc/ceph/    # copy ceph.conf to the corresponding node
scp /etc/ceph/ceph.conf ceph-3:/etc/ceph/    # copy ceph.conf to the corresponding node
scp /etc/ceph/ceph.client.admin.keyring ceph-2:/etc/ceph/    # copy the admin keyring to the corresponding node
scp /etc/ceph/ceph.client.admin.keyring ceph-3:/etc/ceph/    # copy the admin keyring to the corresponding node
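As an alternative to copying ceph.conf and the admin keyring by hand, cephadm can manage this itself: hosts that carry the _admin label automatically receive /etc/ceph/ceph.conf and the client.admin keyring. A sketch:

# Alternative: distribute the admin config/keyring via the _admin label
ceph orch host label add ceph-2 _admin
ceph orch host label add ceph-3 _admin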

Test

# Test
[root@ceph-3 ~]# ceph -s
  cluster:
    id:     976e04fe-9315-11ed-a275-e29e49e9189c
    health: HEALTH_OK
  services:
    mon: 3 daemons, quorum ceph-1,ceph-2,ceph-3 (age 17m)
    mgr: ceph-1.svfnsm(active, since 27m), standbys: ceph-2.zuetkd, ceph-3.vntnlf
    mds: 1/1 daemons up, 2 standby
    osd: 3 osds: 3 up (since 8m), 3 in (since 8m)
    rgw: 3 daemons active (3 hosts, 1 zones)
  data:
    volumes: 1/1 healthy
    pools:   7 pools, 177 pgs
    objects: 226 objects, 585 KiB
    usage:   108 MiB used, 300 GiB / 300 GiB avail
    pgs:     177 active+clean
[root@ceph-3 ~]# 
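Two more quick checks that are useful at this point are overall capacity and detailed health information:

# Cluster-wide and per-pool capacity
ceph df
# Detailed health information whenever the status is not HEALTH_OK
ceph health detail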

Access the web interfaces

# Web access
https://192.168.1.25:8443
http://192.168.1.25:9095/
https://192.168.1.25:3000/
User: admin
Password: dsvi6yiat7
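Port 8443 is the Ceph dashboard, 9095 is Prometheus and 3000 is Grafana, matching the bootstrap output above. If the generated dashboard password is lost, it can be reset from the CLI; a sketch (the new password is only an example):

# Reset the dashboard admin password (read from a file)
echo 'NewPassword123' > /tmp/dashboard_pass
ceph dashboard ac-user-set-password admin -i /tmp/dashboard_pass
rm -f /tmp/dashboard_pass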

Common commands

ceph orch ls    # list the services running in the cluster
ceph orch host ls    # list the hosts in the cluster
ceph orch ps     # list detailed information about the containers in the cluster
ceph orch apply mon --placement="3 node1 node2 node3"    # adjust the number of daemons for a service
ceph orch ps --daemon-type rgw    # --daemon-type: specify which component to inspect
ceph orch host label add node1 mon    # add a label to a host
ceph orch apply mon label:mon    # tell cephadm to place MONs by label; afterwards only hosts with the mon label will run MONs, although MONs that are already running are not stopped immediately
ceph orch device ls    # list the storage devices in the cluster
# For example, to deploy a second monitor on newhost1 at IP address 10.1.2.123 and a third monitor on newhost2 in the 10.1.2.0/24 network:
ceph orch apply mon --unmanaged    # disable automatic MON placement
ceph orch daemon add mon newhost1:10.1.2.123
ceph orch daemon add mon newhost2:10.1.2.0/24

About

www.oiox.cn/

www.oiox.cn/index.php/s…

CSDN, GitHub, 51CTO, Zhihu, OSCHINA, SegmentFault, Juejin, Jianshu, Huawei Cloud, Alibaba Cloud, Tencent Cloud, Bilibili, Toutiao, Sina Weibo, and my personal blog

Search for "小陈运维" on any of these platforms.

Articles are published primarily on my WeChat official account.