
Ceph Installation and Deployment

Author: 河码匠 | Published 2021-12-05 14:36

The local test environment follows the official example: VMware virtual machines running Ubuntu 16.04.
One admin management node.
One monitor node.
Two OSD storage nodes, each with two disks: a system disk and a data disk.

The virtual machine setup is shown in the figure below:

[Figure: test environment]

I. Basic Configuration

1. Synchronize time on all nodes

I use NTP servers from the US pool here. Then enable the NTP service and set it to start at boot:

# apt-get install -y ntp ntpdate ntp-doc
# ntpdate 0.us.pool.ntp.org
# hwclock --systohc
# systemctl enable ntp
# systemctl start ntp
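
To confirm each node is actually syncing, you can query the daemon afterwards (a quick check of mine, not part of the original steps):

# ntpq -p        # lists upstream peers; a leading '*' marks the server currently selected
# timedatectl    # shows "NTP synchronized: yes" once the clock is in sync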

2. Create a Ceph user on all nodes

Note: do not name the user ceph; that name is reserved by Ceph itself.

# useradd -d /home/machao -m machao
# passwd machao

3. Grant the new user sudo privileges on all nodes

The new user needs root privileges without being prompted for a password:

# echo "machao ALL = (root) NOPASSWD:ALL" | sudo tee /etc/sudoers.d/machao
# chmod 0440 /etc/sudoers.d/machao
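
To verify the rule took effect (my own check): with -n, sudo fails instead of prompting, so the command below should print root without asking for a password.

# su - machao -c 'sudo -n whoami'    # expected output: root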

4. Edit the /etc/hosts file on all nodes

The reason for this change is that Ubuntu maps the hostname to the loopback address by default.
If the hostname only resolves to localhost, the installation fails with errors like unable to resolve host node2.
The extra entries on the node-admin node let ceph reach the other nodes by hostname.

node-admin node

127.0.0.1     node-admin
172.16.245.156 node-admin
172.16.245.157 node1
172.16.245.158 node2
172.16.245.159 node3

node1 node

127.0.0.1     node1

node2 node

127.0.0.1     node2

node3 node

127.0.0.1     node3

Ping test

machao@node-admin:~/.ssh$ ping node-admin
PING node-admin (172.16.245.156) 56(84) bytes of data.
64 bytes from node-admin (172.16.245.156): icmp_seq=1 ttl=64 time=0.012 ms
64 bytes from node-admin (172.16.245.156): icmp_seq=2 ttl=64 time=0.020 ms
^C
--- node-admin ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 999ms
rtt min/avg/max/mdev = 0.012/0.016/0.020/0.004 ms
machao@node-admin:~/.ssh$ ping node1
PING node1 (172.16.245.157) 56(84) bytes of data.
64 bytes from node1 (172.16.245.157): icmp_seq=1 ttl=64 time=0.531 ms
64 bytes from node1 (172.16.245.157): icmp_seq=2 ttl=64 time=0.366 ms
^C
--- node1 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 999ms
rtt min/avg/max/mdev = 0.366/0.448/0.531/0.085 ms
machao@node-admin:~/.ssh$ ping node2
PING node2 (172.16.245.158) 56(84) bytes of data.
64 bytes from node2 (172.16.245.158): icmp_seq=1 ttl=64 time=0.460 ms
^C
--- node2 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.460/0.460/0.460/0.000 ms
machao@node-admin:~/.ssh$ ping node3
PING node3 (172.16.245.159) 56(84) bytes of data.
64 bytes from node3 (172.16.245.159): icmp_seq=1 ttl=64 time=0.460 ms
64 bytes from node3 (172.16.245.159): icmp_seq=2 ttl=64 time=0.287 ms
^C
--- node3 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 999ms
rtt min/avg/max/mdev = 0.287/0.373/0.460/0.088 ms
machao@node-admin:~/.ssh$

5. Configure SSH on the node-admin node

Switch to the account created for Ceph:

# su machao

Generate an SSH key. Just press Enter at every prompt.

machao@node-admin:/root$ ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/home/machao/.ssh/id_rsa):
Created directory '/home/machao/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/machao/.ssh/id_rsa.
Your public key has been saved in /home/machao/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:4Q8BuxoA9w9YUMNGAjiPP14fLoF0p1e9TEJL8cfk/NA machao@node-admin
The key's randomart image is:
+---[RSA 2048]----+
|+.+== . ..  .    |
|oo =o. oo. = .   |
| +o.o .ooo. * E  |
|. o..o.o+oo. o   |
| o o.oo.S+ .  .  |
|  + +oo  oo      |
| . o.= .  .      |
|  . . o          |
|     .           |
+----[SHA256]-----+

6. Configure ~/.ssh/config on the node-admin node

This configuration is part of letting node-admin log in to the other nodes without a password:

Host node-admin
   Hostname node-admin
   User machao
Host node1
   Hostname node1
   User machao
Host node2
   Hostname node2
   User machao
Host node3
   Hostname node3
   User machao
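
Optionally (my addition, suitable only for a disposable lab), you can silence the first-connection host key prompt for all four hosts with one extra stanza; StrictHostKeyChecking is a standard ssh_config option:

Host node*
   StrictHostKeyChecking no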

7. Copy id_rsa.pub from node-admin to the other nodes

machao@node-admin:~/.ssh$ ssh-copy-id machao@node-admin
machao@node-admin:~/.ssh$ ssh-copy-id machao@node1
machao@node-admin:~/.ssh$ ssh-copy-id machao@node2
machao@node-admin:~/.ssh$ ssh-copy-id machao@node3

You will be prompted once for each node's password; just enter it.

8. Test the SSH connections from node-admin

machao@node-admin:~/.ssh$ ssh node1
machao@node1:~$ exit
logout

machao@node-admin:~/.ssh$ ssh node2
machao@node2:~$ exit
logout

machao@node-admin:~/.ssh$ ssh node3
machao@node3:~$ exit
logout

9. Install Python; Ubuntu 16.04 does not ship the python command by default, and it is needed when installing Ceph

I install it as the root user here:

root@node-admin:~$ apt-get install python python-pip

II. Install Ceph and Create the Cluster

1. Install ceph-deploy on the node-admin node

machao@node-admin:~/.ssh$ sudo apt-get install ceph-deploy

2. On node-admin, create a new cluster directory and enter it

machao@node-admin:~$ mkdir my_cluster
machao@node-admin:~$ cd my_cluster

3. On node-admin, use ceph-deploy to define node1 as the monitor node

machao@node-admin:~/my_cluster$ ceph-deploy new node1
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/machao/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.32): /usr/bin/ceph-deploy new node1
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f89ba148ea8>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  ssh_copykey                   : True
[ceph_deploy.cli][INFO  ]  mon                           : ['node1']
[ceph_deploy.cli][INFO  ]  func                          : <function new at 0x7f89b9c096e0>
[ceph_deploy.cli][INFO  ]  public_network                : None
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  cluster_network               : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  fsid                          : None
[ceph_deploy.new][DEBUG ] Creating new cluster named ceph
[ceph_deploy.new][INFO  ] making sure passwordless SSH succeeds
[node1][DEBUG ] connected to host: node-admin
[node1][INFO  ] Running command: ssh -CT -o BatchMode=yes node1
[node1][DEBUG ] connection detected need for sudo
[node1][DEBUG ] connected to host: node1
[node1][DEBUG ] detect platform information from remote host
[node1][DEBUG ] detect machine type
[node1][DEBUG ] find the location of an executable
[node1][INFO  ] Running command: sudo /bin/ip link show
[node1][INFO  ] Running command: sudo /bin/ip addr show
[node1][DEBUG ] IP addresses found: ['172.16.245.157']
[ceph_deploy.new][DEBUG ] Resolving host node1
[ceph_deploy.new][DEBUG ] Monitor node1 at 172.16.245.157
[ceph_deploy.new][DEBUG ] Monitor initial members are ['node1']
[ceph_deploy.new][DEBUG ] Monitor addrs are ['172.16.245.157']
[ceph_deploy.new][DEBUG ] Creating a random mon key...
[ceph_deploy.new][DEBUG ] Writing monitor keyring to ceph.mon.keyring...
[ceph_deploy.new][DEBUG ] Writing initial config to ceph.conf...

4. ceph-deploy new adds several files to the my_cluster directory. Now edit ceph.conf

The default pool size is 3 OSDs. Since this test environment uses only two OSDs, change it to 2:

osd_pool_default_size = 2

If a node has multiple NICs you must add public network to ceph.conf, otherwise deployment fails with
neither 'public_addr' nor 'public_network' keys are defined for monitors
public_network is the network Ceph clients use to access data;
cluster_network is the network the Ceph nodes use to replicate data among themselves:

public_network = 172.16.245.0/24
cluster_network = 172.16.245.0/24
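
Putting it together, the edited [global] section of ceph.conf looks roughly like this (a sketch: the fsid and monitor entries are whatever ceph-deploy new wrote for your cluster; the values below match the logs in this article):

[global]
fsid = 3a559358-7144-4c96-a302-dd96d78909fb
mon_initial_members = node1
mon_host = 172.16.245.157
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
osd_pool_default_size = 2
public_network = 172.16.245.0/24
cluster_network = 172.16.245.0/24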

5. From ~/my_cluster on node-admin, install Ceph on all nodes

machao@node-admin:~/my_cluster$ ceph-deploy install node-admin node1 node2 node3

6. Initialize the monitor from node-admin

machao@node-admin:~/my_cluster$ ceph-deploy mon create-initial
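
Once create-initial finishes, the keyrings should sit in the working directory; a quick listing (my own sanity check) should show roughly the files gathered in the next step:

machao@node-admin:~/my_cluster$ ls
# expect roughly: ceph.conf  ceph.mon.keyring  ceph.client.admin.keyring
#                 ceph.bootstrap-osd.keyring  ceph.bootstrap-mds.keyring  ceph.bootstrap-rgw.keyring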

7. Check the monitor key information

With ceph-deploy 2.0.1, these keyrings are already generated by the time ceph-deploy mon create-initial completes, so gatherkeys is unnecessary there.

machao@node-admin:~/my_cluster$ ceph-deploy gatherkeys node1
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/machao/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.32): /usr/bin/ceph-deploy gatherkeys node1
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f73cc796680>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  mon                           : ['node1']
[ceph_deploy.cli][INFO  ]  func                          : <function gatherkeys at 0x7f73ccbebf50>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.gatherkeys][DEBUG ] Have ceph.client.admin.keyring
[ceph_deploy.gatherkeys][DEBUG ] Have ceph.mon.keyring
[ceph_deploy.gatherkeys][DEBUG ] Have ceph.bootstrap-osd.keyring
[ceph_deploy.gatherkeys][DEBUG ] Have ceph.bootstrap-mds.keyring
[ceph_deploy.gatherkeys][DEBUG ] Have ceph.bootstrap-rgw.keyring

III. Configure the OSDs

1. Add the OSD nodes and check their disk availability

machao@node-admin:~/my_cluster$ ceph-deploy disk list node2 node3
[node2][DEBUG ] /dev/sda :
[node2][DEBUG ]  /dev/sda2 other, 0x5
[node2][DEBUG ]  /dev/sda5 swap, swap
[node2][DEBUG ]  /dev/sda1 other, ext4, mounted on /
[node2][DEBUG ] /dev/sdb other, unknown

[node3][DEBUG ] /dev/sda :
[node3][DEBUG ]  /dev/sda2 other, 0x5
[node3][DEBUG ]  /dev/sda5 swap, swap
[node3][DEBUG ]  /dev/sda1 other, ext4, mounted on /
[node3][DEBUG ] /dev/sdb other, unknown

The output is long; only the important part is shown.
Note that sdb is reported as other, unknown.

2. Format the data disk on node2 and node3. The commands below were captured on node2; node3 is identical.

With ceph-deploy 2.x, this partitioning step is no longer needed.

a. View the disk information
root@node2:~# fdisk -l /dev/sdb
Disk /dev/sdb: 20 GiB, 21474836480 bytes, 41943040 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
b. Create a GPT partition table on /dev/sdb with parted, with one XFS-typed primary partition
root@node2:~# parted -s /dev/sdb mklabel gpt mkpart primary xfs 0% 100%
c. Format with mkfs.xfs. Note this runs on the whole device, so it overwrites the GPT label just created; blkid below accordingly reports /dev/sdb itself as xfs.
root@node2:~# mkfs.xfs -f /dev/sdb
meta-data=/dev/sdb               isize=512    agcount=4, agsize=1310720 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=1, sparse=0
data     =                       bsize=4096   blocks=5242880, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0 ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0
d. Check the disk size and filesystem type
root@node2:~# fdisk -s /dev/sdb
20971520
root@node2:~# blkid -o value -s TYPE /dev/sdb
xfs

3. From node-admin, check the disks on node2 and node3 again

machao@node-admin:~/my_cluster$ ceph-deploy disk list node2 node3
[node2][DEBUG ] /dev/sda :
[node2][DEBUG ]  /dev/sda2 other, 0x5
[node2][DEBUG ]  /dev/sda5 swap, swap
[node2][DEBUG ]  /dev/sda1 other, ext4, mounted on /
[node2][DEBUG ] /dev/sdb other, xfs
[node2][DEBUG ] /dev/sr0 other, iso9660

[node3][DEBUG ] /dev/sda :
[node3][DEBUG ]  /dev/sda2 other, 0x5
[node3][DEBUG ]  /dev/sda5 swap, swap
[node3][DEBUG ]  /dev/sda1 other, ext4, mounted on /
[node3][DEBUG ] /dev/sdb other, xfs
[node3][DEBUG ] /dev/sr0 other, iso9660

4. Zap the disk on all OSD nodes to remove the partition table

This command destroys all data on /dev/sdb on every listed OSD node:

machao@node-admin:~/my_cluster$ ceph-deploy disk zap node2:/dev/sdb node3:/dev/sdb
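
For reference, ceph-deploy 2.x dropped the host:disk syntax in favor of separate host and device arguments, one host per invocation:

$ ceph-deploy disk zap node2 /dev/sdb
$ ceph-deploy disk zap node3 /dev/sdb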

5. Prepare all OSD nodes

With ceph-deploy 2.x, prepare and activate are merged into a single create step, e.g. for this environment:
ceph-deploy osd create --data /dev/sdb node2

machao@node-admin:~/my_cluster$ ceph-deploy osd prepare node2:/dev/sdb node3:/dev/sdb
[ceph_deploy.osd][DEBUG ] Host node2 is now ready for osd use.
[ceph_deploy.osd][DEBUG ] Host node3 is now ready for osd use.

6. Activate the OSDs

(With ceph-deploy 2.x there is no separate activate step; the osd create command shown above already activates the OSD.)

machao@node-admin:~/my_cluster$ ceph-deploy osd activate node2:/dev/sdb node3:/dev/sdb

If activation fails with an error like
RuntimeError: Failed to execute command: /usr/sbin/ceph-disk -v activate --mark-init systemd --mount /dev/sdb
you can log in to the failing node and run:

sudo ceph-disk activate-all
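
To confirm the OSD daemons actually came up (my own check, assuming the jewel-era ceph-osd@<id> systemd unit names):

$ sudo systemctl status ceph-osd@0    # on node2; node3 hosts osd.1
$ sudo ceph osd stat                  # should report 2 osds: 2 up, 2 in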

7. Check the disks on node2 and node3 once more

machao@node-admin:~/my_cluster$ ceph-deploy disk list node2 node3

[node2][DEBUG ] /dev/sdb :
[node2][DEBUG ]  /dev/sdb2 ceph journal, for /dev/sdb1
[node2][DEBUG ]  /dev/sdb1 ceph data, active, cluster ceph, osd.0, journal /dev/sdb2

[node3][DEBUG ] /dev/sdb :
[node3][DEBUG ]  /dev/sdb2 ceph journal, for /dev/sdb1
[node3][DEBUG ]  /dev/sdb1 ceph data, active, cluster ceph, osd.1, journal /dev/sdb2

You can see that /dev/sdb1 is now ceph data (active, in cluster ceph) and /dev/sdb2 is its ceph journal.

8. Copy the configuration file to every node

machao@node-admin:~/my_cluster$ ceph-deploy admin node-admin node1 node2 node3

9. On all nodes, make sure you have read permission on ceph.client.admin.keyring

Note: this must be done on all nodes.

sudo chmod +r /etc/ceph/ceph.client.admin.keyring
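
Since node-admin already has passwordless SSH to every node, one way to apply this everywhere in one shot (a convenience sketch, not from the original article):

for h in node-admin node1 node2 node3; do
    ssh $h sudo chmod +r /etc/ceph/ceph.client.admin.keyring
done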

10. Check the Ceph status

machao@node-admin:~/my_cluster$ sudo ceph -s
    cluster 3a559358-7144-4c96-a302-dd96d78909fb
     health HEALTH_OK
     monmap e1: 1 mons at {node1=172.16.245.157:6789/0}
            election epoch 3, quorum 0 node1
     osdmap e10: 2 osds: 2 up, 2 in
            flags sortbitwise,require_jewel_osds
      pgmap v17: 64 pgs, 1 pools, 0 bytes data, 0 objects
            214 MB used, 30483 MB / 30697 MB avail
                  64 active+clean

You can see HEALTH_OK.
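
As a further check, ceph osd tree should list osd.0 and osd.1 (the IDs seen in the disk listing above) as up under their respective hosts:

machao@node-admin:~/my_cluster$ sudo ceph osd tree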

Note: if you run into trouble after installing Ceph, the following commands remove the packages and the configuration:

  • Remove the packages
$ ceph-deploy purge node-admin node1 node2 node3
  • Purge the data and configuration
$ ceph-deploy purgedata node-admin node1 node2 node3
$ ceph-deploy forgetkeys

IV. Resetting the Cluster

1. Remove the Ceph packages and configuration
ceph-deploy purge node1 node2 node3
2. Remove the Ceph data files
rm -rf /var/lib/ceph

Then simply reinstall from scratch.
