
Distributed Systems Series: A First Look at Ceph Distributed Storage


Distributed storage is widely used in cloud computing, and Ceph, a popular open-source distributed storage system, has become the preferred backend storage for OpenStack. This article briefly introduces Ceph's features and components, then walks through installing and deploying a Ceph cluster.


1. Ceph Overview

Ceph is an open-source distributed storage system. As OpenStack has grown in the cloud computing space, Ceph has become its preferred backend storage.

1.1 Ceph Features

Compared with other distributed storage systems, Ceph offers high performance (CRUSH-based data placement with no central metadata bottleneck), high availability (configurable replica counts with automatic failure recovery), high scalability (capacity and throughput grow as nodes are added), and rich functionality (object, block, and file interfaces from a single cluster).

1.2 Ceph Use Cases

Ceph provides object storage, block storage, and a file system service. Its object storage can back applications such as network drives, while its block storage can back IaaS cloud platforms, including OpenStack, today's mainstream IaaS platform.

  • Object storage (RADOSGW): exposes a RESTful interface compatible with S3 and Swift, with bindings for multiple programming languages
  • Block storage (RBD): provided by RBD, can be attached directly as a disk, with built-in disaster-recovery mechanisms
  • File system (CephFS): a POSIX-compatible network file system focused on high performance and large-capacity storage
2. Ceph Components

Ceph is built from components such as RADOS, OSD, MON, Librados, RBD, RGW, and Ceph FS, with the RADOS storage layer underpinning all of the higher-level components.

2.1 Core Components

A Ceph cluster contains several important components, including Ceph OSD, Ceph Monitor, and Ceph MDS:

  1. Monitors: the Ceph monitors maintain the health state of the whole cluster and the maps that describe it, such as the OSD Map, Monitor Map, PG Map, and CRUSH Map. They also store the current version and the most recent changes; the monitor map can be inspected with "ceph mon dump".
  2. MDS (Metadata Server): stores the metadata of the Ceph file system. Note that Ceph block storage and Ceph object storage do not need an MDS.
  3. OSD (Object Storage Device): the object storage daemon. It stores data and handles replication, recovery, backfilling, and rebalancing, and it reports monitoring information to the Ceph Monitors by checking the heartbeats of other OSD daemons. When the storage cluster is configured with 2 replicas, at least 2 OSD daemons are needed for the cluster to reach the active+clean state. When building a Ceph OSD, SSD disks formatted with the xfs file system are recommended; typically one disk maps to one OSD.
  4. Client: handles access over the storage protocols and load balancing across nodes
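The PG Map above describes how RADOS groups objects into placement groups (PGs) and maps PGs onto OSDs. The sketch below is a rough illustration of the idea only: Ceph actually uses the rjenkins hash and the CRUSH algorithm, not the md5 and round-robin stand-ins used here. The key property is the same, though: an object name deterministically hashes to a PG, and the PG deterministically maps to a set of OSDs, so no central lookup table is needed.

```python
import hashlib

def object_to_pg(object_name: str, pg_num: int) -> int:
    """Map an object name to a placement group id.

    Simplified stand-in for Ceph's stable rjenkins hash:
    the same name always lands in the same PG.
    """
    digest = hashlib.md5(object_name.encode()).hexdigest()
    return int(digest, 16) % pg_num

def pg_to_osds(pg_id: int, osd_ids: list, size: int) -> list:
    """Pick `size` distinct OSDs for a PG (stand-in for CRUSH)."""
    start = pg_id % len(osd_ids)
    return [osd_ids[(start + i) % len(osd_ids)] for i in range(size)]

# Example: 64 PGs, 3 OSDs, pool size 2 (matching the cluster built below)
pg = object_to_pg("my-image.raw", 64)
replicas = pg_to_osds(pg, [0, 1, 2], size=2)
print(pg, replicas)  # deterministic: same name -> same PG -> same OSDs
```

Because every client can run this computation independently, any client can locate any object without asking a metadata server, which is why only CephFS (not block or object storage) needs an MDS.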
2.2 Ceph Functional Layers

At the very bottom of Ceph sits RADOS (the distributed object store). It is reliable, intelligent, and distributed, providing high reliability, scalability, performance, and automation, and it is where user data is ultimately stored. Above it sits LIBRADOS, a library that lets applications interact with the RADOS system directly, with support for multiple programming languages such as C, C++, and Python. On top of LIBRADOS, three interfaces are built: RADOSGW, librbd, and MDS. Through them, Ceph simultaneously provides three kinds of storage: object storage via RADOSGW (Reliable, Autonomic, Distributed Object Storage Gateway), block storage via RBD (RADOS Block Device), and file storage via Ceph FS (Ceph File System):

  1. RADOS: Reliable Autonomic Distributed Object Store. RADOS is the foundation of a Ceph storage cluster. In Ceph, all data is stored as objects; whatever the data type, the RADOS object store is responsible for holding those objects, and the RADOS layer ensures the data stays consistent.
  2. librados: a library that gives applications access to RADOS and provides the native interface underlying block storage, object storage, and the file system.
  3. RADOSGW: the gateway interface that provides the object storage service. It uses librgw and librados to let applications connect to the Ceph object store, and it exposes RESTful APIs compatible with S3 and Swift (OpenStack).
  4. RBD: the block device. It is thin-provisioned and resizable, and it stripes data across multiple OSDs; librbd provides the distributed block device interface.
  5. CephFS: the Ceph file system, a POSIX-compatible file system built on the native librados interface; the MDS provides the POSIX-compatible metadata service.
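The layering above can be made concrete with a toy model (all names here are invented for illustration; this is not the Ceph code path): every write becomes an object stored on `size` OSDs, and a placement group only counts as active+clean while a full set of replicas is present on OSDs that are up. These are the same states that appear in the deployment logs later in this article.

```python
class ToyRadosPool:
    """Toy model of a replicated RADOS pool (illustration only)."""

    def __init__(self, osd_ids, size=2):
        self.size = size                      # desired replica count
        self.osds = {i: {} for i in osd_ids}  # osd id -> {object name: data}
        self.up = set(osd_ids)                # OSDs currently up

    def put(self, name, data):
        # Write the object to `size` of the OSDs that are up.
        targets = sorted(self.up)[: self.size]
        for osd in targets:
            self.osds[osd][name] = data

    def pg_state(self, name):
        # A PG is clean only while all `size` replicas sit on up OSDs.
        replicas = sum(1 for osd in self.up if name in self.osds[osd])
        return "active+clean" if replicas >= self.size else "active+undersized+degraded"

pool = ToyRadosPool([0, 1, 2], size=2)
pool.put("obj-1", b"hello")
print(pool.pg_state("obj-1"))   # active+clean: both replicas present
pool.up.discard(1)              # one OSD goes down
print(pool.pg_state("obj-1"))   # active+undersized+degraded: a replica was on osd.1
```

Real RADOS additionally re-replicates degraded objects onto the remaining OSDs (recovery/backfill) until the PGs return to active+clean; this toy model stops at reporting the state.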
3. Ceph Deployment
3.1 Basic Setup

The deployment uses one monitor node and three storage nodes; the hostnames and IP addresses of the four hosts are the entries added to the hosts file below.

3.2 Pre-installation Preparation
3.2.1 Add hosts entries

So that the cluster nodes can resolve one another by hostname, open /etc/hosts on each node and add the IP-to-hostname mappings of the four nodes:

192.168.112.10  tango-01
192.168.112.101 tango-centos01
192.168.112.102 tango-centos02
192.168.112.103 tango-centos03
3.2.2 Set up passwordless login

Set up passwordless root login from the admin node to every ceph node:

[root@tango-centos01 ~]$ ssh-keygen
[root@tango-centos01 ~]$ ssh-copy-id tango-centos01
[root@tango-centos01 ~]$ ssh-copy-id tango-centos02
[root@tango-centos01 ~]$ ssh-copy-id tango-centos03
3.2.3 Disable the firewall

Confirm that the firewall is already disabled:

[root@tango-centos01 ~]# service firewalld status
Redirecting to /bin/systemctl status firewalld.service
● firewalld.service - firewalld - dynamic firewall daemon
   Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; vendor preset: enabled)
   Active: inactive (dead)
     Docs: man:firewalld(1)
3.2.4 Mount the data disks

1) On each data node, format /dev/sdb with xfs and mount it:
[root@tango-centos01 ~]# mkfs.xfs  /dev/sdb
[root@tango-centos01 ~]# mkdir -p /usr/local/ceph/osd0
[root@tango-centos01 ~]# mount /dev/sdb /usr/local/ceph/osd0/
[root@tango-centos01 ~]# df -h
Filesystem               Size  Used Avail Use% Mounted on
/dev/sdb                  10G   33M   10G   1% /usr/local/ceph/osd0

[root@tango-centos02 ~]#  mkfs.xfs  /dev/sdb
[root@tango-centos02 ~]# mkdir -p /usr/local/ceph/osd1
[root@tango-centos02 ~]# mount /dev/sdb /usr/local/ceph/osd1/
[root@tango-centos02 ~]# df -h
Filesystem               Size  Used Avail Use% Mounted on
/dev/sdb                  10G   33M   10G   1% /usr/local/ceph/osd1

[root@tango-centos03 ~]#  mkfs.xfs  /dev/sdb
[root@tango-centos03 ~]# mkdir -p /usr/local/ceph/osd2
[root@tango-centos03 ~]# mount /dev/sdb /usr/local/ceph/osd2/
[root@tango-centos03 ~]# df -h
Filesystem               Size  Used Avail Use% Mounted on
/dev/sdb                  10G   33M   10G   1% /usr/local/ceph/osd2

2) Update /etc/fstab with the mount paths so the mounts persist across reboots, then open up permissions on the OSD directories:

[root@tango-centos01 ~]# vi /etc/fstab
/dev/sdb               /usr/local/ceph/osd0     xfs     defaults        0 0
[root@tango-centos02 ~]# vi /etc/fstab
/dev/sdb               /usr/local/ceph/osd1     xfs     defaults        0 0
[root@tango-centos03 ~]# vi /etc/fstab
/dev/sdb               /usr/local/ceph/osd2     xfs     defaults        0 0
[root@tango-centos01 local]# chmod -R 777 /usr/local/ceph/osd0
[root@tango-centos02 local]# chmod -R 777 /usr/local/ceph/osd1
[root@tango-centos03 local]# chmod -R 777 /usr/local/ceph/osd2
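Because the mount point differs per node (osd0, osd1, osd2), it is easy to paste the wrong path into fstab. A small hypothetical helper (the node list is assumed from this setup) can generate the correct line for each node:

```python
def fstab_line(osd_index: int) -> str:
    """Return the /etc/fstab entry for the node hosting osd<index>."""
    mount_point = f"/usr/local/ceph/osd{osd_index}"
    return f"/dev/sdb  {mount_point}  xfs  defaults  0 0"

# One OSD directory per storage node, in order.
nodes = ["tango-centos01", "tango-centos02", "tango-centos03"]
for index, node in enumerate(nodes):
    print(f"{node}: {fstab_line(index)}")
```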
3.3 Deploy the ceph cluster
3.3.1 Install the ceph-deploy tool on the admin node

1) Add the yum repository configuration on each node:

[root@tango-centos01 ~]# vi /etc/yum.repos.d/ceph.repo
[Ceph]
name=Ceph packages for $basearch
baseurl=http://mirrors.163.com/ceph/rpm-jewel/el7/x86_
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=http://mirrors.163.com/ceph/keys/release.asc
priority=1

[Ceph-noarch]
name=Ceph noarch packages
baseurl=http://mirrors.163.com/ceph/rpm-jewel/el7/noarch
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=http://mirrors.163.com/ceph/keys/release.asc
priority=1

[ceph-source]
name=Ceph source packages
baseurl=http://mirrors.163.com/ceph/rpm-jewel/el7/SRPMS
enabled=1
gpgcheck=1
type=rpm-md
gpgkey=http://mirrors.163.com/ceph/keys/release.asc
priority=1

2) Refresh the yum cache:

[root@tango-centos01 ~]# yum clean all && yum list   

3) Install ceph-deploy on the admin node:

[root@tango-centos01 ~]# yum -y install ceph-deploy
3.3.2 Create the ceph cluster

On the admin node, use ceph-deploy to create the ceph cluster, with tango-centos01 as the mon node:

[root@tango-centos01 ~]# cd /usr/local/ceph
[root@tango-centos01 ceph]# ceph-deploy new tango-centos01                         
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.39): /bin/ceph-deploy new tango-centos01
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  func                          : <function new at 0xe86a28>
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0xea2cb0>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  ssh_copykey                   : True
[ceph_deploy.cli][INFO  ]  mon                           : ['tango-centos01']
[ceph_deploy.cli][INFO  ]  public_network                : None
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  cluster_network               : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.cli][INFO  ]  fsid                          : None
[ceph_deploy.new][DEBUG ] Creating new cluster named ceph
[ceph_deploy.new][INFO  ] making sure passwordless SSH succeeds
[tango-centos01][DEBUG ] connected to host: tango-centos01 
[tango-centos01][DEBUG ] detect platform information from remote host
[tango-centos01][DEBUG ] detect machine type
[tango-centos01][DEBUG ] find the location of an executable
[tango-centos01][INFO  ] Running command: /usr/sbin/ip link show
[tango-centos01][INFO  ] Running command: /usr/sbin/ip addr show
[tango-centos01][DEBUG ] IP addresses found: [u'192.168.112.143', u'192.168.112.101', u'172.17.0.1', u'172.18.0.1']
[ceph_deploy.new][DEBUG ] Resolving host tango-centos01
[ceph_deploy.new][DEBUG ] Monitor tango-centos01 at 192.168.112.101
[ceph_deploy.new][DEBUG ] Monitor initial members are ['tango-centos01']
[ceph_deploy.new][DEBUG ] Monitor addrs are ['192.168.112.101']
[ceph_deploy.new][DEBUG ] Creating a random mon key...
[ceph_deploy.new][DEBUG ] Writing monitor keyring to ceph.mon.keyring...
[ceph_deploy.new][DEBUG ] Writing initial config to ceph.conf...
[root@tango-centos01 ceph]# ll
total 12
-rw-r--r-- 1 root root  207 Jan 23 09:34 ceph.conf
-rw-r--r-- 1 root root 3075 Jan 23 09:34 ceph-deploy-ceph.log
-rw------- 1 root root   73 Jan 23 09:34 ceph.mon.keyring
3.3.3 Change the replica count

Change the default replica count in the configuration file from 3 to 2 so that the cluster can reach the active+clean state with only two OSDs; add the osd_pool_default_size line below to the [global] section:

[global]
fsid = a4dc4584-863a-4766-b725-d902d6f54f27
mon_initial_members = tango-centos01
mon_host = 192.168.112.101
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
osd_pool_default_size = 2
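As a quick sanity check, ceph.conf is a plain INI-style file, so the setting can be verified programmatically. Here is a sketch using Python's standard configparser against a trimmed copy of the section above:

```python
import configparser

# Trimmed copy of the [global] section edited above.
CEPH_CONF = """
[global]
osd_pool_default_size = 2
"""

parser = configparser.ConfigParser()
parser.read_string(CEPH_CONF)
size = parser.getint("global", "osd_pool_default_size")
print(size)  # 2: two up OSDs are enough to reach active+clean
```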
3.3.4 Install ceph

From the admin node, run the following command to install ceph on all ceph nodes:

[root@tango-centos01 ceph]$ ceph-deploy install tango-centos01 tango-centos02 tango-centos03

After it completes, check the ceph version on each node:

[root@tango-centos01 ceph]$ ceph --version
ceph version 10.2.11 (e4b061b47f07f583c92a050d9e84b1813a35671e)
[root@tango-centos02 ~]$ ceph --version
ceph version 10.2.11 (e4b061b47f07f583c92a050d9e84b1813a35671e)
[root@tango-centos03 ~]$ ceph --version
ceph version 10.2.11 (e4b061b47f07f583c92a050d9e84b1813a35671e)
3.3.5 Create and initialize the monitor node

Create the monitor node with the following command, then query its status through the admin socket:

[root@tango-centos01 ceph]$ ceph-deploy mon create tango-centos01
[root@tango-centos01 ceph]# ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.tango-centos01.asok mon_status
{
    "name": "tango-centos01",
    "rank": 0,
    "state": "leader",
    "election_epoch": 3,
    "quorum": [
        0
    ],
    "outside_quorum": [],
    "extra_probe_peers": [],
    "sync_provider": [],
    "monmap": {
        "epoch": 1,
        "fsid": "daff0d7d-d63d-48d7-ae8b-b70493240ad8",
        "modified": "2022-01-23 10:00:54.091720",
        "created": "2022-01-23 10:00:54.091720",
        "mons": [
            {
                "rank": 0,
                "name": "tango-centos01",
                "addr": "192.168.112.101:67\/0"
            }
        ]
    }
}
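Since mon_status returns plain JSON, quorum membership can also be checked from a script. Here is a sketch that parses a trimmed copy of the output above (a real check would capture the stdout of the ceph --admin-daemon command, e.g. via subprocess):

```python
import json

# Trimmed sample of the mon_status output above.
mon_status = json.loads("""
{
    "name": "tango-centos01",
    "rank": 0,
    "state": "leader",
    "quorum": [0]
}
""")

# A healthy monitor should be in the quorum it reports.
in_quorum = mon_status["rank"] in mon_status["quorum"]
print(mon_status["state"], in_quorum)  # leader True
```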
3.3.6 Gather the keyring files

Collect the node keyring files with the following command:

[ceph@tango-centos01 ceph]$ ceph-deploy gatherkeys tango-centos01
[root@tango-centos01 ceph]# ll
total 156
-rw------- 1 root root   113 Jan 23 10:01 ceph.bootstrap-mds.keyring
-rw------- 1 root root    71 Jan 23 10:01 ceph.bootstrap-mgr.keyring
-rw------- 1 root root   113 Jan 23 10:01 ceph.bootstrap-osd.keyring
-rw------- 1 root root   113 Jan 23 10:01 ceph.bootstrap-rgw.keyring
-rw------- 1 root root   129 Jan 23 10:01 ceph.client.admin.keyring
3.3.7 Create and activate the OSDs

1) Create the OSD services:

[root@tango-centos01 ceph]$ ceph-deploy osd prepare tango-centos01:/usr/local/ceph/osd0 tango-centos02:/usr/local/ceph/osd1 tango-centos03:/usr/local/ceph/osd2

2) Activate the OSD services:

[root@tango-centos01 ceph]$ ceph-deploy osd activate tango-centos01:/usr/local/ceph/osd0 tango-centos02:/usr/local/ceph/osd1 tango-centos03:/usr/local/ceph/osd2
[root@tango-centos01 osd0]# ll
total 5242924
-rw-r--r-- 1 root root        200 Jan 23 10:03 activate.monmap
-rw-r--r-- 1 ceph ceph          3 Jan 23 10:03 active
-rw-r--r-- 1 ceph ceph         37 Jan 23 10:01 ceph_fsid
drwxr-xr-x 4 ceph ceph         65 Jan 23 10:03 current
-rw-r--r-- 1 ceph ceph         37 Jan 23 10:01 fsid
-rw-r--r-- 1 ceph ceph 5368709120 Jan 23 10:32 journal
-rw------- 1 ceph ceph         56 Jan 23 10:03 keyring
-rw-r--r-- 1 ceph ceph         21 Jan 23 10:01 magic
-rw-r--r-- 1 ceph ceph          6 Jan 23 10:03 ready
-rw-r--r-- 1 ceph ceph          4 Jan 23 10:03 store_version
-rw-r--r-- 1 ceph ceph         53 Jan 23 10:03 superblock
-rw-r--r-- 1 ceph ceph          0 Jan 23 10:21 systemd
-rw-r--r-- 1 ceph ceph         10 Jan 23 10:03 type
-rw-r--r-- 1 ceph ceph          2 Jan 23 10:03 whoami
3.3.8 Push the configuration to all nodes
[ceph@tango-centos01 ceph]$ ceph-deploy admin tango-centos01 tango-centos02 tango-centos03
[ceph_deploy.conf][DEBUG ] found configuration file at: /home/ceph/.cephdeploy.conf
[ceph_deploy.cli][INFO  ] Invoked (1.5.39): /bin/ceph-deploy admin tango-centos01 tango-centos02 tango-centos03
[ceph_deploy.cli][INFO  ] ceph-deploy options:
[ceph_deploy.cli][INFO  ]  username                      : None
[ceph_deploy.cli][INFO  ]  verbose                       : False
[ceph_deploy.cli][INFO  ]  overwrite_conf                : False
[ceph_deploy.cli][INFO  ]  quiet                         : False
[ceph_deploy.cli][INFO  ]  cd_conf                       : <ceph_deploy.conf.cephdeploy.Conf instance at 0x277ab90>
[ceph_deploy.cli][INFO  ]  cluster                       : ceph
[ceph_deploy.cli][INFO  ]  client                        : ['tango-centos01', 'tango-centos02', 'tango-centos03']
[ceph_deploy.cli][INFO  ]  func                          : <function admin at 0x26bade8>
[ceph_deploy.cli][INFO  ]  ceph_conf                     : None
[ceph_deploy.cli][INFO  ]  default_release               : False
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to tango-centos01
[tango-centos01][DEBUG ] connection detected need for sudo
[tango-centos01][DEBUG ] connected to host: tango-centos01 
[tango-centos01][DEBUG ] detect platform information from remote host
[tango-centos01][DEBUG ] detect machine type
[tango-centos01][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to tango-centos02
[tango-centos02][DEBUG ] connection detected need for sudo
[tango-centos02][DEBUG ] connected to host: tango-centos02 
[tango-centos02][DEBUG ] detect platform information from remote host
[tango-centos02][DEBUG ] detect machine type
[tango-centos02][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to tango-centos03
[tango-centos03][DEBUG ] connection detected need for sudo
[tango-centos03][DEBUG ] connected to host: tango-centos03 
[tango-centos03][DEBUG ] detect platform information from remote host
[tango-centos03][DEBUG ] detect machine type
[tango-centos03][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
3.3.9 Check the OSD status

Use ceph-deploy osd list to check the OSD status (output for tango-centos03 shown):

[ceph@tango-centos01 ceph]$ ceph-deploy osd list tango-centos01 tango-centos02 tango-centos03
[tango-centos03][INFO  ] Running command: sudo /usr/sbin/ceph-disk list
[tango-centos03][INFO  ] ----------------------------------------
[tango-centos03][INFO  ] ceph-2
[tango-centos03][INFO  ] ----------------------------------------
[tango-centos03][INFO  ] Path           /var/lib/ceph/osd/ceph-2
[tango-centos03][INFO  ] ID             2
[tango-centos03][INFO  ] Name           osd.2
[tango-centos03][INFO  ] Status         up
[tango-centos03][INFO  ] Reweight       1.0
[tango-centos03][INFO  ] Active         ok
[tango-centos03][INFO  ] Magic          ceph osd volume v026
[tango-centos03][INFO  ] Whoami         2
[tango-centos03][INFO  ] Journal path   /usr/local/ceph/osd2/journal
[tango-centos03][INFO  ] ----------------------------------------
3.3.10 Deploy the MDS service

Deploy the MDS service with the following command:

[ceph@tango-centos01 ceph]$ ceph-deploy mds create tango-centos01

Check the MDS status:

[root@tango-centos01 ceph]# ceph mds stat
e2:, 1 up:standby
3.3.11 Check the cluster status

Use ceph -s to view the cluster status:

[root@tango-centos01 ceph]# ceph -s
    cluster daff0d7d-d63d-48d7-ae8b-b70493240ad8
     health HEALTH_WARN
             pgs degraded
             pgs stuck unclean
             pgs undersized
     monmap e1: 1 mons at {tango-centos01=192.168.112.101:67/0}
            election epoch 3, quorum 0 tango-centos01
     osdmap e11: 2 osds: 2 up, 2 in
            flags sortbitwise,require_jewel_osds
      pgmap v18:  pgs, 1 pools, 0 bytes data, 0 objects
            10454 MB used, 10006 MB / 20460 MB avail
                   active+undersized+degraded

At this point, the Ceph cluster deployment is complete. Note that the cluster still reports HEALTH_WARN because some placement groups are undersized; the warning clears once every placement group holds its full set of replicas on OSDs that are up.


