Preface
This short article describes how to use keepalived to provide high availability for RGW (the Ceph Object Gateway).
Setup Overview
Building the Ceph cluster itself is not covered here; we assume you already have a working cluster. My test cluster is laid out as follows:
Host 1: node-0 / 192.168.10.10/16 | MON |
Host 2: node-1 / 192.168.10.11/16 | OSD x2 / RGW |
Host 3: node-2 / 192.168.10.12/16 | OSD x2 / RGW |
The cluster has three nodes: node-0 runs the MON, while node-1 and node-2 each run two OSDs and one RGW instance.
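Before going further it is worth confirming the cluster is healthy, for example:
$ ceph -s          # overall status; should report HEALTH_OK
$ ceph osd tree    # all four OSDs on node-1 and node-2 should be up/in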
Overall Architecture
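In brief: clients talk to a virtual IP (VIP) managed by keepalived; the VIP normally lives on node-1 and floats to node-2 when the RGW there cannot be brought back. Roughly:

              client (s3cmd)
                    |
          VIP 192.168.10.16:7480
          (managed by keepalived)
            /                \
    node-1 (MASTER)     node-2 (BACKUP)
    RGW + 2 OSDs        RGW + 2 OSDs
            \                /
         node-0 (MON), Ceph cluster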
Getting Started
As noted above, we assume the Ceph cluster is already up. Since the focus here is on using keepalived to make RGW highly available, RGW deployment is skipped; if you have not set up RGW yet, see the steps at the end of this article.
Install the software
Install the dependencies
$ yum -y install openssl-devel --skip-broken
$ yum install -y libnl3-devel libnfnetlink-devel
Install keepalived
$ wget http://www.keepalived.org/software/keepalived-1.4.2.tar.gz
$ tar -zxvf keepalived-1.4.2.tar.gz
$ cd keepalived-1.4.2
$ ./configure --prefix=/usr/local/keepalived
$ make && make install
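You can confirm the build succeeded with:
$ /usr/local/keepalived/sbin/keepalived --version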
After installation, copy the relevant files into place under /etc:
$ mkdir /etc/keepalived/
$ cp /root/keepalived-1.4.2/keepalived/etc/init.d/keepalived /etc/init.d/
$ cp /root/keepalived-1.4.2/keepalived/etc/sysconfig/keepalived /etc/sysconfig/
$ cp /usr/local/keepalived/etc/keepalived/keepalived.conf /etc/keepalived/
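Optionally, have keepalived start at boot. On CentOS 7 with systemd this should work once the freshly copied init script has been picked up:
$ systemctl daemon-reload        # pick up the newly copied init script
$ systemctl enable keepalived    # start keepalived automatically at boot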
Edit the keepalived configuration
Configure node-1 (the RGW master)
$ cp /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.bak
$ cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
}
vrrp_script chk_rgw {
    script "/usr/local/keepalived/sbin/check_rgw.sh"   # checks whether RGW is running, tries to restart it if the process has died, and stops keepalived if the restart fails so that another node can take over
    interval 2    # run the check every 2 seconds
    weight 2      # priority adjustment: with a positive weight, the node's priority is raised by 2 while the script succeeds (exits 0)
}
vrrp_instance VI_1 {
    state MASTER             # keepalived role: MASTER is the primary, BACKUP is the standby
    interface eno16777736    # the interface VRRP runs on; use your actual NIC name
    virtual_router_id 55     # virtual router ID; within the same vrrp_instance, MASTER and BACKUP must use the same value
    priority 100             # priority: the higher number wins; within a vrrp_instance the MASTER's priority must be higher than the BACKUP's
    advert_int 1             # interval, in seconds, between VRRP advertisements from the MASTER to the BACKUP
    authentication {
        auth_type PASS       # authentication type: PASS or AH
        auth_pass dyp        # authentication password; MASTER and BACKUP in the same vrrp_instance must use the same one to communicate
    }
    virtual_ipaddress {
        192.168.10.16/16     # the virtual IP; multiple VIPs can be listed, one per line
    }
    track_script {
        chk_rgw              # reference the vrrp_script defined above; keepalived runs it periodically to adjust the priority and, ultimately, trigger a master/backup switchover
    }
}
The /usr/local/keepalived/sbin/check_rgw.sh script looks like this. Note that it always exits 0 when it runs to completion, so failover is driven not by the weight mechanism but by stopping keepalived outright, which makes its VRRP advertisements cease:
#!/bin/bash
# If no radosgw process is running, try to restart it; if it is still
# not running 3 seconds later, stop keepalived so that the VIP fails
# over to the other node.
if [ "$(ps -ef | grep "radosgw" | grep -v grep)" == "" ]; then
    systemctl start ceph-radosgw.target
    sleep 3
    if [ "$(ps -ef | grep "radosgw" | grep -v grep)" == "" ]; then
        systemctl stop keepalived
    fi
fi
Make check_rgw.sh executable:
chmod +x /usr/local/keepalived/sbin/check_rgw.sh
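You can exercise the script by hand before wiring it into keepalived; this deliberately kills the RGW process, so only do it on a test cluster:
$ pkill radosgw                               # simulate an RGW crash
$ /usr/local/keepalived/sbin/check_rgw.sh     # should restart ceph-radosgw.target
$ ps -ef | grep radosgw | grep -v grep        # radosgw should be running again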
Configure node-2 (the RGW backup)
$ cp /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.bak
$ cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
}
vrrp_script chk_rgw {
    script "/usr/local/keepalived/sbin/check_rgw.sh"
    interval 2
    weight 2
}
vrrp_instance VI_1 {
    state BACKUP
    interface eno16777736
    virtual_router_id 55
    priority 90    # lower than the MASTER's 100, as required within the same vrrp_instance
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass dyp
    }
    virtual_ipaddress {
        192.168.10.16/16
    }
    track_script {
        chk_rgw
    }
}
The /usr/local/keepalived/sbin/check_rgw.sh script is identical to the one on the master node; copy it over and make it executable:
chmod +x /usr/local/keepalived/sbin/check_rgw.sh
That completes the keepalived configuration. Now start keepalived on both node-1 and node-2:
systemctl start keepalived
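You can check that it started cleanly:
$ systemctl status keepalived    # should be active, with VRRP and checker child processes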
Run the ip a command on node-1 and node-2 to see where the virtual IP is.
On node-1:
[root@node-1 keepalived-1.4.2]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eno16777736: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:39:73:67 brd ff:ff:ff:ff:ff:ff
inet 192.168.10.11/16 brd 192.168.255.255 scope global eno16777736
valid_lft forever preferred_lft forever
inet 192.168.10.16/16 scope global secondary eno16777736
valid_lft forever preferred_lft forever
inet6 fe80::20c:29ff:fe39:7367/64 scope link
valid_lft forever preferred_lft forever
On node-2:
[root@node-2 ~]# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eno16777736: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
link/ether 00:0c:29:7e:65:8d brd ff:ff:ff:ff:ff:ff
inet 192.168.10.12/16 brd 192.168.255.255 scope global eno16777736
valid_lft forever preferred_lft forever
inet6 fe80::20c:29ff:fe7e:658d/64 scope link
valid_lft forever preferred_lft forever
As you can see, the virtual IP currently sits on node-1.
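If you want to watch the election itself, the MASTER's VRRP advertisements are visible on the wire (adjust the interface name to yours):
$ tcpdump -i eno16777736 vrrp
# expect one advertisement per second (advert_int 1) from node-1, vrid 55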
Testing
Find another server (for easy testing, disable iptables/firewalld/SELinux on it), then install and configure s3cmd, setting its host_base and host_bucket to the virtual IP we defined:
... other options omitted; set access_key, secret_key, etc. to your actual values
host_base = 192.168.10.16:7480
host_bucket = 192.168.10.16:7480
...
The test succeeds:
[root@node-4 ~]# s3cmd ls
2018-04-10 02:56 s3://bk0
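A slightly fuller smoke test through the VIP (the bucket name is arbitrary):
$ s3cmd mb s3://bk1                # create a bucket
$ s3cmd put /etc/hosts s3://bk1/   # upload an object
$ s3cmd ls s3://bk1                # list it back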
To simulate an RGW failure, stop keepalived on node-1; s3cmd can still access RGW as before, and running ip a on node-1 and node-2 now shows that the virtual IP has floated over to node-2.
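One way to watch the failover as it happens is to poll the VIP in a loop while stopping keepalived on node-1; the service gap should only be a few seconds:
$ while true; do curl -s -o /dev/null -w "%{http_code}\n" http://192.168.10.16:7480; sleep 1; done
# in another terminal, on node-1:
$ systemctl stop keepalived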
OK, that is a simple keepalived-based HA setup for RGW. Naturally, if you put nginx, haproxy, or another load balancer in front of RGW, keepalived can provide HA for the load balancer in exactly the same way; the steps are identical, so they are not repeated here.
PS: normally, when the MASTER dies the BACKUP becomes the new MASTER, but as soon as the original MASTER recovers it preempts the VIP again, causing a second switchover, which is unfriendly to busy services. Adding nopreempt to the configuration disables preemption, but that option is only valid when state is BACKUP, so for HA it is best to set state to BACKUP on both nodes and let them compete purely via priority. A minimal sketch of that variant follows.
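Only the lines shown change relative to the configuration above; nopreempt is a standard keepalived keyword:
vrrp_instance VI_1 {
    state BACKUP    # both nodes are configured as BACKUP
    nopreempt       # a recovered node does not take the VIP back
    priority 100    # keep 100 here and 90 on the other node
    # interface, virtual_router_id, authentication, etc. stay as above
}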
Configuring RGW
Here are the bare-bones steps to configure RGW, with no tuning.
1. Install radosgw (when configuring multiple nodes, run this on every node):
yum install -y ceph-radosgw
2. Create the radosgw node directory (run on every node):
$ mkdir -p /var/lib/ceph/radosgw/ceph-radosgw.gateway
3. Edit the Ceph configuration file (on every node, setting host to that node's own hostname):
[client.radosgw.gateway]
host = node-1
rgw_frontends = civetweb port=7480
rgw_content_length_compat = true
4. Create the user key (run on every node):
$ ceph-authtool -C -n client.radosgw.gateway --gen-key /etc/ceph/ceph.client.radosgw.keyring
$ chmod +r /etc/ceph/ceph.client.radosgw.keyring
5. Set the key's capabilities (run on every node):
$ ceph-authtool -n client.radosgw.gateway --cap mon 'allow rwx' --cap osd 'allow rwx' /etc/ceph/ceph.client.radosgw.keyring
6. Add the key to the cluster (run on one node only):
$ ceph auth add client.radosgw.gateway --in-file=/etc/ceph/ceph.client.radosgw.keyring
7. Create the .rgw.buckets pool (run on one node only; set its pg/pgp counts to match the default pools):
$ rados mkpool .rgw.buckets
8. Register the pool with the object gateway (run on one node only):
$ radosgw-admin pool add --pool .rgw.buckets
9. Start the service (run on every node):
$ systemctl start ceph-radosgw.target
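Before relying on the gateway you can sanity-check it, and you will also need an S3 user for the s3cmd test earlier in this article; the uid and display name below are arbitrary:
$ ceph auth get client.radosgw.gateway    # the key and caps from steps 4-6 should be listed
$ curl http://127.0.0.1:7480              # should return an XML ListAllMyBucketsResult
$ radosgw-admin user create --uid=test --display-name="test user"
# the output includes the access_key and secret_key needed by s3cmd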
OK, radosgw is now configured.