LVS_DR (Load Balancing) and LVS_DR + keepalived (High Availability + Load Balancing)


client -> VS -> RS -> client (the VS only performs scheduling; the RS, i.e. the real server, sends the reply directly back to the client)

How LVS_DR works:

Advantage: the load balancer only distributes request packets to the real servers, and the real servers send their reply packets directly to the client. The load balancer can therefore handle a very large request volume; a single load balancer can serve more than 100 real servers and is no longer the bottleneck of the system.

Drawback: this mode requires the director and all real servers (DIP and RIPs) to be in the same broadcast domain, so it cannot be used for off-site disaster recovery.
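In practice a DR setup needs three pieces, all of which the walkthrough below configures step by step (a condensed preview using the same addresses; see the following sections for the full commands):

# on the director: bind the VIP and define the virtual service
ip addr add 172.25.1.100/32 dev eth0
ipvsadm -A -t 172.25.1.100:80 -s rr
ipvsadm -a -t 172.25.1.100:80 -r 172.25.1.2:80 -g
ipvsadm -a -t 172.25.1.100:80 -r 172.25.1.3:80 -g
# on every real server: hold the VIP so packets addressed to it are accepted
ip addr add 172.25.1.100/32 dev eth0
# on every real server: suppress ARP replies for the VIP (done with arptables below)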

Environment:

iptables and SELinux are disabled on all machines.
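On RHEL 6 this is typically done as follows (standard commands, shown only for completeness):

/etc/init.d/iptables stop
chkconfig iptables off
setenforce 0                      # takes effect immediately
vim /etc/selinux/config           # set SELINUX=disabled so it stays off after reboot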

test1 (director, 172.25.1.1):

[root@test1 ~]# yum install -y ipvsadm

Loaded plugins: product-id, subscription-manager

This system is not registered to Red Hat Subscription Management. You can use

subscription-manager to register.

Setting up Install Process

No package ipvsadm available.

Error: Nothing to do

[root@test1 ~]# vim /etc/yum.repos.d/rhel-source.repo

[rhel6.5]

name=rhel6.5

baseurl=http://172.25.1.250/rhel6.5

gpgcheck=0

enabled=1

gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release

[LoadBalancer] // add the LoadBalancer repository, which provides the ipvsadm package

name=LoadBalancer

baseurl=http://172.25.1.250/rhel6.5/LoadBalancer

gpgcheck=0

enabled=1
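After editing the repo file, the new repository can be verified before retrying the install (standard yum commands; the exact output depends on the environment):

[root@test1 ~]# yum clean all
[root@test1 ~]# yum repolist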

[root@test1 ~]# yum install -y ipvsadm

[root@test1 ~]# /etc/init.d/ipvsadm start // start the ipvsadm service

[root@test1 ~]# ip addr add 172.25.1.100 dev eth0 // add the virtual IP (VIP)

[root@test1 ~]# ip addr // confirm the VIP was added

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 52:54:00:4f:1c:32 brd ff:ff:ff:ff:ff:ff
    inet 172.25.1.1/24 brd 172.25.1.255 scope global eth0
    inet 172.25.1.100/32 scope global eth0
    inet6 fe80::5054:ff:fe4f:1c32/64 scope link
       valid_lft forever preferred_lft forever

[root@test1 ~]# ipvsadm -A -t 172.25.1.100:80 -s rr // -A adds a virtual service on the VIP, -s rr selects round-robin scheduling

[root@test1 ~]# ipvsadm -a -t 172.25.1.100:80 -r 172.25.1.2:80 -g // -a adds a real server to the virtual service, -g selects direct routing (DR) mode

[root@test1 ~]# ipvsadm -a -t 172.25.1.100:80 -r 172.25.1.3:80 -g

[root@test1 ~]# ipvsadm -ln

IP Virtual Server version 1.2.1 (size=4096)

Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  172.25.1.100:80 rr
  -> 172.25.1.2:80                Route   1      0          0
  -> 172.25.1.3:80                Route   1      0          0
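Rules built by hand like this are lost on reboot; on RHEL 6 they can be persisted through the init script if needed (optional, not done in this walkthrough):

[root@test1 ~]# /etc/init.d/ipvsadm save // writes the current table to /etc/sysconfig/ipvsadm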

Real server 1 (server2, 172.25.1.2):

[root@test2 ~]# ip addr add 172.25.1.100/32 dev eth0 // add the VIP so this host accepts packets addressed to it and the TCP three-way handshake can complete

[root@test2 ~]# vim /var/www/html/index.html

<h1>-server2</h1>

[root@test2 ~]# /etc/init.d/httpd restart

Real server 2 (server3, 172.25.1.3):

[root@test3 ~]# ip addr add 172.25.1.100/32 dev eth0 // add the VIP so this host accepts packets addressed to it and the TCP three-way handshake can complete

[root@test3 ~]# vim /var/www/html/index.html

<h1>-server3</h1>

[root@test3 ~]# /etc/init.d/httpd restart

Client access:

Director MAC address: 52:54:00:4f:1c:32

Real server 1 (server2) MAC address: 52:54:00:2b:85:5b

Real server 2 (server3) MAC address: 52:54:00:98:3d:65

Note: the result of accessing the VIP will be one of the following three cases:

[root@foundation1 ~]# arp -d 172.25.1.100 // delete the cached ARP entry for the VIP

[root@foundation1 ~]# curl 172.25.1.100

<h1>-server2</h1>

[root@foundation1 ~]# curl 172.25.1.100

<h1>-server2</h1>

[root@foundation1 ~]# curl 172.25.1.100

<h1>-server2</h1>

[root@foundation1 ~]# arp -an | grep 100

? (172.25.1.100) at 52:54:00:2b:85:5b [ether] on br0

Summary 1: the MAC address shows the requests never went through the director; they went straight to real server 1 (server2).

[root@foundation1 ~]# arp -d 172.25.1.100

[root@foundation1 ~]# curl 172.25.1.100

<h1>-server3</h1>

[root@foundation1 ~]# curl 172.25.1.100

<h1>-server3</h1>

[root@foundation1 ~]# curl 172.25.1.100

<h1>-server3</h1>

[root@foundation1 ~]# arp -an | grep 100

? (172.25.1.100) at 52:54:00:98:3d:65 [ether] on br0

Summary 2: the MAC address shows the requests never went through the director; they went straight to real server 2 (server3).

[root@foundation1 ~]# arp -d 172.25.1.100

[root@foundation1 ~]# curl 172.25.1.100

<h1>-server2</h1>

[root@foundation1 ~]# curl 172.25.1.100

<h1>-server3</h1>

[root@foundation1 ~]# curl 172.25.1.100

<h1>-server2</h1>

[root@foundation1 ~]# curl 172.25.1.100

<h1>-server3</h1>

[root@foundation1 ~]# arp -an | grep 100

? (172.25.1.100) at 52:54:00:4f:1c:32 [ether] on br0

Summary 3: the MAC address shows the requests went through the director, so they are balanced round-robin.

Summary: the three cases show that which host answers for the VIP is essentially random, because all three machines (the VS and both RSs) carry the same VIP on the same VLAN, so there is no guarantee that a request will ever reach the director.
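Which host is currently answering ARP for the VIP can also be checked directly from the client with arping (assuming the client's bridged interface is br0, as in the arp output above):

[root@foundation1 ~]# arping -c 3 -I br0 172.25.1.100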

Solution: to fix this, the real servers must be prevented from answering ARP requests for the VIP, so that only the director responds.

On RS1 (test2):

[root@test2 ~]# yum install arptables_jf -y // install arptables

[root@test2 ~]# arptables -A IN -d 172.25.1.100 -j DROP // drop incoming ARP requests for the VIP, so clients cannot resolve the VIP to RS1 directly

[root@test2 ~]# arptables -A OUT -s 172.25.1.100 -j mangle --mangle-ip-s 172.25.1.2 // mangle outgoing ARP so its source IP is RS1's real address (172.25.1.2) rather than the VIP

[root@test2 ~]# /etc/init.d/arptables_jf save // save the rules

[root@test2 ~]# cat /etc/sysconfig/arptables // review the saved rules

# Generated by arptables-save v0.0.8 on Thu Sep 27 22:31:05

*filter

:IN ACCEPT [0:0]

:OUT ACCEPT [0:0]

:FORWARD ACCEPT [0:0]

[0:0] -A IN -d 172.25.1.100 -j DROP

[0:0] -A OUT -s 172.25.1.100 -j mangle --mangle-ip-s 172.25.1.2

COMMIT

# Completed on Thu Sep 27 22:31:05

RS2 (test3) follows the same steps as test2:

[root@test3 ~]# yum install arptables_jf -y

[root@test3 ~]# arptables -A IN -d 172.25.1.100 -j DROP // drop incoming ARP requests for the VIP, so clients cannot resolve the VIP to RS2 directly

[root@test3 ~]# arptables -A OUT -s 172.25.1.100 -j mangle --mangle-ip-s 172.25.1.3 // mangle outgoing ARP so its source IP is RS2's real address (172.25.1.3) rather than the VIP

[root@test3 ~]# /etc/init.d/arptables_jf save // save the rules

[root@test3 ~]# cat /etc/sysconfig/arptables

[root@test3 ~]# arptables -nL // list the arptables rules now in effect

# Generated by arptables-save v0.0.8 on Thu Sep 27 22:31:09

*filter

:IN ACCEPT [1:28]

:OUT ACCEPT [1:28]

:FORWARD ACCEPT [0:0]

[0:0] -A IN -d 172.25.1.100 -j DROP

[0:0] -A OUT -s 172.25.1.100 -j mangle --mangle-ip-s 172.25.1.3

COMMIT

# Completed on Thu Sep 27 22:31:09
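As an aside, the same ARP suppression is often achieved with kernel parameters instead of arptables, by moving the VIP to the loopback interface on each real server (a common alternative, not used in this walkthrough):

ip addr del 172.25.1.100/32 dev eth0
ip addr add 172.25.1.100/32 dev lo
sysctl -w net.ipv4.conf.all.arp_ignore=1
sysctl -w net.ipv4.conf.all.arp_announce=2
sysctl -w net.ipv4.conf.lo.arp_ignore=1
sysctl -w net.ipv4.conf.lo.arp_announce=2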

Client test (from 172.25.1.250):

[root@foundation1 ~]# arp -an | grep 100

? (172.25.1.100) at 52:54:00:4f:1c:32 [ether] on br0

This time the MAC address behind the VIP is the VS's (the director's).

[root@foundation1 ~]# arp -d 172.25.1.100 // flush the entry several times and check whether the VIP still resolves to the VS's MAC

[root@foundation1 ~]# curl 172.25.1.100

<h1>-server3</h1>

[root@foundation1 ~]# curl 172.25.1.100

<h1>-server2</h1>

[root@foundation1 ~]# curl 172.25.1.100

<h1>-server3</h1>

[root@foundation1 ~]# curl 172.25.1.100

<h1>-server2</h1>

Problem: the weakness of this setup is that when a back-end server fails, for example when httpd is stopped on one real server, the director keeps scheduling requests to it. The client then gets an error on those requests while server3 keeps answering normally, so users receive inconsistent results:

[root@test2 ~]# /etc/init.d/httpd stop

Stopping httpd:                                            [  OK  ]

[root@foundation1 ~]# curl 172.25.1.100

curl: (7) Failed connect to 172.25.1.100:80; Connection refused

[root@foundation1 ~]# curl 172.25.1.100

<h1>-server3</h1>

[root@foundation1 ~]# curl 172.25.1.100

curl: (7) Failed connect to 172.25.1.100:80; Connection refused

[root@foundation1 ~]# curl 172.25.1.100

<h1>-server3</h1>

Summary: the VS does not perform any health checks on the back-end servers.
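ldirectord and keepalived automate exactly this kind of check. As a rough illustration of what a health check has to do, a hand-rolled loop might look like the sketch below (purely illustrative; the script and its logic are not part of the original setup):

#!/bin/bash
# illustrative health check run on the director, e.g. from cron
VIP=172.25.1.100:80
for RS in 172.25.1.2 172.25.1.3; do
    if curl -s -o /dev/null --connect-timeout 2 http://$RS/; then
        # RS answers: make sure it is in the table (ignore "already exists" errors)
        ipvsadm -a -t $VIP -r $RS:80 -g 2>/dev/null
    else
        # RS is dead: remove it so clients are no longer scheduled to it
        ipvsadm -d -t $VIP -r $RS:80 2>/dev/null
    fi
done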

Solution 1 for the health-check problem:

Use ldirectord.

On the VS:

[root@test1 ~]# vim /etc/yum.repos.d/rhel-source.repo // configure the yum repository needed to install ldirectord

[HighAvailability]

name=HighAvailability

baseurl=http://172.25.1.250/rhel6.5/HighAvailability

gpgcheck=0

[root@test1 ~]# ls

ldirectord-3.9.5-3.1.x86_64.rpm

[root@test1 ~]# yum install * -y

[root@test1 ~]# rpm -ql ldirectord // locate the sample configuration file

/usr/share/doc/ldirectord-3.9.5/ldirectord.cf

[root@test1 ~]# cp /usr/share/doc/ldirectord-3.9.5/ldirectord.cf /etc/ha.d/ // copy it to /etc/ha.d/

[root@test1 ~]# cd /etc/ha.d

[root@test1 ha.d]# ls

ldirectord.cf resource.d shellfuncs

[root@test1 ha.d]# vim ldirectord.cf // edit the configuration file

virtual=172.25.1.100:80 // the virtual IP (VIP)

real=172.25.1.2:80 gate // real server 1

real=172.25.1.3:80 gate // real server 2

fallback=127.0.0.1:80 gate

service=http

scheduler=rr

#persistent=600

#netmask=255.255.255.255

protocol=tcp

checktype=negotiate

checkport=80

request="index.html"

#receive="Test Page"

#virtualhost=www.x.y.z
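With checktype=negotiate and request="index.html", ldirectord periodically fetches index.html from each real server; because receive is commented out, any successful HTTP response counts as healthy. Roughly the same check can be reproduced by hand from the director (illustrative only):

[root@test1 ha.d]# curl -s http://172.25.1.2/index.html
[root@test1 ha.d]# curl -s http://172.25.1.3/index.html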

[root@test1 ha.d]# ipvsadm -ln // list the current rules

IP Virtual Server version 1.2.1 (size=4096)

Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  172.25.1.100:80 rr
  -> 172.25.1.2:80                Route   1      0          3
  -> 172.25.1.3:80                Route   1      0          2

[root@test1 ~]# ipvsadm -C // flush all rules

[root@test1 ~]# ipvsadm -l // confirm the rules have been cleared

IP Virtual Server version 1.2.1 (size=4096)

Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn

[root@test1 ha.d]# /etc/init.d/ldirectord start // starting ldirectord re-creates the rules automatically

Starting ldirectord... success

[root@test1 ha.d]# ipvsadm -ln

IP Virtual Server version 1.2.1 (size=4096)

Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  172.25.1.100:80 rr
  -> 172.25.1.2:80                Route   1      0          3
  -> 172.25.1.3:80                Route   1      0          2

[root@test1 ha.d]# vim /etc/httpd/conf/httpd.conf // make sure httpd listens on port 80

Listen 80

[root@test1 ha.d]# /etc/init.d/httpd start // start httpd

[root@test1 ha.d]# cd /var/www/html/

[root@test1 html]# vim index.html // write the maintenance page shown below

<h1>The site is under maintenance......</h1>

Client test:

[root@foundation1 ~]# curl 172.25.1.100

<h1>-server2</h1>

[root@foundation1 ~]# curl 172.25.1.100

<h1>-server3</h1>

[root@foundation1 ~]# curl 172.25.1.100

<h1>-server2</h1>

[root@foundation1 ~]# curl 172.25.1.100

<h1>-server3</h1>

[root@test2 ~]# /etc/init.d/httpd stop // when one back end fails, the rules are updated automatically

[root@test1 ha.d]# ipvsadm -ln // the failed server has already been removed from the table

IP Virtual Server version 1.2.1 (size=4096)

Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  172.25.1.100:80 rr
  -> 172.25.1.3:80                Route   1      0          2

The client accesses again:

[root@foundation1 ~]# curl 172.25.1.100

<h1>-server3</h1>

[root@foundation1 ~]# curl 172.25.1.100

<h1>-server3</h1>

[root@foundation1 ~]# curl 172.25.1.100

<h1>-server3</h1>

If both real servers go down:

[root@test3 ~]# /etc/init.d/httpd stop

[root@test1 ha.d]# ipvsadm -ln // no healthy real servers are left; the fallback takes over

IP Virtual Server version 1.2.1 (size=4096)

Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  172.25.1.100:80 rr
  -> 127.0.0.1:80                 Local   1      0          0

Now the client accesses again:

[root@foundation1 ~]# curl 172.25.1.100

<h1>The site is under maintenance......</h1>

[root@foundation1 ~]# curl 172.25.1.100

<h1>The site is under maintenance......</h1>

[root@foundation1 ~]# curl 172.25.1.100

<h1>The site is under maintenance......</h1>

Summary:

When the client runs curl 172.25.1.100, the real servers are polled round-robin. With test2 down, only test3 answers; with both real servers down, the local fallback on test1 answers, which is why the maintenance page is shown.

Solution 2 for the health-check problem:

Use keepalived.

Download the keepalived software from the official site: /download.html

The two VS nodes are:

Master: test1

Backup: test4

Install keepalived on each VS node:

1. Install the keepalived service

2. ./configure --> (install openssl-devel) --> make --> make install

On the master VS, test1, install keepalived:

[root@test1 ~]# ls

keepalived-2.0.6.tar.gz

[root@test1 ~]# tar zxf keepalived-2.0.6.tar.gz // extract the tarball

[root@test1 ~]# ls

keepalived-2.0.6 keepalived-2.0.6.tar.gz

[root@test1 keepalived-2.0.6]# ./configure --prefix=/usr/local/keepalived --with-init=SYSV // fails: openssl-devel is not installed yet

[root@test1 keepalived-2.0.6]# yum install openssl-devel

[root@test1 keepalived-2.0.6]# ./configure --prefix=/usr/local/keepalived --with-init=SYSV // succeeds after installing the missing dependency

[root@test1 keepalived-2.0.6]# make // compile

[root@test1 keepalived-2.0.6]# make install

[root@test1 keepalived-2.0.6]# ln -s /usr/local/keepalived/etc/rc.d/init.d/keepalived /etc/init.d/

[root@test1 keepalived-2.0.6]# ln -s /usr/local/keepalived/etc/keepalived/ /etc/

[root@test1 keepalived-2.0.6]# ln -s /usr/local/keepalived/etc/sysconfig/keepalived /etc/sysconfig/

[root@test1 keepalived-2.0.6]# ln -s /usr/local/keepalived/sbin/keepalived /sbin/

[root@test1 keepalived-2.0.6]# cd /usr/local/

[root@test1 init.d]# chmod +x keepalived // make the keepalived init script executable

[root@test1 init.d]# /etc/init.d/keepalived start // start keepalived

On the backup VS, test4:

Create another virtual machine (test4, 172.25.1.4) and install the same keepalived build on it as on test1:

[root@test4 ~]# yum install openssh-clients -y

[root@test1 local]# scp -r keepalived/ root@172.25.1.4:/usr/local/ // copy the compiled keepalived tree from test1 to test4

[root@test4 local]# ls

bin etc games include keepalived lib lib64 libexec sbin share src

[root@test4 ~]# ln -s /usr/local/keepalived/etc/rc.d/init.d/keepalived /etc/init.d/

[root@test4 local]# ln -s /usr/local/keepalived/etc/keepalived/ /etc/

[root@test4 local]# ln -s /usr/local/keepalived/etc/sysconfig/keepalived /etc/sysconfig/

[root@test4 local]# ln -s /usr/local/keepalived/sbin/keepalived /sbin/

[root@test4 keepalived]# /etc/init.d/keepalived start

[root@test1 keepalived]# cd /etc/keepalived/

[root@test1 keepalived]# yum install mailx -y // install the mail client used for notification e-mails

[root@test1 keepalived]# ip addr del 172.25.1.100/32 dev eth0 // remove the manually added VIP; keepalived will manage it from now on

[root@test1 keepalived]# /etc/init.d/ldirectord stop

[root@test1 keepalived]# ip addr

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 52:54:00:4f:1c:32 brd ff:ff:ff:ff:ff:ff
    inet 172.25.1.1/24 brd 172.25.1.255 scope global eth0 // the VIP is no longer present
    inet6 fe80::5054:ff:fe4f:1c32/64 scope link

[root@test1 keepalived]# vim keepalived.conf // edit the configuration as follows

! Configuration File for keepalived

global_defs {

notification_email {

root@localhost

}

notification_email_from Alexandre.Cassen@firewall.loc

smtp_server 127.0.0.1

smtp_connect_timeout 30

router_id LVS_DEVEL

vrrp_skip_check_adv_addr

#vrrp_strict // commented out so keepalived does not add restrictive firewall rules

vrrp_garp_interval 0

vrrp_gna_interval 0

}

vrrp_instance VI_1 {

state MASTER // MASTER makes this node the master

interface eth0

virtual_router_id 1

priority 100 // the higher the value, the higher the priority

advert_int 1

authentication {

auth_type PASS

auth_pass 1111

}

virtual_ipaddress {

172.25.1.100 // the virtual IP address

}

}

virtual_server 172.25.1.100 80 { // the VIP; the ipvsadm rules are created automatically when the service starts

delay_loop 6 // health-check interval for the back ends, in seconds

lb_algo rr

lb_kind DR // DR mode

#persistence_timeout 50 // persistent connections commented out

protocol TCP

real_server 172.25.1.2 80 { // real server 1 (RS1); the real_server blocks sit inside the virtual_server block

weight 1

TCP_CHECK {

connect_timeout 3

retry 3

delay_before_retry 3

}

}

real_server 172.25.1.3 80 { // real server 2 (RS2)

weight 1

TCP_CHECK {

connect_timeout 3

retry 3

delay_before_retry 3

}

}

}

[root@test1 keepalived]# /etc/init.d/keepalived restart

[root@test1 keepalived]# scp keepalived.conf root@172.25.1.4:/etc/keepalived/ // copy the configuration file to test4

[root@test4 keepalived]# cd /etc/keepalived/

[root@test4 keepalived]# yum install mailx -y

[root@test4 keepalived]# vim keepalived.conf

vrrp_instance VI_1 {

state BACKUP // changed to BACKUP mode

interface eth0

virtual_router_id 1

priority 50 // must be lower than test1's priority

[root@test4 keepalived]# >/var/log/messages // clear the log

[root@test4 keepalived]# /etc/init.d/keepalived restart // restart the service

[root@test4 keepalived]# cat /var/log/messages // the log shows that test4 entered the BACKUP state
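At this point it is easy to confirm from either node which one currently holds the VIP and what LVS table it has built (simple checks with commands already used above):

[root@test1 keepalived]# ip addr show eth0 | grep 172.25.1.100 // only the master should show the VIP
[root@test4 keepalived]# ipvsadm -ln // both nodes build the same virtual-server table from keepalived.conf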

Client test:

keepalived is now running on both test1 and test4, with test1 as the master and test4 as the backup.

[root@foundation1 lvs]# curl 172.25.1.100

<h1>-server3</h1>

[root@foundation1 lvs]# curl 172.25.1.100

<h1>-server2</h1>

[root@foundation1 lvs]# curl 172.25.1.100

<h1>-server3</h1>

[root@foundation1 lvs]# curl 172.25.1.100

<h1>-server2</h1>

[root@foundation1 lvs]# arp -an | grep 100

? (172.25.1.100) at 52:54:00:4f:1c:32 [ether] on br0

From the MAC address we can see that traffic goes through test1.

If keepalived on test1 is taken down, the client can still access the service normally:

[root@test1 keepalived]# /etc/init.d/keepalived stop

[root@foundation1 lvs]# curl 172.25.1.100

<h1>-server3</h1>

[root@foundation1 lvs]# curl 172.25.1.100

<h1>-server2</h1>

[root@foundation1 lvs]# curl 172.25.1.100

<h1>-server3</h1>

[root@foundation1 lvs]# curl 172.25.1.100

<h1>-server2</h1>

From the MAC address we can see that traffic now goes through test4.

Start keepalived on test1 again and stop the httpd service on test3; the client can then only reach test2:

[root@test3 ~]# /etc/init.d/httpd stop

[root@foundation1 lvs]# curl 172.25.1.100

<h1>-server2</h1>

[root@foundation1 lvs]# curl 172.25.1.100

<h1>-server2</h1>

[root@foundation1 lvs]# curl 172.25.1.100

<h1>-server2</h1>

If both real servers are taken down, the client cannot access the service at all. Unlike the ldirectord setup, this keepalived configuration has no local fallback on test1 to take over:

[root@foundation1 lvs]# curl 172.25.1.100

curl: (7) Failed connect to 172.25.1.100:80; Connection refused

[root@foundation1 lvs]# curl 172.25.1.100

curl: (7) Failed connect to 172.25.1.100:80; Connection refused

[root@foundation1 lvs]# curl 172.25.1.100

curl: (7) Failed connect to 172.25.1.100:80; Connection refused
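If the ldirectord-style maintenance page is wanted here as well, keepalived supports a sorry_server entry inside the virtual_server block; it is only used once every real_server has failed (a sketch against the configuration above, not part of the original walkthrough):

virtual_server 172.25.1.100 80 {
    ...
    sorry_server 127.0.0.1 80 // falls back to the local httpd serving the maintenance page
    ...
}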
