Problems with LVS-NAT

Published 2022-07-17 00:56:21 · 10,996 characters · 9 views · 6 comments

LVS-NAT problem description
Installation environment:
RHEL AS 4.3   ipvsadm-1.24-6    piranha-0.8.1-1   iptables disabled
LVS topology:
        eth0=192.168.1.254
        eth0:1=192.168.1.250(vip)
                Load Balance
                   Router
             eth1=192.168.2.254
        eth1:1=192.168.2.250(vip)
                  |         |
                  |         |
              Real1    Real2
eth1=192.168.2.253  eth1=192.168.2.252
  (eth1 gateway=192.168.2.250)
                       |
                       |
                     GFS
              Shared storage
Director configuration:
hosts file:
[root@lb ~]# cat /etc/hosts
# Do not remove the following line, or various programs
# that require network functionality will fail.
127.0.0.1       localhost.localdomain   localhost
192.168.2.253   node_1
192.168.2.252   node_2
192.168.2.254   net_gw
192.168.1.254   lb
Some people who have gotten this working say the VIP address should not be added to hosts. Please advise!
sysctl.conf:
[root@lb ~]# sysctl -p
net.ipv4.ip_forward = 1
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.default.accept_source_route = 0
kernel.sysrq = 0
kernel.core_uses_pid = 1
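For reference, the forwarding setting in sysctl.conf can also be checked and applied on the running kernel without a reboot (a minimal sketch; keys and values match the configuration above):

```shell
# Verify that IP forwarding is actually enabled on the running kernel
# (sysctl.conf only takes effect after `sysctl -p` or a reboot).
cat /proc/sys/net/ipv4/ip_forward        # should print 1

# Enable it immediately if it prints 0:
sysctl -w net.ipv4.ip_forward=1

# Note: the conf.default.rp_filter key only applies to interfaces created
# after it is set; the conf.all.* key covers interfaces already up.
sysctl -w net.ipv4.conf.all.rp_filter=1
```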
lvs.cf:
[root@lb ~]# cat /etc/sysconfig/ha/lvs.cf
serial_no = 20
primary = 192.168.1.254
service = lvs
backup = 0.0.0.0
heartbeat = 1
heartbeat_port = 539
keepalive = 6
deadtime = 18
network = nat
nat_router = 192.168.2.250 eth1:1
nat_nmask = 255.255.255.0
debug_level = NONE
virtual Apache {
     active = 1
     address = 192.168.1.250 eth0:1
     vip_nmask = 255.255.255.0
     port = 80
     send = "GET / HTTP/1.0\r\n\r\n"
     expect = "HTTP"
     use_regex = 0
     load_monitor = none
     scheduler = wlc
     protocol = tcp
     timeout = 6
     reentry = 15
     quiesce_server = 1
     server node_1 {
         address = 192.168.2.253
         active = 1
         weight = 1
     }
     server node_2 {
         address = 192.168.2.252
         active = 1
         weight = 1
     }
}
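When pulse brings the virtual service up, the lvs.cf above should translate into an IPVS table roughly equivalent to the following manual ipvsadm commands (a sketch for comparison against the live table, not a replacement for pulse):

```shell
# Virtual service: VIP 192.168.1.250:80, weighted least-connection scheduling
ipvsadm -A -t 192.168.1.250:80 -s wlc

# Real servers behind NAT (-m = masquerading/NAT forwarding), weight 1 each
ipvsadm -a -t 192.168.1.250:80 -r 192.168.2.253:80 -m -w 1
ipvsadm -a -t 192.168.1.250:80 -r 192.168.2.252:80 -m -w 1

# Inspect the resulting table and live connection counts
ipvsadm -L -n
```

If `ipvsadm -L -n` shows the virtual service but both real servers have weight 0, that is nanny quiescing them after failed health checks rather than a routing problem.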

The pulse service is running.
Real server configuration:
node_1:
[root@node_1 ~]# cat /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=localhost.localdomain
GATEWAY=192.168.2.250

[root@node_1 ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE=eth1
ONBOOT=yes
BOOTPROTO=static
NETMASK=255.255.255.0
IPADDR=192.168.2.253
BROADCAST=192.168.2.255

node_2:
[root@node_2 ~]# cat /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=localhost.localdomain
GATEWAY=192.168.2.250

[root@node_2 ~]# cat /etc/sysconfig/network-scripts/ifcfg-eth1
DEVICE=eth1
ONBOOT=yes
BOOTPROTO=static
NETMASK=255.255.255.0
IPADDR=192.168.2.252
BROADCAST=192.168.2.255
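A sanity check worth running on each real server (a sketch; assumes Apache httpd is the web server): confirm the default route really points at the director's inside VIP and that something is listening on port 80, since either being wrong produces exactly the kind of health-check READ timeouts shown in the director log.

```shell
# Default route must go through the director's inside VIP (192.168.2.250),
# otherwise replies bypass the NAT and the client never sees them.
route -n | grep '^0.0.0.0'        # gateway column should show 192.168.2.250

# Confirm the web server is listening on port 80
netstat -tln | grep ':80 '

# Test the service locally, bypassing LVS entirely
curl -s -o /dev/null -w '%{http_code}\n' http://127.0.0.1/
```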

Director network status:
[root@lb ~]# ifconfig
eth0      Link encap:Ethernet  HWaddr 00:0C:29:F2:E9:96  
          inet addr:192.168.1.254  Bcast:192.168.1.255  Mask:255.255.255.0
          inet6 addr: fe80::20c:29ff:fef2:e996/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:926 errors:0 dropped:0 overruns:0 frame:0
          TX packets:872 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:99667 (97.3 KiB)  TX bytes:85902 (83.8 KiB)
          Interrupt:10 Base address:0x1480

eth0:1    Link encap:Ethernet  HWaddr 00:0C:29:F2:E9:96  
          inet addr:192.168.1.250  Bcast:192.168.1.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          Interrupt:10 Base address:0x1480

eth1      Link encap:Ethernet  HWaddr 00:0C:29:F2:E9:A0  
          inet addr:192.168.2.254  Bcast:192.168.2.255  Mask:255.255.255.0
          inet6 addr: fe80::20c:29ff:fef2:e9a0/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:4111 errors:0 dropped:0 overruns:0 frame:0
          TX packets:5877 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:324440 (316.8 KiB)  TX bytes:446294 (435.8 KiB)
          Interrupt:5 Base address:0x1800

eth1:1    Link encap:Ethernet  HWaddr 00:0C:29:F2:E9:A0  
          inet addr:192.168.2.250  Bcast:192.168.2.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          Interrupt:5 Base address:0x1800

lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:3173 errors:0 dropped:0 overruns:0 frame:0
          TX packets:3173 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:4128942 (3.9 MiB)  TX bytes:4128942 (3.9 MiB)

[root@lb ~]# arp -a
node_2 (192.168.2.252) at 00:0C:29:62:283 [ether] on eth1
? (192.168.1.1) at 00:14:78:A0:5A:5C [ether] on eth0
? (192.168.1.137) at 00:90:F5:1F:FB:E7 [ether] on eth0
node_1 (192.168.2.253) at 00:0C:29:37:832 [ether] on eth1

Director log trace:
[root@lb ~]# tail -f /var/log/messages
Oct  4 04:59:47 localhost pulse[4745]: Terminating due to signal 15
Oct  4 04:59:47 localhost lvs[4748]: shutting down due to signal 15       
Oct  4 04:59:47 localhost lvs[4748]: shutting down virtual service Apache
Oct  4 04:59:47 localhost nanny[4757]: Terminating due to signal 15
Oct  4 04:59:47 localhost nanny[4758]: Terminating due to signal 15
Oct  4 04:59:47 localhost pulse: pulse shutdown succeeded
Oct  4 04:59:47 localhost pulse[4835]: STARTING PULSE AS MASTER
Oct  4 04:59:47 localhost pulse: pulse startup succeeded
Oct  4 05:00:05 localhost pulse[4835]: partner dead: activating lvs
Oct  4 05:00:05 localhost lvs[4839]: starting virtual service Apache active: 80
Oct  4 05:00:05 localhost nanny[4842]: starting LVS client monitor for 192.168.1.250:80
Oct  4 05:00:05 localhost lvs[4839]: create_monitor for Apache/node_1 running as pid 4842
Oct  4 05:00:05 localhost nanny[4843]: starting LVS client monitor for 192.168.1.250:80
Oct  4 05:00:05 localhost lvs[4839]: create_monitor for Apache/node_2 running as pid 4843
Oct  4 05:00:11 localhost pulse[4845]: gratuitous lvs arps finished
Oct  4 05:00:11 localhost nanny[4843]: READ to 192.168.2.252:80 timed out
Oct  4 05:00:11 localhost nanny[4842]: READ to 192.168.2.253:80 timed out
Oct  4 05:00:23 localhost nanny[4843]: READ to 192.168.2.252:80 timed out
Oct  4 05:00:23 localhost nanny[4842]: READ to 192.168.2.253:80 timed out
Oct  4 05:00:35 localhost nanny[4843]: READ to 192.168.2.252:80 timed out
Oct  4 05:00:35 localhost nanny[4842]: READ to 192.168.2.253:80 timed out
Oct  4 05:00:47 localhost nanny[4843]: READ to 192.168.2.252:80 timed out
Oct  4 05:00:47 localhost nanny[4842]: READ to 192.168.2.253:80 timed out

Questions:
1.        Is iptables required?
2.        The description above covers my current environment and all of the configuration; if something isn't mentioned, it's probably because I didn't think of it. Please advise!
3.        Thanks everyone for the help; I sincerely appreciate your attention!
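One way to narrow down the nanny READ timeouts in the log above is to reproduce the health check by hand from the director (a diagnostic sketch; if this also hangs, the fault is between the director and the real servers, such as httpd not running or a firewall on the node, rather than in LVS/piranha itself):

```shell
# On the director: talk to a real server directly, the same way nanny does.
printf 'GET / HTTP/1.0\r\n\r\n' | nc 192.168.2.253 80

# Watch the health-check traffic on the inside interface while nanny runs,
# to see whether the SYN goes out and whether the node ever answers:
tcpdump -ni eth1 host 192.168.2.253 and port 80
```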

[ Last edited by dighdypea on 2006-10-4 17:10 ]


Comments (6)

渡你暖光 2022-07-27 18:14:58

I've come down with a bad cold, the worst in years. I'll get back to this in a couple of days!

公布 2022-07-27 17:18:35

What is pulse?
Personally I recommend keepalived.

随心而道 2022-07-27 17:07:31

With pulse disabled the problem goes away, but running without it in a real production environment isn't ideal, and pulse does provide more useful features. I'll keep looking for a fix; hopefully I'll have it sorted by the end of the National Day holiday. Heh...

深居我梦 2022-07-27 11:58:55

Yes. Thanks!

微凉 2022-07-23 04:32:29

The problem you're running into isn't an LVS problem; it's a piranha problem.

梦亿 2022-07-22 11:32:36

Kernel

The LVS nodes require Red Hat 6.1 (or later). Red Hat 6.2 or later is
recommended.

Masquerading

Masquerading must be enabled on the LVS nodes. You can do this in two ways:

* In the /etc/sysctl.conf file, set ip_forward and ip_always_defrag to 1.

* Issue these commands:

        echo 1 >/proc/sys/net/ipv4/ip_forward
        echo 1 >/proc/sys/net/ipv4/ip_always_defrag
        ipchains -A forward -j MASQ -s n.n.n.n/24 -d 0.0.0.0/0
  
  n.n.n.n is the address of the private subnet of the Web/FTP hosts.

The document above says to add ip_always_defrag, but when I add it, sysctl reports that net.ipv4.ip_always_defrag is an unknown key. Current distributions no longer ship ipchains; is there other software or a module that replaces it? Has everyone gone on holiday?
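That quoted document targets the 2.2 kernel; on the 2.4/2.6 kernels used here, iptables replaces ipchains, and ip_always_defrag was dropped because the netfilter connection tracker defragments packets automatically. A hedged sketch of the modern equivalents, using the private subnet from this setup:

```shell
# iptables replacement for the old `ipchains -A forward -j MASQ` rule:
# masquerade traffic originating from the real-server subnet.
iptables -t nat -A POSTROUTING -s 192.168.2.0/24 -j MASQUERADE

# ip_always_defrag no longer exists as a sysctl; loading the 2.4+
# connection-tracking module provides the defragmentation behaviour.
modprobe ip_conntrack
```

Note that IPVS itself handles the NAT for the balanced service; the MASQUERADE rule is only needed so the real servers' own outbound traffic (updates, DNS, etc.) can reach the outside network through the director.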

[ Last edited by dighdypea on 2006-10-4 20:27 ]
