A cluster question on Red Hat Advanced Server 2.1
With the cluster software bundled with Advanced Server 2.1, I can do HA for Oracle using a filesystem (ext3, etc.) created directly on a block device (/dev/sdb3). But I cannot do HA with a filesystem created on LVM. In the first case, when node1 switches to node2, the cluster stops the Oracle service and umounts the storage, then brings everything up on node2. With LVM it is not a simple umount: the volume group has to be exported on one node and imported on the other, then mounted, and only then can the Oracle service start.
So, does anyone have a solution? Naturally, still using Red Hat Advanced Server 2.1 and LVM.
Comments (9)
Thanks for sharing, brother.
Where can I find the packages mentioned above?
Is there anything else besides LVM?
Nice write-up, though I haven't actually tried it. Does the Red Hat AS 2.1 cluster support both HA and NLB (network load balancing), or only HA?
Thanks.
Software:
OS: Red Hat Advanced Server 2.1
Volume manager: lvm_1.0.7.tar.gz
Filesystem: I used reiserfs
Database: Oracle 9i
Hardware:
A disk array and two PC servers, each with two NICs: one connected to the LAN, one for the node interconnect.
1) First, make sure both hosts can see the disk array.
2) Then install LVM:
tar vxfz lvm_x.x.x.tar.gz
cd lvm_x.x.x
./configure
make
make install
3) Rebuild the kernel:
(enable Multi-device support (RAID and LVM) -> Logical volume manager (LVM) support)
(enable File systems -> <M> Reiserfs support)
Then compile the kernel and update lilo.conf.
Of course, if you don't use LVM or reiserfs, skip this step.
4) Important: many documents tell you to add vgscan and vgchange -a y to rc.local once LVM is installed. On a two-node cluster you must never add them on both nodes (best not to add them at all; handle it in the service script instead). Also edit rc.sysinit and comment out the lines that activate volume groups.
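For reference, the automatic VG activation in /etc/rc.d/rc.sysinit looks roughly like the fragment below (the exact wording may differ by errata level); comment it out on both nodes:

```shell
# /etc/rc.d/rc.sysinit -- comment out the LVM activation block,
# otherwise both nodes would activate the shared VG at boot:
#
# if [ -e /proc/lvm -a -x /sbin/vgchange -a -f /etc/lvmtab ]; then
#         action $"Setting up LVM:" /sbin/vgscan && /sbin/vgchange -a y
# fi
```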
5) Make the two hosts able to rsh to each other as root. (Yes, this is a security problem, but there's no way around it since we're using LVM.)
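A minimal way to set up passwordless root rsh, assuming the hostnames clu1/clu2 and interconnect names eclu1/eclu2 used in this article: list the peer's names in /root/.rhosts on each node (mode 600) and enable the r-services.

```shell
# /root/.rhosts on clu1 (mirror it on clu2 with clu1/eclu1):
clu2   root
eclu2  root

# then enable the rsh service (xinetd-based on AS 2.1):
# chkconfig rsh on
# service xinetd restart
```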
6) Partition the disk (at least three partitions):
fdisk /dev/sdb
sdb1 and sdb2 only need 100 MB each; allocate the rest however you like.
Edit /etc/sysconfig/rawdevices on both hosts and add:
/dev/raw/raw1 /dev/sdb1
/dev/raw/raw2 /dev/sdb2
then reboot.
7) Create the physical volumes, the volume group, the logical volumes, and the filesystems. Then manually export/import and mount/umount the volume group between the two hosts.
If that works without problems, you're halfway there.
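The manual hand-off in step 7 can be sketched as two shell functions. The VG name `search`, PV `/dev/sdb3`, and mount point `/data1` match the service script later in this article; this is an LVM1-style sketch, not a drop-in script:

```shell
#!/bin/sh
# Manual LVM1 failover cycle for a shared volume group.
# VG "search" on /dev/sdb3, mounted at /data1 (this setup's names).

release_vg() {              # run on the node that currently owns the VG
    umount /data1
    vgchange -a n search    # deactivate the VG
    vgexport search         # mark it exported so the peer can import it
}

acquire_vg() {              # run on the node taking over
    vgscan                  # rebuild /etc/lvmtab from the disks
    vgimport search /dev/sdb3   # re-register the VG from its physical volume
    vgchange -a y search    # activate it
    mount /dev/search/data1 /data1
}
```

Running release_vg on one node and acquire_vg on the other is exactly what the cluster service script has to automate.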
8) Next, configure the cluster. We name the two hosts clu1 and clu2; <> marks what you type.
The hosts file on both machines:
127.0.0.1 localhost.localdomain localhost
192.168.1.1 clu1
10.0.0.1 eclu1
192.168.1.2 clu2
10.0.0.2 eclu2
192.168.1.3 clualias
/sbin/cluconfig
Red Hat Cluster Manager Configuration Utility (running on clu1)
- Configuration file exists already.
Would you like to use those prior settings as defaults? (yes/no) [yes]: <yes>
Enter cluster name: <clutest>
Enter IP address for cluster alias: <192.168.1.3>
--------------------------------
Information for Cluster Member 0
--------------------------------
Enter name of cluster member: <clu1>
Looking for host clutest2 (may take a few seconds)...
Enter number of heartbeat channels (minimum = 1) [1]: <1>
Information about Channel 0
Channel type: net or serial [net]: <net>
Enter hostname of the cluster member on heartbeat channel 0: <eclu1>
Information about Quorum Partitions
Enter Primary Quorum Partition [/dev/raw/raw1]: (press Enter)
Enter Shadow Quorum Partition [/dev/raw/raw2]: (press Enter)
Information About the Power Switch That Power Cycles Member 'clutest2'
Choose one of the following power switches:
o NONE
o RPS10
o BAYTECH
o APCSERIAL
o APCMASTER
o WTI_NPS
o SW_WATCHDOG
Power switch [NONE]: <none>
Note: Operating a cluster without a remote power switch does not provide
maximum data integrity guarantees.
--------------------------------
Information for Cluster Member 1
--------------------------------
Enter name of cluster member: <clu2>
Looking for host clutest1 (may take a few seconds)...
Information about Channel 0
Enter hostname of the cluster member on heartbeat channel 0: <eclu2>
Information about Quorum Partitions
Enter Primary Quorum Partition [/dev/raw/raw1]: (press Enter)
Enter Shadow Quorum Partition [/dev/raw/raw2]: (press Enter)
Information About the Power Switch That Power Cycles Member 'clutest2'
Choose one of the following power switches:
o NONE
o RPS10
o BAYTECH
o APCSERIAL
o APCMASTER
o WTI_NPS
o SW_WATCHDOG
Power switch [NONE]: <none>
Heartbeat channels: 1
Channel type: net, Name: eclutest2
Power switch IP address or hostname: clutest2
Identifier on power controller for member clutest2: unused
--------------------
Member 1 Information
--------------------
Name: clutest1
Primary quorum partition: /dev/raw/raw1
Shadow quorum partition: /dev/raw/raw2
Heartbeat channels: 1
Channel type: net, Name: eclutest1
Power switch IP address or hostname: clutest1
Identifier on power controller for member clutest1: unused
--------------------------
Power Switch 0 Information
--------------------------
Power switch IP address or hostname: clutest2
Type: NONE
Login or port: unused
Password: unused
--------------------------
Power Switch 1 Information
--------------------------
Power switch IP address or hostname: clutest1
Type: NONE
Login or port: unused
Password: unused
Save the cluster member information? yes/no [yes]: <yes>
Writing to configuration file...done
Configuration information has been saved to /etc/cluster.conf.
----------------------------
Setting up Quorum Partitions
----------------------------
Running cludiskutil -I to initialize the quorum partitions: done
Saving configuration information to quorum partitions: done
Do you wish to allow remote monitoring of the cluster? yes/no [yes]: <yes>
----------------------------------------------------------------
Configuration on this member is complete.
To configure the next member, invoke the following command on that system:
# /sbin/cluconfig --init=/dev/raw/raw1
Refer to the Red Hat Cluster Manager Installation and Administration Guide
for details.
That's a long transcript, but don't worry: just watch the "<>" inputs and the "(press Enter)" prompts. Next, run the following on the second node:
# /sbin/cluconfig --init=/dev/raw/raw1
Then run a few checks:
/sbin/cludiskutil -p
----- Shared State Header ------
Magic# = 0x39119fcd
Version = 1
Updated on Thu Sep 14 05:43:18 2000
Updated by node 0
--------------------------------
Run it on both hosts; if the output is identical, you're fine.
clustonith -S
null device OK.
Checks done.
Next, install Oracle on both hosts. It's best to install and test on one host, then tar up the installation and unpack it on the other.
Once Oracle is installed, write three scripts.
(1) /home/oracle/oracleclu (owned by root):
#!/bin/bash
#
# Cluster service script to start/stop oracle
#
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin;export PATH
case "$1" in
start)
rsh clu1 umount /data1             # force the peer (hardcoded: clu1) to release the storage
rsh clu1 /sbin/vgchange -a n       # deactivate the VG on the peer
rsh clu1 /sbin/vgexport search     # export VG "search" there
/sbin/vgscan
/sbin/vgimport -f search /dev/sdb3
/sbin/vgchange -a y search         # the imported VG must be activated before mounting
mount /dev/search/data1 /data1
su - oracle -c /home/oracle/startdb
su - oracle -c "lsnrctl start"
;;
stop)
su - oracle -c /home/oracle/stopdb
su - oracle -c "lsnrctl stop"
umount /data1
vgchange -a n
vgexport search
;;
esac
(2) /home/oracle/startdb (owned by oracle):
#!/bin/bash
ORACLE_RELEASE=9.2.0
export ORACLE_SID=searchdb
export ORACLE_HOME=/home/oracle/920
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$ORACLE_HOME/lib
PATH=$HOME/bin:$ORACLE_HOME/bin:/usr/local/jdk/bin:$PATH; export PATH
sqlplus /nolog<<EOF
connect / as sysdba
startup pfile=/home/oracle/admin/searchdb/pfile/initsearchdb.ora
EOF
exit
(3) /home/oracle/stopdb (owned by oracle):
#!/bin/bash
ORACLE_RELEASE=9.2.0
export ORACLE_SID=searchdb
export ORACLE_HOME=/home/oracle/920
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$ORACLE_HOME/lib
PATH=$HOME/bin:$ORACLE_HOME/bin:/usr/local/jdk/bin:$PATH; export PATH
sqlplus /nolog<<EOF
connect / as sysdba
shutdown immediate
EOF
exit
Add the service in the cluster manager.
Run service cluster start on both hosts, then:
/sbin/cluadmin
cluadmin> service add oracle
Preferred member [None]: <clu1>
Relocate when the preferred member joins the cluster (yes/no/?)
[no]: <no>
User script (e.g., /usr/foo/script or None)
[None]: </home/oracle/oracleclu>
Do you want to add an IP address to the service (yes/no/?): yes
IP Address Information
IP address: <192.168.1.4>    # the IP clients use to reach Oracle
Netmask (e.g. 255.255.255.0 or None) [None]: <255.255.255.0>
Broadcast (e.g. X.Y.Z.255 or None) [None]: <192.168.1.255>
Do you want to (a)dd, (m)odify, (d)elete or (s)how an IP address,
or are you (f)inished adding IP addresses: f
Do you want to add a disk device to the service (yes/no/?): yes    # answer no here if you use LVM
Disk Device Information
Device special file (e.g., /dev/sda1): </dev/sda3>
Filesystem type (e.g., ext2, reiserfs, ext3 or None): <ext3>
Mount point (e.g., /usr/mnt/service1 or None) [None]: </oradata>
Mount options (e.g., rw, nosuid): (press Enter)
Forced unmount support (yes/no/?) [no]: <yes>
Do you want to (a)dd, (m)odify, (d)elete or (s)how devices,
or are you (f)inished adding device information: <f>
Usage:
clustat shows the cluster status:
cluadmin> cluster status
Cluster Status Monitor (eachnet) 15:58:26
Cluster alias: cluster
========================= M e m b e r S t a t u s ==========================
Member Status Node Id Power Switch
-------------- ---------- ---------- ------------
clutest2 Up 0 Good
clutest1 Up 1 Good
========================= H e a r t b e a t S t a t u s ====================
Name Type Status
------------------------------ ---------- ------------
eclutest2 <--> eclutest1       network    ONLINE
========================= S e r v i c e S t a t u s ========================
Last Monitor Restart
Service Status Owner Transition Interval Count
-------------- -------- -------------- ---------------- -------- -------
oracle started clutest2 15:57:48 Jul 03 30 0
cluadmin> service relocate oracle    # moves oracle from one host to the other
Could you share your successful experience? I'm also about to run a two-node test on Linux ADV 2.1; before this I used Red Hat 7.3 with Rose HA.
Solved :-)
I'm writing the export/import scripts now, but when LVM manages shared storage I often hit
ERROR: VGDA in kernel and lvmtab are NOT consistent, so I have to reboot frequently, which is impossible in production.
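One hedged suggestion for that error (not from this thread, just common LVM1 practice): instead of rebooting, try deleting LVM1's cached metadata and letting vgscan rebuild it from the disks, with the VG deactivated on that node first:

```shell
#!/bin/sh
# Rebuild LVM1's cached metadata (/etc/lvmtab) from the on-disk VGDAs
# instead of rebooting. Run only with the VG inactive on this node.

rebuild_lvmtab() {
    rm -f /etc/lvmtab           # stale cache that disagrees with the kernel
    rm -f /etc/lvmtab.d/*       # per-VG cache files
    vgscan                      # re-reads the VGDAs and rewrites the cache
}
```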
I'm still testing, because Veritas products are too expensive. Apart from not being able to use LVM, the Advanced Server cluster is actually quite good. My copy of Advanced Server was burned from a friend's.
I haven't used it, but there must be some pre-service configuration file or script that runs the umount command and the network setup; just put the vgexport commands in there.
Also, are you using this yourself? How much did you pay? Is it for production or just to play with? Care to share?