
OCM_Session7_10_Installing Clusterware

2020-11-09 Source: 个人技术集锦


X. Installing Clusterware
Begin the installation. I used Xmanager 4; the IP of my local machine is 192.168.1.103.
Note that the command is `xhost +` (with a space); typing `xhost+` produces "command not found", as in the transcript below.

[root@rac1 ~]# xhost+
-bash: xhost+: command not found
[root@rac1 ~]# export DISPLAY=192.168.1.103:0.0
[root@rac1 ~]# su - oracle
[oracle@rac1 ~]$ cd /stage/clustware/Disk1/clusterware/
[oracle@rac1 clusterware]$ ll
total 36
drwxr-xr-x 2 oracle oinstall 4096 Jul  3  2005 cluvfy
drwxr-xr-x 6 oracle oinstall 4096 Jul  3  2005 doc
drwxr-xr-x 4 oracle oinstall 4096 Jul  3  2005 install
drwxr-xr-x 2 oracle oinstall 4096 Jul  3  2005 response
drwxr-xr-x 2 oracle oinstall 4096 Jul  3  2005 rpm
-rwxr-xr-x 1 oracle oinstall 1328 Jul  3  2005 runInstaller
drwxr-xr-x 9 oracle oinstall 4096 Jul  3  2005 stage
drwxr-xr-x 2 oracle oinstall 4096 Jul  3  2005 upgrade
-rw-r--r-- 1 oracle oinstall 3445 Jul  3  2005 welcome.html
[oracle@rac1 clusterware]$ ./runInstaller
Starting Oracle Universal Installer...
Checking installer requirements...
Checking operating system version: must be redhat-3, SuSE-9, redhat-4, UnitedLinux-1.0, asianux-1 or asianux-2 Passed

All installer requirements met.
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2014-03-22_03-39-48PM. Please wait ...
[oracle@rac1 clusterware]$ Oracle Universal Installer, Version 10.2.0.1.0 Production
Copyright (C) 1999, 2005, Oracle. All rights reserved.
------------------------------------------------------------------------------------------------------------------------------------------------------------------------
If the following error appears, it is because the libXp-1.0.0-8.1.el5.i386.rpm package is missing. I had already installed it earlier, while checking for missing packages.
------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Exception java.lang.UnsatisfiedLinkError: /tmp/OraInstall2014-03-22_08-31-31AM/jre/1.4.2/lib/i386/libawt.so: libXp.so.6: cannot open shared object file: No such file or directory occurred..
java.lang.UnsatisfiedLinkError: /tmp/OraInstall2014-03-22_08-31-31AM/jre/1.4.2/lib/i386/libawt.so: libXp.so.6: cannot open shared object file: No such file or directory
        at java.lang.ClassLoader$NativeLibrary.load(Native Method)
        at java.lang.ClassLoader.loadLibrary0(Unknown Source)
        at java.lang.ClassLoader.loadLibrary(Unknown Source)
        at java.lang.Runtime.loadLibrary0(Unknown Source)
        at java.lang.System.loadLibrary(Unknown Source)
        at sun.security.action.LoadLibraryAction.run(Unknown Source)
        at java.security.AccessController.doPrivileged(Native Method)
        at sun.awt.NativeLibLoader.loadLibraries(Unknown Source)
        at sun.awt.DebugHelper.<clinit>(Unknown Source)
        at java.awt.Component.<clinit>(Unknown Source)
        at oracle.sysman.oii.oiif.oiifm.OiifmGraphicInterfaceManager.<init>(OiifmGraphicInterfaceManager.java:222)
        at oracle.sysman.oii.oiic.OiicSessionInterfaceManager.createInterfaceManager(OiicSessionInterfaceManager.java:193)
        at oracle.sysman.oii.oiic.OiicSessionInterfaceManager.getInterfaceManager(OiicSessionInterfaceManager.java:202)
        at oracle.sysman.oii.oiic.OiicInstaller.getInterfaceManager(OiicInstaller.java:436)
        at oracle.sysman.oii.oiic.OiicInstaller.runInstaller(OiicInstaller.java:926)
        at oracle.sysman.oii.oiic.OiicInstaller.main(OiicInstaller.java:866)
Exception in thread "main" java.lang.NoClassDefFoundError
        at oracle.sysman.oii.oiif.oiifm.OiifmGraphicInterfaceManager.<init>(OiifmGraphicInterfaceManager.java:222)
        at oracle.sysman.oii.oiic.OiicSessionInterfaceManager.createInterfaceManager(OiicSessionInterfaceManager.java:193)
        at oracle.sysman.oii.oiic.OiicSessionInterfaceManager.getInterfaceManager(OiicSessionInterfaceManager.java:202)
        at oracle.sysman.oii.oiif.oiifm.OiifmAlert.<clinit>(OiifmAlert.java:151)
        at oracle.sysman.oii.oiic.OiicInstaller.runInstaller(OiicInstaller.java:984)
        at oracle.sysman.oii.oiic.OiicInstaller.main(OiicInstaller.java:866)
------------------------------------------------------------------------------------------------------------------------------------------------------------------------
Installing the missing package:

[oracle@rac1 clusterware]$ su -
Password:
[root@rac1 ~]# mount /dev/cdrom /mnt/
mount: you must specify the filesystem type
[root@rac1 ~]# mount /dev/cdrom /mnt/
mount: block device /dev/cdrom is write-protected, mounting read-only
[root@rac1 ~]# cd /mnt/Server/
[root@rac1 Server]# rpm -ivh libXp-1.0.0-8.1.el5.i386.rpm
warning: libXp-1.0.0-8.1.el5.i386.rpm: Header V3 DSA signature: NOKEY, key ID 1e5e0159
Preparing...                ########################################### [100%]
   1:libXp                  ########################################### [100%]
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
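Rather than discovering missing RPMs one crash at a time, you can check a whole list up front. A minimal sketch — the helper name `check_missing` and its parameterized query command are my own (passing the query in keeps the helper testable without `rpm`); the package names in the usage comment are the ones this guide installs:

```shell
#!/bin/sh
# Print, space-separated, every item from $2.. that the query command $1 rejects.
check_missing() {
    query="$1"; shift
    missing=""
    for p in "$@"; do
        $query "$p" >/dev/null 2>&1 || missing="$missing $p"
    done
    echo "$missing" | sed 's/^ //'
}

# On a real node (as root, with the install media mounted on /mnt) you might run:
#   check_missing "rpm -q" libXp gcc gcc-c++ glibc-devel libstdc++-devel
# and then rpm -ivh each reported package from /mnt/Server/.
```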

1. After a few seconds the Clusterware installation window appears. The Oracle Clusterware 10.2.0.1.0 installation starts with a welcome screen; click "Next".

2. Install the oraInventory directory to /u01/app/oracle/oraInventory and click "Next". Specify the operating system group: the oinstall group (the default).
3. Define the Clusterware installation path, then click "Next".
Name: OraCrs10g_home
Path: /u01/app/oracle/product/10.2.0/crs_1
4. Prerequisite check: "0 requirements to be verified". This is the pre-installation environment check, as shown in the screenshot.

5. Specify the cluster nodes rac1 and rac2 on the "Specify Cluster Configuration" screen.
By default only one node is shown here; the second node must be added manually.
Click "Add…" -> Public Node Name: rac2.localdomain -> Private Node Name: rac2-priv.localdomain -> Virtual Host Name: rac2-vip.localdomain

6. Specify the purpose of each network interface; the installer scans in every NIC it finds.
Click "Edit" to modify: set eth0 as the public interface and eth1 as the private interface.
7. Specify the raw device path for the OCR (Oracle Cluster Registry), then click "Next".
The OCR records every resource available to Oracle RAC — instances, databases, listeners, nodes, ASM disk groups, services, and so on. A resource can only be managed by RAC after it has been registered in the OCR; for example, adding a new node to the RAC architecture requires registering that node in the OCR. Because the OCR file is so important, it needs a redundancy scheme.
Choose "External Redundancy" (the disk management system provides OCR redundancy).
Specify OCR Location: /dev/raw/raw1
8. Specify the raw device path for the voting disk, then click "Next".
The voting disk is the arbitration mechanism used to prevent split-brain: when a node has a communication failure or can no longer play its RAC role, the voting disk decides whether to evict it. An odd number of voting disks is normally used.
Choose "External Redundancy".
Specify Voting Disk Location: /dev/raw/raw2
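Before pointing the installer at /dev/raw/raw1 and /dev/raw/raw2, it is worth confirming that the raw bindings actually exist. A small sketch — the helper `raw_bound` is my own, and it assumes the usual `raw -qa` output format, e.g. `/dev/raw/raw1:  bound to major 8, minor 17`:

```shell
#!/bin/sh
# Read `raw -qa` output on stdin; succeed only if the given raw device
# appears with a "bound to major ..." line.
raw_bound() {
    grep -q "^$1:[[:space:]]*bound to major" -
}

# On a real node:
#   raw -qa | raw_bound /dev/raw/raw1 || echo "OCR raw device is not bound"
#   raw -qa | raw_bound /dev/raw/raw2 || echo "voting disk raw device is not bound"
```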
9. With the settings above complete, the actual installation begins; click "Install". After the primary node finishes installing, the installer automatically pushes all the Clusterware files to the corresponding directory on rac2.
----------------------------------------------------------------------------------------------------------------------------------------
If the following error appears, it is caused by missing packages; install them and click "Retry".
[root@rac1 Server]# rpm -ivh kernel-headers-2.6.18-274.el5.i386.rpm
warning: kernel-headers-2.6.18-274.el5.i386.rpm: Header V3 DSA signature: NOKEY, key ID 1e5e0159
Preparing...                ########################################### [100%]
   1:kernel-headers         ########################################### [100%]
[root@rac1 Server]# rpm -ivh glibc-headers-2.5-65.i386.rpm
warning: glibc-headers-2.5-65.i386.rpm: Header V3 DSA signature: NOKEY, key ID 1e5e0159
Preparing...                ########################################### [100%]
   1:glibc-headers          ########################################### [100%]
[root@rac1 Server]# rpm -ivh glibc-devel-2.5-65.i386.rpm
warning: glibc-devel-2.5-65.i386.rpm: Header V3 DSA signature: NOKEY, key ID 1e5e0159
Preparing...                ########################################### [100%]
   1:glibc-devel            ########################################### [100%]
[root@rac1 Server]# rpm -ivh gcc-4.1.2-51.el5.i386.rpm
warning: gcc-4.1.2-51.el5.i386.rpm: Header V3 DSA signature: NOKEY, key ID 1e5e0159
Preparing...                ########################################### [100%]
   1:gcc                    ########################################### [100%]
[root@rac1 Server]# rpm -ivh libstdc++-devel-4.1.2-51.el5.i386.rpm
warning: libstdc++-devel-4.1.2-51.el5.i386.rpm: Header V3 DSA signature: NOKEY, key ID 1e5e0159
Preparing...                ########################################### [100%]
   1:libstdc++-devel        ########################################### [100%]
[root@rac1 Server]# rpm -ivh gcc-c++-4.1.2-51.el5.i386.rpm
warning: gcc-c++-4.1.2-51.el5.i386.rpm: Header V3 DSA signature: NOKEY, key ID 1e5e0159
Preparing...                ########################################### [100%]
   1:gcc-c++                ########################################### [100%]
[root@rac1 Server]#
---------------------------------------------------------------------------------------------------------------------------

10. After the installation completes, run two scripts as root, on each node in turn.
Run the first script on all nodes: /u01/app/oracle/oraInventory/orainstRoot.sh
Run the second script on all nodes: /u01/app/oracle/product/10.2.0/crs_1/root.sh

Run them in order (su - root). First, on node rac1:
[root@rac1 ~]# /u01/app/oracle/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oracle/oraInventory to 770.
Changing groupname of /u01/app/oracle/oraInventory to oinstall.
The execution of the script is complete
[root@rac1 ~]# /u01/app/oracle/product/10.2.0/crs_1/root.sh
WARNING: directory '/u01/app/oracle/product/10.2.0' is not owned by root
WARNING: directory '/u01/app/oracle/product' is not owned by root
WARNING: directory '/u01/app/oracle' is not owned by root
Checking to see if Oracle CRS stack is already configured
/etc/oracle does not exist. Creating it now.
Setting the permissions on OCR backup directory
Setting up NS directories
Oracle Cluster Registry configuration upgraded successfully
WARNING: directory '/u01/app/oracle/product/10.2.0' is not owned by root
WARNING: directory '/u01/app/oracle/product' is not owned by root
WARNING: directory '/u01/app/oracle' is not owned by root
assigning default hostname rac1 for node 1.
assigning default hostname rac2 for node 2.
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 1: rac1 rac1-priv rac1
node 2: rac2 rac2-priv rac2
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Now formatting voting device: /dev/raw/raw2
Format of 1 voting devices complete.
Startup will be queued to init within 90 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
CSS is active on these nodes.
        rac1
CSS is inactive on these nodes.
        rac2
Local node checking complete.
Run root.sh on remaining nodes to start CRS daemons.
[root@rac1 ~]#
----------------------------------------------------------------------------------------------------------------------
Then, on node rac2:
[root@rac2 ~]# /u01/app/oracle/oraInventory/orainstRoot.sh
Changing permissions of /u01/app/oracle/oraInventory to 770.
Changing groupname of /u01/app/oracle/oraInventory to oinstall.
The execution of the script is complete
------------------------------------------------------------------------------------------------------
Before running /u01/app/oracle/product/10.2.0/crs_1/root.sh on rac2, two files must be modified; otherwise the following error occurs:
/u01/app/oracle/product/10.2.0/crs_1/jdk/jre//bin/java: error while loading shared libraries: libpthread.so.0: cannot open shared object file: No such file or directory
----------------------------------------------------------------------------------------------------------------


11. Before running root.sh on rac2, edit the following two files as root. This edit is needed on both nodes.
First file: [root@rac2 ~]# vi /u01/app/oracle/product/10.2.0/crs_1/bin/vipca — search for /LD_ASSUME_KERNEL

#Remove this workaround when the bug 3937317 is fixed
arch=`uname -m`
if [ "$arch" = "i686" -o "$arch" = "ia64" ]
then
     LD_ASSUME_KERNEL=2.4.19
     export LD_ASSUME_KERNEL
fi
unset LD_ASSUME_KERNEL    # <-- add this line to clear the environment variable
#End workaround
-------------------------------------------------------------------------------------------------------
Second file: [root@rac2 ~]# vi /u01/app/oracle/product/10.2.0/crs_1/bin/srvctl

#Remove this workaround when the bug 3937317 is fixed
LD_ASSUME_KERNEL=2.4.19
export LD_ASSUME_KERNEL
unset LD_ASSUME_KERNEL    # <-- add this line to clear the environment variable
# Run ops control utility
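The two manual edits above can also be scripted. A hedged sketch using GNU sed (the function name `patch_ld_workaround` is mine; note it inserts the `unset` immediately after the `export` line, which in vipca lands inside the `if` block rather than after `fi` — the effect is the same, since outside the `if` the variable was never set):

```shell
#!/bin/sh
# Append `unset LD_ASSUME_KERNEL` directly after each `export LD_ASSUME_KERNEL`
# line, preserving indentation. GNU sed only (relies on \n in the replacement).
patch_ld_workaround() {
    sed -i 's/^\([[:space:]]*\)export LD_ASSUME_KERNEL$/&\n\1unset LD_ASSUME_KERNEL/' "$1"
}

# As root, on both nodes:
#   patch_ld_workaround /u01/app/oracle/product/10.2.0/crs_1/bin/vipca
#   patch_ld_workaround /u01/app/oracle/product/10.2.0/crs_1/bin/srvctl
```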
--------------------------------------------------------------------------------------
12. Then run the script as root on node rac2:
[root@rac2 ~]# /u01/app/oracle/product/10.2.0/crs_1/root.sh
WARNING: directory '/u01/app/oracle/product/10.2.0' is not owned by root
WARNING: directory '/u01/app/oracle/product' is not owned by root
WARNING: directory '/u01/app/oracle' is not owned by root
Checking to see if Oracle CRS stack is already configured
/etc/oracle does not exist. Creating it now.
Setting the permissions on OCR backup directory
Setting up NS directories
Oracle Cluster Registry configuration upgraded successfully
WARNING: directory '/u01/app/oracle/product/10.2.0' is not owned by root
WARNING: directory '/u01/app/oracle/product' is not owned by root
WARNING: directory '/u01/app/oracle' is not owned by root
clscfg: EXISTING configuration version 3 detected.
clscfg: version 3 is 10G Release 2.
assigning default hostname rac1 for node 1.
assigning default hostname rac2 for node 2.
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 1: rac1 rac1-priv rac1
node 2: rac2 rac2-priv rac2
clscfg: Arguments check out successfully.

NO KEYS WERE WRITTEN. Supply -force parameter to override.
-force is destructive and will destroy any previous cluster configuration.
Oracle Cluster Registry for cluster has already been initialized
Startup will be queued to init within 90 seconds.
Adding daemons to inittab
Expecting the CRS daemons to be up within 600 seconds.
CSS is active on these nodes.
        rac1
        rac2
CSS is active on all nodes.
Waiting for the Oracle CRSD and EVMD to start
Waiting for the Oracle CRSD and EVMD to start
Waiting for the Oracle CRSD and EVMD to start
Waiting for the Oracle CRSD and EVMD to start
Oracle CRS stack installed and running under init(1M)
Running vipca(silent) for configuring nodeapps
Error 0(Native: listNetInterfaces:[3])
[Error 0(Native: listNetInterfaces:[3])]
[root@rac2 ~]#
--------------------------------------------------------------------
This step fails here. Some documents fix it as described in point 1 below, which I have not verified. What I actually did (point 2) was modify /u01/app/oracle/product/10.2.0/crs_1/bin/vipca and /u01/app/oracle/product/10.2.0/crs_1/bin/srvctl on node rac1 as well, then run /u01/app/oracle/product/10.2.0/crs_1/root.sh on rac2 again, as follows:
1. The approach below is suggested by some documents; I have not verified it myself.
Waiting for the Oracle CRSD and EVMD to start
Oracle CRS stack installed and running under init(1M)
Running vipca(silent) for configuring nodeapps       -- vipca runs to configure the node applications
Error 0(Native: listNetInterfaces:[3])               -- local network interface error
[Error 0(Native: listNetInterfaces:[3])]             -- local network interface error
cd /u01/app/oracle/product/10.2.0/crs_1/bin
./oifcfg                       # Oracle's interface configuration tool; use it to check whether the NICs are configured correctly
oifcfg iflist                  # list the network interfaces
oifcfg setif -global eth0/192.168.1.0:public               # designate the global public interface
oifcfg setif -global eth1/172.168.1.0:cluster_interconnect # designate the global private interface
oifcfg getif                   # show the result; once rac2 is configured, check on rac1 with oifcfg getif as well
-----------------------------------------------------------------------------------------

2. What I did: after making the edits on the first node (rac1), I ran the /u01/app/oracle/product/10.2.0/crs_1/root.sh script on rac2 again:
-------------------------------------------------------------------------------------------
Node rac1
First file: [root@rac1 ~]# vi /u01/app/oracle/product/10.2.0/crs_1/bin/vipca — search for /LD_ASSUME_KERNEL

#Remove this workaround when the bug 3937317 is fixed
arch=`uname -m`
if [ "$arch" = "i686" -o "$arch" = "ia64" ]
then
     LD_ASSUME_KERNEL=2.4.19
     export LD_ASSUME_KERNEL
fi
unset LD_ASSUME_KERNEL    # <-- add this line to clear the environment variable
#End workaround

Second file: [root@rac1 ~]# vi /u01/app/oracle/product/10.2.0/crs_1/bin/srvctl

#Remove this workaround when the bug 3937317 is fixed
LD_ASSUME_KERNEL=2.4.19
export LD_ASSUME_KERNEL
unset LD_ASSUME_KERNEL    # <-- add this line to clear the environment variable
# Run ops control utility
--------------------------------------------------------------------------------
3. Run the script on rac2 again; this time there is no error:
[root@rac2 ~]# /u01/app/oracle/product/10.2.0/crs_1/root.sh
WARNING: directory '/u01/app/oracle/product/10.2.0' is not owned by root
WARNING: directory '/u01/app/oracle/product' is not owned by root
WARNING: directory '/u01/app/oracle' is not owned by root
Checking to see if Oracle CRS stack is already configured
Oracle CRS stack is already configured and will be running under init(1M)
[root@rac2 ~]#

----------------------------------------------------------------------------------------------------------------------------------------------------------------------------
13. Once both nodes have finished running the scripts, click "OK"; the following configuration information appears.
14. The "Oracle Cluster Verification Utility" step fails with an error. At this point the virtual IPs must be configured, on either node.
15. Configure the virtual IPs as root. This step can be performed on either rac1 or rac2.
Run /u01/app/oracle/product/10.2.0/crs_1/bin/vipca; a graphical interface opens automatically. vipca creates and configures the VIP, GSD, and ONS resources.


16. The welcome screen opens; click "Next".
17. The tool automatically finds the public interface eth0; click "Next". (The virtual IPs are bound to the public interface, eth0.)
18. Fill in the VIP alias name, IP address, and subnet mask for each node, then click "Next".

Node name   IP Alias Name          IP address      Subnet Mask
rac1        rac1-vip.localdomain   192.168.1.152   255.255.255.0
rac2        rac2-vip.localdomain   192.168.1.154   255.255.255.0
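These VIP names must already resolve identically on both nodes before vipca runs. For reference, the /etc/hosts lines implied by the table above would look like the fragment below (a sketch only — the public and private host entries for rac1/rac2 belong to the earlier host setup and are not repeated here):

```
192.168.1.152   rac1-vip.localdomain   rac1-vip
192.168.1.154   rac2-vip.localdomain   rac2-vip
```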

19. Review the summary and click "Finish" to start. In particular, double-check that the auto-filled IP addresses are correct — they must not be wrong.
20. When it finishes, click "OK" to view the results, then "Exit" to leave vipca.
21. Check the VIP services: there are three processes per node — gsd, ons, and vip.
crs_stat -t must show everything ONLINE, identically on all nodes, before the cluster counts as healthy.
Target is the desired final state; State is the current state.
[root@rac1 bin]# crs_stat -t
Name           Type           Target    State     Host
------------------------------------------------------------
ora.rac1.gsd   application    ONLINE    ONLINE    rac1
ora.rac1.ons   application    ONLINE    ONLINE    rac1
ora.rac1.vip   application    ONLINE    ONLINE    rac1
ora.rac2.gsd   application    ONLINE    ONLINE    rac2
ora.rac2.ons   application    ONLINE    ONLINE    rac2
ora.rac2.vip   application    ONLINE    ONLINE    rac2

22. With vipca completed successfully, rac2's root.sh is effectively finished as well. Next, return to node rac1 to run the remaining steps: click "OK", then "Retry".


23. The three configuration assistants now start and pass their checks, all with status Succeeded:
Oracle Notification Server Configuration Assistant — notification service
Oracle Private Interconnect Configuration Assistant — private interconnect service
Oracle Cluster Verification Utility — cluster verification tool
24. When configuration completes you will see the message "The installation of Oracle Clusterware was successful"; click "Exit", then "Yes", to leave the Clusterware installer.

25. Verify the installation. With this, Clusterware is installed successfully.
Run crs_stat -t on both rac1 and rac2 to check the cluster software state; every resource must be ONLINE.

[root@rac1 bin]# crs_stat -t
Name           Type           Target    State     Host
------------------------------------------------------------
ora.rac1.gsd   application    ONLINE    ONLINE    rac1
ora.rac1.ons   application    ONLINE    ONLINE    rac1
ora.rac1.vip   application    ONLINE    ONLINE    rac1
ora.rac2.gsd   application    ONLINE    ONLINE    rac2
ora.rac2.ons   application    ONLINE    ONLINE    rac2
ora.rac2.vip   application    ONLINE    ONLINE    rac2
----------------------------------------------------------------------
[root@rac2 bin]# crs_stat -t
Name           Type           Target    State     Host
------------------------------------------------------------
ora.rac1.gsd   application    ONLINE    ONLINE    rac1
ora.rac1.ons   application    ONLINE    ONLINE    rac1
ora.rac1.vip   application    ONLINE    ONLINE    rac1
ora.rac2.gsd   application    ONLINE    ONLINE    rac2
ora.rac2.ons   application    ONLINE    ONLINE    rac2
ora.rac2.vip   application    ONLINE    ONLINE    rac2
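Eyeballing the output works for six resources, but a script is less error-prone. A sketch — the function `all_online` is mine, and it assumes the `crs_stat -t` layout shown in this guide (a header line, a dashed rule, then Name/Type/Target/State/Host columns):

```shell
#!/bin/sh
# Read `crs_stat -t` output on stdin; exit 0 only if every resource row
# shows Target=ONLINE and State=ONLINE.
all_online() {
    awk 'NR > 2 && NF >= 5 { if ($3 != "ONLINE" || $4 != "ONLINE") bad = 1 }
         END { exit bad }'
}

# On a real node:
#   /u01/app/oracle/product/10.2.0/crs_1/bin/crs_stat -t | all_online \
#       && echo "all cluster resources ONLINE"
```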