Download Clusterware Installation Software: Oracle Cluster

Posted by admin

This post is not a step-by-step guide to installing 12cR2 RAC, but lists the main differences in the installation process compared to previous versions and things to look out for. One of the main differences in installing 12cR2 clusterware is that it is no longer done through runInstaller; the installer is now started with gridSetup.sh from the grid home itself.

The downloaded Oracle Grid Infrastructure image files are unzipped into the folder path where GI is to be installed. This only needs to be done on the local node. During installation, the software is copied and installed on all other nodes in the cluster.
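For reference, since runInstaller is gone, the installer is launched from the unzipped grid home itself. A minimal sketch, using the grid home path used later in this post and run as the grid software owner:

$ cd /opt/app/12.2.0/grid
$ ./gridSetup.sh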

If it is planned to use ASM Filter Driver (ASMFD) for managing the disks used by ASM, then uninstall ASMLib if it is already installed. Unlike 12cR1, it is possible to specify a separate disk group for the GIMR at installation time. The Oracle documentation says 'You cannot migrate the GIMR from one disk group to another later'. The 12cR1 documentation said the same, but there were ways to move it after installation; it has not been verified whether the same techniques could be used for 12.2 as well. Shown below are the OS and kernel versions used for this RAC installation.

If the OS/kernel versions are lower than these, they should be upgraded before the installation can proceed.

$ uname -r
2.6.32-358.el6.x86_64
$ cat /etc/redhat-release
Red Hat Enterprise Linux Server release 6.4 (Santiago)

2. Create the user groups and users similar to the previous 12.1 installation. As said, 12.2 introduces a new user group for RAC management. From the install guide: 'You must designate a group as the OSRACDBA group during database installation.

Members of this group are granted the SYSRAC privileges to perform day-to-day administration of Oracle databases on an Oracle RAC cluster'.

groupadd dba
groupadd oinstall
groupadd oper
groupadd asmoper
groupadd asmdba
groupadd asmadmin
groupadd backupdba
groupadd dgdba
groupadd kmdba
groupadd racdba
useradd -g oinstall -G dba,oper,asmdba,asmoper,asmadmin,backupdba,dgdba,kmdba,racdba oracle
useradd -g oinstall -G asmadmin,asmdba,asmoper,dba grid

3. The other pre-requisite steps are similar to those of earlier releases.
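An optional sanity check (not part of the original post) that the accounts and group memberships are in place:

# id oracle
# id grid

Both users should report oinstall as their primary group, and oracle should additionally list the new racdba group among its supplementary groups.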

Follow the Oracle installation guide for detailed instructions. As said earlier, the grid infrastructure software is unzipped directly into the install location.

mkdir -p /opt/app/12.2.0/grid
cp /linuxx64_12201_grid_home.zip /opt/app/12.2.0/grid
cd /opt/app/12.2.0/grid
unzip linuxx64_12201_grid_home.zip

4. 12.2 gives the option of using the ASM Filter Driver (ASMFD) for managing the disks used by ASM. However, the installation could be done without the use of AFD as well.

This setup uses UDEV rules for the ASM disks, set up as below. The disks used for OCR were 3GB each (the OCR disk group is to be of normal redundancy) and the GIMR disk group is 40GB (with external redundancy). Minimum storage requirements for a standalone cluster could be found in the Oracle documentation.
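The post's own rule file is not reproduced in this copy, so below is only a minimal sketch of what such rules could look like. The file name and the sdb1/sdc1 kernel device names are illustrative assumptions; in practice the match is usually done on a persistent attribute such as the device's scsi_id rather than the kernel name.

# cat /etc/udev/rules.d/99-oracleasm.rules
KERNEL=="sdb1", SUBSYSTEM=="block", OWNER="grid", GROUP="asmadmin", MODE="0660", SYMLINK+="oracleasm/ocr1"
KERNEL=="sdc1", SUBSYSTEM=="block", OWNER="grid", GROUP="asmadmin", MODE="0660", SYMLINK+="oracleasm/gimr"

Each rule creates a /dev/oracleasm/* symlink owned by grid:asmadmin, which is the path layout used for the ASM disks throughout this post.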

The first disk group created is the one for the OCR and vote disk. As said earlier, this is expected to be of normal redundancy. Change the disk discovery path so that the candidate disks are displayed.

At times the disks may be visible even without changing the discovery path. Even then it's better to change the discovery path to reflect the correct path string.

Otherwise, during the root.sh run the commands will use the default path of /dev/sd* and execution will fail.

2017/05/11 16:26:56 CLSRSC-184: Configuration of ASM failed
2017/05/11 16:27:01 CLSRSC-258: Failed to configure and start ASM
Died at /opt/app/12.2.0/grid/crs/install/crsinstall.pm line 2091.
The command '/opt/app/12.2.0/grid/perl/bin/perl -I/opt/app/12.2.0/grid/perl/lib -I/opt/app/12.2.0/grid/crs/install /opt/app/12.2.0/grid/crs/install/rootcrs.pl ' execution failed

The alert log will show that the ASM disk group creation was attempted with the default disk path string.

2017-05-11 16:26:24: Invoking '/opt/app/12.2.0/grid/bin/asmca -silent -diskGroupName clusterdg -diskList '/dev/oracleasm/ocr3,/dev/oracleasm/ocr1,/dev/oracleasm/ocr2' -redundancy NORMAL -diskString '/dev/sd*' -configureLocalASM -passwordFileLocation +clusterdg/orapwASM -ausize 4' as user 'grid'

If it comes to this, it is possible to change the ASM_DISCOVERY_STRING in $GI_HOME/crs/install/crsconfig_params to reflect the correct disk path string and run root.sh again, as sketched below.
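A rough sketch of that recovery, assuming the grid home used in this post and the /dev/oracleasm/* path string from the UDEV setup above: edit crsconfig_params so the parameter reads as shown by the grep, then re-run root.sh.

# grep ASM_DISCOVERY_STRING /opt/app/12.2.0/grid/crs/install/crsconfig_params
ASM_DISCOVERY_STRING=/dev/oracleasm/*
# /opt/app/12.2.0/grid/root.sh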

All of this could be avoided by explicitly setting the disk path string. The root.sh output for the first node is shown below.

root@rhel6m1 # /opt/app/12.2.0/grid/root.sh
Performing root user operation.

The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /opt/app/12.2.0/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The contents of 'dbhome' have not changed. No need to overwrite.
The contents of 'oraenv' have not changed. No need to overwrite.


The contents of 'coraenv' have not changed. No need to overwrite.

Creating /etc/oratab file.
Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Relinking oracle with rac_on option
Using configuration parameter file: /opt/app/12.2.0/grid/crs/install/crsconfig_params
The log of current session can be found at:
  /opt/app/oracle/crsdata/rhel6m1/crsconfig/rootcrs_rhel6m1_2017-05-15_12-11-53AM.log
2017/05/15 12:11:57 CLSRSC-594: Executing installation step 1 of 19: 'SetupTFA'.
2017/05/15 12:11:57 CLSRSC-4001: Installing Oracle Trace File Analyzer (TFA) Collector.


2017/05/15 12:12:29 CLSRSC-4002: Successfully installed Oracle Trace File Analyzer (TFA) Collector.
2017/05/15 12:12:29 CLSRSC-594: Executing installation step 2 of 19: 'ValidateEnv'.
2017/05/15 12:12:38 CLSRSC-363: User ignored prerequisites during installation
2017/05/15 12:12:38 CLSRSC-594: Executing installation step 3 of 19: 'CheckFirstNode'.
2017/05/15 12:12:41 CLSRSC-594: Executing installation step 4 of 19: 'GenSiteGUIDs'.

2017/05/15 12:12:44 CLSRSC-594: Executing installation step 5 of 19: 'SaveParamFile'.
2017/05/15 12:12:56 CLSRSC-594: Executing installation step 6 of 19: 'SetupOSD'.
2017/05/15 12:12:59 CLSRSC-594: Executing installation step 7 of 19: 'CheckCRSConfig'.
2017/05/15 12:12:59 CLSRSC-594: Executing installation step 8 of 19: 'SetupLocalGPNP'.
2017/05/15 12:13:37 CLSRSC-594: Executing installation step 9 of 19: 'ConfigOLR'.
2017/05/15 12:13:53 CLSRSC-594: Executing installation step 10 of 19: 'ConfigCHMOS'.
2017/05/15 12:13:53 CLSRSC-594: Executing installation step 11 of 19: 'CreateOHASD'.

2017/05/15 12:14:04 CLSRSC-594: Executing installation step 12 of 19: 'ConfigOHASD'.
2017/05/15 12:14:19 CLSRSC-330: Adding Clusterware entries to file 'oracle-ohasd.conf'
2017/05/15 12:14:49 CLSRSC-594: Executing installation step 13 of 19: 'InstallAFD'.
2017/05/15 12:14:59 CLSRSC-594: Executing installation step 14 of 19: 'InstallACFS'.
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rhel6m1'
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rhel6m1' has completed
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
2017/05/15 12:15:57 CLSRSC-594: Executing installation step 15 of 19: 'InstallKA'.

2017/05/15 12:16:07 CLSRSC-594: Executing installation step 16 of 19: 'InitConfig'.
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'rhel6m1'
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'rhel6m1' has completed
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.

This concludes the installation of the 12.2 clusterware. As said at the beginning, the GIMR will be used for storing the OCR backup.

Using ASM Filter Driver During Installation

In order to use AFD to manage the disks used by ASM, carry out the following before the installation (before running gridSetup.sh). At this stage it is assumed that the grid software has been extracted to the grid home location.

Set the ORACLE_HOME and ORACLE_BASE variables as below. The Oracle base is set to a temporary location so that the files generated during ASM disk labelling don't interfere with the actual installation.

# export ORACLE_HOME=/opt/app/12.2.0/grid
# export ORACLE_BASE=/tmp

Run the disk labelling command from the $GI_HOME/bin directory.

# ./asmcmd afd_label DATA1 /dev/oracleasm/data1 --init
# ./asmcmd afd_label FRA1 /dev/oracleasm/fra1 --init
# ./asmcmd afd_label OCR1 /dev/oracleasm/ocr1 --init
# ./asmcmd afd_label OCR2 /dev/oracleasm/ocr2 --init
# ./asmcmd afd_label OCR3 /dev/oracleasm/ocr3 --init
# ./asmcmd afd_label GIMR /dev/oracleasm/gimr --init

Verify the disks are labelled.

# ./asmcmd afd_lslbl '/dev/oracleasm/*'
--------------------------------------------------------------------------------
Label                     Duplicate  Path
================================================================================
DATA1                                /dev/oracleasm/data1
FRA1                                 /dev/oracleasm/fra1
GIMR                                 /dev/oracleasm/gimr
OCR1                                 /dev/oracleasm/ocr1
OCR2                                 /dev/oracleasm/ocr2
OCR3                                 /dev/oracleasm/ocr3

Afterwards run gridSetup.sh as before. Give the ASM disk path string as before when it is time to specify the disks for ASM. The listed disks will have the status 'provisioned'. During the root.sh run, the file name for the ASM disk group will have the AFD: prefix.

Successfully replaced voting disk group with +CLUSTERDG.
CRS-4256: Updating the profile
CRS-4266: Voting file(s) successfully replaced
##  STATE    File Universal Id                    File Name    Disk group
--  -----    -----------------                    ---------    ----------
 1. ONLINE   5525126aef6f4fb4bf9b383115bade73     (AFD:OCR3)   CLUSTERDG
 2. ONLINE   a5330a1d40594f31bfcc9ee98c6b0034     (AFD:OCR2)   CLUSTERDG
 3.

Finally, run the post crsinst stage of cluvfy to verify the cluster status.

$ cluvfy stage -post crsinst -n rhel6m1,rhel6m2 -verbose

Verifying Node Connectivity ...
Verifying Hosts File ...

  Node Name                             Status
  ------------------------------------  ------------------------
  rhel6m1                               passed
  rhel6m2                               passed

Verifying Hosts File ...PASSED

Interface information for node 'rhel6m2'
 Name   IP Address      Subnet          Gateway         Def. Gateway    HW Address          MTU
 ----   -------------   -------------   -------------   -------------   -----------------   ----
 eth0   192.168.0.94    192.168.0.0     0.0.0.0         192.168.0.100   08:00:27:7E:61:A9   1500
 eth0   192.168.0.98    192.168.0.0     0.0.0.0         192.168.0.100   08:00:27:7E:61:A9   1500
 eth0   192.168.0.135   192.168.0.0     0.0.0.0         192.168.0.100   08:00:27:7E:61:A9   1500
 eth0   192.168.0.145   192.168.0.0     0.0.0.0         192.168.0.100   08:00:27:7E:61:A9   1500
 eth1   192.168.1.88    192.168.1.0     0.0.0.0         192.168.0.100   08:00:27:69:2C:B6   1500

Interface information for node 'rhel6m1'
 Name   IP Address      Subnet          Gateway         Def. Gateway    HW Address          MTU
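As an extra check beyond what the post shows, ASMCMD's afd_state command reports whether the filter driver is loaded and filtering. Run it as the grid user with the grid environment set; the output is along the lines of the following (the host name will be your own):

$ asmcmd afd_state
ASMCMD-9526: The AFD state is 'LOADED' and filtering is 'ENABLED' on host 'rhel6m1'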

Link back: This guide is a part of the Virtual Oracle RAC project.

Taking care of memory

As mentioned before, now comes the time to increase the virtual machines' memory to 1024MB.

Take this chance to run a backup of the whole project while the guests are shut down, then change the Base Memory setting on both nodes. Start all the machines again.

Getting an X Server running on the PC

There are a few small things to take care of before we launch the Oracle Installer. Namely, we need an X Server on our PC. This can be any X server of your choosing; I will be using Cygwin/X here. Make sure your X Server includes the xhost utility, as sometimes it is not installed by default.

This is how to locate the xhost utility during installation. Have your X Server installed and started on your PC. If using Cygwin, run in a CMD window (or via an icon):

C:\bin\cygwin2\bin\run.exe /usr/bin/bash.exe -l -c /usr/bin/startxwin.exe

This will usually start the X server on display 0.0 and bring up an xterm window. If the xterm window did not appear, start it manually.

In this xterm window give the access control commands:

$ xhost +10.10.1.11
$ xhost +10.10.1.12

These xhost commands enable the odbn1 and odbn2 hosts to use your X Server to display X clients. I think I have just saved you a few days of frustration.


The access control matters of the X Server are not covered clearly in documentation, and articles would suggest using X11 forwarding in SSH. X11 forwarding in SSH will indeed take care of all the necessary settings automatically; however, this setup won't be usable by the Oracle Installer. Therefore you should check that the SSH sessions you run from your PC to the RAC nodes are not using X11 forwarding. So, if you are using PuTTY, make sure X11 forwarding is not selected for the RAC node sessions. Work as user 'oracle', from the odbn1 node, in that same SSH session in which you have enabled SSH no-password connections (go back to refresh your memory). If you have restarted the servers or logged in again in SSH, you will need to repeat the steps with ssh-agent and ssh-add as described before (on odbn1 only).
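A quick way to tell whether a session still has X11 forwarding active (not from the original guide, just a sanity check): SSH-forwarded sessions typically set DISPLAY to a local proxy display, which is not what we want here.

$ echo $DISPLAY
localhost:10.0

If you see something like the above, reconnect with X11 forwarding disabled and set DISPLAY manually as shown in the next step.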

$ export DISPLAY=10.10.1.1:0

Test that your session can run some X Windows application and show it on your PC's X Server:

$ xclock &

This should spawn an xclock application and show it on your PC via the X Server. If this did not work, you will have to troubleshoot your setup and proceed with the Oracle installation only when it is resolved.

Notice: The IP address we supplied in the DISPLAY variable is the virtual 'router' on our host PC; since the X Server expects connections from 10.10.1.11, we should 'talk' to it on the same network. Having endured this far, we are now going to do more exciting things.

Configure OCFS2

Notice: OCFS2 package (RPM) installation instructions are in an earlier part of this guide; go back if you missed them. If you have already cloned the node, then do this step on both nodes. OCFS2 will be configured to use the private network, i.e. the hostnames as in /etc/hosts:

10.10.1.11 odbn1-priv
10.10.1.12 odbn2-priv

Notice that although we will be using the private network IP addresses, we will still use the hostnames as they appear on the public network. I know, it is confusing.

There would not be such confusion with just one network (either private or public). As user 'root', on the odbn1 node, start a new SSH session and set the DISPLAY variable:

# export DISPLAY=10.10.1.1:0

Now bring up the OCFS2 configuration tool to edit the /etc/ocfs2/cluster.conf file:

# ocfs2console &

Select Cluster – Configure Nodes.

This will start the OCFS2 Cluster Stack and bring up the next dialog ('Node Configuration'). There will be a pop-up message; dismiss it and proceed to 'Node Configuration'. Add the two RAC nodes as below. When you click 'Apply', the screen changes to show node status and node numbers. Exit the console and repeat the same process on the other node of the cluster, supplying exactly the same data in the same order. The /etc/ocfs2/cluster.conf file will look like this after the configuration is done (do not just copy this file over to the other node).
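The file content is not reproduced in this copy of the post, so below is only a minimal sketch of the layout ocfs2console generates, assuming the node names and private addresses used above (ip_port 7777 is the OCFS2 default; numbering and ordering in your generated file may differ):

node:
        ip_port = 7777
        ip_address = 10.10.1.11
        number = 0
        name = odbn1
        cluster = ocfs2

node:
        ip_port = 7777
        ip_address = 10.10.1.12
        number = 1
        name = odbn2
        cluster = ocfs2

cluster:
        node_count = 2
        name = ocfs2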

O2CB Cluster Service

O2CB is OCFS2's cluster stack of services, which include:

DLMFS: User space interface to the kernel space DLM

All of the listed services are included in the single service o2cb (/etc/init.d/o2cb). Now issue the following commands to have O2CB start on boot-up and set the other needed attributes (on both nodes):

# /etc/init.d/o2cb offline ocfs2
# /etc/init.d/o2cb unload
# /etc/init.d/o2cb configure

Shown below are the non-default responses to the configuration questions (marked by arrows). Repeat the same steps on the other node. Issue this command to review the service status so far:

# /etc/init.d/o2cb status

Make sure that the o2cb service will be started at the proper run levels:

# chkconfig --list o2cb

Levels 3 through 5 should be 'on'.

Format and mount the OCFS2 Filesystem

This task is to be done from one node only.

# mkfs.ocfs2 -b 4K -C 32K -N 4 -L oracrsfiles /dev/iscsi/crs/part1

The following output is produced. Now mount the newly formatted partition (that was labeled 'oracrsfiles') under the directory /u02:

# mount -t ocfs2 -o datavolume,nointr -L 'oracrsfiles' /u02

Repeat the mount command on the other node. Verify that the file system is mounted correctly:

# mount

Repeat the verification on the other node.

Unzip the archives and re-pack them into an ISO image, then mount the image in the VB guests. I will be using the second option so as not to inflate the file system of the virtual machine (after all, it is a one-time install).

So, have the archives unzipped into separate directories on your PC, then use a utility of your choice to pack the software into a DVD ISO image, just like in the picture below. Name this image 'rac32media.iso'. Add the newly created image to the Virtual Media Manager and then mount it in both VB machines. Then mount it in the nodes like this:

# mount -r /dev/cdrom /media/cdrom
# cd /media/cdrom

Install the cvuqdisk package

This package is needed later by the Cluster Verification Utility (CVU). Issue the following commands (do it on both nodes):

# cd /media/cdrom/10201_clusterware_linux32/clusterware/rpm
# export CVUQDISK_GRP=oinstall
# rpm -iv cvuqdisk-1.0.1-1.rpm

Verify the installation of the package:

# ls -l /usr/sbin/cvuqdisk

Install and run the Cluster Verification Utility (CVU)

This utility will be run from the odbn1 node as user 'oracle'. The SSH agent will have to be initiated (again) for this purpose.
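The actual invocation is not shown in this copy of the post, but a typical pre-crsinst check from the mounted media would look roughly like this (media path as laid out above; the node list follows this guide's node names):

$ cd /media/cdrom/10201_clusterware_linux32/clusterware/cluvfy
$ ./runcluvfy.sh stage -pre crsinst -n odbn1,odbn2 -verbose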

Hi Yumi, I have already seen part of this article on Jeffrey Hunter's blog, and it's very good. But I have a question about something I still could not accomplish.


I want to install Oracle Clusterware with raw devices, using Openfiler, like your tutorial right here, but not with OCFS2. So, considering I'm using ASMLib (which writes some instructions to the partition header, so I don't need persistent names after reboot and so don't need to use the 55- and iscsi.sh files), I used another way to create raw devices using the iSCSI UUIDs. But I have already built this VM twice from zero, and many times I restored the snapshot to try a different approach.


Every time I get the same problem when I run the root.sh script. Have you ever used Oracle Clusterware with OCRs and voting disks over iSCSI on Openfiler storage? Thanks and regards,

Comment by Bruno Cantelli — August 9, 2010 @.