Presenter: Nikhil Kumar
RAC- Installing your First Cluster and
Database
WHO AM I?
 Nikhil Kumar (DBA Manager)
 6 years of experience with Oracle Databases
and Apps.
 Oracle Certified Professional, Oracle 9i and
11g.
 Worked on mission-critical Telecom,
Financial ERP, Manufacturing and
Government domains.
Agenda
 Introduction to RAC
 Installation of Clusterware.
 Creating diskgroup / Adding disk to Diskgroup using ASMCA.
 Creation of ACFS Volume.
 Installation of RAC Database using DBCA.
Introduction to RAC
A means of providing high availability for the database.
Why RAC?
High availability and scalability without limitations such as:-
 OS patching or a scheduled bounce of the OS.
 Database maintenance patches (CPU or PSU).
 Static database parameter changes (due to a bug or any
system requirement).
 Hardware upgrade or change.
 Hard disk failure, power failure or system failure.
 Prevention of a single point of failure.
Network Configuration for racnode1 and racnode2:

Identity       | Home Node | Host Node                      | Given Name          | Type    | Address                                  | Assigned By | Resolved By
Node 1 Public  | Node 1    | racnode1                       | racnode1            | Public  | 192.168.7.71                             | Fixed       | DNS
Node 1 VIP     | Node 1    | Selected by Oracle Clusterware | racnode1-vip        | Virtual | 192.168.7.41                             | Fixed       | DNS and hosts file
Node 1 Private | Node 1    | racnode1                       | racnode1-priv       | Private | 192.168.71.40                            | Fixed       | DNS and hosts file, or none
Node 2 Public  | Node 2    | racnode2                       | racnode2            | Public  | 192.168.7.72                             | Fixed       | DNS
Node 2 VIP     | Node 2    | Selected by Oracle Clusterware | racnode2-vip        | Virtual | 192.168.7.42                             | Fixed       | DNS and hosts file
Node 2 Private | Node 2    | racnode2                       | racnode2-priv       | Private | 192.168.71.41                            | Fixed       | DNS and hosts file, or none
SCAN           | None      | Selected by Oracle Clusterware | Racnode.linuxdc.com | Virtual | 192.168.7.43, 192.168.7.44, 192.168.7.45 | Fixed       | DNS

Note: Manually assigning the proper IPs in the /etc/hosts file is mandatory, even if they
are resolved through DNS. This is an Oracle requirement.
Cluster Overview
 Two Node cluster
 Operating System version RHEL 6.4
 Cluster and database software version 11.2.0.4.0
 Cluster Name: NIOUG
 Raw disks -- 10 LUNs
 Diskgroups (Data,FRA,OCR)
 Creation of empty NIOUG database using DBCA.
Prerequisite
Prerequisites to be completed by the System/Network
Admin before delivering the server to the DBA.
1.
Prerequisite Cont..
2. Verify that SELinux is running and set to ENFORCING:
As the root user,
# getenforce
Enforcing
If the system is running in PERMISSIVE or DISABLED mode, modify the
/etc/sysconfig/selinux file and set SELinux to enforcing as shown
below.
SELINUX=enforcing
The modification of the /etc/sysconfig/selinux file takes effect after a
reboot. To change the setting of SELinux immediately without a
reboot, run the following command:
# setenforce 1
Prerequisite Cont..
3. Upgrade the selinux-policy RPMs for SELinux to
work correctly. The current version of the RPMs
delivered with RHEL 6.4:
[root@STGW2 ~]# rpm -qa selinux-policy*
selinux-policy-3.7.19-195.el6.noarch
selinux-policy-targeted-3.7.19-195.el6.noarch
They need to be upgraded to the packages below:-
[root@racnode1 ~]# rpm -qa selinux-policy*
selinux-policy-3.7.19-231.el6.noarch
selinux-policy-targeted-3.7.19-231.el6.noarch
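Before upgrading, the installed build can be compared against the required one with a version sort. The sketch below inlines the two versions from these slides; on a real node you would capture the installed value with rpm, and the yum command it suggests is the usual upgrade path:

```shell
# Sketch: decide whether selinux-policy needs an upgrade by version-sorting
# the installed build against the required one. Values are taken from the
# slides; on a real node capture the installed value with:
#   rpm -q --qf '%{VERSION}-%{RELEASE}\n' selinux-policy
installed="3.7.19-195.el6"
required="3.7.19-231.el6"
newest=$(printf '%s\n' "$installed" "$required" | sort -V | tail -1)
if [ "$newest" = "$required" ] && [ "$installed" != "$required" ]; then
    echo "upgrade needed: yum update selinux-policy selinux-policy-targeted"
else
    echo "selinux-policy is current"
fi
```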
Prerequisite Cont..
4. Make sure the shared memory file system is big
enough for Automatic Memory Management (AMM) to work.
EXAMPLES:
# umount tmpfs
# mount -t tmpfs tmpfs -o size=12g /dev/shm
(size is based on 90% of physical memory)
Make the setting permanent by amending the "tmpfs" setting of the
"/etc/fstab" file to look like this.
tmpfs /dev/shm tmpfs defaults,size=12g 0 0
Prerequisite Cont..
5. Add the entries below to /etc/hosts on both nodes
[root@racnode1 bin]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.7.71 racnode1
192.168.7.72 racnode2
192.168.71.40 racnode1-priv
192.168.71.41 racnode2-priv
192.168.7.41 racnode1-vip
192.168.7.42 racnode2-vip
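A quick sanity check that every cluster name has a hosts entry can be scripted. The sketch below inlines the sample entries from this slide; on a real node you would grep /etc/hosts itself:

```shell
# Sketch: confirm each required cluster hostname appears in the hosts
# entries. The sample mirrors the slide; on a real node use:
#   grep -w "$name" /etc/hosts
hosts_entries="192.168.7.71 racnode1
192.168.7.72 racnode2
192.168.71.40 racnode1-priv
192.168.71.41 racnode2-priv
192.168.7.41 racnode1-vip
192.168.7.42 racnode2-vip"
missing=0
for name in racnode1 racnode2 racnode1-priv racnode2-priv racnode1-vip racnode2-vip; do
    if echo "$hosts_entries" | grep -qw "$name"; then
        echo "$name: ok"
    else
        echo "$name: MISSING"
        missing=$((missing + 1))
    fi
done
echo "missing entries: $missing"
```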
Prerequisite Cont..
6. Kernel Parameters:
Add the kernel parameters to the /etc/sysctl.conf file
and apply them with:
# sysctl -p /etc/sysctl.conf
Sizing guidance:
kernel.shmmax = 90% of physical memory (in bytes)
kernel.shmall = shmmax / 4096 (in pages)
Parameters to set:
net.ipv4.ip_forward = 0
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.default.accept_source_route = 0
kernel.core_uses_pid = 1
net.ipv4.tcp_syncookies = 1
fs.aio-max-nr = 3145728
fs.file-max = 6815744
kernel.msgmax = 8192
kernel.msgmnb = 65536
kernel.msgmni = 2878
kernel.sem = 250 32000 100 142
kernel.shmall = 2097152
kernel.shmmax = 7730941132
kernel.sysrq = 1
net.core.rmem_default = 4194304
net.core.rmem_max = 4194304
net.core.wmem_default = 262144
net.core.wmem_max = 1048576
net.ipv4.ip_local_port_range = 9000 65500
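The two sizing formulas can be worked through with shell arithmetic. This sketch assumes a hypothetical 8 GiB host (the slide's shmmax of 7730941132 corresponds to 90% of 8 GiB); on a real server you would read MemTotal from /proc/meminfo instead:

```shell
# Sketch: derive kernel.shmmax (90% of RAM, in bytes) and kernel.shmall
# (shmmax / page size, in pages). 8 GiB is assumed for illustration; on a
# real host read MemTotal from /proc/meminfo and the page size from
# `getconf PAGE_SIZE`.
mem_bytes=$((8 * 1024 * 1024 * 1024))   # hypothetical 8 GiB of physical memory
page_size=4096                          # typical x86_64 page size
shmmax=$((mem_bytes * 90 / 100))        # 90% of physical memory
shmall=$((shmmax / page_size))          # shmall is counted in pages, not bytes
echo "kernel.shmmax = $shmmax"
echo "kernel.shmall = $shmall"
```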
Prerequisite Cont..
7. Adding Groups and users:
#groupadd -g 2011 asmdba
#groupadd -g 2012 asmadmin
#groupadd -g 2013 asmoper
#groupadd -g 2014 oper
#groupadd -g 2015 oinstall
#groupadd -g 2016 dba
#useradd -s /bin/bash -d /home/grid -g oinstall -G asmdba,asmadmin,asmoper,dba grid
#useradd -s /bin/bash -d /home/oracle -g oinstall -G asmdba,asmadmin,asmoper,dba oracle
#usermod -a -G asmdba,oper oracle
For example:
# id grid
uid=3010(grid) gid=2004(oinstall)
groups=2000(dba),2004(oinstall),2011(asmdba),2012(asmadmin),2013(asmoper)
#id oracle
uid=3000(oracle) gid=2004(oinstall) groups=2000(dba),2004(oinstall),2011(asmdba),2014(oper)
Prerequisite Cont..
8. Creating the Oracle Base directory
mkdir -p /u01/app/11.2.0/grid
mkdir -p /u01/app/grid
chown -R grid:oinstall /u01
chmod -R 775 /u01
mkdir -p /u01/app/oracle
chown oracle:oinstall /u01/app/oracle
Prerequisite Cont..
9. Network Time Protocol Setting:
If you are using NTP, you must add the "-x" (slewing) option to the following line in
the "/etc/sysconfig/ntpd" file.
OPTIONS="-x -u ntp:ntp -p /var/run/ntpd.pid"
Then enable and restart NTP.
# chkconfig --level 2345 ntpd on
# service ntpd restart
Start the Name Service Cache Daemon (nscd).
# chkconfig --level 2345 nscd on
# service nscd start
Prerequisite Cont..
10. Setting resource limits for the Oracle users:
On each node, add the following lines
to the /etc/security/limits.conf file
(the following example shows the
software account owners oracle
and grid):
cat /etc/security/limits.conf
oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536
oracle soft stack 10240
oracle hard stack 32768
grid soft nproc 2047
grid hard nproc 16384
grid soft nofile 1024
grid hard nofile 65536
grid soft stack 10240
grid hard stack 32768
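A small awk filter can confirm what was set for a given user and limit. The fragment below is inlined from this slide; on a real node you would point awk at /etc/security/limits.conf:

```shell
# Sketch: extract the soft/hard nofile limits for the oracle user from a
# limits.conf fragment (inlined here for illustration; on a node use
# /etc/security/limits.conf).
nofile=$(awk '$1 == "oracle" && $3 == "nofile" {print $2, $4}' <<'EOF'
oracle soft nproc 2047
oracle hard nproc 16384
oracle soft nofile 1024
oracle hard nofile 65536
EOF
)
echo "$nofile"
```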
Prerequisite Cont..
11. Setting login file:
As the root user, create a backup of /etc/pam.d/login
# cp /etc/pam.d/login /etc/pam.d/login.bkup
As the root user, add the following line within the /etc/pam.d/login file
session required pam_limits.so
12. To install and configure the ASMLib software packages:
1. Download the ASMLib packages to each node in your cluster.
2. Change to the directory where the package files were downloaded.
3. As the root user, use the rpm command to install the packages. For example:
# rpm -Uvh kmod-oracleasm
# rpm -Uvh oracleasmlib-2.0.4-1.el6.x86_64.rpm
# rpm -Uvh oracleasm-support-2.1.8-1.el6.x86_64.rpm
Prerequisite Cont..
After you have completed these commands, ASMLib is installed on the
system.
4. Repeat steps 2 and 3 on each node in your cluster.
Configuring ASMLib:
a.) /usr/sbin/oracleasm configure -i (run as the root user on all nodes)
b.) oracleasm init (load and initialize the ASMLib driver)
Load the kernel module using the following command.
# /usr/sbin/oracleasm init
Loading module "oracleasm": oracleasm
Mounting ASMlib driver filesystem: /dev/oracleasm
Prerequisite Cont..
Using ASMLib to Create ASM Disks
c.) createdisk (only on the first node)
# /usr/sbin/oracleasm createdisk disk_name device_partition_name
Mark the shared disks as follows.
# /usr/sbin/oracleasm createdisk DISK1 /dev/sdb1
Writing disk header: done
Instantiating disk: done
If you need to unmark a disk that was used in a createdisk
command, you can use the following syntax:
# /usr/sbin/oracleasm deletedisk disk_name
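Marking several LUNs one by one is repetitive, so a loop can generate the createdisk commands. In the sketch below the device names are hypothetical (substitute your actual partitions), and the commands are echoed rather than executed so they can be reviewed before running them as root:

```shell
# Sketch: generate oracleasm createdisk commands for a set of shared
# partitions. Device names are hypothetical; echo is used so nothing runs
# until you remove it (createdisk must be run as root, on the first node only).
i=1
for dev in /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1; do
    echo "/usr/sbin/oracleasm createdisk DISK$i $dev"
    i=$((i + 1))
done
```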
Prerequisite Cont..
d.) oracleasm scandisks (on all
the nodes)
It is not strictly necessary, but we can
run the "scandisks" command to refresh
the ASM disk configuration.
# /usr/sbin/oracleasm scandisks
Reloading disk partitions: done
Cleaning any stale ASM disks...
Scanning system for ASM disks...
e.) oracleasm listdisks
We can see that the disks are now visible
to ASM using the "listdisks"
command.
# /usr/sbin/oracleasm listdisks
Prerequisite Cont..
13. Ping to check the communication between the
nodes in the cluster:
ping racnode1
ping racnode2
ping racnode1-priv
ping racnode2-priv
Run cluvfy to check the prerequisites for the cluster
installation (run as the grid user):
/software/grid/runcluvfy.sh stage -pre crsinst -n racnode1,racnode2 -verbose
Prerequisite Cont..
14. The SCAN name should be configured by the network admin
before starting the installation.
The SCAN can be verified in two ways:-
# host scan_name (it should show 3 IP addresses)
# nslookup scan_name (run this command 2-3 times; the returned
IP order should rotate, confirming round-robin resolution)
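The "three addresses" check can be scripted as well. The sketch below simulates the resolver output with the SCAN IPs from the network table earlier in this deck; on a real node you would capture the output of `host scan_name` instead:

```shell
# Sketch: a SCAN name should resolve to three addresses. The resolver
# output is simulated here with the IPs from the slides; on a real node use
# something like:
#   resolved=$(host "$scan_name" | awk '{print $NF}')
resolved="192.168.7.43
192.168.7.44
192.168.7.45"
count=$(echo "$resolved" | wc -l)
if [ "$count" -eq 3 ]; then
    echo "SCAN OK: $count addresses"
else
    echo "SCAN misconfigured: $count addresses"
fi
```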
Installation of Grid Infrastructure
Clusterware
Go to /software_directory/grid and
run this as the grid user:
./runInstaller
Download Software Updates
Select the "Install and Configure Grid Infrastructure for
a Cluster" option, then click the "Next" button.
Select the "Advanced Installation" option, then
click the "Next" button.
Select Product Languages
Specify Cluster and SCAN name information,
click the “Next" button.
Enter the details of the second node in the
cluster, then click the "OK" button.
Provide the password of the grid user to configure SSH
Click the “Setup” button to initiate the SSH
configuration between the nodes.
Check the Network interface and its segment
Select ASM for storage
Choose the ASM disks to create the diskgroup
Provide password for ASM account
Skipping IPMI, since we don’t have hardware to support this feature
Group information for ASM
Specify directory for Clusterware files
Specify the directory for central inventory
Prerequisite check is being performed
Result from prerequisite check
Ignoring some prerequisite checks
Click on “Install” to initiate the installation
Installation is in process
Run root.sh scripts one at a time on the node
Run root.sh scripts one at a time on the node
Setting permissions for oraInventory
Running root.sh on first node of the cluster
root.sh on node one is complete
root.sh on node2 complete
Go back to OUI screen and click “OK”
Check Clusterware services on both nodes.
Grid Clusterware installation is complete
Creating Diskgroup using ASMCA
Invoking the ASMCA utility as the grid user:-
Create new “DATA” diskgroup
Click “OK”
“DATA” diskgroup is created
Creating ACFS Volume
Create “archive” volume using “FRA” diskgroup
After selecting size click “OK”
Volume “archive” created
Now click the “ASM Cluster File System” tab
After clicking the “Create” tab, select the volume
“archive” which we created earlier. Choose
“General Purpose File System”, which will be
mounted on the Operating System.
The /archive OS mountpoint is created
Status of ACFS mount point
Check the “/archive” mountpoint on both nodes
Installation of Oracle Binaries
& Database Creation
Invoke “runInstaller” from Oracle user
Skipping Software Update
Use “Create and configure database” option to
install database binaries and dummy database
Select “Server Class” type for installation
Select both node for installation
Establish SSH connectivity for oracle user
Select the “Advanced install” installation type
Select default “English” Language
Select “Enterprise Edition” for database
Define directory structure for database binaries
Select the type of database
Provide the Global database name and SID
Provide memory to Instance
Provide password for ASM
Skipping the backup part
Select the diskgroup where the database files need to
be placed
Provide password to admin account of database
Group information
Prerequisite Check Complete
Prerequisite Check Complete
Click “Install” to initiate the installation
Installation is in process
Database creation is in process
Click “Ok”
Run the root.sh script on both nodes of the cluster
Running root.sh on database server nodes
Software installation and database creation are done
Checking Database Resource