This blog contains all the basic practicals that an aspiring Oracle DBA should perform before stepping out into the world of Oracle database administration. These practical topics are based on my own learnings over the past 9 years as an Oracle DBA.
1 | significance of various rpms required |
2 | significance of various kernel parameters |
3 | significance of oracle user,groups |
4 | Set different environment variables |
5 | location of tnsnames.ora depending upon TNS_ADMIN |
6 | Significance of /etc/oratab |
7 | enabling and disabling automatic database startup |
8 | understanding various options inside response file |
9 | software installation silently using runinstaller |
10 | significance of oraInventory |
11 | create database using DBCA in silent mode |
12 | creating database manually using create database command |
13 | significance of catalog and catproc scripts |
14 | Fetch data from any dictionary view before running catalog or catproc scripts |
15 | try and connect to Db without listener |
16 | setting password for listener |
17 | checking the services status of the listener using lsnrctl |
18 | connection of sys user without password file |
19 | easy connect method from a client machine |
20 | connection using tnsnames from any client machine |
21 | drop the dba os group and run sqlplus / as sysdba |
22 | connectivity with and without sqlnet.ora |
23 | starting database using pfile and spfile |
24 | changing parameters in spfile and pfile and understand various parameters |
25 | alter system,alter database,alter session |
26 | enabling/disabling autoextension for tablespaces |
27 | providing segment and extent info while creating tablespace |
28 | adding space to database by increasing datafile size |
29 | adding space to database by adding datafile |
30 | setting the limit for maximum datafiles in database |
31 | database startup stages |
32 | database shutting down options and significance |
33 | starting the db in various modes like upgrade, read only etc |
34 | changing temporary tablespace |
35 | change dbname,dbid,db unique name |
36 | change location of archive location |
37 | set various formats for archive log naming |
38 | create a role |
39 | create a role having password |
40 | creating a database link |
41 | creating a profile |
42 | creating a profile for the password settings; |
43 | creating and deleting an index |
44 | dba_,all_ and user_ type views |
45 | setting various nls parameters |
46 | expiring the user password after sometime |
47 | giving one user's table privileges to another user/role |
48 | creating public and private (normal) synonyms |
49 | giving tablespace during user creation |
50 | granting all privileges on a table to a user |
51 | granting one role to another |
52 | granting profile to a user at the time of creation of the user |
53 | granting restricted session |
54 | granting two roles to a user |
55 | getting information about character sets and setting characterset |
56 | renaming the tablespace |
57 | kill session command |
58 | setting tablespace quota for any user |
59 | switching the undo_tablespace from one to another |
60 | use of systimestamp,sysdate etc |
61 | viewing privileges of any user and changing them |
62 | grant various privileges to user |
63 | revoke various privileges from user |
64 | backup of database: full, level 0, level 1 incremental, level 1 cumulative |
65 | backup of datafile |
66 | backup of archivelog |
67 | taking backup to any location of choice |
68 | rman show all command and all the rman parameters |
69 | backup of the controlfile to a particular location |
70 | delete input and its significance |
71 | crosschecking the backups using rman |
72 | expired and obsolete backup |
73 | backup using export and various parameters of expdp |
74 | backup of controlfile in a sql script |
75 | backup of a particular tablespace using RMAN |
76 | backup of backupset |
77 | starting user managed backup using begin backup command |
78 | ending user managed backup using end backup command |
79 | rman list command |
80 | rman report command |
81 | RMAN compressed backup |
82 | rman delete command |
83 | understanding rman channel |
84 | rman show command |
85 | rman backup in nomount,mount,open mode..possible or not? |
86 | recover a spfile |
87 | recover a controlfile |
88 | recover a datafile |
89 | recover different files to different locations |
90 | remove the current temporary tablespace and understand the changes |
91 | recover if current redolog gets deleted/corrupted |
92 | recover if non current redolog gets deleted/corrupted |
93 | recover to a particular time |
94 | recover a whole database(complete disaster recovery) to same file system |
95 | recover a whole database(complete disaster recovery) to different file system |
96 | recover from export backup |
97 | importing a schema |
98 | importing a table |
99 | importing a user from one tablespace to another |
100 | recover till a required SCN |
101 | try only restore and open the database |
102 | RMAN run command for incomplete recovery until time |
103 | dropping a missing datafile |
104 | effect of resetlogs option |
105 | How to enable flashback in 11g |
106 | flashback database |
107 | flashback database by using restore point (requires the database to be down) |
108 | flashback logs information |
109 | Flashback Query |
110 | flashback table delete |
111 | flashback table drop |
112 | Flashback transaction |
113 | how RMAN recovery happens? |
114 | importing a role |
115 | recovery in case of loss of archive log file |
116 | creation of recovery catalog |
117 | maintenance of recovery catalog |
118 | auditing the database |
119 | AWR report generation using awrrpt.sql |
120 | changing the snapshot interval |
121 | playing with baseline |
122 | compiling database objects |
123 | creating a clone database using RMAN backup |
124 | creating a clone database without RMAN backup |
125 | patching a database manually using opatch |
126 | applying a one-off patch |
127 | rollback the applied patch |
128 | create standby using backup |
129 | create standby using rman duplicate |
130 | creating standby using file copy |
131 | upgrading a database |
132 | downgrading a database |
133 | dropping a missing datafile |
134 | recyclebin usage |
135 | renaming a database |
136 | changing dbid of a database |
137 | steps to load data using sqlldr |
138 | transporting a tablespace |
139 | using logminer |
140 | understanding ADDM |
141 | understanding ADR |
142 | data recovery advisor |
143 | database replay |
144 | dropping a database |
145 | flashback archive |
146 | segment advisor |
147 | SQL access advisor |
148 | SQL performance analyzer |
149 | SQL plan baseline |
150 | SQL tuning advisor |
151 | SQL tuning set |
152 | table shrinking |
153 | total recall |
154 | Data Guard broker |
155 | virtual column |
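As a small taste of the list above, topics 6 and 7 both revolve around /etc/oratab, whose third field tells dbstart whether an instance should come up automatically. A minimal sketch of reading that flag (the file content below is illustrative, not from a real server):

```shell
# Build a sample oratab in a temp file; on a real server this is /etc/oratab.
oratab=$(mktemp)
cat > "$oratab" <<'EOF'
# SID:ORACLE_HOME:AUTOSTART
orcl:/u01/app/oracle/product/11.2.0/dbhome_1:Y
testdb:/u01/app/oracle/product/11.2.0/dbhome_1:N
EOF

# Skip comment lines; print SIDs whose third field is Y (dbstart honours this flag).
awk -F: '!/^#/ && NF >= 3 && $3 == "Y" {print $1}' "$oratab"

rm -f "$oratab"
```

Flipping that Y/N flag is exactly how automatic database startup is enabled or disabled (topic 7).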
Saugat Chatterjee's Oracle DBA blogs
Wednesday, 9 January 2019
Oracle DBA practicals for beginners
Tuesday, 27 September 2016
OEM default ports required to be open
These are the default ports which should be open for OEM12c to work properly
Enterprise Manager Upload Http Port=4889
Enterprise Manager Upload Http SSL Port=4900
Enterprise Manager Central Console Http SSL Port=7799
Node Manager Http SSL Port=7403
Managed Server Http Port=7202
Enterprise Manager Central Console Http Port=7788
Oracle Management Agent Port=3872
Admin Server Http SSL Port=7101
Managed Server Http SSL Port=7301
Tuesday, 30 August 2016
Step by Step Migration Document OEM 12c to 13c on RHEL 6.8
We will do the OEM migration from OEM 12c running on an 11203 (11.2.0.3) database as OMR to OEM 13c with 12102 (12.1.0.2) as OMR, using a two-phase approach and with minimum downtime:
1. PHASE 1 CLONING OF EXISTING OEM
2. PHASE 2 MIGRATION OF OEM FROM 12c TO 13c
So let's get started.
PHASE 1 CLONING OF EXISTING OEM
Take the RMAN backup of OMR
run {
backup incremental level 0 database plus archivelog;
backup current controlfile;
sql 'alter system switch logfile';
backup archivelog all;
}
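The same RMAN script is run again before the 13c upgrade, so it can help to keep it in a command file and invoke it via rman cmdfile; a minimal sketch (paths and the log location are illustrative):

```shell
# Write the backup script to a command file so both backup steps in this
# document run the identical RMAN commands.
cmdfile=$(mktemp)
cat > "$cmdfile" <<'EOF'
run {
  backup incremental level 0 database plus archivelog;
  backup current controlfile;
  sql 'alter system switch logfile';
  backup archivelog all;
}
EOF

# On the OMR host this would be invoked as:
#   rman target / cmdfile "$cmdfile" log /backup/omr_backup.log
grep -c 'backup' "$cmdfile"

rm -f "$cmdfile"
```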
Take OMS configuration backup
emctl exportconfig oms -sysman_pwd <sysman password> -dir <directory to take backup>
Start preparing the new OEM server
Prepare the new server where you want to make a clone of your existing OEM; the directory structure is recommended to be the same as on the old server
Install the 11203 database software on the new server
Install the 12102 database software on the new server
Copy the files from old server to new server
- Copy config backup
- Copy OEM12c software from old server
- Copy the RMAN backups
- Copy software library files
- Copy tns files
Unzip the OEM12c software on new server
unzip em12103p1_linux64_disk1.zip
unzip em12103p1_linux64_disk2.zip
unzip em12103p1_linux64_disk3.zip
Create a new 11203 database on the new server (this database will be used by the OMS installer to create the configuration)
Check the database password and connection before proceeding further
orapwd file=$ORACLE_HOME/dbs/orapw$ORACLE_SID force=y password=<sys_password>
Create pfile based on the pfile from old server and make necessary changes
Make changes to listener files as well
Start the listener
Create database
Startup nomount
CREATE DATABASE <DB_NAME>
USER SYS IDENTIFIED BY <sys password>
USER SYSTEM IDENTIFIED BY <sysman password>
LOGFILE GROUP 1 ('/<location>/redo01.log') SIZE 1024M,
GROUP 2 ('/<location>/redo02.log') SIZE 1024M,
GROUP 3 ('/<location>/redo03.log') SIZE 1024M
MAXLOGFILES 5
MAXLOGMEMBERS 5
MAXLOGHISTORY 1
MAXDATAFILES 500
CHARACTER SET AL32UTF8
NATIONAL CHARACTER SET AL16UTF16
EXTENT MANAGEMENT LOCAL
DATAFILE '/<location>/system01.dbf' SIZE 6G REUSE AUTOEXTEND ON
SYSAUX DATAFILE '/<location>/sysaux01.dbf' SIZE 5G REUSE AUTOEXTEND ON
DEFAULT TABLESPACE <tablespace name>
DATAFILE '/<location>/dbfile.dbf'
SIZE 1G REUSE AUTOEXTEND ON MAXSIZE UNLIMITED
DEFAULT TEMPORARY TABLESPACE temp
TEMPFILE '/<location>/temp01.dbf'
SIZE 1G REUSE AUTOEXTEND ON
UNDO TABLESPACE UNDOTBS1
DATAFILE '/<location>/undo01.dbf'
SIZE 1G REUSE AUTOEXTEND ON MAXSIZE UNLIMITED;
@?/rdbms/admin/catalog.sql
@?/rdbms/admin/catproc.sql
As the system user:
conn system/<sysman password>
@?/sqlplus/admin/pupbld.sql
Check the database password and connection before proceeding further
orapwd file=$ORACLE_HOME/dbs/orapw$ORACLE_SID force=y password=<sys password>
Install the OMS software only, silently
- Check prerequisites
Go to the location where you unzipped the 12c software
cd /<location>/install/requisites/bin
./emprereqkit -executionType install -prerequisiteXMLLoc /<location>/install/requisites/list -dbHost <db host name> -dbPort <listener port> -dbSid <db sid> -dbUser SYS -dbPassword <sys password> -dbRole sysdba -reposUser sysman -runPrerequisites
To have it run the corrective actions by itself, run the above command with the parameter below:
-runCorrectiveActions
- Install 12103 OMS software silently
Edit the software_only.rsp response file with the below parameters
cd /<location>/response
UNIX_GROUP_NAME="dba"
INVENTORY_LOCATION="/opt/oracle/oraInventory"
SECURITY_UPDATES_VIA_MYORACLESUPPORT=FALSE
DECLINE_SECURITY_UPDATES=TRUE
ORACLE_MIDDLEWARE_HOME_LOCATION="/<location>"
AGENT_BASE_DIR="/<location>"
ORACLE_HOSTNAME="<db server name>"
Install the software silently
cd /<location>
./runInstaller -silent -responseFile /<location>/software_only.rsp
- Configure OMS with OMR
Before configuring, make the following changes in the staticports.ini file and make sure that the ports are free:
Enterprise Manager Upload Http Port=4889
Enterprise Manager Upload Http SSL Port=4900
Enterprise Manager Central Console Http SSL Port=7799
Node Manager Http SSL Port=7403
Managed Server Http Port=7202
Enterprise Manager Central Console Http Port=7788
Oracle Management Agent Port=3872
Admin Server Http SSL Port=7101
Managed Server Http SSL Port=7301
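A quick way to confirm none of these ports are already bound on the OMS host is a bash loop over the staticports.ini values (localhost check only; a refused connection means the port is free):

```shell
# For each staticports.ini port, try a TCP connection to localhost using
# bash's /dev/tcp feature; success means something is already listening.
for port in 4889 4900 7799 7403 7202 7788 3872 7101 7301; do
  # The connection attempt runs in a subshell, so the fd closes on exit.
  if (exec 3<>"/dev/tcp/127.0.0.1/$port") 2>/dev/null; then
    echo "$port in use"
  else
    echo "$port free"
  fi
done
```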
Change the new_install.rsp file with correct values as follows
UNIX_GROUP_NAME="dba"
INVENTORY_LOCATION="/opt/oracle/oraInventory"
SECURITY_UPDATES_VIA_MYORACLESUPPORT=FALSE
DECLINE_SECURITY_UPDATES=TRUE
ORACLE_MIDDLEWARE_HOME_LOCATION="/<location>"
AGENT_BASE_DIR="/<location>"
ORACLE_HOSTNAME="<hostname>"
WLS_ADMIN_SERVER_USERNAME="weblogic"
WLS_ADMIN_SERVER_PASSWORD="<weblogic password>"
WLS_ADMIN_SERVER_CONFIRM_PASSWORD="<weblogic password>"
NODE_MANAGER_PASSWORD="<nodemanager password>"
NODE_MANAGER_CONFIRM_PASSWORD="<nodemanager password>"
ORACLE_INSTANCE_HOME_LOCATION="/<location>"
CONFIGURE_ORACLE_SOFTWARE_LIBRARY=false
DATABASE_HOSTNAME="<hostname>"
LISTENER_PORT="<listener port>"
SERVICENAME_OR_SID="<service name>"
SYS_PASSWORD="<sys password>"
SYSMAN_PASSWORD="<sysman password>"
SYSMAN_CONFIRM_PASSWORD="<sysman password>"
MANAGEMENT_TABLESPACE_LOCATION="/<location>/mgmt.dbf"
CONFIGURATION_DATA_TABLESPACE_LOCATION="/<location>/mgmt_ecm_depot1.dbf"
JVM_DIAGNOSTICS_TABLESPACE_LOCATION="/<location>/mgmt_deepdive.dbf"
AGENT_REGISTRATION_PASSWORD="<agent password>"
AGENT_REGISTRATION_CONFIRM_PASSWORD="<agent password>"
STATIC_PORTS_FILE="/<location>/staticports.ini"
Now configure the OMS with OMR
cd $OMS_HOME/sysman/install
sh ConfigureGC.sh -silent -responseFile /<location>/response/new_install.rsp
After installation, secure the OMS
emctl secure oms
emctl stop oms -all -force
emctl start oms
After the successful securing of OMS check the details
emctl status oms -details
emctl config oms -list_repos_details
At this stage you will have OEM 12c on the new server, which will be a clean-slate OEM
Make the new OMS a clone of the old one
- Clone the database using the RMAN backup; in simple words, do a refresh of the OMR from the RMAN backup taken above
After the successful refresh you will get a functional cloned OMR
- Reconfigure the oms with the backup
omsca recover -as -ms -nostart -backup_file /<location>/opf_ADMIN_<OMSBACKUP>.bka
At this stage you may get these errors:
1. Instance directory not empty: clean the OMS instance home directory
2. There is another configuration which we just installed: follow the steps below
Rename emInstanceMapping.properties in <OMS_HOME>/sysman/config/:
mv emInstanceMapping.properties emInstanceMapping.propertiesbak
Then retry the reconfiguration. You will again get plugin-related errors, so follow these steps:
- Install the plugins before reconfiguring OMS with the backup
cd /<OMS_HOME>/sysman/install
sh PluginInstall.sh -silent -pluginLocation /<downloaded location>/plugins
You can also install the missing plugins this way:
sh PluginInstall.sh -silent PLUGIN_SELECTION="{<plugin 1>,<plugin 2>}"
After every plugin is installed, fire the below command:
omsca recover -as -ms -nostart -backup_file /<location>/opf_ADMIN_<OMSBACKUP>.bka
After the successful configuration, just resecure the OMS
$OMS_HOME/bin/emctl secure oms
Restart the OMS (securing the OMS requires a restart)
$OMS_HOME/bin/emctl stop oms -all -force
$OMS_HOME/bin/emctl start oms
emctl config oms -list_repos_details
At this moment you can see that this OEM is pointing to the OMR on the new server; hence from the console you will see two OMS servers running from a single OMR. This is actually the HA of OMS.
- Point the new OMS to the newly refreshed database running on the new server
emctl config oms -store_repos_details -repos_conndesc "(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=<hostname>)(PORT=<port>)))(CONNECT_DATA=(SID=<sid>)))" -repos_user sysman
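The -repos_conndesc value is easy to get wrong because of the nesting and quoting, so it can help to assemble it in one place; a small sketch (the function name and the host/port/SID values are illustrative, not from this environment):

```shell
# Hypothetical helper: build the connect descriptor from host/port/SID so the
# parentheses are assembled once and quoted once at the call site.
make_conndesc() {
  host=$1; port=$2; sid=$3
  printf '(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=%s)(PORT=%s)))(CONNECT_DATA=(SID=%s)))' \
    "$host" "$port" "$sid"
}

# Would be used as:
#   emctl config oms -store_repos_details \
#     -repos_conndesc "$(make_conndesc oemdb01 1521 emrep)" -repos_user sysman
make_conndesc oemdb01 1521 emrep
```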
Now secure the OMS again
$OMS_HOME/bin/emctl secure oms
Restart the OMS (securing the OMS requires a restart)
$OMS_HOME/bin/emctl stop oms -all -force
$OMS_HOME/bin/emctl start oms
Also check the details of the OMS and OMR
emctl status oms -details
At this moment, check the details from both the old and new OEM; they should ideally be pointing to different OMRs. If not, make sure the OEM on the old server connects to the database on the old server, and also check the jobs; if jobs are stuck in the scheduled state, just restart the OEM on the old server.
At this stage, if you check the OEM console you will see that all the targets in the new OEM are unreachable; this is because the targets are still pointing to the old OEM, which is still running fine.
At this stage the cloning of existing OEM is complete and we can move to the migration phase
PHASE 2 MIGRATION OF OEM FROM 12c TO 13c
Migration of OMR from 11203 to 12102
Do the database migration as a simple 12c migration, and do the following:
- Check patch status and make sure all the latest patches are installed
- Enable autoextension on all the tablespaces
- Add sufficient space to each datafile
- Attach the Oracle home if needed
After the successful DB migration, just restart the OMS; at this point the OMS is 12103 and the OMR is 12102
Migration of OMS to 13100
Copy the 13c software to the server and give desired permissions
Change the following DB settings/parameters
NOTE: BEFORE CHANGING DB PARAMETERS, MAKE A NOTE OF THE OLDER VALUES
show parameter adaptive;
grant execute on dbms_random to dbsnmp;
alter system set optimizer_adaptive_features=false scope=both;
alter system set compatible='12.1.0.2.0' scope=spfile;
alter system set job_queue_processes=0 scope=spfile;
alter system set parallel_max_servers=8 scope=spfile;
alter system set parallel_min_servers=0 scope=spfile;
shutdown immediate;
startup
show parameter adaptive;
show parameter compat;
BEFORE STARTING THE MIGRATION JUST CHANGE THE PASSWORDS OF ALL THE USERS
Refer to the following doc:
12c , 13c Cloud Control: Steps for Modifying the Password for Weblogic and Nodemanager User Accounts in the Enterprise Manager Installation (Doc ID 1450798.1)
Changing sysman password :
alter user sysman identified by "<new password>";
emctl config oms -change_repos_pwd -use_sys_pwd -sys_pwd <sys password> -new_pwd <new password>
Restart the OMS again
Take backup of OMR and OMS
- Take RMAN backup of the OMR
run {
backup incremental level 0 database plus archivelog;
backup current controlfile;
sql 'alter system switch logfile';
backup archivelog all;
}
- Taking configuration backup of OMS
emctl exportconfig oms -sysman_pwd <sysman password> -dir <directory to take backup>
Setting the repository key (emkey) in the repository
emctl config emkey -copy_to_repos
emctl status emkey
Run prerequisites
Run the prereq script and solve the pending issues
./em13100_linux64.bin -silent EMPREREQ_KIT=true EMPREREQKIT_PROPERTY_FILE=/<location to file>/emprereqkit.rsp
installerMode=emprereqkit
connectString=(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=<hostname>)(PORT=<port>)))(CONNECT_DATA=(SERVICE_NAME=<Service name>)))
dbPassword=<sys password>
dbRole=sysdba
dbUser=SYS
executionType=upgrade
logLoc=/<location of log>
prereqResultLoc=/<location of loc>
reposUser=SYSMAN
runPrerequisites=true
runCorrectiveActions=true
Rerun the above script without corrective action to check the final status
Stop all the components of OMS which includes WebTier, OMS and AdminServer before migrating the OMS.
emctl stop oms -all
emctl stop agent
Generating the Response File for Silent Installation
Invoke the installer and generate the response file you need to use for performing a silent upgrade:
./em13100_linux64.bin -getResponseFileTemplates -outputLoc <location of response files>
If you are installing this on RHEL 6.8 then there is another bug: the installer doesn't support 6.8.
You can overcome this by looking in /etc for any files named *release*, then using vi to modify the version to an OS on the approved list. For example, /etc/redhat-release will contain that version number. Modify it to be an approved version, install, then change the file back.
After everything is done, just restore /etc/redhat-release to the correct values
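That edit-and-restore dance can be scripted; the sketch below works on a scratch copy so nothing touches the real /etc/redhat-release, and the "approved" version string (6.5) is an assumption, so check the installer's own list:

```shell
# Work on a scratch copy; on the real host this would be /etc/redhat-release.
release_file=$(mktemp)
echo "Red Hat Enterprise Linux Server release 6.8 (Santiago)" > "$release_file"

# Keep a backup so the genuine string can be restored after the installer runs.
cp "$release_file" "${release_file}.bak"

# Pretend to be a version the 13c installer accepts (6.5 is an assumed example).
sed -i 's/release 6\.8/release 6.5/' "$release_file"
cat "$release_file"

# ... run ./em13100_linux64.bin here ...

# Put the original release string back.
mv "${release_file}.bak" "$release_file"
cat "$release_file"
```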
Perform a silent, software-only installation of the OEM 13c software
Edit the software_only.rsp with the below parameters
UNIX_GROUP_NAME="dba"
INVENTORY_LOCATION="/opt/oracle/oraInventory"
ORACLE_MIDDLEWARE_HOME_LOCATION="<MW_HOME>"
AGENT_BASE_DIR="<AGENT_BASE_DIRECTORY>"
ORACLE_HOSTNAME="<hostname>"
./em13100_linux64.bin -silent -responseFile /<location of response files>/software_only.rsp
Follow the below steps to overcome a bug
1. Restore the OEM environment from backup prior to starting the 13c upgrade process.
2. Perform the 13c software-only installation.
3. Take a backup of the file <13c_MIDDLEWARE_HOME>/sysman/admin/emdrep/sql/core/13.1.0.0/cost_center/cost_center_data_upgrade.sql.
4. Remove following line from the file <13c_MIDDLEWARE_HOME>/sysman/admin/emdrep/sql/core/13.1.0.0/cost_center/cost_center_data_upgrade.sql:
Rem drv: <migrate type="data_upgrade" version="13.1.0.0" />
Save the file.
5. Start the 13c upgrade by running:
<13c_MIDDLEWARE_HOME>/sysman/install/ConfigureGC.sh
Configure the OMS for the upgrade on new server
Edit the upgrade.rsp with the below parameters (the lines starting with # are commented-out parameters)
UNIX_GROUP_NAME="dba"
INVENTORY_LOCATION="/opt/oracle/oraInventory"
DECLINE_SECURITY_UPDATES=true
#MYORACLESUPPORT_USERNAME=<string>
#MYORACLESUPPORT_PASSWORD=<string>
#STAGE_LOCATION=<string>
#MYORACLESUPPORT_USERNAME_FOR_SOFTWAREUPDATES=<string>
#MYORACLESUPPORT_PASSWORD_FOR_SOFTWAREUPDATES=<string>
ORACLE_MIDDLEWARE_HOME_LOCATION="<MW_HOME>"
ORACLE_INSTANCE_HOME_LOCATION="<MW_HOME>/gc_inst"
OLD_BASE_DIR="<OLD_MW_HOME>"
ORACLE_HOSTNAME="<hostname>"
AGENT_BASE_DIR="<AGENT_BASE_DIRECTORY>"
#OLD_DATABASE_CONNECTION_DESCRIPTION=<string>
#OLD_DATABASE_SYSMAN_PASSWORD=<string>
WLS_ADMIN_SERVER_USERNAME="Weblogic"
WLS_ADMIN_SERVER_PASSWORD="<Weblogic password>"
WLS_ADMIN_SERVER_CONFIRM_PASSWORD="<Weblogic password>"
NODE_MANAGER_PASSWORD="<nodemanager password>"
NODE_MANAGER_CONFIRM_PASSWORD="<nodemanager password>"
#JVM_DIAGNOSTICS_TABLESPACE_LOCATION=<string>
DATABASE_HOSTNAME="<hostname>"
LISTENER_PORT="<port>"
SERVICENAME_OR_SID="<sid>"
SYS_PASSWORD="<sys password>"
SYSMAN_PASSWORD="<Sysman password>"
REPOSITORY_BACKUP_DONE=true
CONFIGURE_SHARED_LOCATION_BIP=true
CONFIG_LOCATION="<location>/config"
CLUSTER_LOCATION="<location>/cluster"
ENABLE_BI_PUBLISHER=TRUE
cd /<OMS HOME>/sysman/install/
ConfigureGC.sh -silent -responseFile /<location>/upgrade.rsp
Installing Agent
Log on to your OEM server and issue the following commands to generate the agent image
emcli login -username=sysman -password=<sysman password>
emcli sync
emcli get_supported_platforms
emcli get_agentimage -destination=/<location>/agentimage -platform="Linux x86-64" -version=13.1.0.0.0
./agentDeploy.sh AGENT_BASE_DIR=<AGENT_BASE_DIRECTORY> OMS_HOST=<hostname> EM_UPLOAD_PORT=4900 AGENT_REGISTRATION_PASSWORD=<password> AGENT_PORT=3872
Now copy these image files to each target server and install the OEM 13c agent. At this stage, one by one, all of your targets start pointing to the new OEM 13c, and after migrating all targets to point to 13c you can stop and say goodbye to your old OEM, i.e. our good old friend 12c.
ORACLE RAC BEGINNERS GUIDE
ORACLE RAC
1) ARCHITECTURE
1.1) WHAT IS IT
In a non-RAC Oracle database, a single instance accesses a single database. The database consists of a collection of data files, control files, and redo logs located on disk. The instance comprises the collection of Oracle-related memory and operating system processes that run on a computer system.
In an Oracle RAC environment, two or more computers (each with an instance) concurrently access a single database. This allows an application or user to connect to either computer and have access to a single coordinated set of data.
1.2) ARCHITECTURAL OVERVIEW
THE ORACLE 10G RAC ARCHITECTURE
From the point of view of the installation, the main architecture of the RAC environment includes the following:
* Nodes or Servers
* Private Interconnect
* Vendor Supplied Cluster Manager or Cluster Software (Optional)
* Oracle provided Cluster Ready Services
* Shared Storage Subsystem
* Raw Partitions or Cluster File System or Network Attached Storage (NAS) or Automatic Storage Management (ASM)
* Public Network Connection
* Oracle Database software with RAC option
* Filesystem
1.2.1) Nodes or Hosts
The nodes or servers are the main platforms on which the Oracle RAC database is installed. The cluster nodes range from a high-end powerful Sun Fire 15K to a low-end Linux server. They can also range from a mainframe grade IBM zSeries server to the emerging blade-server technologies such as IBM BladeCenter or Egenera. First, the appropriate operating system needs to be installed on the nodes. It is also important to choose the appropriate number of nodes while setting up the node operating environment.
1.2.2) Private Interconnect
The private interconnect is the physical construct that allows inter-node communication. It can be a simple crossover cable with UDP or it can be a proprietary interconnect with a specialized proprietary communications protocol. When setting up more than 2 nodes, a switch is usually needed. This provides the maximum performance for RAC, which relies on inter-process communication between the instances for the cache-fusion implementation.
1.2.3) Clusterware
Oracle Clusterware is software that enables servers to operate together as if they were one server. Each server looks like any standalone server.
Creating clusters involves installation of the cluster software on all nodes in the proposed cluster, as well as checking the configuration. The necessary tests need to be performed to verify the validity of the cluster. At the same time, the necessary software that controls the private interconnect is also installed and configured. With the availability of Oracle provided Cluster Ready Services (CRS), one can achieve a uniform and standard cluster platform. CRS is more than just cluster software; it extends the high availability services in the cluster.
1.2.4) Shared Storage
The storage system provides an external common disk system accessible by all nodes of the cluster. The connection from the nodes to the disk sub-system is usually through a fiber switch or a SCSI connection. Once the storage volumes are presented to the hosts in the cluster, usually with the help of the logical volume manager, one can create volumes of suitable size for use in the RAC database. With the introduction of the ASM methodology, the shared storage structures can be managed very easily. Once the disk groups are created with input of the disk devices, the ASM instances on each of the nodes in the cluster provide the shared storage resources to create the database files. The preparation of storage structures has been covered extensively in Chapter 5, Preparing Shared Storage.
1.2.5) Public Network
The clustered servers or hosts need to have public network connectivity so that client machines on the network can access the resources on the RAC system.
1.2.6) Virtual IP Address for CRS
The Oracle 10g release supports the concept of a Service, which can be assigned a Virtual IP address, and which floats among the specified nodes. (A virtual IP address (VIP or VIPA) is an IP address that is not connected to a specific computer or network interface card (NIC) on a computer. Incoming packets are sent to the VIP address, but they are redirected to physical network interfaces. VIPs are mostly used for connection redundancy; a VIP address may still be available if a computer or NIC fails because an alternative computer or NIC replies to connections.) By creating the Virtual IP address and Virtual Host names, applications get a sense of transparency in their connection to the RAC database service.
1.2.7) File systems
A regular single-instance database has three basic types of files: database software and dump files; datafiles, spfile, control files and log files, often referred to as "database files"; and it may have recovery files, if using RMAN. A RAC database has an additional type of file referred to as "CRS files". These consist of the Oracle Cluster Registry (OCR) and the voting disk. An Oracle RAC database can have up to 100 instances.
Depending on your platform, you can use the following file storage options for Oracle RAC:
■ ASM, which Oracle recommends
■ Oracle Cluster File System (OCFS), which is available for Linux and Windows platforms, or a third-party cluster file system that is certified for Oracle RAC
■ A network file system
■ Raw devices
Oracle RAC databases differ architecturally from single-instance Oracle databases in that each Oracle RAC database instance also has:
■ At least one additional thread of redo for each instance
■ An instance-specific undo tablespace
1.3) ARCHITECTURAL COMPONENTS
1.3.1) Private interconnect components
Oracle recommends that you configure a redundant interconnect to prevent the interconnect from being a single point of failure. Oracle also recommends that you use User Datagram Protocol (UDP) on Gigabit Ethernet for your cluster interconnect. Crossover cables are not supported for use with Oracle Clusterware or Oracle RAC databases. All nodes in an Oracle RAC environment must connect to a Local Area Network (LAN) to enable users and applications to access the database. Users can access an Oracle RAC database using a client-server configuration or through one or more middle tiers, with or without connection pooling. Users can be DBAs, developers, application users, or power users, such as data miners who create their own searches, and so on. Most public networks typically use TCP/IP, but you can use any supported hardware and software combination. Oracle RAC database instances can be accessed through a database's defined, default IP address and through VIP addresses.
In addition to the node's host name and IP address, you must also assign a virtual host name and an IP address to each node. The virtual host name or VIP should be used to connect to the database instance. For example, you might enter the virtual host name CRM in the address list of the tnsnames.ora file.
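Such a tnsnames.ora entry might look along these lines (the VIP host names and service name are illustrative, not from the text):

```
CRM =
  (DESCRIPTION =
    (ADDRESS_LIST =
      (ADDRESS = (PROTOCOL = TCP)(HOST = node1-vip)(PORT = 1521))
      (ADDRESS = (PROTOCOL = TCP)(HOST = node2-vip)(PORT = 1521))
    )
    (CONNECT_DATA =
      (SERVICE_NAME = crm)
    )
  )
```

Because the address list names the VIPs rather than the physical hosts, when a node fails the surviving node can answer on the failed node's VIP and the client sees a fast error instead of a long TCP timeout.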
1.3.2) Oracle Clusterware components
Oracle Clusterware requires two clusterware components: a voting disk to record node membership information, and the Oracle Cluster Registry (OCR) to record cluster configuration information. Oracle RAC uses the voting disk to determine which instances are members of a cluster. The OCR maintains cluster configuration information, as well as configuration information about any cluster database within the cluster; it also manages information about processes that Oracle Clusterware controls. The voting disk and the OCR must reside on shared storage. Oracle Clusterware can multiplex the OCR and the voting disks, and Oracle recommends that you use this feature to ensure cluster high availability.
The following list describes the functions of some of the major Oracle Clusterware components:
■ Cluster Synchronization Services (CSS): manages the cluster configuration by controlling which nodes are members of the cluster and by notifying members when a node joins or leaves the cluster. If you are using third-party clusterware, then the CSS process interfaces with your clusterware to manage node membership information.
■ Cluster Ready Services (CRS): the primary program for managing high availability operations within a cluster. Anything that the CRS process manages is known as a cluster resource, which could be a database, an instance, a service, a listener, a virtual IP (VIP) address, an application process, and so on. The CRS process manages cluster resources based on the resource's configuration information that is stored in the OCR. This includes start, stop, monitor and failover operations. The CRS process generates events when a resource status changes. When you have installed Oracle RAC, CRS monitors the Oracle instance, listener, and so on, and automatically restarts these components when a failure occurs. By default, the CRS process makes five attempts to restart a resource and then does not make further restart attempts if the resource does not restart.
■ Event Management (EVM): a background process that publishes events that CRS creates.
■ Oracle Notification Service (ONS): a publish-and-subscribe service for communicating Fast Application Notification (FAN) events. FAN is a notification mechanism that Oracle RAC uses to notify other processes about configuration and service level information, such as service status changes, e.g. UP or DOWN events.
■ RACG—Extends clusterware to support
Oracle-specific requirements and
complex resources. Runs server callout
scripts when FAN events occur.
■ Process Monitor Daemon (OPROCD)—This background process is locked in memory
to monitor the cluster and provide I/O fencing. OPROCD performs its check,
stops running, and if the wake up is beyond the expected time, then OPROCD
resets the processor and reboots the node. An OPROCD failure results in Oracle
Clusterware restarting the node.
2) REQUIREMENTS
2.1) MINIMUM REQUIREMENTS
Production and Test (RAC):
■ Oracle Enterprise Linux 5 update 2
■ Oracle Clusterware (Grid): Oracle Clusterware Release 2 (10.2.0.1.0)
■ Oracle ASM 11.2.0.2
■ Oracle Database EE: Oracle Database 10g Release 2 (10.2.0.1.0)
Development (Single Instance):
■ Oracle Enterprise Linux 5 update 2
■ Oracle ASM: Grid Infrastructure
■ Oracle Database EE: Oracle Database 10g Release 2 (10.2.0.1.0)
2.2) TOOLS FOR INSTALLING, CONFIGURING, AND MANAGING ORACLE RAC
The following is a description of the tools
used for installing, configuring, and
managing an Oracle RAC database:
■ Oracle Universal Installer (OUI)–OUI
installs the Oracle Clusterware and the
Oracle Database software with Oracle RAC.
After you configure the nodes that
you want to use in your cluster, OUI
installs the Oracle software on the specified
nodes using a network connection.
■ Cluster Verification Utility (CVU)–The CVU
is a command-line tool that you can
use to verify a range of cluster and Oracle
RAC components such as shared
storage devices, networking configurations,
system requirements, and Oracle
Clusterware, as well as operating system
groups and users. You can use the CVU
for preinstallation as well as
postinstallation checks of your cluster environment.
The CVU is especially useful during
preinstallation and during installation of
Oracle Clusterware and Oracle RAC
components. OUI runs the CVU after the
Oracle Clusterware installation to verify
your environment.
■ Oracle Enterprise Manager –Oracle Enterprise
Manager has both the Database
Control and Grid Control graphical user
interfaces (GUIs) for managing single-instance and Oracle RAC environments.
■ Server Control (SRVCTL)–SRVCTL is a command-line interface that you can use
to manage an Oracle RAC database from a single point. You can use SRVCTL to
start and stop the database and instances, and to delete or move instances and
services. You can also use SRVCTL to add services, manage configuration
information, and start and stop a group of applications as a single unit.
2.3) CHECKING REQUIREMENTS
Before you begin your installation, you
should check to make sure that your system
meets the requirements for Oracle Real
Application Clusters (RAC). The requirements
can be grouped into the following three
categories:
■ Checking the Hardware Requirements
■ Network Requirements
■ Verifying the Installed Operating System and
Software Requirements
2.3.1) Checking the Hardware Requirements
Each node that you want to make part of your
Oracle Clusterware, or Oracle
Clusterware and Oracle RAC installation,
must satisfy the minimum hardware
requirements of the software. These hardware
requirements can be categorized as
follows:
■ Physical memory (at least 1 gigabyte (GB) of
RAM)
■ Swap space (at least 2 GB of available swap
space)
■ Temporary space (at least 400 megabytes
(MB))
■ Processor type (CPU) that is certified with
the version of the Oracle software being installed
In addition, the Oracle Clusterware components require the following disk space:
■ Two Oracle Cluster Registry (OCR) files, 256 MB each, or 512 MB total disk space
■ Three voting disk files, 256 MB each, or 768
MB total disk space
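The minimums above can be checked from the command line on each node before starting OUI. A minimal sketch (the thresholds simply mirror the figures listed above; values from /proc/meminfo and df are in kilobytes):

```shell
#!/bin/sh
# Quick pre-checks against the stated minimums: 1 GB RAM, 2 GB swap, 400 MB temp space.
mem_kb=$(awk '/^MemTotal/ {print $2}' /proc/meminfo)
swap_kb=$(awk '/^SwapTotal/ {print $2}' /proc/meminfo)
tmp_kb=$(df -Pk /tmp | awk 'NR==2 {print $4}')

[ "$mem_kb"  -ge 1048576 ] && echo "memory: OK ($mem_kb kB)" || echo "memory: below minimum"
[ "$swap_kb" -ge 2097152 ] && echo "swap: OK ($swap_kb kB)"  || echo "swap: below minimum"
[ "$tmp_kb"  -ge 409600 ]  && echo "/tmp: OK ($tmp_kb kB)"   || echo "/tmp: below minimum"
uname -m   # processor architecture, e.g. x86_64; must be certified for your Oracle release
```

Processor certification itself has to be checked against the certification matrix; the script only reports the architecture.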
2.3.2)
Network Requirements
The interconnect used by Oracle RAC is the same
interconnect that Oracle Clusterware uses. This interconnect should be a
private interconnect, meaning it is not accessible by nodes that are
not members of the cluster.
When you configure the network for Oracle
RAC and Oracle Clusterware, each node
in the cluster must meet the following
requirements:
■ Each node needs at least two network
interface cards, or network adapters. One
adapter is for the public network and the
other adapter is for the private network
used by the interconnect.
■ For the private network, the
interconnect must support the user datagram protocol
(UDP) using high-speed network
adapters and switches that support TCP/IP
(Gigabit Ethernet or better
recommended).
■ You
must have at least three IP addresses available for each node:
1. An IP
address with an associated host name (or network name) for the public
interface.
2. A
private IP address with a host name for each private interface.
3. One
virtual IP address with an associated network name.
Public and private interface names must be the same on all nodes.
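The three-address rule above is normally satisfied with /etc/hosts entries like the following sketch. The docrac1/docrac2 node names match the cluster used later in this guide; all IP addresses and subnets are placeholders that you must replace with your own (run as root on each node):

```shell
# Sketch: example /etc/hosts entries for a two-node cluster.
# Public names, private interconnect names, and VIP names for both nodes.
cat >> /etc/hosts <<'EOF'
# Public interfaces (eth0)
192.168.2.101   docrac1.us.oracle.com   docrac1
192.168.2.102   docrac2.us.oracle.com   docrac2
# Private interconnect (eth1)
10.10.10.101    docrac1-priv
10.10.10.102    docrac2-priv
# Virtual IPs (brought online by Oracle Clusterware, not by the OS)
192.168.2.111   docrac1-vip.us.oracle.com   docrac1-vip
192.168.2.112   docrac2-vip.us.oracle.com   docrac2-vip
EOF
```

Note that the VIP addresses must be on the same subnet as the public addresses and must not be up before Oracle Clusterware is installed.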
2.3.3)
Verifying the Installed Operating System and Software Requirements
Operating system requirements
These requirements can include any of the following:
■ The operating system version
■ The kernel version of the operating system
■ Installed packages, patches, or patch sets
■ Installed compilers and drivers
■ Web browser type and version
■ Additional application software requirements
Operating system users and groups
The following operating system groups are used when installing Oracle software:
■ The OSDBA group (typically, dba) for Oracle Database authentication
■ The Oracle Inventory group (typically, oinstall) for all installations
■ (Optional) A separate OSASM group (for example, asm) for Automatic Storage
Management authentication
The following operating system users are required for all installations:
■ A user that owns the Oracle software (typically, oracle)
■ An unprivileged user (for example, the nobody user on Linux systems)
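The groups and software owner above can be created as follows. This is a sketch, run as root on each node; the group and user IDs shown are illustrative, but whatever values you choose must be identical on all nodes, and the nobody user normally already exists on Linux:

```shell
# Sketch (run as root): create the OS groups and the Oracle software owner.
groupadd -g 501 oinstall          # Oracle Inventory group
groupadd -g 502 dba               # OSDBA group
groupadd -g 503 asm               # optional separate OSASM group
useradd -u 501 -g oinstall -G dba,asm oracle   # Oracle software owner
passwd oracle                     # set a password for the new account
id oracle                         # verify the uid/gid and group membership
```

Run `id oracle` on every node afterward and confirm the numeric IDs match.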
About Setting the Time on Both Nodes
Before starting the installation,
ensure that the date and time settings on both nodes
are set as closely as possible to
the same date and time. Oracle strongly recommends
using the Network Time Protocol
(NTP) feature of most operating systems for this
purpose. All the nodes in your
cluster should use the same reference NTP server.
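A quick way to confirm the nodes agree on time is to compare clocks over ssh. A sketch, run from docrac1, assuming passwordless ssh to docrac2 is already configured:

```shell
# Sketch: compare clocks on both nodes and confirm the NTP daemon is running.
date -u                      # current UTC time on the local node
ssh docrac2 date -u          # UTC time on the second node; should match within seconds
pgrep -x ntpd >/dev/null && echo "ntpd running" || echo "ntpd not running -- configure NTP"
```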
About Deciding on a Shared Storage
Solution
Each node in a cluster requires
external shared disks for storing the Oracle
Clusterware (Oracle Cluster Registry
and voting disk) files, and Oracle Database files.
The supported types of shared
storage depend upon the platform you are using, for
example:
■
A supported cluster
file system, such as OCFS2 for Linux, OCFS for Microsoft
Windows, or General Parallel File
System (GPFS) on IBM platforms
■
Network file system
(NFS), which is not supported on AIX Based Systems, Linux
on POWER, or on IBM zSeries Based
Linux
■
Shared disk partitions
consisting of block devices. Block devices are disk partitions
that are mounted using the Linux
file system. Oracle Clusterware and Oracle RAC
write to these partitions directly.
■
Automatic Storage
Management for Oracle Database files (strongly recommended)
Note: Oracle Clusterware files cannot be stored in ASM; ASM is used only to
store the database files.
Before you install Oracle Clusterware, you will need to configure 5 shared
disk partitions:
■ 1 partition, 280 MB in size, for storing the Oracle Cluster Registry (OCR)
■ 1 partition, 280 MB in size, for storing a duplicate OCR file on a different
disk, referred to as the OCR mirror
■ 3 partitions, each 280 MB in size, 1 for each voting disk location
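Before installing, it is worth confirming that all five partitions are visible at the expected size on every node. A sketch: /dev/sda1 and /dev/sdb1 match the OCR examples used later in this guide, while the three voting-disk devices (only /dev/sda2 is named later) are placeholders for your own layout:

```shell
#!/bin/sh
# Sketch: report the size of each shared partition intended for Clusterware files.
for dev in /dev/sda1 /dev/sdb1 /dev/sda2 /dev/sdb2 /dev/sdc1; do
  size=$(blockdev --getsize64 "$dev" 2>/dev/null)
  if [ -n "$size" ]; then
    echo "$dev: $((size / 1024 / 1024)) MB"    # expect roughly 280 MB
  else
    echo "$dev: not found"
  fi
done
```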
About Choosing an Oracle Base
Directory
Oracle Universal Installer (OUI)
creates the Oracle base directory for you in the
location you specify. The Oracle
base directory (ORACLE_BASE) acts as a top-level
directory for Oracle software
installations. The file system that you use for the Oracle base directory must
have at least 7 GB of available disk space for installing the Oracle Database
software. The path to the Oracle base directory must be the same on all nodes.
Oracle recommends that you install Oracle RAC and ASM in separate home directories.
About Choosing an Oracle Clusterware
Home Directory
OUI installs Oracle Clusterware into
a directory structure referred to as CRS_home.
This home is separate from the home
directories for other Oracle products installed on
the same server. OUI creates the CRS
home directory for you. Before you start the
installation, you must have
sufficient disk space on a file system for the Oracle
Clusterware directory, and the CRS
home directory you choose should be owned by
the installation owner of Oracle
Clusterware.
The file system that you use for the
CRS home directory must have at least 2 GB of
available disk space. The path to
the CRS home directory must be the same on all
nodes.
3) INSTALLATION
3.1) INSTALLING AND CONFIGURING
ORACLE CLUSTERWARE AND ORACLE RAC
This chapter includes the following
sections:
■
Preparing the Oracle
Media Installation File
■
Installing Oracle
Clusterware
■
Configuring Automatic
Storage Management in an ASM Home Directory
■
Installing the Oracle
Database Software and Creating a Cluster Database
■
Performing
Postinstallation Tasks
3.1.1) Preparing the Oracle Media
Installation File
OUI installs Oracle Clusterware into
a directory structure that is referred to as CRS_home. The standard
Oracle Database 10g software installation
process installs the Oracle RAC
option when OUI recognizes that you are performing
the installation on a cluster. OUI
installs Oracle RAC into a directory structure that is
referred to as Oracle_home.
3.1.2) Installing Oracle Clusterware
The following topics describe the
process of installing Oracle Clusterware:
■
Configuring the
Operating System Environment
■
Verifying the
Configuration Using the Cluster Verification Utility
■
Using Oracle Universal
Installer to Install Oracle Clusterware
■
Completing the Oracle
Clusterware Configuration
3.1.2.1) Configuring the Operating
System Environment
You run OUI from the oracle user
account. Before you start OUI you must configure
the environment of the oracle user.
You must set the ORACLE_BASE environment
variable to the directory in which you want the Oracle central inventory
files located. Prior to installing the Oracle Database software and creating
an Oracle database, you should also set the ORACLE_HOME environment variable
to the location in which you want to install the Oracle Database software.
To modify the user environment prior
to installing Oracle Clusterware on Red
Hat Linux:
1. As the oracle user, execute the following
commands:
[oracle]$ unset ORACLE_HOME
[oracle]$ unset ORACLE_SID
[oracle]$ unset ORACLE_BASE
[oracle]$ export ORACLE_BASE=/opt/oracle/11gR1
2. Verify the changes have been made by executing
the following commands:
[oracle]$ echo $ORACLE_SID
[oracle]$ echo $ORACLE_HOME
[oracle]$ echo $ORACLE_BASE
/opt/oracle/11gR1
To modify the user environment prior
to installing Oracle Database on Red Hat
Linux:
1. As the oracle user, modify the user profile in
the /home/oracle directory on
both nodes using the following
commands:
[oracle] $ cd $HOME
[oracle] $ vi .bash_profile
Add the following lines at the end
of the file:
export ORACLE_SID=sales
export ORACLE_BASE=/opt/oracle/11gR1
export ORACLE_HOME=/opt/oracle/11gR1/db
2. Read and implement the changes made to the .bash_profile
file:
source .bash_profile
3. Verify the changes have been made by executing
the following commands:
[oracle]$ echo $ORACLE_SID
sales
[oracle]$ echo $ORACLE_HOME
/opt/oracle/11gR1/db
[oracle]$ echo $ORACLE_BASE
/opt/oracle/11gR1
3.1.2.2) Verifying the Configuration
Using the Cluster Verification Utility
If you have not configured your
nodes, network, and operating system correctly, your
installation of the Oracle
Clusterware or Oracle Database 11g software will not
complete successfully
To verify your hardware and
operating system setup:
1. As the oracle user, change directories to the
staging directory for the Oracle
Clusterware software
[oracle] $ cd /staging_area
2. Run the runcluvfy.sh script, as shown in the
following example, where
docrac1 and docrac2 are the names of the nodes in your cluster:
[oracle] $ ./runcluvfy.sh stage -pre
crsinst -n docrac1,docrac2 -verbose
3.1.2.3) Using Oracle Universal
Installer to Install Oracle Clusterware
As the oracle user on the docrac1
node, install Oracle Clusterware
To install Oracle Clusterware:
1. Use the following command to start OUI, where staging_area
is the location of
the staging area on disk, or the
location of the mounted installation disk:
cd /staging_area/Disk1
./runInstaller
The Select a Product to Install
window appears.
2. Select Oracle Clusterware from the list, then
click Next.
3. Change the path for the inventory location, if
required. Select oinstall for the
operating system group name. Click Next.
The path displayed for the inventory
directory should be the oraInventory
subdirectory of your Oracle base
directory. For example, if you set the
ORACLE_BASE environment variable to
/opt/oracle/11gR1 before starting
OUI, then the path displayed is
/opt/oracle/11gR1/oraInventory.
4. Accept the default value for the Name field,
which is the name of the Oracle home
directory for this product. For the
Path field, click Browse. In the Choose Directory
window, go up the path until you
reach the root directory (/), click
/u01/app/crs, then click Choose Directory.
After you have selected the path,
click Next. The next window, Product-Specific
Prerequisite Checks, appears after a
short period of time.
5. When you see the message "Check complete. The overall result of this check
is: Passed", click Next.
6. Change the default cluster name to a name that
is unique throughout your entire
enterprise network. For example, you
might choose a name that is based on the
node names' common prefix.
When you have finished removing the
domain name in the "Modify a node in the
existing cluster" window, click
OK.
7. When you are returned to the Specify Cluster
Configuration window, click Add.
8. In the "Add a new node to the existing
cluster" dialog window, enter the second
node's public name
(docrac2.us.oracle.com), private name
(docrac2-priv), and virtual IP name
(docrac2-vip.us.oracle.com), and
then click OK.
The Specify Cluster Configuration
window now displays both nodes in the Cluster Nodes list.
9. Verify eth0 and eth1 are configured correctly
(proper subnet and interface type
displayed), then click Next.
The Specify Oracle Cluster Registry
(OCR) Location window appears.
10. Select Normal Redundancy for the OCR
Configuration. You will be prompted for two file locations. In the Specify OCR
Location field, enter the name of the device
configured for the first OCR file,
for example, /dev/sda1.
In the Specify OCR Mirror Location
field, enter the name of the device configured
for the OCR mirror file, for example
/dev/sdb1. When finished, click Next.
During installation, the OCR data
will be written to the specified locations.
11. Select Normal Redundancy for the voting
disk location. You will be prompted for
three file locations. For the Voting
Disk Location, enter the name of the device
configured for the first voting disk
file, for example, /dev/sda2. Repeat this
process for the other two Voting
Disk Location fields.
12. Review the contents of the Summary window and
then click Install.
OUI displays a progress indicator
during the installation process.
13. During the installation process, the Execute
Configuration Scripts window
appears. Do not click OK until
you have run the scripts.
The Execute Configuration Scripts
window shows configuration scripts, and the
path where the configuration scripts
are located. Run the scripts on all nodes as
directed, in the order shown. For
example, on Red Hat Linux you perform the
following steps (note that for
clarity, the examples show the current user, node and
directory in the prompt):
a. As the oracle user on docrac1, open a terminal
window, and enter the
following commands:
[oracle@docrac1 oracle]$ cd /opt/oracle/11gR1/oraInventory
[oracle@docrac1 oraInventory]$ su
b. Enter the password for the root user, and then
enter the following command
to run the first script on docrac1:
[root@docrac1 oraInventory]#
./orainstRoot.sh
c. After the orainstRoot.sh script finishes on
docrac1, open another
terminal window, and as the oracle
user, enter the following commands:
[oracle@docrac1 oracle]$ ssh docrac2
[oracle@docrac2 oracle]$ cd
/opt/oracle/11gR1/oraInventory
[oracle@docrac2 oraInventory]$ su
d. Enter the password for the root user, and then
enter the following command
to run the first script on docrac2:
[root@docrac2 oraInventory]#
./orainstRoot.sh
e. After the orainstRoot.sh script finishes on
docrac2, go to the terminal
window you opened in Step 13a. As
the root user on docrac1, enter the
following commands to run the second
script, root.sh:
[root@docrac1 oraInventory]# cd
/u01/app/crs
[root@docrac1 crs]# ./root.sh
At the completion of this script, a success message is displayed.
f. After the root.sh script finishes on docrac1,
go to the terminal window
you opened in Step 13c. As the root
user on docrac2, enter the following
commands:
[root@docrac2 oraInventory]# cd
/u01/app/crs
[root@docrac2 crs]# ./root.sh
After the root.sh script completes,
return to the OUI window where the Installer
prompted you to run the
orainstRoot.sh and root.sh scripts. Click OK.
The Configuration Assistants window
appears. When the configuration assistants
finish, OUI displays the End of
Installation window.
14. Click Exit to complete the installation
process, then Yes to confirm you want to
exit the installer.
If you encounter any problems, refer
to the configuration log for information. The
path to the configuration log is
displayed on the Configuration Assistants window.
3.1.2.4) Completing the Oracle
Clusterware Configuration
To complete the Oracle Clusterware
configuration on Red Hat Linux:
1. As the oracle user on docrac1, check the status
of the Oracle Clusterware
targets by entering the following
command:
/u01/app/crs/bin/crs_stat -t
This command provides output showing
if all the important cluster services, such
as gsd, ons, and vip, are running on
the nodes of your cluster.
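The check above can also be scripted so that any resource not yet ONLINE is flagged immediately. A sketch, assuming the /u01/app/crs CRS home used in this guide:

```shell
#!/bin/sh
# Sketch: flag any Oracle Clusterware resource that crs_stat reports as OFFLINE.
CRS_HOME=/u01/app/crs
if "$CRS_HOME"/bin/crs_stat -t | grep -qi offline; then
  echo "WARNING: one or more Clusterware resources are OFFLINE"
else
  echo "all registered resources are ONLINE"
fi
```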
3.1.3) Configuring Automatic Storage Management in an
ASM Home Directory
This section explains how to install
the ASM software in its own home directory.
Installing ASM in its own home
directory enables you to keep the ASM home separate
from the database home directory (Oracle_home).
3.1.3.1) To install ASM in a home
directory separate from the home directory used by Oracle Database:
1. Use the following commands to start OUI, where staging_area
is the location
of the staging area on disk, or the
location of the mounted installation disk:
cd /staging_area/database
./runInstaller
When you start Oracle Universal
Installer, the Select a Product to Install window
appears.
2. Select Oracle Database 11g from the
list, then click Next.
The Select Installation Type window
appears.
3. Select either Enterprise Edition or Standard
Edition and then click Next.
4. In the Specify Home Details window, specify a
name for the ASM home directory,
for example, OraASM11g_home. Select
a directory that is a subdirectory of your
Oracle Base directory, for example,
/opt/oracle/11gR1/asm. Click Browse to
change
the directory in which ASM will be installed.
After you have specified the ASM
home directory, click Next.
The Specify Hardware Cluster
Installation Mode window appears.
5. Click Select All to select all nodes for
installation, and then click Next.
If your Oracle Clusterware
installation was successful, then the Specify Hardware
Cluster Installation Mode window
lists the nodes that you identified for your
cluster, such as docrac1 and
docrac2.
After you click Next, the
Product-Specific Prerequisites Checks window appears.
6. When you see the message "Check complete. The overall result of this check
is: Passed", click Next.
The Select Configuration Option
window appears.
7. Select the Configure Automatic Storage
Management (ASM) option to install and
configure ASM. The ASM instance is
managed by a privileged role called SYSASM,
which grants full access to ASM disk
groups.
Enter a password for the SYSASM user account. The password must be at least 8
characters in length and include at least one alphabetic and one numeric
character.
Confirm the password by typing it in
again in the Confirm ASM SYS Password
field.
When finished, click Next.
The Configure Automatic Storage
Management window appears.
8. In the Configure Automatic Storage Management
window, the Disk Group Name
defaults to DATA. You can enter a
new name for the disk group, or use the default
name.
Check with your system administrator
to determine if the disks used by ASM are
mirrored at the storage level. If
so, select External for the redundancy. If the disks
are not mirrored at the storage
level, then choose Normal for the redundancy.
9. At the bottom right of the Add Disks section,
click Change Disk Discovery Path
to select any devices that will be
used by ASM but are not listed.
In the Change Disk Discovery Path
window, enter a string to use to search for
devices that ASM will use, such as
/dev/sd*, and then click OK.
You are returned to the Configure
Automatic Storage Management window.
10. Select the disks to be used by ASM, for example, /dev/sdd and /dev/sde.
After you have finished selecting
the disks to be used by ASM, click Next. The
Privileged Operating Systems Groups
window appears.
11. Select the name of the operating system group
you created in the previous chapter
for the OSDBA group, the OSASM
group, and the database operator group. If you
choose to create only the dba group,
then you can use that group for all three
privileged groups. If you created a
separate asm group, then use that value for the
OSASM group.
After you have supplied values for
the privileged groups, click Next. The Oracle
Configuration Manager Registration
window appears.
12. The Oracle Configuration Manager Registration
window enables you to configure
the credentials used for connecting
to OracleMetaLink. You can provide this
information now, or configure it
after the database has been installed. Click Next
to continue.
OUI displays the Summary window.
13. Review the information displayed in the Summary
window. If any of the
information appears incorrect, then
click Back to return to a previous window and
change it. When you are ready to
proceed, click Install.
OUI displays a progress window
indicating that the installation has started.
14. The installation takes several minutes to
complete. During this time, OUI
configures ASM on the specified
nodes, and then configures a listener on each
node.
After ASM has been installed, OUI
runs the Configuration Assistants. When the
assistants have finished
successfully, click Next to continue.
The Execute Configuration Scripts
window appears.
15. Run the scripts as instructed in the Execute
Configuration scripts window. For the
installation demonstrated in this
guide, only one script, root.sh, must be run,
and it must be run on both nodes.
The following steps demonstrate how
to complete this task on a Linux system
(note that for clarity, the examples
show the user, node name, and directory in the
prompt):
a. Open a terminal window. As the oracle user on
docrac1, change directories
to the ASM home directory, and then
switch to the root user:
[oracle@docrac1 oracle]$ cd
/opt/oracle/11gR1/asm
[oracle@docrac1 oracle]$ su
b. Enter the password for the root user, and then
run the script specified in the
Execute Configuration scripts
window:
[root@docrac1 oracle]# ./root.sh
c. As the root.sh script runs, it prompts you for
the path to the local bin
directory. The information displayed
in the brackets is the information it has
obtained from your system
configuration. Press the Enter key each time you
are prompted for input to accept the
default choices.
d. After the script has completed, the prompt
appears. Open another terminal
window, and enter the following
commands:
[oracle@docrac1 oracle]$ ssh docrac2
Enter the passphrase for key '/home/oracle/.ssh/id_rsa':
[oracle@docrac2 oracle]$ cd
/opt/oracle/11gR1/asm
[oracle@docrac2 asm]$ su
Password:
e. Enter the password for the root user, and then
run the script specified in the
Execute Configuration scripts
window:
[root@docrac2 asm]# ./root.sh
f. Accept all default choices by pressing the Enter
key.
g. After you finish executing the script on all
nodes, return to the Execute
Configuration Scripts window and
click OK to continue.
OUI displays the End of Installation
window.
16. Review the information in the End of
Installation window. The Web addresses
displayed are not used in this
guide, but may be needed for your business
applications.
17. Click Exit, and then click Yes to
verify that you want to exit the installation.
3.1.3.2) Verifying Your ASM
Installation
Verify that all the database
services for ASM are up and running.
To verify ASM is operational
following the installation:
1. Change directories to the bin directory in the
CRS home directory:
cd /u01/app/crs/bin
2. Run the following command as the oracle user,
where docrac1 is the name of
the node you want to check:
./srvctl status asm -n docrac1
ASM instance +ASM1 is running on
node docrac1.
The example output shows that there
is one ASM instance running on the local
node.
3. Repeat the command shown in Step 2,
substituting docrac2 for docrac1 to
verify the successful installation
on the other node in your cluster.
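Steps 2 and 3 above can be collapsed into a single loop over the node names, which scales better if you later add nodes (paths and node names as used in this guide):

```shell
#!/bin/sh
# Sketch: run the same ASM status check on every node in one pass.
for node in docrac1 docrac2; do
  /u01/app/crs/bin/srvctl status asm -n "$node"
done
```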
3.1.4) Installing
the Oracle Database Software and Creating a Cluster Database
The next step is to install the
Oracle Database 11g Release 1 software on the docrac1
Node. Before you start OUI you must
configure the environment of the oracle user. You
must set the ORACLE_SID,
ORACLE_BASE, and ORACLE_HOME environment variables
to the desired values for your
environment.
3.1.4.1) To install Oracle
Database on your cluster:
1. As the oracle user, use the following commands
to start OUI, where
staging_area is the location of the staging area
on disk, or the location of the
mounted installation disk:
cd /staging_area/database
./runInstaller
When you start Oracle Universal
Installer, the Select a Product to Install window
appears.
2. Select Oracle Database 11g from the list, then
click Next.
The Select Installation Type window
appears.
3. Select either Enterprise Edition or Standard
Edition. The Enterprise Edition option
is selected by default. When
finished, click Next.
The Install Location window appears.
4. Specify a name for the Oracle home, for
example, OraDb11g_home.
5. Select an Oracle home directory that is a
subdirectory of your Oracle base
directory, for example,
/opt/oracle/11gR1/db_1.
You can click Browse to
change the directory in which the Oracle Database
software will be installed. After
you have selected the directory, click Choose
Directory to close the Choose Directory window.
If the directory does not exist, you
can type in the directory path in the File Name
field, and then click Choose
Directory. If a window appears asking if you want to
create the directory, click Yes.
After you have verified the
information on the Install Location window, click
Next.
The Specify Hardware Cluster
Installation Mode window appears.
6. Select the nodes on which the Oracle Database
software will be installed. You can
also click Select All to select all
available nodes. After you have selected the nodes
on which to install the Oracle
Database software, click Next.
The Product-Specific Prerequisite
Checks window appears.
7. When you see the confirmation message that your
system has passed the
prerequisite checks, click Next.
The Select Configuration Option
window appears.
8. In the Select Configuration Option window,
accept the default option of Create a
Database and click Next.
The Select Database Configuration
window appears.
9. Select one of the following different types of
databases to be created:
■
General Purpose
■
Transaction Processing
■
Data Warehouse
■
Advanced (for
customized database creation)
The General Purpose database
type is selected by default. Select the type of
database that best suits your
business needs. For the example used by this guide,
the default value is sufficient.
After you have selected the database type, click
Next.
The Specify Database Configuration
Options window appears.
10. In the Global Database Name field, enter a
fully qualified name for your database,
such as sales.mycompany.com. Ensure
that the SID field contains the first part
of the database name, for example,
sales.
11. Check the settings on each of the tabs. If you
are not sure what values to use, then
accept the default values. On the
Sample Schemas tab, if you want sample data
and schemas to be created in your
database, then select the Create database with
sample schemas option. When finished, click Next
to continue.
The Select Database Management
Option window appears.
12. By default, the Use Database Control for
Database Management option is
selected instead of the Use Grid
Control for Database Management option. The
examples in this guide use Database
Control, which is the default value.
Do not select the option Enable
Email Notifications if your cluster is not connected
to a mail server.
After you have made your selections,
click Next.
The Specify Database Storage Option
window appears.
13. If you configured ASM on the cluster, select
the option Automatic Storage
Management (ASM) for the database storage. Otherwise,
select File System and
enter the location of your shared
storage, then click Next.
The Specify Backup and Recovery
Options window appears.
14. Select the default option Do not enable
Automated backup, and then click Next.
You can modify the backup settings
at a later time.
If you selected ASM as your storage
solution, the Select ASM Disk Group window
appears.
15. The Select ASM Disk Group window shows you
where the database files will be
created. Select the disk group that
was created during the ASM installation, and
then click Next.
The Specify Database Schema
Passwords window appears.
16. Assign and confirm a password for each of the
Oracle database schemas.
Unless you are performing a database
installation for testing purposes only, do not
select the Use the same password for
all the accounts option, as this can
compromise the security of your
data. Each password should be at least 8
characters in length and include at
least one alphabetic, one numeric, and one
punctuation mark character.
When finished entering passwords,
click Next. OUI displays the Privileged
Operating System Groups window.
17. Select the name of the operating system group
you created in the previous chapter
for the OSDBA group, the OSASM
group, and the database operator group. If you
choose to create only the dba group,
then you can use that group for all three
privileged groups. If you created a
separate asm group, then use that value for the
OSASM group.
After you have supplied values for
the privileged groups, click Next. The Oracle
Configuration Manager Registration
window appears.
18. The Oracle Configuration Manager Registration
window enables you to configure
the credentials used for connecting
to OracleMetaLink. You can provide this
information now, or configure it
after the database has been installed. Click Next
to continue.
OUI displays the Summary window.
19. Review the information displayed in the Summary
window. If any of the
information is incorrect, click Back
to return to a previous window and correct it.
When you are ready to proceed, click
Install.
OUI displays a progress indicator to
show that the installation has begun. This
step takes several minutes to
complete.
20. As part of the software installation process,
the sales database is created. At the
end of the database creation, you
will see the Oracle Database Configuration
Assistant (DBCA) window with the URL
for the Database Control console
displayed.
Make note of the URL, and then click
OK. Wait for DBCA to start the cluster
database and its instances.
21. After the installation, you are prompted to
perform the postinstallation task of
running the root.sh script on both
nodes.
On each node, run the scripts listed
in the Execute Configuration scripts window
before you click OK. Perform the
following steps to run the root.sh script:
a. Open a terminal window. As the oracle user on
docrac1, change directories
to your Oracle home directory, and
then switch to the root user by entering
the following commands:
[oracle@docrac1 oracle]$ cd /opt/oracle/11gR1/db_1
[oracle@docrac1 db_1]$ su
b. Enter the password for the root user, and then
run the script specified in the
Execute Configuration scripts
window:
[root@docrac1 db_1]# ./root.sh
c. As the root.sh script runs, it prompts you for
the path to the local bin
directory. The information displayed
in the brackets is the information it has
obtained from your system
configuration. Press the Enter key each time you
are prompted for input to accept the
default choices.
d. After the script has completed, the prompt
appears. Enter the following
commands:
[oracle@docrac1 oracle]$ ssh docrac2
[oracle@docrac2 oracle]$ cd /opt/oracle/11gR1/db_1
[oracle@docrac2 db_1]$ su
e. Enter the password for the root user, and then
run the script specified in the
Execute Configuration scripts
window:
[root@docrac2 db_1]# ./root.sh
f. Accept all default choices by pressing the Enter
key.
After you finish executing the
script on all nodes, return to the Execute
Configuration scripts window and
click OK.
OUI displays the End of Installation
window.
22. Click Exit and then click Yes to
verify that you want to exit OUI.
3.1.4.2) Verifying Your Oracle
RAC Database Installation
At this point, you should verify
that all the database services are up and running.
To verify the Oracle RAC database
services are running:
1. Log in as the oracle user and go to the CRS_home/bin
directory:
[oracle] $ cd /u01/app/crs/bin
2. Run the following command to view the status of
the applications managed by
Oracle Clusterware:
[oracle] $ ./crs_stat -t
The output of the command should
show that the database instances are available
(online) for each host.
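For the two-node cluster used in this chapter, the output resembles the following (the resource names are illustrative; your cluster shows its own registered applications):

```
Name                   Type         Target   State    Host
----------------------------------------------------------
ora.docrac1.vip        application  ONLINE   ONLINE   docrac1
ora.docrac2.vip        application  ONLINE   ONLINE   docrac2
ora.sales.db           application  ONLINE   ONLINE   docrac1
ora.sales.sales1.inst  application  ONLINE   ONLINE   docrac1
ora.sales.sales2.inst  application  ONLINE   ONLINE   docrac2
```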
3.1.4.3) Configuring the Operating System Environment for Database Management
After you have installed the Oracle
RAC software and created a cluster database, there
are two additional tasks to perform
to configure your operating system environment
for easier database management:
■
Updating the oratab
File
■
Reconfiguring the User
Shell Profile
Updating the oratab File
Several of the Oracle Database
utilities use the oratab file to determine the available
Oracle homes and instances on each
node. The oratab file is created by the root.sh
script and is updated by Oracle
Database Configuration Assistant when creating or
deleting a database.
The following is an example of the
oratab file:
# This file is used by ORACLE
utilities. It is created by root.sh
# and updated by the Database
Configuration Assistant when creating
# a database.
# A colon, ':', is used as the field
terminator. A new line terminates
# the entry. Lines beginning with a
pound sign, '#', are comments.
#
# Entries are of the form:
#
$ORACLE_SID:$ORACLE_HOME:<N|Y>:
#
# The first and second fields are
the system identifier and home
# directory of the database
respectively. The third field indicates
# to the dbstart utility that the
database should, "Y", or should not,
# "N", be brought up at
system boot time.
#
# Multiple entries with the same
$ORACLE_SID are not allowed.
#
#
+ASM1:/opt/oracle/11gR1/asm:N
sales:/opt/oracle/11gR1/db_1:N
sales1:/opt/oracle/11gR1/db_1:N
To update the oratab file on Red Hat
Linux after creating an Oracle RAC
database:
1. Open the /etc/oratab file for editing by using
the following command on the
docrac1 node:
vi /etc/oratab
2. Add an entry with the ORACLE_SID and
ORACLE_HOME of the local instance to the end of the
/etc/oratab file, for example:
sales1:/opt/oracle/11gR1/db_1:N
3. Save the file and exit the vi editor.
4. Modify the /etc/oratab file on each node in the
cluster, adding in the
appropriate instance information.
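Because the entries are colon-delimited, the file is easy to inspect with standard tools. The following is a minimal sketch, run against a throwaway sample rather than the real /etc/oratab, that lists the instances dbstart would bring up at boot (third field set to Y):

```shell
# Build a sample oratab so the sketch is self-contained.
cat > /tmp/oratab.sample <<'EOF'
# comment line, ignored
+ASM1:/opt/oracle/11gR1/asm:N
sales1:/opt/oracle/11gR1/db_1:Y
EOF

# Skip comments and blank lines; print the SID and Oracle home
# of every entry whose startup flag is Y.
awk -F: '!/^(#|$)/ && $3 == "Y" { print $1, $2 }' /tmp/oratab.sample
# -> sales1 /opt/oracle/11gR1/db_1
```

To check the real file, replace /tmp/oratab.sample with /etc/oratab.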
Reconfiguring the User Shell Profile
There are several environment
variables that can be used with Oracle RAC or Oracle
Database. These variables can be set
manually in your current operating system
session, using shell commands such
as set and export.
You can also have these variables
set automatically when you log in as a specific
operating system user. To do this,
modify the Bourne, Bash, or Korn shell
configuration file (for example
.profile or .login) for that operating system user.
To modify the oracle user’s profile
for the bash shell on Red Hat Linux:
1. As the oracle user, open the user profile in
the /home/oracle directory for
editing using the following
commands:
[oracle] $ cd $HOME
[oracle] $ vi .bash_profile
2. Modify the following lines in the file so they
point to the location of the newly
created Oracle RAC database:
export ORACLE_BASE=/opt/oracle/11gR1
export
ORACLE_HOME=/opt/oracle/11gR1/db_1
export PATH=$ORACLE_HOME/bin:$PATH
3. On each node, modify the .bash_profile file to
set the ORACLE_SID
environment variable to the name of
the local instance. For example, on the host
docrac1 you would add the following
line to the .bash_profile file:
export ORACLE_SID=sales1
On the host docrac2 you would set
ORACLE_SID to the value sales2.
4. Read and implement the changes made to the
.bash_profile file on each
instance:
source .bash_profile
5. On each client computer, configure user access
to use a service name, such as
sales, for connecting to the
database.
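Step 5 is typically done with a tnsnames.ora entry on each client. The following is a sketch for the sales service, assuming the virtual host names docrac1-vip and docrac2-vip and the default listener port 1521 (adjust all three to your environment):

```
SALES =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = docrac1-vip)(PORT = 1521))
    (ADDRESS = (PROTOCOL = TCP)(HOST = docrac2-vip)(PORT = 1521))
    (LOAD_BALANCE = yes)
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = sales)
    )
  )
```

A client can then connect with, for example, sqlplus system@sales.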
3.1.5) Performing Postinstallation Tasks
After you have installed the Oracle
RAC software, there are additional tasks that you
can perform before your cluster
database is ready for use. These steps are
recommended, but are not required.
This section contains the following
topics:
■
About Verifying the
Oracle Clusterware Installation
■
About Backing Up the
Voting Disk
■
About Downloading and
Installing RDBMS Patches
■
Verifying Oracle
Enterprise Manager Operations
■
Recommended
Postinstallation Tasks
3.1.5.1) About Verifying the Oracle Clusterware Installation
After the Oracle Clusterware
installation is complete, OUI automatically runs the
cluvfy utility as a Configuration
Assistant to verify that the Clusterware installation
has been completed successfully.
If the CVU reports problems with
your configuration, correct these errors before
proceeding.
3.1.5.2) About Backing Up the Voting Disk
After your Oracle Database 11g with
Oracle RAC installation is complete, and after
you are sure that your system is
functioning properly, make a backup of the contents
of the voting disk. Use the dd
utility, as described in the section "About Backing Up
and Recovering Voting Disks".
Also, make a backup copy of the
voting disk contents after you complete any node
additions or deletions, and after
running any deinstallation procedures.
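The dd invocation itself is a plain block-for-block copy. The sketch below demonstrates it against a scratch file so it can run anywhere; on a real node, if= would be the voting disk path reported by crsctl query css votedisk (the /dev/raw/raw2 path in the comment is an assumption, not your actual device):

```shell
# Stand-in "voting disk" so the sketch is self-contained; on a real node
# the input would be the device itself, e.g. if=/dev/raw/raw2 (assumed path).
dd if=/dev/zero of=/tmp/votedisk.img bs=4k count=8 2>/dev/null

# Back it up block-for-block, then verify the copy is identical.
dd if=/tmp/votedisk.img of=/tmp/votedisk.bak bs=4k 2>/dev/null
cmp /tmp/votedisk.img /tmp/votedisk.bak && echo "backup verified"
```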
3.1.5.3) About Downloading and Installing RDBMS Patches
Periodically, Oracle issues bug
fixes for its software called patches. Patch sets are a
collection of bug fixes that were
produced up to the time of the patch set release. Patch
sets are fully tested product fixes.
Application of a patch set affects the software
residing in your Oracle home.
Ensure that you are running the
latest patch set of the installed software. You might
also need to apply patches that are
not included in a patch set. Information about
downloading and installing patches
and patch sets is covered in Chapter 10,
"Managing Oracle Software and
Applying Patches".
3.1.5.4) Verifying Oracle Enterprise Manager Operations
When you create an Oracle RAC database
and choose Database Control for your
database management, the Enterprise
Manager Database Control utility is installed
and configured automatically.
To verify Oracle Enterprise Manager
Database Control has been started in your
new Oracle RAC environment:
1. Make sure the ORACLE_SID environment variable
is set to the name of the
instance to which you want to
connect, for example sales1. Also make sure the
ORACLE_HOME environment variable is
set to the location of the installed Oracle
Database software.
$ echo $ORACLE_SID
sales
$ export ORACLE_SID=sales1
$ echo $ORACLE_HOME
/opt/oracle/11gR1/db_1
2. Go to the Oracle_home/bin directory.
3. Run the following command as the oracle user:
./emctl status dbconsole
The Enterprise Manager Control
(EMCTL) utility displays the current status of the
Database Control console on the
current node.
4. If the EMCTL utility reports that Database
Control is not started, use the following
command to start it:
./emctl start dbconsole
5. Repeat Step 1 through Step 3 for each node in
the cluster.
3.1.5.5) Recommended Postinstallation Tasks
Oracle recommends that you complete
the following tasks after installing Oracle RAC:
■
About Backing Up the
root.sh Script
■
About Configuring User
Accounts
About Backing Up the root.sh Script
Oracle recommends that you back up
the root.sh script after you complete an
installation. If you install other
products in the same Oracle home directory, OUI
updates the contents of the existing
root.sh script during the installation. If you
require information contained in the
original root.sh script, then you can recover it
from the root.sh backup copy.
About Configuring User Accounts
The oracle user operating system
account is the account that you used to install the
Oracle software. You can use
different operating system accounts for accessing and
managing your Oracle RAC database.
KEY TERMS
SWAP SPACE- Swap space is an area
on disk that temporarily holds a process memory image. When physical memory
demand is sufficiently low, process memory images are brought back into
physical memory from the swap area on disk. Having sufficient swap space
enables the system to keep some physical memory free at all times.
This type of memory management is often referred to as virtual
memory and allows the total number of processes to exceed physical memory.
Virtual memory enables the execution of a process within physical memory only
as needed.
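On Linux you can check the configured swap space (which the Oracle installer verifies as an installation prerequisite) straight from the kernel; the values reported are system-dependent:

```shell
# Total and free swap, in kB, as reported by the kernel.
grep -E '^Swap(Total|Free):' /proc/meminfo

# Per-device summary (equivalent to the older swapon -s output).
cat /proc/swaps
```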
VIRTUAL IP-A virtual IP address (VIP or VIPA) is an IP address that is not connected to a
specific computer or network interface card
(NIC) on a computer. Incoming packets are sent to the VIP address, but they are
redirected to physical network interfaces. VIPs are mostly used for connection
redundancy; a VIP address may still be available if a computer or NIC fails
because an alternative computer or NIC replies to connections.
CACHE FUSION-
Prior to Oracle 9, network-clustered Oracle databases used a storage device as
the data-transfer medium (meaning that one node would write a data block to disk and another node would read that data
from the same disk), which had the inherent disadvantage of lacklustre
performance. Oracle 9i addressed this issue: RAC uses a dedicated
network-connection for communications internal to the cluster.
Since all computers/instances in an RAC access the same database, the
overall system must guarantee the coordination of data changes on different
computers such that whenever a computer queries data it receives the current
version — even if another computer recently modified that data. Oracle RAC
refers to this functionality as Cache Fusion. Cache Fusion involves the
ability of Oracle RAC to "fuse" the in-memory data cached
physically separately on each computer into a single, global cache.
NTP(Network Time Protocol)- NTP is a protocol designed to
synchronize the clocks of servers connected by a network.
RAW devices- a raw device is a special kind of block device file that allows accessing a
storage device such as a hard drive directly, bypassing the operating system's
caches and buffers (although the hardware caches might still be used).
Applications such as a database management system can use raw
devices.