Tuesday, June 2, 2015

Database Cloning using RMAN

Database Cloning using RMAN:
=======================

The RDBMS version should be the same on the source and the target.
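A quick way to confirm the versions match (not part of the original notes): on the source, query the running instance; on the target, where no instance exists yet, check the release of the installed binaries, e.g. with sqlplus -V from that ORACLE_HOME.

on source: SQL> select version from v$instance;
on target: $ sqlplus -V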

1. take a backup of the source database
2. copy the backup pieces to the target server
3. start an auxiliary instance on the target
4. create the control file
5. restore the database from the backup pieces
6. open the database with RESETLOGS

Cloning can be done manually or using RMAN.

RMAN
======
on primary:
. oraenv
primdb
sqlplus "/ as sysdba"

check whether the DB is in archive log mode or not

SQL> archive log list;

connect to rman and take backup

$rman
RMAN> connect target /
RMAN> backup database plus archivelog;

I have not mentioned a destination, so the backup will go to the FRA (flash recovery area)
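To see where the FRA is pointing (and how big it is) before taking the backup, you can check:

SQL> show parameter db_recovery_file_dest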

on target server:
create the archive log destination directory, the same as shown in the RMAN output

on primary:
RMAN> backup current controlfile;

copy the backup pieces and the control file backup to the destination (target) server.
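A minimal sketch of the copy, assuming the backups sit under the default FRA path /u01/app/oracle/flash_recovery_area (adjust the directory names to your environment):

$ scp -r /u01/app/oracle/flash_recovery_area/PRIMDB oracle@server2:/u01/app/oracle/flash_recovery_area/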

create pfile='/tmp/initclonedb.ora' from spfile;

copy the init file to the target server
scp /tmp/initclonedb.ora oracle@server2:$ORACLE_HOME/dbs/

on target:
cd $ORACLE_HOME/dbs
vi initclonedb.ora

change primdb to clonedb

:1,$ s#primdb#clonedb#g

change the db_file_name_convert='primdb','clonedb'
change the log_file_name_convert='primdb','clonedb'
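After these edits the relevant lines of initclonedb.ora might look roughly like the following (the control file path is illustrative, not from the original notes):

*.db_name='clonedb'
*.control_files='/u01/app/oracle/oradata/clonedb/control01.ctl'
*.db_file_name_convert='primdb','clonedb'
*.log_file_name_convert='primdb','clonedb'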

on primary:
SQL> select name from v$datafile;

on target:
create the directory structure as referenced in the initclonedb.ora file (see the example below)
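For example, assuming the clone keeps the same base layout as the primary (paths are illustrative):

$ mkdir -p /u01/app/oracle/oradata/clonedb
$ mkdir -p /u01/app/oracle/admin/clonedb/adump
$ mkdir -p /u01/app/oracle/flash_recovery_area/clonedb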

add entry in /etc/oratab
vi /etc/oratab
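The new entry would look something like this (the ORACLE_HOME path is illustrative):

clonedb:/u01/app/oracle/product/11.2.0/dbhome_1:N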

. oraenv
clonedb
sqlplus "/ as sysdba"

starting the auxiliary instance
startup nomount

show parameter service

tnsping primdb
cd $ORACLE_HOME/network/admin
vi tnsnames.ora
primtns
{XXXXX
XXXXXXXXXXXXXXXXXX
XXXXXXXXXXXXXXX}
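A typical entry has the following shape (host, port and service name are placeholders, replace them with the primary's actual values):

primtns =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = server1)(PORT = 1521))
    (CONNECT_DATA =
      (SERVER = DEDICATED)
      (SERVICE_NAME = primdb)
    )
  )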

sqlplus  system/oracle@primtns

for RMAN cloning we need to connect to the source (target) database as well as the auxiliary instance at the same time

rman
RMAN> connect auxiliary /
RMAN> connect target sys/oracle@primtns
RMAN> run
{
allocate auxiliary channel c1 device type disk;
allocate channel c2 device type disk;
duplicate target database to clonedb;
}

select name, open_mode from v$database;

Regards,
Mallik

Steps to configure 11g Physical standby database

Steps to configure 11g Physical standby database:
=====================================

Primary side:
Primary DB_NAME = dell
Primary DB_UNIQUE_NAME = dell
IP = 192.168.1.100

standby side:
Standby DB_NAME = dell
Standby DB_UNIQUE_NAME = std
IP = 192.168.1.101

Required Parameters:
DB_NAME - must be the same on the primary and on all standbys
DB_UNIQUE_NAME - must be different on primary and standby
LOG_ARCHIVE_CONFIG - lists the db_unique_names that are part of the Data Guard configuration
LOG_ARCHIVE_DEST_n - local and remote archive log locations
LOG_ARCHIVE_DEST_STATE_n - state of archiving (ENABLE or DEFER)
REMOTE_LOGIN_PASSWORDFILE - EXCLUSIVE
FAL_SERVER - used for archive log gap resolution
DB_FILE_NAME_CONVERT - needed when the directory structure differs
LOG_FILE_NAME_CONVERT - needed when the directory structure differs
STANDBY_FILE_MANAGEMENT - keep AUTO to create files automatically on the standby

Steps:
=====

Perform the following steps on primary

Make database in archivelog mode
=========================
to verify the archive log mode

SQL> select log_mode from v$database;

SQL> SHUTDOWN IMMEDIATE;
SQL> STARTUP MOUNT;
SQL> ALTER DATABASE ARCHIVELOG;
SQL> ALTER DATABASE OPEN;

make sure database is in force logging mode
================================
SQL> select force_logging from v$database;
SQL> alter database force logging;

Verify the DB_NAME and DB_UNIQUE_NAME
====================================

SQL> show parameter db_name
SQL> show parameter db_unique_name

**************************************************************
Note: Configure the required services (dell and std) from netmgr
192.168.1.100$netmgr
to create the std service for the standby database (192.168.1.101)
192.168.1.100$lsnrctl start
**************************************************************

Make sure each DB_UNIQUE_NAME is part of the Data Guard configuration (std is the standby name)
========================================================

alter system set log_archive_config='dg_config=(dell,std)';

set the archive log destination
=====================

alter system set log_archive_dest_2='service=std valid_for=(online_logfiles,primary_role) db_unique_name=std';

alter system set log_archive_dest_state_2=enable;
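The local archive destination (log_archive_dest_1) can also be set explicitly, along the lines of the examples further below; a sketch with an illustrative path:

alter system set log_archive_dest_1='location=/u01/app/oracle/arch/dell valid_for=(all_logfiles,all_roles) db_unique_name=dell';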

set remote login password to exclusive
============================

alter system set remote_login_passwordfile=exclusive scope=spfile;

show parameter remote_login

Set the FAL server, and set the file name convert parameters in case the directory structure is different between the primary and the standby database:

alter system set fal_server=std;
alter system set db_file_name_convert='std','dell' scope=spfile;
alter system set log_file_name_convert='std','dell' scope=spfile;
alter system set standby_file_management=auto;

Now backup the primary database using RMAN
===================================

$ rman target /
RMAN> backup database plus archivelog;

Now create standby controlfile and spfile
=============================

SQL> alter database create standby controlfile as '/tmp/stdcontrol.ctl';

create pfile='/tmp/initstd.ora' from spfile;

modify the initstd.ora file
==================
1. change db_unique_name
2. change fal_server
3. change log_archive_dest_n
4. change location of controlfile
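After those four changes the modified lines of initstd.ora might look roughly like this (fal_server and the destination now point back to the primary; paths are illustrative):

*.db_unique_name='std'
*.fal_server='dell'
*.log_archive_dest_2='service=dell valid_for=(online_logfiles,primary_role) db_unique_name=dell'
*.control_files='/u01/app/oracle/oradata/std/control01.ctl','/u01/app/oracle/flash_recovery_area/std/control02.ctl'
*.db_file_name_convert='dell','std'
*.log_file_name_convert='dell','std'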

Create appropriate directory on physical standby and copy backupset, archivelog, pfile, standby controlfile and password file to physical standby database:

# standby controlfile to all locations:
scp /tmp/stdcontrol.ctl oracle@192.168.1.101:/u01/app/oracle/oradata/std/control01.ctl
cp /u01/app/oracle/oradata/std/control01.ctl /u01/app/oracle/flash_recovery_area/std/control02.ctl

# archivelogs and backupsets:
scp -r /u01/app/oracle/flash_recovery_area/DELL oracle@192.168.1.101:/u01/app/oracle/flash_recovery_area/

# parameter file:
scp /tmp/initstd.ora oracle@192.168.1.101:/tmp/initstd.ora

# remote login passwordfile:
scp /u01/app/oracle/product/11.2.0/dbhome_1/dbs/orapwdell oracle@192.168.1.101:/u01/app/oracle/product/11.2.0/dbhome_1/dbs/

On Physical standby server
====================

create the services on physical standby database and update /etc/oratab file

$netmgr

Now start listener on both server

lsnrctl start

update /etc/oratab (std:/u01/app/oracle/product/11.2.0/dbhome_1:N)

restore backup on standby:

create spfile from pfile

vi /etc/oratab
std:/u01/app/oracle/product/11.2.0/dbhome_1:N

. oraenv
std
sqlplus / as sysdba
create spfile from pfile='/tmp/initstd.ora';
exit

restore backupfile
=============
rman target=/
startup mount
restore database;

create standby redolog files on primary and standby database

standby
======
alter database add standby logfile ('/u01/app/oracle/oradata/std/standby_redo01.log') size 50M;
alter database add standby logfile ('/u01/app/oracle/oradata/std/standby_redo02.log') size 50M;
alter database add standby logfile ('/u01/app/oracle/oradata/std/standby_redo03.log') size 50M;
alter database add standby logfile ('/u01/app/oracle/oradata/std/standby_redo04.log') size 50M;

select member from v$logfile where type='STANDBY';

on primary database
===============
alter database add standby logfile ('/u01/app/oracle/oradata/dell/standby_redo01.log') size 50M;
alter database add standby logfile ('/u01/app/oracle/oradata/dell/standby_redo02.log') size 50M;
alter database add standby logfile ('/u01/app/oracle/oradata/dell/standby_redo03.log') size 50M;
alter database add standby logfile ('/u01/app/oracle/oradata/dell/standby_redo04.log') size 50M;

select member from v$logfile where type='STANDBY';

Now start the redo apply process on standby
================================
alter database recover managed standby database disconnect from session;

following command is use to stop redo apply process

alter database recover managed standby database cancel;

Test log transport
==============

on the primary server, check the latest archived redo log and force a log switch.

alter session set nls_date_format='DD-MON-YYYY HH24:MI:SS';

select sequence#,first_time,next_time from v$archived_log order by sequence#;

alter system switch logfile;

check that the new archived redo log has arrived at the standby server and has been applied:

alter session set nls_date_format='DD-MON-YYYY HH24:MI:SS';

select sequence#,first_time,next_time from v$archived_log order by sequence#;

select name, open_mode, database_role, log_mode, protection_mode from v$database;
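On the standby you can also confirm the apply status via the APPLIED column of v$archived_log, for example:

select sequence#, applied from v$archived_log order by sequence#;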

############### Steps to configure Read Only Standby ################
========================================================
shutdown immediate;
startup mount;
alter database open read only;
select * from scott.emp;

select name, open_mode, database_role, log_mode, protection_mode from v$database;

during read-only mode, archive logs will be shipped to the standby but will not be applied

############### Steps to configure mount Standby ################
=====================================================
shutdown immediate;
startup mount;
alter database recover managed standby database disconnect from session;

select name, open_mode, database_role, log_mode, protection_mode from v$database;

############### Active dataguard configuration ################
===================================================
shutdown immediate;
startup mount;
alter database open read only;
alter database recover managed standby database disconnect from session;

select name, open_mode, database_role, log_mode, protection_mode from v$database;

select * from scott.tab1;

############### snapshot standby configuration ################
===================================================
you can open the database in read write  mode.

archive logs will be shipped from the primary to the standby, but they will not be applied on the standby

make sure the managed recovery process is not running

shutdown immediate;
startup mount;

and make sure flashback is on

SQL> select flashback_on from v$database;
SQL> alter database convert to snapshot standby;

alter database open;
select flashback_on from v$database;
The above SQL may now show RESTORE POINT ONLY, meaning flashback is enabled only through the guaranteed restore point created for the snapshot standby.
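To see the guaranteed restore point that gets created for the snapshot standby, you can also query v$restore_point, for example:

SQL> select name, scn, time, guarantee_flashback_database from v$restore_point;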

as soon as you convert the snapshot standby back to a physical standby, all the changes will be rolled back.

create table scott.tab2 as select * from scott.emp;

select * from scott.tab2;

############### switch back to physical standby ################
===================================================
shutdown immediate;
startup mount;
alter database convert to physical standby;

shutdown immediate;
startup nomount;

alter database mount standby database;
alter database recover managed standby database disconnect;

select flashback_on from v$database;
which should say no.

select * from scott.tab2;
which should fail with 'table or view does not exist'.

############### database switch over roles ################
===============================================
on Primary:
select name, open_mode, database_role from v$database;
- role should be primary

on standby:
select name, open_mode, database_role from v$database;
- role should be physical standby

on Primary:
alter database commit to switchover to standby;

shutdown immediate;
startup nomount;

alter database mount standby database;
alter database recover managed standby database disconnect from session;

select name, open_mode, database_role from v$database;
- now the old primary is a physical standby

on standby:
alter database commit to switchover to primary;

shutdown immediate;
startup;

select name, open_mode, database_role from v$database;
- now the old standby is the primary

select sequence#,first_time,next_time from v$archived_log order by sequence#;

alter system switch logfile;

select sequence#,first_time,next_time from v$archived_log order by sequence#;

on the other database (the new standby):
select sequence#,first_time,next_time from v$archived_log order by sequence#;

Regards,
Mallik

Configuration of the Standby database for Primary Database

Configuration of the Standby database for Primary Database Explanation:
=======================================================

Primary Database  - may have many standby Databases

Standby - may be a physical standby or a logical standby
Physical standby - can be open in mount mode
Logical standby - can be open in read-only mode

Steps
1. Enabling force logging on the primary
2. Setting parameters on the primary database
3. Configuring the standby redo logs on the primary
4. Backing up the primary database
5. Creating the standby control file

Enabling the archive log mode
=======================
SQL> startup mount;
- database mounted
SQL> alter database archivelog;
- database altered
SQL> alter database open;
- database opened

to verify the log mode of the database:
SQL> archive log list
OR
SQL> select name, open_mode, log_mode from v$database;

Enabling the Force logging
====================
nologging - disables the recording of redo for an operation - for bulk loading we can disable logging
force logging - records all transactions (DML or DDL)

- records the changes on the database except changes made in temporary tablespaces or temporary segments
- all ongoing unlogged operations must finish before enabling force logging.

Database-level force logging takes precedence over tablespace-level force logging

SQL> alter database force logging;
SQL> select force_logging from v$database;

Creating the password file
====================

Use: to authenticate SYS users to the database remotely.

- The password file must be created on the primary and the same file copied over to the standby, so that the SYS password is the same on both primary and standby
- This is the main prerequisite for shipping redo from the primary and applying it at the standby
- Name format
orapw<SID>
- Location
$ORACLE_HOME/dbs

Creating password file
$ orapwd file=orapw<SID> password=<XXXX> entries=5 force=y

- the v$pwfile_users view shows the users who have been granted SYSDBA or SYSOPER privileges
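For example:

SQL> select username, sysdba, sysoper from v$pwfile_users;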

Set the initialisation parameter
========================

- create the SPFILE if the database is running from a pfile
- an SPFILE is mandatory for the Data Guard broker

SQL> create spfile from pfile;

log_archive_config - specifies the unique database name for each database in the configuration
db_unique_name - unique database name for the primary DB
log_archive_dest_n - controls the redo transport service
log_archive_dest_state_n - specifies the destination state
db_file_name_convert - converts primary database file names
log_file_name_convert - converts primary database log file names
standby_file_management - controls automatic standby file management (auto / manual)
archive_lag_target - forces a log switch after the specified number of seconds elapses
log_archive_trace - trace redo data transmission (delay trace in shipping)
fal_server - specify the oracle Net services name of the fal server (typically this is the database running in the primary role)
fal_client - specify the oracle Net services name of the fal client (typically this is the database running in the standby role)
remote_login_passwordfile - exclusive

log_archive_config
===============
1. specify the DG_CONFIG attribute to list the DB_UNIQUE_NAME of the primary and of each standby database in the Data Guard configuration
2. additional LOG_ARCHIVE_CONFIG parameter values:
- SEND: enables a database to send redo data to a remote destination
- RECEIVE: enables the standby database to receive redo data from another database
- use NOSEND / NORECEIVE to disable these settings
3. LOG_ARCHIVE_CONFIG='DG_CONFIG=(prim,stand)'
4. query v$dataguard_config to get the unique database names in the Data Guard configuration
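For example:

SQL> select db_unique_name from v$dataguard_config;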

DB_UNIQUE_NAME
=================
- uniquely identifies the primary and standby DBs
- It's recommended to have different values for primary and standby database
- used for redo transport service to identify the correct destinations

LOG_ARCHIVE_DEST_n
=====================
1. specify at least two LOG_ARCHIVE_DEST_n parameters
2. specify one destination for local archiving and one for the standby database location
3. must contain (at a minimum) one of the following:
- LOCATION
- SERVICE
- SERVICE refers to the TNS entry for the standby instance
4. query v$archive_dest to see the current settings of the LOG_ARCHIVE_DEST_n initialization parameters (see the sample query after the examples below)

LOG_ARCHIVE_DEST_2 = 'SERVICE=stand
VALID_FOR=(ONLINE_LOGFILES, PRIMARY_ROLE) DB_UNIQUE_NAME=stand'

LOG_ARCHIVE_DEST_1 = 'LOCATION=/oracle/prim/arch/
VALID_FOR=(ALL_LOGFILES, ALL_ROLES)
DB_UNIQUE_NAME=prim'
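As mentioned in point 4 above, the resulting destinations can be checked with a simple query, for example:

SQL> select dest_id, status, destination from v$archive_dest where status <> 'INACTIVE';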

Specifying Role-Based destination
==========================
1. Specified by the VALID_FOR attribute of the log_archive_dest_n parameter
2. Values for VALID_FOR attribute: redo_log_type, database_role
3. specifies which type of logfiles to be used and when the archive destination is to be used
4. database_role:
- primary_role
- standby_role
- all_roles
5. redo_log_type
- online_logfile
- standby_logfile
- all_logfiles
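As an illustration of a role-based destination on the standby side (destination number and path are illustrative), archived standby redo logs can be written locally only while the database runs in the standby role:

LOG_ARCHIVE_DEST_3 = 'LOCATION=/oracle/stand/arch/
VALID_FOR=(STANDBY_LOGFILES, STANDBY_ROLE)
DB_UNIQUE_NAME=stand'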

Defining the redo Transport Mode
==========================
use the attributes of log_archive_dest_n:
1. SYNC and ASYNC

MaxProtection / MaxAvailability - SYNC
MaxPerformance - ASYNC

2. AFFIRM and NOAFFIRM
- Ensure that redo was successfully written to disk on the standby destination
- NOAFFIRM is the default when ASYNC is specified
- AFFIRM is the default when SYNC is specified
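Putting the transport attributes together, a synchronous destination suitable for Maximum Availability might look like this (same service name as in the earlier examples):

LOG_ARCHIVE_DEST_2 = 'SERVICE=stand SYNC AFFIRM
VALID_FOR=(ONLINE_LOGFILES, PRIMARY_ROLE) DB_UNIQUE_NAME=stand'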

Regards,
Mallik

Automation Script | Archivelog Generation Hourly Monitoring

1. List out all the running databases and pick one database where we want to monitor the archive log generation for the last 1 month. [oracle@o...