Interesting Question 11...
Block corruption is detected in one of the datafiles of your Physical Standby database.
How will you recover that block corruption on your Physical Standby database?
Answer 11…
--- We have the db_block_checking, db_block_checksum, db_ultra_safe, and db_lost_write_protect parameters; setting them on both the primary and the standby helps prevent block corruption.
--- In case corruption still happens, an Active Data Guard standby performs corruption detection, prevention, and automatic repair, whereas on a normal physical standby we need to manually recover the corrupted blocks using valid backups.
Resolving Logical Block Corruption Errors in a Physical Standby Database (Doc ID 2821699.1)
Corrupted blocks are listed in V$DATABASE_BLOCK_CORRUPTION and can be repaired with:
RMAN> RECOVER DATAFILE 7 BLOCK 3;
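The manual repair flow can be sketched end to end as follows (a minimal sketch; the file and block numbers are illustrative):

```
SQL> SELECT file#, block#, blocks, corruption_type
  2  FROM v$database_block_corruption;

RMAN> VALIDATE DATABASE;            # populates V$DATABASE_BLOCK_CORRUPTION
RMAN> RECOVER DATAFILE 7 BLOCK 3;   # repair a single block
RMAN> RECOVER CORRUPTION LIST;      # repair every block listed in the view
```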
Interesting Question 12...
The production database archive destination is full. What is your immediate workaround or fix?
Consider two scenarios:
Case1: Your archive destination is set to the FRA.
Case2: Your archive destination is set to some custom location other than the FRA.
Answer 12…
Case1:
--- Increase the FRA size (db_recovery_file_dest_size) if free physical space is available.
--- Delete older archive logs from the FRA location if possible (only those already backed up/applied).
--- Take an RMAN backup of the archive logs with DELETE INPUT.
--- Temporarily change the archive log destination to some temporary location.
--- Move some files from the FRA location to a temporary location.
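The Case1 steps can be sketched as follows (a minimal sketch; the size and retention window are illustrative, and the deletions assume the logs are already applied on the standby):

```
SQL> ALTER SYSTEM SET db_recovery_file_dest_size = 200G;

RMAN> BACKUP ARCHIVELOG ALL DELETE INPUT;         # back up, then free the space
RMAN> DELETE ARCHIVELOG UNTIL TIME 'SYSDATE-2';   # only if already backed up/applied
```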
Case 2:
--- Increase the archive destination size if free physical space is available.
--- Delete older archive logs from the archive location if possible (only those already backed up/applied).
--- Take an RMAN backup of the archive logs with DELETE INPUT.
--- Temporarily change the archive log destination to some temporary location.
--- Move some files from the archive location to a temporary location.
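Switching the destination temporarily can be sketched as (a minimal sketch; /u01/arch_tmp is an illustrative path):

```
SQL> ALTER SYSTEM SET log_archive_dest_1='LOCATION=/u01/arch_tmp' SCOPE=MEMORY;
SQL> ALTER SYSTEM SWITCH LOGFILE;   # confirm archiving resumes to the new location
```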
Interesting Question 13...
Your standby/DR database goes out of sync. How will you bring it back in sync?
Consider three scenarios:
Case1: A gap of a few archive logs.
Case2: A huge archive log gap.
Case3: Archive logs deleted/missing from the primary without being applied on the DR/standby.
Answer 13…
Case1:
--- If it is a temporary network issue, the FAL_SERVER and FAL_CLIENT parameters will resolve the archive gap automatically.
--- Otherwise, copy the missing archive logs to the standby side, catalog them, and start MRP.
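The copy-and-catalog path can be sketched as (a minimal sketch; the staging directory is illustrative):

```
RMAN> CATALOG START WITH '/u01/stby_arch/' NOPROMPT;

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;
```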
Case2:
--- Do a roll-forward incremental restore: take an SCN-based incremental backup from PROD and apply it on the DR.
--- From 12c onwards, do the roll-forward automatically over the network using the RECOVER ... FROM SERVICE command with the primary's service name.
Case3:
--- Same approach as Case2: roll forward with an SCN-based incremental backup from PROD, or use the 12c RECOVER ... FROM SERVICE method.
Steps to perform for Rolling Forward a Physical Standby Database using RMAN Incremental Backup (Doc ID 836986.1)
Rolling Forward a Physical Standby Using Recover From Service Command in 12c (Doc ID 1987763.1)
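The 12c network-based roll-forward can be sketched as (a minimal sketch along the lines of Doc ID 1987763.1; prod_svc is an illustrative TNS service name pointing to the primary):

```
SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE CANCEL;

RMAN> RECOVER DATABASE FROM SERVICE prod_svc NOREDO;
RMAN> RESTORE STANDBY CONTROLFILE FROM SERVICE prod_svc;

SQL> ALTER DATABASE RECOVER MANAGED STANDBY DATABASE DISCONNECT FROM SESSION;
```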
Interesting Question 14...
What is TAF in RAC?
Where do you configure TAF? Client side or server side?
Does TAF support DML statements?
Answer 14…
--- TAF: Transparent Application Failover.
--- TAF can be configured on the server side as well as the client side.
--- Server side: service attributes are used to hold the TAF configuration.
--- Client side: the tnsnames.ora file is used to configure TAF.
--- With TAF, if an instance crashes, application sessions fail over to a surviving instance, and SELECT statements continue from where they left off.
--- The best recommendation is to configure TAF on the server side. TAF does not support DML statements; in-flight transactions are rolled back after failover.
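A client-side TAF entry in tnsnames.ora can be sketched as (a minimal sketch; the net service name, SCAN host, and service name are illustrative):

```
ORCL_TAF =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = rac-scan.example.com)(PORT = 1521))
    (CONNECT_DATA =
      (SERVICE_NAME = orcl_svc)
      (FAILOVER_MODE = (TYPE = SELECT)(METHOD = BASIC)(RETRIES = 20)(DELAY = 5))
    )
  )
```

On the server side, the same behavior is held as service attributes, e.g. srvctl add service ... -failovertype SELECT -failovermethod BASIC.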
Interesting Question 15...
What is cache fusion in RAC?
What is split brain syndrome in RAC?
What is the simple majority rule in RAC?
Why should we have an odd number of voting disks in RAC? What happens if we have an even number of voting disks?
Answer 15…
--- Cache Fusion: transferring blocks from one cluster instance to another directly (SGA to SGA).
--- Suppose I have a 4-node RAC cluster. If a block is available on Node1 and a user session connected to Node2/Node3/Node4 needs the same block, the block is transferred directly from Node1 rather than read again from disk (the datafile).
--- Cache Fusion uses the cluster private network (interconnect) to transfer the blocks.
--- Split brain syndrome: if the nodes are not able to communicate with each other, each set of cluster nodes acts like an individual cluster. This situation is called split brain syndrome.
--- When the cluster gets into split brain syndrome, the nodes that are not communicating are evicted from the cluster based on the voting disks.
--- Simple majority rule: to survive, a node (or sub-cluster) must be able to access more than half of the voting disks, i.e. trunc(n/2) + 1 disks, where n is the number of voting disks configured and n >= 1. For example, with n = 3 a node must access at least trunc(3/2) + 1 = 2 voting disks.
--- It is always recommended to keep an odd number of voting disks so that the simple majority rule can always be decided.
--- With an even number of voting disks, two halves of the cluster could each access exactly half of the disks and neither would hold a majority; an odd number guarantees that one side does.
Network heartbeat: node-to-node communication (network ping) over the private network.
Disk heartbeat: each node's communication (disk ping) with the voting disks over the shared storage path, not the private network.
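The voting disk configuration and CSS health can be checked from the Clusterware side (a minimal sketch):

```
$ crsctl query css votedisk   # list the configured voting disks
$ crsctl check css            # verify Cluster Synchronization Services
```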
Interesting Question 16...
I have a server where 10 standalone databases are running on a filesystem, with multiple Oracle Homes.
Somebody deleted /etc/oratab. In that case, how can we check which database belongs to which Oracle Home?
Answer 16…
--- Check the listener configuration files and listener status to find out which databases are registered with each listener.
--- Check the respective tnsnames.ora file under each Oracle Home ($ORACLE_HOME/network/admin or the $TNS_ADMIN location).
--- Check inventory.xml under the central inventory (e.g. /u01/app/oraInventory/ContentsXML/inventory.xml); it lists each Oracle Home.
--- Check /etc/oraInst.loc to get the central inventory location.
--- Check the local inventory under each Oracle Home ($ORACLE_HOME/inventory/ContentsXML/comps.xml).
--- We can find it from the password file under the respective Oracle Home ($ORACLE_HOME/dbs/orapw<SID/DBNAME>).
--- If there are any settings under .bash_profile or any login shell script, we can refer to them.
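On Linux, running instances can also be mapped to their Oracle Homes directly from the process list (a minimal sketch; <pmon_pid> is illustrative):

```
$ ps -ef | grep '[p]mon'        # list running instances (ora_pmon_<SID>)
$ ls -l /proc/<pmon_pid>/exe    # the link points to <ORACLE_HOME>/bin/oracle
$ pwdx <pmon_pid>               # cwd is usually <ORACLE_HOME>/dbs
```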
Regards,
Mallik