Interesting Question 6...
What are the best practices to minimize downtime during patching?
Answer 6…
--- If it is a RAC database, always go for rolling patching.
--- Try to use the out-of-place patching feature to minimize the downtime.
--- Do all the pre-patch activities in advance.
Pre-patching activities:
- Patch Assessment (4 weeks in advance)
- Patching Plan (2 weeks in advance)
- Patch Download, Patch Staging & Prechecks (1 week in advance)
--- Take DB and Oracle Home backups 1 day before the patching window.
--- Gather stats on dictionary and fixed objects 1 day before the patching.
--- Compile all invalid objects 1 day before the patching.
--- Always patch the DR database first; if needed, do a switchover to make it the primary, patch the new DR, and later run datapatch.
--- Always follow the SDLC life cycle (DEV-TEST-UAT-PROD).
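The stats-gathering and invalid-object steps above can be run from SQL*Plus as SYSDBA; a minimal sketch (standard DBMS_STATS procedures and the shipped utlrp.sql script) might look like:

```sql
-- Gather dictionary and fixed-object stats ahead of the patching window
EXEC DBMS_STATS.GATHER_DICTIONARY_STATS;
EXEC DBMS_STATS.GATHER_FIXED_OBJECTS_STATS;

-- Recompile invalid objects using the shipped utlrp script
@?/rdbms/admin/utlrp.sql

-- Verify nothing is left invalid before the patching window
SELECT owner, object_name, object_type
  FROM dba_objects
 WHERE status = 'INVALID';
```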
Interesting Question 7...
My RMAN backup used to take 5 hours until yesterday; all of a sudden it is running more than 10 hours today.
How will you troubleshoot this scenario?
Answer 7…
--- Check whether BCT (Block Change Tracking) is disabled at the database level.
--- Check the backup location (NFS, SAN, NAS, or ASM) and the I/O performance or load on those mount points/disks.
--- Check for any network issues or packet drops between the database server and the storage server where you are taking the backup.
--- Check the load on the database server.
--- Check the load at the database level and whether any export or data load (SQL*Loader) is happening during the RMAN backup.
--- Gather stats on fixed objects and dictionary objects:
EXEC DBMS_STATS.GATHER_DICTIONARY_STATS;
EXEC DBMS_STATS.GATHER_FIXED_OBJECTS_STATS;
--- Check the RMAN script for any change in the backup type (backup sets vs. image copies) or the number of channels.
--- Check whether any changes happened at the database level, such as changes to CPU, processes, or other performance parameters.
--- Check whether any changes happened at the OS level, such as OS patching, CPU reduction, or memory reduction.
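To confirm the BCT status and compare today's run against previous runs, the standard dictionary views can help; a sketch of the checks above:

```sql
-- Is Block Change Tracking enabled?
SELECT status, filename FROM v$block_change_tracking;

-- Duration and status of recent backup jobs, newest first
SELECT start_time, end_time,
       ROUND(elapsed_seconds/3600, 2) AS hours,
       input_type, status
  FROM v$rman_backup_job_details
 ORDER BY start_time DESC;
```

A sudden jump in elapsed hours with no change in input size points toward the infrastructure checks (I/O, network, server load) rather than the RMAN script itself.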
Interesting Question 8...
A user is running a select query on the EMPLOYEE table that fetches 100GB worth of data, whereas my database has a 10GB SGA. How will the select query fetch 100GB of data and present it to the end user?
Answer 8…
--- It may or may not use a full table scan, based on the select query and the amount of data it returns.
--- It will fetch the data in chunks and pass them to the server process, and the server process will deliver them to the end user.
Example:
The 100GB of data can be divided into 5GB chunks; each round, 5GB is fetched and handed to the server process, so in total it may take about 20 rounds of fetching data from the datafiles.
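The same chunking idea is visible on the client side: the number of rows returned per round trip is controlled by the fetch array size, not by the SGA. In SQL*Plus, for example (EMPLOYEE is the hypothetical table from the question):

```sql
-- Fetch 500 rows per round trip instead of the default 15
SET ARRAYSIZE 500
SELECT * FROM employee;
```

A larger array size reduces the number of round trips for big result sets, at the cost of more client-side memory per fetch.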
Interesting Question 9...
I am doing a table export that is 1TB in size, but my database has a 10GB SGA. How will the table export work internally?
Is there any dependency between the table export and the SGA?
Answer 9…
--- Data Pump uses direct path load.
--- The size of the data being exported does not matter, since with direct path load the SGA does not come into the picture and we are not writing into the buffer cache.
--- In a nutshell, there is no dependency between Data Pump and the buffer cache; however, a few memory components and database parameters do impact Data Pump performance.
Example:
- DISK_ASYNCH_IO
- DB_BLOCK_CHECKING
- DB_BLOCK_CHECKSUM
- STREAMS_POOL_SIZE
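The current values of the parameters listed above can be checked from SQL*Plus before a large Data Pump job:

```sql
-- Parameters that influence Data Pump performance
SHOW PARAMETER disk_asynch_io
SHOW PARAMETER db_block_checking
SHOW PARAMETER db_block_checksum
SHOW PARAMETER streams_pool_size
```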
Interesting Question 10...
What are the RMAN optimization parameters? And how will you improve your RMAN backup performance?
Answer 10…
--- Make sure that RMAN backup optimization is set to ON or OFF based on the need.
--- Enable or disable BCT at the database level.
--- Set the proper number of RMAN channels based on the CPUs.
--- Use the proper RMAN backup type (backup set, image copy, incremental merge) based on the database type.
--- Enable or disable RMAN backup compression and encryption.
--- Use the RMAN SECTION SIZE clause in case of bigfile tablespace datafiles.
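The settings above map to standard RMAN commands; a sketch (channel count and section size are illustrative and should be tuned to your environment, and bigfile_ts is a hypothetical tablespace name):

```sql
-- Skip files that are already backed up when possible
CONFIGURE BACKUP OPTIMIZATION ON;

-- Allocate channels in line with available CPUs (4 is illustrative)
CONFIGURE DEVICE TYPE DISK PARALLELISM 4;

-- Enable BCT so incrementals scan only changed blocks (run in SQL*Plus)
ALTER DATABASE ENABLE BLOCK CHANGE TRACKING;

-- Back up a bigfile tablespace in parallel sections across channels
BACKUP SECTION SIZE 64G TABLESPACE bigfile_ts;
```

Note that some compression levels (MEDIUM/HIGH) require the Advanced Compression Option license, whereas BASIC compression does not.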
Regards,
Mallik