RAID 0: block-level striping without parity or mirroring.
RAID 1: mirroring without parity or striping.
RAID 2: bit-level striping with dedicated Hamming-code parity.
RAID 3: byte-level striping with dedicated parity. The parity disk is a hot spot.
RAID 4: block-level striping with dedicated parity. The parity disk is a hot spot.
RAID 5: block-level striping with distributed parity. Tolerates a single disk failure.
RAID 6: block-level striping with double distributed parity. Tolerates two concurrent disk failures.
Commonly used: RAID 5 or a combination of RAID 0 and RAID 1.
RAID 0+1: striped sets in a mirrored set. Stripe first, then mirror.
RAID 1+0: mirrored sets in a striped set. Mirror first, then stripe.
RAID 5+0: RAID 5 sets in a striped set. Build the RAID 5 sets first, then stripe across them.
RAID 5+1: mirrored striped set with distributed parity.
For databases: RAID 1+0 or RAID 5.
REF:
http://en.wikipedia.org/wiki/RAID
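For reference, a minimal sketch of building the two common layouts with Linux mdadm; the device names /dev/sdb-/dev/sde and /dev/sdf-/dev/sdi are assumptions:
# mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde
# mdadm --create /dev/md1 --level=10 --raid-devices=4 /dev/sdf /dev/sdg /dev/sdh /dev/sdi
# cat /proc/mdstat ## confirm both arrays are active (md's level 10 is the mirror-then-stripe layout)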
How to clear Java cache
1. clear all caches
cd $COMMON_TOP/_pages
cat refresh_cache.sh
rm -r _oa__html.old _oa__java.old
mv _oa__html _oa__html.old
mv _oa__java _oa__java.old
mkdir _oa__html _oa__java
2. clear OA Framework cache
• Go to "Functional Administrator" responsibility.
• Under tab "Core Services" -> click on "Caching Framework"
• Click on "Global Configuration"
• Click "Clear ALL Cache"
3. Bounce the Apache server
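A sketch of the bounce, assuming the standard 11i control script location under $COMMON_TOP/admin/scripts/<CONTEXT_NAME>:
cd $COMMON_TOP/admin/scripts/<CONTEXT_NAME>
./adapcctl.sh stop
./adapcctl.sh start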
drop six Internal Controls Manager flexfields
Compile flexfields: adadmin (3,3) or
/data/appldev/11i/fnd/11.5.0/bin/fdfcmp APPS/***** 0 Y
The following flexfields got compilation errors:
AMW_AP_ATTR_GROUP_TYPE
AMW_CONSTRAINT_ATTR_GROUP_TYPE
AMW_CTRL_ATTR_GROUP_TYPE
AMW_PROCESS_ATTR_GROUP_TYPE
AMW_PROC_ORG_ATTR_GROUP_TYPE
AMW_RISK_ATTR_GROUP_TYPE
---------------------------------------------------------
Application Id: 242
Short Name: AMW
Name: Internal Controls Manager
Flexfield Name: AMW_AP_ATTR_GROUP_TYPE
Title: Audit Procedure
Compilation aborted for current descriptive flexfield...
APP-FND-00854: FDFBDF: Cannot find descriptive column information.
APP-FND-01564: ORACLE error 1403 in FDFBCO
Cause: FDFBCO failed due to ORA-01403: no data found.
The SQL statement being executed at the time of the error was: and was executed from the file &ERRFILE.
Continuing...
---------------------------------------------------------
Application Id: 242
Short Name: AMW
Name: Internal Controls Manager
Flexfield Name: AMW_CONSTRAINT_ATTR_GROUP_TYPE
Title: Constraint
Compilation aborted for current descriptive flexfield...
APP-FND-00854: FDFBDF: Cannot find descriptive column information.
APP-FND-01564: ORACLE error 1403 in FDFBCO
Cause: FDFBCO failed due to ORA-01403: no data found.
The SQL statement being executed at the time of the error was: and was executed from the file &ERRFILE.
Continuing...
---------------------------------------------------------
Application Id: 242
Short Name: AMW
Name: Internal Controls Manager
Flexfield Name: AMW_CTRL_ATTR_GROUP_TYPE
Title: Control
Compilation aborted for current descriptive flexfield...
APP-FND-00854: FDFBDF: Cannot find descriptive column information.
APP-FND-01564: ORACLE error 1403 in FDFBCO
Cause: FDFBCO failed due to ORA-01403: no data found.
The SQL statement being executed at the time of the error was: and was executed from the file &ERRFILE.
Continuing...
---------------------------------------------------------
Application Id: 242
Short Name: AMW
Name: Internal Controls Manager
Flexfield Name: AMW_PROCESS_ATTR_GROUP_TYPE
Title: Risk Library Process
Compilation aborted for current descriptive flexfield...
APP-FND-00854: FDFBDF: Cannot find descriptive column information.
APP-FND-01564: ORACLE error 1403 in FDFBCO
Cause: FDFBCO failed due to ORA-01403: no data found.
The SQL statement being executed at the time of the error was: and was executed from the file &ERRFILE.
Continuing...
---------------------------------------------------------
Application Id: 242
Short Name: AMW
Name: Internal Controls Manager
Flexfield Name: AMW_PROC_ORG_ATTR_GROUP_TYPE
Title: Organization Process
Compilation aborted for current descriptive flexfield...
APP-FND-00854: FDFBDF: Cannot find descriptive column information.
APP-FND-01564: ORACLE error 1403 in FDFBCO
Cause: FDFBCO failed due to ORA-01403: no data found.
The SQL statement being executed at the time of the error was: and was executed from the file &ERRFILE.
Continuing...
---------------------------------------------------------
Application Id: 242
Short Name: AMW
Name: Internal Controls Manager
Flexfield Name: AMW_RISK_ATTR_GROUP_TYPE
Title: Risk
Compilation aborted for current descriptive flexfield...
APP-FND-00854: FDFBDF: Cannot find descriptive column information.
APP-FND-01564: ORACLE error 1403 in FDFBCO
Cause: FDFBCO failed due to ORA-01403: no data found.
The SQL statement being executed at the time of the error was: and was executed from the file &ERRFILE.
Continuing...
---------------------------------------------------------
Application Id: 242
Short Name: AMW
Name: Internal Controls Manager
Flexfield Name: AMW_WORK_ATTR_GROUP_TYPE
Title: Work
Compilation was successful for current descriptive flexfield.
Ran afdff040_amw.sql from Oracle Support (SR #3-1515967891: AMW flexfield compilation error for bug 9748057) and then compiled the flexfields again. No issues.
Contents of afdff040_amw.sql:
REM $Header: afdff040.sql 115.14.1151.5 2002/02/20 10:48:00 pkm ship $
REM dbdrv: none
REM +======================================================================+
REM | Copyright (c) 1999 Oracle Corporation Redwood Shores, California, USA|
REM | All rights reserved. |
REM +======================================================================+
REM NAME
REM afdff040.sql
REM DESCRIPTION
REM Drops obsolete/re-created flexfields of all products
REM
REM NOTES
REM
REM HISTORY
REM 02/18/2002 nlo Removed GL_JE_SOURCES,GL_JE_CATEGORIES
REM 02/29/00 ssung Included MRP_SEIBAN_NUMBERS,
REM MRP_PROJECT_PARAMERTERS,
REM GL_CHARTS_OF_ACCOUNTS.
REM 02/03/00 nlo Added AK flexfields
REM 02/01/00 nlo Added JG flexfield contexts
REM 01/17/00 ssung fixes for Bug#1151660
REM 11/09/99 ssung Added AP_LOOKUP_CODES,
REM AS_WORD_PROCESSORS,
REM BOM_STANDARD_COMMENTS,
REM BOM_STANDARD_INSTRUCTIONS,
REM GL_JE_SOURCES, GL_JE_CATEGORIES,
REM AP_EFT_POPUP, AP_DIST_SET_PA_POPUP,
REM AP_PO_NUM_POPUP, AP_LOOKUP_CODES,
REM AP_DIST_PA_POPUP, AP_INV_PA_POPUP
REM 10/28/99 nlo Added JA obsolete descriptive flexfields
REM 10/26/99 nlo Added JG obsolete Flexfield Contexts
REM 10/08/99 nlo Added JA obsolete Flexfield Contexts & JG flexfields
REM 09/15/99 ssung Added some obsolete Flexfield Contexts,
REM list provided by kmaheswa
REM 08/16/99 Murali Added obsolete JL descriptive flexfields
REM 06/25/99 Nlo Deleted JG obsolete descriptive flexfields
REM and specific contexts
REM 05/03/99 Murali Added obsolete descriptive flexfield
REM 04/21/99 Golgun OTHERS exception handlers, and
REM set_session_mode calls are added.
REM 03/08/99 Murali Added obsolete descriptive flexfields
REM 08/28/98 Murali Re-wrote this script as procedures
REM 08/15/97 Murali Created
REM +======================================================================+
SET VERIFY OFF
WHENEVER SQLERROR EXIT FAILURE ROLLBACK;
REM Descriptive FlexFields
set serveroutput on size 20000;
declare
procedure delete_dsc_flexfield(p_prod IN varchar2,
p_desc_flex IN varchar2)
is
begin
fnd_flex_dsc_api.delete_flexfield(appl_short_name => p_prod,
flexfield_name => p_desc_flex);
exception
when others then
dbms_output.put_line('Flexfield '||p_desc_flex||' for '||
p_prod||' does not exist');
null;
end;
procedure delete_key_flexfield(p_prod IN varchar2,
p_key_flex IN varchar2)
is
begin
fnd_flex_key_api.delete_flexfield(appl_short_name => p_prod,
flex_code => p_key_flex);
exception
when others then
dbms_output.put_line('Flexfield '||p_key_flex||' for '||
p_prod||' does not exist');
null;
end;
procedure delete_key_flex_segment(
p_prod IN varchar2,
p_flexname IN varchar2,
p_structure IN varchar2,
p_segment IN varchar2)
is
l_segment fnd_flex_key_api.segment_type;
l_flexfield fnd_flex_key_api.flexfield_type;
l_structure fnd_flex_key_api.structure_type;
begin
begin
l_flexfield := fnd_flex_key_api.find_flexfield(p_prod, p_flexname);
exception
when no_data_found then
dbms_output.put_line('Flex '||p_flexname||' does not exist');
return;
end;
begin
l_structure := fnd_flex_key_api.find_structure(l_flexfield,
p_structure);
exception
when no_data_found then
dbms_output.put_line('Structure '||p_structure||' not found in '||
p_flexname);
return;
end;
begin
l_segment := fnd_flex_key_api.find_segment(
l_flexfield,
l_structure,
p_segment);
fnd_flex_key_api.delete_segment(l_flexfield, l_structure, l_segment);
exception
WHEN no_data_found THEN
dbms_output.put_line('Flex segment '||p_segment||' does not exist');
end;
exception
WHEN others THEN
dbms_output.put_line('delete_key_flex_segment is failed.');
end;
procedure delete_dsc_context(p_prod IN varchar2,
p_desc_flex IN varchar2,
p_desc_ctxt IN varchar2)
is
begin
fnd_flex_dsc_api.delete_context(appl_short_name => p_prod,
flexfield_name => p_desc_flex,
context => p_desc_ctxt);
exception
when others then
dbms_output.put_line('Flexfield '||p_desc_flex||' with context '||
p_desc_ctxt || ' for '|| p_prod||' does not exist');
null;
end;
begin
fnd_flex_dsc_api.set_session_mode('customer_data');
delete_dsc_flexfield('AMW','AMW_AP_ATTR_GROUP_TYPE');
delete_dsc_flexfield('AMW','AMW_CONSTRAINT_ATTR_GROUP_TYPE');
delete_dsc_flexfield('AMW','AMW_CTRL_ATTR_GROUP_TYPE');
delete_dsc_flexfield('AMW','AMW_PROCESS_ATTR_GROUP_TYPE');
delete_dsc_flexfield('AMW','AMW_PROC_ORG_ATTR_GROUP_TYPE');
delete_dsc_flexfield('AMW','AMW_RISK_ATTR_GROUP_TYPE');
-- Descriptive FlexField Segments
---No segments created in customer environment
-- Flexfield Contexts
exception
when others then
dbms_output.put_line('afdff040.sql failed.');
end;
/
commit;
exit;
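Before recompiling, it is worth confirming the six definitions are gone; a sketch against the FND repository table, with the name pattern taken from the error list above (expect no rows):
SQL> select descriptive_flexfield_name
  2  from fnd_descriptive_flexs
  3  where application_id = 242
  4  and descriptive_flexfield_name like 'AMW%ATTR_GROUP_TYPE';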
enable or disable low-level logging
Enable low-level logging
1. Change system profile "FND: Debug Log Level" from "Unexpected" to "Statement".
2. Change system profile "FND: Debug Log Enabled" from "No" to "Yes".
When testing begins, run:
select max(log_sequence) from apps.fnd_log_messages;
When testing ends, run the same query again.
Then run:
select * from apps.fnd_log_messages where log_sequence between {from} and {to} and message_text like 'PrintAM%';
The example above troubleshoots the Java class PrintAM.
The other choice is to filter on the TIMESTAMP column.
Reset the system profiles back to their original values after testing, as statement-level logging has a performance impact.
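A concrete sketch of the capture in SQL*Plus; the 12345 and 12400 sequence numbers are placeholders for the values returned before and after the test:
SQL> select max(log_sequence) from apps.fnd_log_messages; -- before testing, e.g. 12345
SQL> -- reproduce the problem, then:
SQL> select max(log_sequence) from apps.fnd_log_messages; -- after testing, e.g. 12400
SQL> select timestamp, module, message_text
  2  from apps.fnd_log_messages
  3  where log_sequence between 12345 and 12400
  4  and message_text like 'PrintAM%'
  5  order by log_sequence;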
backup and recovery (2) -- recovery from rollback segment corruption when in NOARCHIVELOG mode
1) Back up the database while the database is closed.
THE INSTRUCTIONS HERE ARE DESTRUCTIVE. YOU ARE STRONGLY ADVISED TO BACKUP THE DATABASE BEFORE PROCEEDING. IF YOU DO NOT DO THIS YOU MAY LOSE THE CHANCE TO TRY OTHER OPTIONS.
2) If your datafiles are from different points in time, it is best to try to use system tablespace datafiles at a similar timestamp to the OLDEST files you have online in the database. This reduces the chance that you will get problems during the bootstrap phase of opening the database.
3) Edit your *init.ora file to change undo_management and add one parameter.
* Change UNDO_MANAGEMENT=AUTO to UNDO_MANAGEMENT=MANUAL
* Remove or comment out UNDO_TABLESPACE and UNDO_RETENTION.
* Add
_OFFLINE_ROLLBACK_SEGMENTS =(comma separated list of Automatic Undo segments)
Example:
_OFFLINE_ROLLBACK_SEGMENTS = (_SYSSMU1$, _SYSSMU2$, _SYSSMU3$, _SYSSMU4$, _SYSSMU5$, _SYSSMU6$, _SYSSMU7$, _SYSSMU8$, _SYSSMU9$, _SYSSMU10$)
(There are 25 system managed undo segments in OAICM)
Note, sometimes the alert log will tell you what Automatic Undo segments are in use. Search the alert log for SYSS. If the alert log does not contain that information then use _SYSSMU1$ through _SYSSMU10$ as shown in the example above.
In UNIX you can issue this command to get the undo segment names:
$ strings system01.dbf | grep _SYSSMU | cut -d $ -f 1 | sort -u
From the output of the strings command above, add a $ to end of each _SYSSMU undo segment name.
* If you only have an spfile available, you can create an init.ora file from the closed, nomount, or mount stage as follows:
SQL> CREATE PFILE FROM SPFILE;
Do NOT edit the SPFILE.
4) Invoke SQL*Plus, startup mount, and check that the correct init.ora was used and that all datafiles have a status of ONLINE or SYSTEM.
$ sqlplus "/as sysdba"
SQL> startup mount pfile = (full path / file name to init.ora)
Example:
SQL> startup mount pfile=/db00/db04/oracle/oaicmdb/9.2.0/dbs/initOAICM.ora.forrecovery
Confirm that the hidden parameters from step 3 were used:
SQL> show parameters offline
You should see the hidden parameter listed. If not, the wrong init.ora may have been modified. Do not continue until "show parameters offline" shows the hidden parameter.
SQL> show parameters undo
You should see undo management set to manual. If not, the wrong init.ora may have been modified. Do not continue until "show parameters undo" shows undo management as manual.
5) Alter database datafile 'undo datafile' offline drop;
example: alter database datafile '/db00/db04/oracle/oaicmdata/undo01.dbf' offline drop;
6) alter database open;
7) drop all rollback segments with status NEEDS RECOVERY:
select owner, segment_name, status
from dba_rollback_segs;
drop rollback segment "_SYSSMU1$"; -- 10 system undo segments were dropped.
8) drop the undo tablespace including contents and datafiles:
example: drop tablespace apps_undots1 including contents and datafiles;
9) create undo tablespace apps_undots1 datafile '/db00/db04/oracle/oaicmdata/undo01.dbf' size 1000m reuse autoextend on next 100m maxsize 8000m;
10) shutdown the database
11) change the following parameters in the init.ora:
UNDO_MANAGEMENT=AUTO
#_OFFLINE_ROLLBACK_SEGMENTS=(_SYSSMU1$, _SYSSMU2$, _SYSSMU3$, ...etc)
12) startup
13) best practice:
Keep a copy of the controlfile and online redo logs on local (internal) disk to protect against database corruption caused by temporary storage problems.
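For step 7, the drop statements can be generated instead of typed by hand; a sketch:
SQL> select 'drop rollback segment "'||segment_name||'";'
  2  from dba_rollback_segs
  3  where status = 'NEEDS RECOVERY';
Run the generated statements, then continue with step 8.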
add and delete nodes for a RAC
Add a new node to the RAC
1. install and config OS and hardware for new node
$ cluvfy stage -pre crsinst -n node1,node2 -r 10gR2
2. add Oracle Clusterware to the new node
$ $CRS_ORACLE_HOME/oui/bin/addNode.sh (run this on the first node)
Open a new OS window and run $CRS_ORACLE_HOME/install/rootaddnode.sh on the first node.
On the new node, run the Clusterware home's root.sh, which will start the Clusterware stack and run VIPCA to configure nodeapps.
3. configure ONS for the new node
cat $CRS_ORACLE_HOME/opmn/conf/ons.config on the first node
$CRS_ORACLE_HOME/bin/racgons add_config {new node}:{remote port}
4. add ASM home to the new node
On the first node, run $ASM_ORACLE_HOME/oui/bin/addNode.sh, then run $ASM_ORACLE_HOME/root.sh on the new node.
5. add RAC home to the new node
On the first node, run $ORACLE_HOME/oui/bin/addNode.sh, then run $ORACLE_HOME/root.sh on the new node.
6. add a listener to the new node
run netca to add a listener: LISTENER_{New Node Name} on the new node.
7. add a database instance to the new node
run dbca on the first node.
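A quick verification of the addition before moving on, as a sketch (assuming the cluster is now node1, node2, and the new node3):
$ cluvfy stage -post crsinst -n node1,node2,node3
$ srvctl status nodeapps -n node3
$ crs_stat -t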
Delete a node from the RAC
1. delete the instance on the node to be deleted.
Run dbca on the node to be deleted.
2. clean up the ASM instance
srvctl stop asm -n {node to be deleted}
srvctl remove asm -n {node to be deleted}
rm -f $ASM_ORACLE_HOME/dbs/*ASM*
rm -rf $ASM_ORACLE_HOME/admin/+ASM
3. remove the listener from the node to be deleted.
Run netca from the node to be deleted.
4. remove the node from the database
4.1 update inventory
on the node to be removed
$ORACLE_HOME/oui/bin/runInstaller -updateNodeList ORACLE_HOME={database home} "CLUSTER_NODES={node to be removed}" -local
On the remaining nodes
$ORACLE_HOME/oui/bin/runInstaller -updateNodeList ORACLE_HOME={database home} "CLUSTER_NODES={remaining nodes}"
4.2 remove the database software installation
./runInstaller
5. remove the node from ASM
repeat step 4 for ASM home.
6. remove ONS configuration from the node to be deleted.
On the first node:
$CRS_ORACLE_HOME/bin/racgons remove_config {node to be removed}:{remote port}
7. remove the node from the clusterware.
As root: on the node to be removed
$CRS_ORACLE_HOME/install/rootdelete.sh
As root: on the first node
$CRS_ORACLE_HOME/bin/olsnodes -n ## to get the number of the node to be deleted
$CRS_ORACLE_HOME/install/rootdeletenode.sh
On the node to be deleted
$CRS_ORACLE_HOME/oui/bin/runInstaller -updateNodeList ORACLE_HOME={CRSOH} "CLUSTER_NODES={node to be deleted}" CRS=TRUE -local
On the first node
$CRS_ORACLE_HOME/oui/bin/runInstaller -updateNodeList ORACLE_HOME={CRSOH} "CLUSTER_NODES={remaining nodes}" CRS=TRUE
Verify
srvctl status nodeapps -n {deleted node}
crs_stat | grep -i {deleted node}
olsnodes -n
change VIP address
1. determine the interface used to support the VIP
$ ifconfig -a
2. stop all resources depending on the VIP
$ srvctl stop instance -d DBRAC -i DB1
$ srvctl stop asm -n node1
$ srvctl stop nodeapps -n node1
3. verify that the VIP is no longer running
$ ifconfig -a
and/or
$ crs_stat
4. change the IP in /etc/hosts and DNS
5. modify the VIP address using srvctl
$ srvctl modify nodeapps -n node1 -A 192.168.2.125/255.255.255.0/eth0
6. start nodeapps and all resources depending on it
$ srvctl start nodeapps -n node1
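The resources stopped in step 2 then come back on top of nodeapps; a sketch, with srvctl config to confirm the new VIP took effect:
$ srvctl start asm -n node1
$ srvctl start instance -d DBRAC -i DB1
$ srvctl config nodeapps -n node1 -a ## shows the VIP configuration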
user equivalence or secure shell for remote installation
User equivalence:
vi /etc/hosts.equiv and add the following two lines (the node names):
pind33
pind34
test it using rsh (remote shell):
rsh pind33 uname -r
rsh pind34 uname -r
secure shell (ssh)
applptch@pind33(PTCH_806_BALANCE/Web:Forms:MWA):$ /usr/bin/ssh-keygen -t dsa
Generating public/private dsa key pair.
Enter file in which to save the key (/home/applptch/.ssh/id_dsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/applptch/.ssh/id_dsa.
Your public key has been saved in /home/applptch/.ssh/id_dsa.pub.
The key fingerprint is:
17:ce:1d:66:0d:25:01:fa:a9:70:fa:d9:d2:04:a0:c1 applptch@pind33.powellind.com
[~]:[time 14:15:03]
applptch@pind34(PTCH_806_BALANCE/Web:Forms:MWA):$ /usr/bin/ssh-keygen -t dsa
Generating public/private dsa key pair.
Enter file in which to save the key (/home/applptch/.ssh/id_dsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /home/applptch/.ssh/id_dsa.
Your public key has been saved in /home/applptch/.ssh/id_dsa.pub.
The key fingerprint is:
5c:ad:ca:aa:93:e7:b0:92:66:92:c4:6b:09:3c:8f:f0 applptch@pind34.powellind.com
[~]:[time 14:15:40]
applptch@pind33(PTCH_806_BALANCE/Web:Forms:MWA):$ ssh pind33 "cat /home/applptch/.ssh/id_dsa.pub" >> /home/applptch/.ssh/authorized_keys
applptch@pind33's password:
[~]:[time 14:16:58]
applptch@pind33(PTCH_806_BALANCE/Web:Forms:MWA):$ ssh pind34 "cat /home/applptch/.ssh/id_dsa.pub" >> /home/applptch/.ssh/authorized_keys
applptch@pind34's password:
[~]:[time 14:17:19]
applptch@pind33(PTCH_806_BALANCE/Web:Forms:MWA):$ scp /home/applptch/.ssh/authorized_keys pind34:/home/applptch/.ssh
applptch@pind34's password:
authorized_keys 100% 1336 1.3KB/s 00:00
[~]:[time 14:17:56]
applptch@pind33(PTCH_806_BALANCE/Web:Forms:MWA):$ ssh pind34
Last login: Wed Sep 15 14:13:53 2010 from 172.18.33.73
[~]:[time 14:18:26]
applptch@pind34(PTCH_806_BALANCE/Web:Forms:MWA):$
applptch@pind34(PTCH_806_BALANCE/Web:Forms:MWA):$ ssh pind33
Last login: Wed Sep 15 14:24:10 2010 from pind34.powellind.com
[~]:[time 14:26:51]
applptch@pind33(PTCH_806_BALANCE/Web:Forms:MWA):$
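OUI needs the equivalence to work without any prompt. A quick sketch to confirm, caching the passphrase first with ssh-agent if one was set:
$ exec /usr/bin/ssh-agent $SHELL
$ /usr/bin/ssh-add
$ ssh pind33 date
$ ssh pind34 date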
hangcheck_timer module for monitoring the Linux kernel hangs
Make sure hangcheck_timer is running on all cluster nodes.
crs.oracle.pind22>/sbin/lsmod |grep -i hang
Module Size Used by
hangcheck_timer 37465 0
Add an entry to start the hangcheck_timer module on all nodes if necessary:
crs.oracle.pind22>vi /etc/rc.local
modprobe hangcheck-timer hangcheck_tick=1 hangcheck_margin=10 hangcheck_reboot=1
hangcheck_tick: defines how often, in seconds, the hangcheck_timer module checks nodes for hangs. The default value is 60 seconds.
hangcheck_margin: defines how long, in seconds, the timer waits for a response from the kernel. The default value is 180 seconds.
hangcheck_reboot: determines whether hangcheck_timer restarts the node if the kernel fails to respond within the sum of the hangcheck_tick and hangcheck_margin values. If hangcheck_reboot >= 1, the module restarts the system; if hangcheck_reboot = 0, it will not reboot the node even if a hang is detected.
For 10g, make sure that the cluster misscount is greater than hangcheck_tick + hangcheck_margin.
REF:
Hangcheck-timer module requirements for Oracle9i, 10g, and 11g RAC on Linux.
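With the settings above, hangcheck_tick + hangcheck_margin = 11 seconds. To compare that against the current CSS misscount, a sketch:
$ $CRS_ORACLE_HOME/bin/crsctl get css misscount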
migrate a database to ASM
1. SQL> alter system set db_create_file_dest='+DATA' scope=spfile;
SQL> alter system set control_files='' scope=spfile;
2. SQL> shutdown immediate;
3. SQL> startup nomount;
4. RMAN> restore controlfile from '/u01/ORCL/control01.ctl';
5. SQL> alter database mount;
6. RMAN> backup as copy database format '+DATA';
7. RMAN> switch database to copy;
8. RMAN> recover database;
9. SQL> alter database open;
10. SQL> alter database tempfile '/u01/ORCL/temp1.dbf' drop;
SQL> alter tablespace temp add tempfile;
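After step 10, a quick check that everything now lives in the +DATA disk group, as a sketch:
SQL> select name from v$datafile;
SQL> select name from v$tempfile;
SQL> select member from v$logfile; -- online redo logs are not moved by the steps above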
11gR2 database swap space
The 11g install guide gives this formula for swap:
If RAM is between 1024MB and 2048MB: swap = 1.5 times RAM
If RAM is between 2048MB and 8192MB: swap = RAM
If RAM is greater than 8192MB: swap = 0.75 times RAM
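A small sketch that applies the formula on Linux, reading MemTotal from /proc/meminfo:
#!/bin/sh
# recommended swap per the 11g install guide formula above
ram_mb=$(awk '/MemTotal/ {print int($2/1024)}' /proc/meminfo)
if [ $ram_mb -le 2048 ]; then
  swap_mb=$((ram_mb * 3 / 2))
elif [ $ram_mb -lt 8192 ]; then
  swap_mb=$ram_mb
else
  swap_mb=$((ram_mb * 3 / 4))
fi
echo "RAM ${ram_mb}MB -> recommended swap ${swap_mb}MB"
Compare the result with swapon -s (or free -m) before starting the installer.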
10gR2 EBS database init.ora
*.service_names = PROD,PROD1,PROD2,PROD.POWELLIND.COM
PROD1.local_listener = 'PROD1_LOCAL'
PROD1.remote_listener = 'PROD_REMOTE'
PROD2.local_listener = 'PROD2_LOCAL'
PROD2.remote_listener = 'PROD_REMOTE'
*.cluster_database = TRUE
*.cluster_database_instances = 2
*.undo_retention=28800
PROD1.undo_tablespace = APPS_UNDOTS1
PROD1.instance_name = PROD1
PROD1.instance_number = 1
PROD1.thread = 1
PROD2.undo_tablespace = APPS_UNDOTS2
PROD2.instance_name = PROD2
PROD2.instance_number = 2
PROD2.thread = 2
*.db_recovery_file_dest='/backup'
*.db_recovery_file_dest_size=1500G
*.recovery_parallelism=4
*.DB_CREATE_FILE_DEST = '+DSKGRP1'
*.DB_CREATE_ONLINE_LOG_DEST_1 = '+DSKGRP1'
*.DB_CREATE_ONLINE_LOG_DEST_2 = '+DSKGRP1'
*.db_name = PROD
*.control_files = '+DSKGRP1/PROD/CONTROLFILE/Current.275.682690593','+DSKGRP1/prod/controlfile/current.259.711532731'
*.control_file_record_keep_time = 30
*.db_block_size = 8192
*.compatible = 10.2.0
*._system_trig_enabled = TRUE # set FALSE for upgrades/migrations
*.O7_DICTIONARY_ACCESSIBILITY = FALSE
*.nls_language = AMERICAN
*.nls_territory = AMERICA
*.nls_date_format = DD-MON-RR
*.nls_numeric_characters = ".,"
*.nls_sort = binary # Required 11i setting
*.nls_comp = binary # Required 11i setting
*.nls_length_semantics = BYTE # Required 11i setting
*.audit_trail = false # if you want auditing
PROD2.user_dump_dest = /opt/oracle/product/10.2.0/PROD/admin/PROD2_pind42/udump
PROD2.background_dump_dest = /opt/oracle/product/10.2.0/PROD/admin/PROD2_pind42/bdump
PROD2.core_dump_dest = /opt/oracle/product/10.2.0/PROD/admin/PROD2_pind42/cdump
PROD1.user_dump_dest = /opt/oracle/product/10.2.0/PROD/admin/PROD1_pind41/udump
PROD1.background_dump_dest = /opt/oracle/product/10.2.0/PROD/admin/PROD1_pind41/bdump
PROD1.core_dump_dest = /opt/oracle/product/10.2.0/PROD/admin/PROD1_pind41/cdump
*.max_dump_file_size = 20480 # trace file size
*.timed_statistics = true
*._trace_files_public = TRUE
*.processes = 1000 # Max. no. of users x 2
*.sessions = 2000 # 2 X processes
*.db_files = 512 # Max. no. of database files
*.dml_locks = 10000
*.cursor_sharing = EXACT # Required 11i setting
*.open_cursors = 600 # Consumes process memory, unless using MTS.
*.session_cached_cursors = 500
*.archive_lag_target = 1800
*.db_block_checking = FALSE
*.db_block_checksum = TRUE
#*.log_checkpoint_timeout = 1200 # Checkpoint at least every 20 mins.
#*.log_checkpoint_interval = 100000
*.fast_start_mttr_target = 60 #in seconds
*.log_checkpoints_to_alert = TRUE
*.log_buffer = 10485760
*.undo_management=AUTO # Required 11i setting
*.workarea_size_policy=AUTO # Required 11i setting
*.olap_page_pool_size = 4194304
*.shared_pool_size = 900M
*.shared_pool_reserved_size = 90M
*.streams_pool_size = 50331648
*.session_max_open_files = 25
*.java_pool_size = 250M
*.remote_login_passwordfile = exclusive
*.large_pool_size =200M
*.pga_aggregate_target = 8G
*.plsql_optimize_level = 2 # Required 11i setting
*.plsql_code_type = INTERPRETED # Required 11i setting
# *.plsql_code_type = NATIVE # Required 11i setting
# *.plsql_native_library_dir = /opt/oracle/product/10.2.0/PROD/plsql/nativelib
# *.plsql_native_library_subdir_count = 1000
PROD1.utl_file_dir = /pcp/applprod/common/temp,/pcp/applprod/common/xx_custom/outbound/gl,/pcp/applprod/common/xx_custom/inbound/gl,/pcp/applprod/common/EDI/inholding,/pcp/applprod/common/EDI/outholding,/pcp/applprod/common/xx_custom/outbound/acsis,/opt/oracle/product/10.2.0/PROD/appsutil/outbound/PROD1_pind41,/pcp/applprod/common/xx_custom/outbound/ceridian,/usr/tmp
PROD2.utl_file_dir = /pcp/applprod/common/temp,/pcp/applprod/common/xx_custom/outbound/gl,/pcp/applprod/common/xx_custom/inbound/gl,/pcp/applprod/common/EDI/inholding,/pcp/applprod/common/EDI/outholding,/pcp/applprod/common/xx_custom/outbound/acsis,/opt/oracle/product/10.2.0/PROD/appsutil/outbound/PROD2_pind42,/pcp/applprod/common/xx_custom/outbound/ceridian,/usr/tmp
*.job_queue_processes = 10
*.parallel_max_servers = 16
*.parallel_min_servers = 0
*._like_with_bind_as_equality = TRUE
*._sort_elimination_cost_ratio=5
*._fast_full_scan_enabled = FALSE
*._sqlexec_progression_cost=2147483647
*._b_tree_bitmap_plans = FALSE
*._immediate_commit_propagation = TRUE
*._lm_global_posts = TRUE
*._optimizer_cost_based_transformation=off
*.optimizer_secure_view_merging = FALSE
*._kks_use_mutex_pin = TRUE
*.sga_target = 32G
*.pre_page_sga = TRUE
*.smtp_out_server = 'it-cy1-smtp.powellind.pri'
*._fix_control='8544722:ON'
10gR2 ASM init.ora
##############################################################################
# Copyright (c) 1991, 2001, 2002 by Oracle Corporation
########################################################################
###########################################
# Cluster Database
###########################################
cluster_database=true
###########################################
# Diagnostics and Statistics
###########################################
background_dump_dest=/opt/oracle/product/10.2.0/asm/admin/+ASM/bdump
core_dump_dest=/opt/oracle/product/10.2.0/asm/admin/+ASM/cdump
user_dump_dest=/opt/oracle/product/10.2.0/asm/admin/+ASM/udump
###########################################
# Miscellaneous
###########################################
instance_type=asm
###########################################
# Pools
###########################################
large_pool_size=12M
db_cache_size = 64M
shared_pool_size = 128M # default is 48M
###########################################
# Security and Auditing
###########################################
remote_login_passwordfile=exclusive
asm_diskgroups='DSKGRP1'
+ASM1.instance_number=1
+ASM2.instance_number=2
*.processes=100
10gR2 ASM views
Viewing Information About Automatic Storage Management
V$ASM_DISKGROUP: In an ASM instance, describes a disk group (number, name, size-related info, state, and redundancy type). In a DB instance, contains one row for every ASM disk group mounted by the local ASM instance. This view performs disk discovery every time it is queried.
V$ASM_DISK: In an ASM instance, contains one row for every disk discovered by the ASM instance, including disks that are not part of any disk group. In a DB instance, contains rows only for disks in the disk groups in use by that DB instance. This view performs disk discovery every time it is queried.
V$ASM_DISKGROUP_STAT: Has the same columns as V$ASM_DISKGROUP, but to reduce overhead does not perform a discovery when it is queried. It therefore does not return information on any disks that are new to the storage system. For the most accurate data, use V$ASM_DISKGROUP instead.
V$ASM_DISK_STAT: Has the same columns as V$ASM_DISK, but to reduce overhead does not perform a discovery when it is queried. It therefore does not return information on any disks that are new to the storage system. For the most accurate data, use V$ASM_DISK instead.
V$ASM_FILE: In an ASM instance, contains one row for every ASM file in every disk group mounted by the ASM instance. In a DB instance, contains no rows.
V$ASM_TEMPLATE: In an ASM or DB instance, contains one row for every template present in every disk group mounted by the ASM instance.
V$ASM_ALIAS: In an ASM instance, contains one row for every alias present in every disk group mounted by the ASM instance. In a DB instance, contains no rows.
V$ASM_OPERATION: In an ASM instance, contains one row for every active ASM long-running operation executing in the ASM instance. In a DB instance, contains no rows.
V$ASM_CLIENT: In an ASM instance, identifies databases using disk groups managed by the ASM instance. In a DB instance, contains one row for the ASM instance if the database has any open ASM files.
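A typical quick check from the ASM instance, as a sketch:
SQL> select group_number, name, state, type, total_mb, free_mb from v$asm_diskgroup;
SQL> select group_number, operation, state, est_minutes from v$asm_operation;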
10gR2 clusterware processes
oracle@pind22(PTCH2/DBTier):$ ps -ef|grep d.bin
oracle 7545 7544 0 Aug24 ? 00:00:32 /opt/oracle/product/10.2.0/crs/bin/evmd.bin
root 7687 5675 0 Aug24 ? 00:30:17 /opt/oracle/product/10.2.0/crs/bin/crsd.bin reboot
root 8102 7707 0 Aug24 ? 00:00:00 /opt/oracle/product/10.2.0/crs/bin/oprocd.bin run -t 1000 -m 500 -f
oracle 8231 7757 0 Aug24 ? 00:17:47 /opt/oracle/product/10.2.0/crs/bin/ocssd.bin
oracle@pind22(PTCH2/DBTier):$ ps -ef|grep racgimon
oracle 8736 1 0 Aug24 ? 00:00:04 /opt/oracle/product/10.2.0/asm/bin/racgimon daemon ora.pind22.ASM2.asm
oracle 25906 1 0 Sep09 ? 00:00:20 /opt/oracle/product/10.2.0/PTCH/bin/racgimon startd PTCH
EVMD: scans the callout directory and invokes callouts in reaction to detected events.
CRSD: the engine for high-availability operations. It uses the Oracle Cluster Registry (OCR) for information such as the cluster node list, cluster database instance-to-node mappings, and CRS application resource profiles. The OCR is around 100MB and needs 2 copies.
OPROCD: a process monitor for the cluster. It is documented as not used on Linux and Windows, but we are on Oracle Enterprise Linux and the process is running, so it is used on this Linux release. hangcheck_timer is also used for the same purpose.
OCSSD and RACGIMON: provide access to node membership and group services, using the voting (quorum) disk. The voting disk is about 20MB and needs 3 copies.
These processes are started by the Unix init process.
oracle@pind22(PTCH2/DBTier):$ ps -ef|grep init
root 5590 1 0 Aug24 ? 00:00:00 /bin/sh /etc/init.d/init.evmd run
root 5675 1 0 Aug24 ? 00:00:00 /bin/sh /etc/init.d/init.crsd run
root 5803 1 0 Aug24 ? 00:39:37 /bin/sh /etc/init.d/init.cssd fatal
root 7707 5803 0 Aug24 ? 00:00:00 /bin/sh /etc/init.d/init.cssd oprocd
root 7724 5803 0 Aug24 ? 00:00:00 /bin/sh /etc/init.d/init.cssd oclsomon
root 7757 5803 0 Aug24 ? 00:00:00 /bin/sh /etc/init.d/init.cssd daemon
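These init.d wrappers are respawned from /etc/inittab; a typical 10gR2 CRS install adds entries like the following (paths can differ per install):
h1:35:respawn:/etc/init.d/init.evmd run >/dev/null 2>&1 </dev/null
h2:35:respawn:/etc/init.d/init.cssd fatal >/dev/null 2>&1 </dev/null
h3:35:respawn:/etc/init.d/init.crsd run >/dev/null 2>&1 </dev/null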
using raw partitions
1. install shared disks
2. identify the shared disks to use
# fdisk -l
3. partition the device (assume using /dev/sda disk)
# fdisk /dev/sda
4. binding the partitions
4.1 identify the devices that are already bound
# /usr/bin/raw -qa
4.2 edit /etc/sysconfig/rawdevices in the following format:
/dev/raw/raw1 /dev/sda1
4.3 adjust ownership and permissions: root:dba with mode 640 for the OCR device, and oracle:dba with mode 660 for the voting disk devices.
4.4 execute rawdevices command
# /sbin/service rawdevices restart
For Red Hat 5.0, use the following udev binding mechanism instead: edit 60-raw.rules and 99-raw-perms.rules, then run /sbin/start_udev.
/etc/udev/rules.d/60-raw.rules
ACTION=="add", KERNEL=="emcpowerb1", RUN+="/bin/raw /dev/raw/raw5 %N"
ACTION=="add", KERNEL=="emcpowerc1", RUN+="/bin/raw /dev/raw/raw6 %N"
ACTION=="add", KERNEL=="emcpowero1", RUN+="/bin/raw /dev/raw/raw7 %N"
ACTION=="add", KERNEL=="emcpowerq1", RUN+="/bin/raw /dev/raw/raw8 %N"
ACTION=="add", KERNEL=="emcpowerr1", RUN+="/bin/raw /dev/raw/raw9 %N"
/etc/udev/rules.d/99-raw-perms.rules
KERNEL=="raw[5]" MODE="0640", GROUP="oinstall", OWNER="root"
KERNEL=="raw[6]" MODE="0666", GROUP="oinstall", OWNER="oracle"
KERNEL=="raw[7]" MODE="0666", GROUP="oinstall", OWNER="oracle"
KERNEL=="raw[8]" MODE="0666", GROUP="oinstall", OWNER="oracle"
KERNEL=="raw[9]" MODE="0640", GROUP="oinstall", OWNER="root"
email logfile on windows using blat
syntax:
Blat <filename> -to <recipient> [optional switches (see below)]
Blat -install <server addr> <sender's addr> [<try n times> [<port> [<profile>]]] [-q]
Blat -profile [-delete | "<default>"] [profile1] [profileN] [-q]
Blat -h [-q]
-install <server addr> <sender's addr> [<try n times> [<port> [<profile>]]]
: sets SMTP server, sender, number of tries and port for profile
(<try n times> and <port> may be replaced by '-').
<filename> : file with the message body ('-' for console input, end with ^Z)
-to <recipient> : recipient list (also -t) (comma separated)
-tf <file> : recipient list filename
-subject <subj> : subject line (also -s)
-f <sender> : overrides the default sender address (must be known to server)
-i <addr> : a 'From:' address, not necessarily known to the SMTP server.
-cc <recipient> : carbon copy recipient list (also -c) (comma separated)
-cf <file> : cc recipient list filename
-bcc <recipient> : blind carbon copy recipient list (comma separated)
-bf <file> : bcc recipient list filename
-organization <organization> : Organization field (also -o and -org)
-body <text> : message body
-x <header> : custom 'X-' header. eg: -x "X-INFO: Blat is Great!"
-r : request return receipt.
-d : request disposition notification.
-h : displays this help.
-q : suppresses *all* output.
-debug : echoes server communications to screen (disables '-q').
-noh : prevent X-Mailer header from showing homepage of blat
-noh2 : prevent X-Mailer header entirely
-p <profile> : send with SMTP server, user and port defined in <profile>.
-priority <pri> : set message priority: 0 for low, 1 for high.
-server <addr> : specify SMTP server to be used (optionally, addr:port)
-port <port> : port to be used on the server, defaults to SMTP (25)
-hostname <hostname> : select the hostname used to send the message
-mime : MIME Quoted-Printable Content-Transfer-Encoding.
-enriched : send an enriched text message (Content-Type=text/enriched)
-html : send an HTML message (Content-Type=text/html)
-uuencode : send (binary) file UUEncoded
-base64 : send (binary) file using base64 (binary MIME)
-try <n times> : how many times blat should try to send, from '1' to 'INFINITE'
-attach <file> : attach binary file to message (may be repeated)
-attacht <file> : attach text file to message (may be repeated)
-ti <n> : set timeout to 'n' seconds.
-u <username> : username for AUTH LOGIN (use with -pw)
-pw <password> : password for AUTH LOGIN (use with -u)
-log <file> : log everything but usage to <file>
-plain : use AUTH PLAIN to login to SMTP server (use with -u -pw)
-charset <charset> : user-defined charset. The default is ISO-8859-1.
Download blat for Windows from www.blat.net.
blat %manual_load_logfile% -to <recipients>
Blat -install <SMTP server addr> <sender's addr>
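A minimal end-to-end sketch for a scheduled batch file (the server, sender, recipient, and log path here are placeholders, not values from this post):
rem one-time setup: store the SMTP server and sender in the registry
blat -install smtp.example.com oracle@example.com
rem nightly job: mail the load log as the message body, retry up to 3 times
set manual_load_logfile=D:\oracle\log\manual_load.log
blat %manual_load_logfile% -to dba-team@example.com -subject "manual load log" -try 3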
migration procedure for Oracle database 8i on Windows to 10g on Linux
Migration procedure for Oracle database 8i on Windows (NT and 2000) to 10g on the Linux platform.
1. Identify developers and business owners for the database and set up a schedule to move and test the database.
2. Create the new database on the Linux box using dbca with the template USON_LINUX_DEV.dbt.
2a. Enable archive log mode. Check with "archive log list"; if archive log mode is not enabled, see the sketch after step 2b.
2b. Enable flashback, a good feature for fixing user errors (it requires archive log mode). Check with "select flashback_on from v$database;"; if the answer is NO, enable it in mount stage as sketched below.
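A compact sketch of steps 2a and 2b, from SQL*Plus as SYSDBA:
SQL> archive log list                      -- shows the current mode
SQL> shutdown immediate
SQL> startup mount
SQL> alter database archivelog;            -- step 2a
SQL> alter database flashback on;          -- step 2b (requires archivelog mode)
SQL> alter database open;
SQL> select flashback_on from v$database;  -- should now return YES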
3. Check tablespaces on the source database (the database on Windows). Create any missing tablespaces in the target database (the database on Linux).
Comments: 8i uses RBS; 10g uses UNDOTBS1.
Default tablespaces on 8i: SYSTEM, RBS, USERS, TEMP, and TOOLS.
Default tablespaces on 10g: SYSTEM, UNDOTBS1, USERS, TEMP, and SYSAUX.
Need to create TOOLS and other user-defined tablespaces.
On Linux with ASM (with db_create_file_dest pointing at a disk group), a bare "create tablespace tools;" is enough to create the tablespace.
4. Check database links on the source database: get a list of all database aliases listed in the HOST column of dba_db_links, then copy the entries for those aliases from the source database's tnsnames.ora to the target database's tnsnames.ora. Pay attention to databases that have already been moved.
5. Check all jobs running against this database. For Windows NT, run the at command at a DOS prompt; for Windows 2000, use Windows' Scheduled Tasks service.
6. Move all source code from the source database host to the target database host under $ORACLE_BASE/admin/$ORACLE_SID/script/. Convert the DOS batch jobs into shell scripts, add environment variable settings to the shell scripts, and fix all file paths in the scripts. Put batch job logs in $ORACLE_BASE/admin/$ORACLE_SID/log and data in $ORACLE_BASE/admin/$ORACLE_SID/data.
7. Put the source database in restricted mode (see the one-liner after this list).
8. Make a full export of the source database on the database host (using the original, i.e. 8i, exp command):
exp system/manager full=y file=exp_full_${ORACLE_SID}.dmp log=exp_full_${ORACLE_SID}.log
9. Shut down the source database.
10. ftp the dump file and log file to the target host in binary mode. Put the files in $ORACLE_BASE/admin/$ORACLE_SID/exp.
11. Perform a full import with the ignore parameter enabled on the target database:
imp system/manager full=y file=exp_full_${ORACLE_SID}.dmp log=imp_full_${ORACLE_SID}.log ignore=y
12. Review the import log file, fix errors, and work with developers to test the batch jobs. Ask developers to test the moved database.
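For reference, step 7 is a one-liner from SQL*Plus as SYSDBA on the source database:
SQL> alter system enable restricted session;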
sudo for oracle account
Add the following to /etc/sudoers (edit it with visudo):
oracle ALL=(ALL) NOPASSWD:ALL
This enables the oracle OS account to run root.sh as root.
It is also required for remote OEM agent installation.
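For example, running root.sh from the CRS home used elsewhere in these notes:
$ sudo /opt/oracle/product/10.2.0/crs/root.sh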
mirror 10gR2 OCR and voting disks
During the Clusterware installation there is an option to mirror the OCR and voting disks. As we did not mirror them at install time, we added the mirrors afterwards. Here are the instructions for doing it.
VOTING DISKS
1. Login as root on pind22
2. . /home/oracle/.profile_crs ## source env
3. crsctl query css votedisk ##should return /dev/raw/raw6
4. take a backup: dd if=/dev/raw/raw6 of=/data/oradata/votingdisk.linn.bk
5. ls -al /dev/raw/* ##check pending disk device
6. add /dev/raw/raw7 and /dev/raw/raw8 as mirrors of voting disks online.
crsctl add css votedisk /dev/raw/raw7
crsctl add css votedisk /dev/raw/raw8
7. if adding online failed
crsctl stop crs ###on pind21/22, very important, need to shutdown crs on all cluster nodes
crsctl add css votedisk /dev/raw/raw7 -force
crsctl add css votedisk /dev/raw/raw8 -force
crsctl start crs ###on pind21/22
8. crsctl query css votedisk ## should return /dev/raw/raw6, /dev/raw/raw7, /dev/raw/raw8
OCR
1. login as root on pind22
2. . /home/oracle/.profile_crs ## source env
3. ocrcheck ## should return /dev/raw/raw5
4. ls -al /dev/raw/* ## /dev/raw/raw9 is pending disk device
5. ocrconfig -export /data/oradata/ocr.linn.dmp -s online ## for backup
6. ocrconfig -replace ocrmirror /dev/raw/raw9 ## add ocr mirror
7. ocrcheck ## should return /dev/raw/raw5 and /dev/raw/raw9 as ocr disk device
Output log:
[root@pind22 ~]# cd /home/oracle/
[root@pind22 oracle]# . .profile_crs
crs.root.pind22>crsctl query css votedisk
0. 0 /dev/raw/raw6
located 1 votedisk(s).
crs.root.pind22>dd if=/dev/raw/raw6 of=/data/oradata/votingdisk.linn.bk
2064384+0 records in
2064384+0 records out
1056964608 bytes (1.1 GB) copied, 399.387 seconds, 2.6 MB/s
crs.root.pind22>ls -al /dev/raw/*
crw-r----- 1 root oinstall 162, 5 Aug 13 10:45 /dev/raw/raw5
crw-rw-rw- 1 oracle oinstall 162, 6 Aug 24 13:18 /dev/raw/raw6
crw-rw-rw- 1 oracle oinstall 162, 7 Aug 13 10:45 /dev/raw/raw7
crw-rw-rw- 1 oracle oinstall 162, 8 Aug 13 10:45 /dev/raw/raw8
crw-r----- 1 root oinstall 162, 9 Aug 13 10:45 /dev/raw/raw9
crs.root.pind22>crsctl add css votedisk /dev/raw/raw7
Cluster is not in a ready state for online disk addition
crs.root.pind22>crsctl stop crs
Stopping resources. This could take several minutes.
Successfully stopped CRS resources.
Stopping CSSD.
Shutting down CSS daemon.
Shutdown request successfully issued.
crs.root.pind22>crsctl add css votedisk /dev/raw/raw7
Cluster is not in a ready state for online disk addition
crs.root.pind22>crsctl add css votedisk /dev/raw/raw7 -force
Now formatting voting disk: /dev/raw/raw7
successful addition of votedisk /dev/raw/raw7.
crs.root.pind22>crsctl add css votedisk /dev/raw/raw8 -force
Now formatting voting disk: /dev/raw/raw8
successful addition of votedisk /dev/raw/raw8.
crs.root.pind22>crsctl start crs
Attempting to start CRS stack
The CRS stack will be started shortly
crs.root.pind22>crsctl query css votedisk
0. 0 /dev/raw/raw6
1. 0 /dev/raw/raw7
2. 0 /dev/raw/raw8
located 3 votedisk(s).
crs.root.pind22>
crs.root.pind22>ocrcheck
Status of Oracle Cluster Registry is as follows :
Version : 2
Total space (kbytes) : 1031920
Used space (kbytes) : 3320
Available space (kbytes) : 1028600
ID : 330979008
Device/File Name : /dev/raw/raw5
Device/File integrity check succeeded
Device/File not configured
Cluster registry integrity check succeeded
crs.root.pind22>ls -al /dev/raw/*
crw-r----- 1 root oinstall 162, 5 Aug 24 13:24 /dev/raw/raw5
crw-rw-rw- 1 oracle oinstall 162, 6 Aug 24 13:25 /dev/raw/raw6
crw-rw-rw- 1 oracle oinstall 162, 7 Aug 24 13:25 /dev/raw/raw7
crw-rw-rw- 1 oracle oinstall 162, 8 Aug 24 13:25 /dev/raw/raw8
crw-r----- 1 root oinstall 162, 9 Aug 13 10:45 /dev/raw/raw9
crs.root.pind22>ocrconfig -export /data/oradata/ocr.linn.dmp -s online
crs.root.pind22>ocrconfig -replace ocrmirror /dev/raw/raw9
crs.root.pind22>ocrcheck
Status of Oracle Cluster Registry is as follows :
Version : 2
Total space (kbytes) : 1031920
Used space (kbytes) : 3320
Available space (kbytes) : 1028600
ID : 330979008
Device/File Name : /dev/raw/raw5
Device/File integrity check succeeded
Device/File Name : /dev/raw/raw9
Device/File integrity check succeeded
Cluster registry integrity check succeeded
crs.root.pind22>
Due to bug 3972986 (see ref 2), online addition fails with "Cluster is not in a ready state for online disk addition". You need to shut down all databases and ASM instances, then shut down CRS, before adding the voting disks with -force.
REF:
1. OCR / Vote disk Maintenance Operations: (ADD/REMOVE/REPLACE/MOVE), including moving from RAW Devices to Block Devices. [ID 428681.1]
2. OCR Corruption after Adding/Removing voting disk to a cluster when CRS stack is running [ID 390880.1]
myadpatch
cd to the patch top (the directory with the unzipped patch files).
adpatch patchtop=`pwd` driver=`ls u*.drv` logfile=adpatch_`ls u*drv|cut -d . -f 1`.log workers=$((2*`grep -c processor /proc/cpuinfo`))
For non-interactive runs, set a defaults file for adpatch:
defaultsfile=$APPL_TOP/admin/$TWO_TASK/adalldefaults.txt
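Putting it together, a non-interactive run looks like this (the patch top path is hypothetical; interactive=no makes adpatch take every answer from the defaults file):
cd /stage/patches/1234567   # hypothetical unzipped patch top
adpatch defaultsfile=$APPL_TOP/admin/$TWO_TASK/adalldefaults.txt \
        interactive=no \
        patchtop=`pwd` driver=`ls u*.drv` \
        logfile=adpatch_`ls u*drv|cut -d . -f 1`.log \
        workers=$((2*`grep -c processor /proc/cpuinfo`))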
fnd_conc_clone.setup_clean, cmclean.sql and fnd_net_services.remove_server
In fnd_net_services.remove_server
delete from fnd_app_servers
delete from fnd_appl_tops
delete from fnd_database_assignments
delete from fnd_database_instances
delete from fnd_database_services
delete from fnd_databases
delete from fnd_db_service_members
delete from fnd_oracle_homes
delete from fnd_system_server_map
delete from fnd_tns_alias_address_lists
delete from fnd_tns_alias_addresses
delete from fnd_tns_alias_descriptions
delete from fnd_tns_alias_set_usage
delete from fnd_tns_alias_sets
delete from fnd_tns_aliases
delete from fnd_tns_listener_ports
delete from fnd_tns_listeners
In fnd_conc_clone.setup_clean
Delete from fnd_concurrent_queue_size
Delete from fnd_concurrent_queues_tl
Delete from fnd_concurrent_queues
Delete from fnd_nodes;
truncate_table(OracleUserFND , 'FND_OAM_CONTEXT_FILES');
truncate_table(OracleUserFND , 'FND_OAM_APP_SYS_STATUS');
-- Added following JTF tables based on bug 2949216
truncate_table(OracleUserJTF , 'JTF_PREFAB_HA_COMPS');
truncate_table(OracleUserJTF , 'JTF_PREFAB_HA_FILTERS');
truncate_table(OracleUserJTF , 'JTF_PREFAB_HOST_APPS');
truncate_table(OracleUserJTF , 'JTF_PREFAB_WSH_POES_B');
truncate_table(OracleUserJTF , 'JTF_PREFAB_WSH_POES_TL');
truncate_table(OracleUserJTF , 'JTF_PREFAB_WSHP_POLICIES');
truncate_table(OracleUserJTF , 'JTF_PREFAB_CACHE_STATS');
truncate_table(OracleUserFND,'FND_APP_SERVERS');
truncate_table(OracleUserFND,'FND_APPL_TOPS');
truncate_table(OracleUserFND,'FND_APPS_SYSTEM');
truncate_table(OracleUserFND,'FND_DATABASE_ASSIGNMENTS');
truncate_table(OracleUserFND,'FND_DATABASE_INSTANCES');
truncate_table(OracleUserFND,'FND_DATABASE_SERVICES');
truncate_table(OracleUserFND,'FND_DATABASES');
truncate_table(OracleUserFND,'FND_DB_INSTANCE_PARAMS');
truncate_table(OracleUserFND,'FND_DB_SERVICE_MEMBERS');
truncate_table(OracleUserFND,'FND_OAM_CONTEXT_FILES');
truncate_table(OracleUserFND,'FND_ORACLE_HOMES');
truncate_table(OracleUserFND,'FND_SYSTEM_SERVER_MAP');
truncate_table(OracleUserFND,'FND_TNS_ALIAS_ADDRESS_LISTS');
truncate_table(OracleUserFND,'FND_TNS_ALIAS_ADDRESSES');
truncate_table(OracleUserFND,'FND_TNS_ALIAS_DESCRIPTIONS');
truncate_table(OracleUserFND,'FND_TNS_ALIAS_SET_USAGE');
truncate_table(OracleUserFND,'FND_TNS_ALIAS_SETS');
truncate_table(OracleUserFND,'FND_TNS_ALIASES');
truncate_table(OracleUserFND,'FND_TNS_LISTENER_PORTS');
truncate_table(OracleUserFND,'FND_TNS_LISTENERS');
fnd_conc_clone.setup_clean truncates all tables touched by the fnd_net_services.remove_server procedure, so there is no need to run fnd_net_services.remove_server during cloning after running fnd_conc_clone.setup_clean.
In cmclean.sql
UPDATE fnd_concurrent_processes
UPDATE fnd_concurrent_queues
UPDATE fnd_concurrent_requests
UPDATE fnd_conflicts_domain
UPDATE fnd_concurrent_conflict_sets
As cmclean.sql and fnd_conc_clone.setup_clean touch different sets of concurrent processing tables, it is better to run both during a clone.
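A sketch of running both during a clone as the APPS user (cmclean.sql is the script from MOS note 134007.1; the password is a placeholder):
sqlplus apps/<apps_pwd> @cmclean.sql
sqlplus apps/<apps_pwd> <<EOF
exec fnd_conc_clone.setup_clean;
commit;
exit
EOF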
EBS Internal Concurrent Manager (ICM) starts all concurrent managers every minute
It was Friday morning at about 10:00am. We needed to bounce the integration instance (INT) database tier to disable archive log mode and apply a small patch in the GL area. Shutting down and starting up the database was slow: the application patch needed only about 6 minutes in DEV, but in INT it took about an hour.
The slow database caused a big problem for starting up the application tier. There were timeouts when I tried to start all nodes (two web/forms nodes and two concurrent manager/admin server nodes). Furthermore, the load average on the database server climbed rapidly to about 500 and the server stopped responding. We had to reboot it.
After some digging, I found a large number of processes on the concurrent manager tier. The internal concurrent manager (ICM) log showed that the ICM was starting all concurrent managers every minute. As each manager is configured with several concurrent processes, starting the managers starts all of those processes, and each concurrent process has a corresponding database process. The database server therefore became overloaded and could not respond to new client requests.
The database slowness was traced to the hugepages setting on Linux. The initial setting was sized for an 8GB SGA, but the INT database's SGA had been changed to 16GB. After resetting the SGA back to 8GB, INT returned to normal.
Still, the ICM started all concurrent managers merely because the database was about 10 times slower than normal, which exposes a design fault in the ICM. It appears the ICM compares fnd_concurrent_queues.target_processes with running_processes: if the two columns match, the ICM goes to sleep; otherwise it starts the concurrent managers again. It apparently does not account for timeouts, Oracle errors, and so on.
A service request has been opened for Oracle to provide a patch that stops the ICM from restarting the concurrent managers every minute.
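A diagnostic sketch of the comparison the ICM appears to make (my reading of its behavior, not Oracle's documented logic):
select concurrent_queue_name, target_processes, running_processes
  from fnd_concurrent_queues
 where nvl(target_processes,0) <> nvl(running_processes,0);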