Thursday, September 17, 2015

Can't access Weblogic Console | ERR_SSL_WEAK_SERVER_EPHEMERAL_DH_KEY


When accessing the WebLogic Console for OEM Cloud Control using Chrome, I received the following error.




I knew nothing had changed and no new patches had been applied. So what happened?

After doing some Google searching, I found out that the latest version of Google Chrome (version 45) no longer accepts weak ciphers.  Any website that uses outdated security code will no longer open in Chrome.



The DHE_EXPORT cipher used by WebLogic is vulnerable to the Logjam attack.

In My Oracle Support Doc ID 2054204.1, Oracle acknowledges this as a bug and is currently working on a patch.

The workaround for Chrome is to pass the following parameter to chrome.exe:


"C:\Program Files (x86)\Google\Chrome\Application\chrome.exe" --cipher-suite-blacklist=0x0033,0x0039


Right-click the Chrome shortcut (wherever you have it), go to the "Shortcut" tab, and add the parameter to the Target field.  After this, close all Chrome windows and restart the browser.





Here are the workarounds for other browsers, as mentioned in MOS ID 2054204.1:



a. Internet Explorer:
==============
    1. Increase key strength of WLS certificates to 1024 bits:
        << Note 1510058.1>> - Regenerating OEM-WLS Demo Identity Certificate with 1024 bit Keystrength
    2. Access WLS Console in Internet Explorer

b. Firefox:
=======
    1. Increase key strength of WLS certificates to 1024 bits:
        << Note 1510058.1>> - Regenerating OEM-WLS Demo Identity Certificate with 1024 bit Keystrength
    2. Open firefox browser and type 'about:config' in URL field
    3. Search for 'security.ssl3.dhe_rsa_aes_128_sha' and 'security.ssl3.dhe_rsa_aes_256_sha'
    4. Double click (Toggle) on 'security.ssl3.dhe_rsa_aes_128_sha' and 'security.ssl3.dhe_rsa_aes_256_sha' so that their value gets changed to 'false'
    5. Close the firefox and open new firefox window
    6. Access OEM Weblogic Admin Server Console



For up-to-date information, please see MOS.

Monday, September 14, 2015

ORA-01180: can not create datafile 1 during restore

Moved to my blog:

http://www.aamirharoon.com/11gr2/ora-01180-can-not-create-datafile-1-during-restore/

It's been a while since I last posted.  My three boys keep me very busy :)

I came across this error at work and thought it was worth mentioning. Maybe someone else can benefit from it and not spend hours looking for a solution.

During a Disaster Recovery exercise, my task was to drop an existing database (left over from a previous restore) and restore the database from the current backup.

It is important to note that a database already existed in ARCHIVELOG mode (it will become clear later why I am mentioning this).

So, after deleting the database using dbca, I executed the restore database command.  Immediately, I got this ugly, confusing, not-enough-information error :(


RMAN-03002: failure of restore command at 09/10/2015 17:04:24
ORA-01180: can not create datafile 1
ORA-01110: data file 1: '+DG_DATA_01/db1/datafile/system.211.849181900'

Wednesday, April 30, 2014

How to Clone Oracle Grid Infrastructure / Restart and Database

The following instructions can be used to clone an Oracle GI Restart home from a working Oracle Linux server to a new server.  This method can save a lot of time compared to installing the GI software and Oracle database software and then applying the PSU patch on every new server.
 
Assumption: You already have an OS template or server with Oracle Linux 6.4, Oracle Grid Infrastructure (GI), and 11gR2 database software installed.  Also, the binaries have already been copied to the target server.

These instructions can also be used for environments created from a VDI or OVM template with the Oracle software already installed.

Cloning GI Home

ROOT tasks (only needed if HAS was already configured)

/grid/product/11.2.0/grid/crs/install/roothas.pl -deconfig -force

find /grid/product/11.2.0/grid -name "*.log" -exec rm -f {} \;
find /grid/product/11.2.0/grid  -name "myserver" -exec rm -rf {} \;
find /grid/product/11.2.0/grid/gpnp -type f -exec rm -f {} \;
find /grid/product/11.2.0/grid/cfgtoollogs -type f -exec rm -f {} \;

rm -rf /grid/product/11.2.0/grid/crs/init/*
rm -rf /grid/product/11.2.0/grid/cdata/*
rm -rf /grid/product/11.2.0/grid/crf/*
rm -rf /grid/product/11.2.0/grid/log/myserver
find /grid/product/11.2.0/grid -name '*.ouibak' -exec rm {} \;
find /grid/product/11.2.0/grid -name '*.ouibak.1' -exec rm {} \;
rm -rf /etc/oracle/*
rm -rf /grid/product/11.2.0/grid/log/myserver
 
chmod u+s /grid/product/11.2.0/grid/bin/oracle
chmod g+s /grid/product/11.2.0/grid/bin/oracle
chmod u+s /grid/product/11.2.0/grid/bin/extjob
chmod u+s /grid/product/11.2.0/grid/bin/jssu
chmod u+s /grid/product/11.2.0/grid/bin/oradism

ORACLE tasks

Make sure the init.ohasd process is not running.  If it is, become root and kill it.
ps -ef | grep has
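That check can be scripted; here is a minimal sketch (assuming pgrep is available, as it is on Oracle Linux):

```shell
# Check for a lingering init.ohasd process and kill it if found.
# The [i] in the pattern keeps pgrep from matching this command itself.
pids=$(pgrep -f '[i]nit.ohasd' || true)
if [ -n "$pids" ]; then
    echo "killing init.ohasd pid(s): $pids"
    kill $pids                      # must be run as root
else
    echo "init.ohasd is not running"
fi
```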
Detach the GI home from the existing inventory (only needed if you copied the binaries to a different directory structure than the source):
myserver-> /grid/product/11.2.0/grid/oui/bin/runInstaller -detachHome ORACLE_HOME=/grid/product/11.2.0/grid

Starting Oracle Universal Installer...
Checking swap space: must be greater than 500 MB.   Actual 4095 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /oracle/oraInventory
'DetachHome' was successful. 
Remove the ASM entry from /etc/oratab (if it is still there from the previous OS template).
Modify the listener.ora file and update the hostname.
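The HOST entry is the part that typically needs updating; a minimal sketch with placeholder names (listener name, hostname, and port are examples, not from the original configuration):

```
# listener.ora sketch - update HOST to the new server's hostname.
LISTENER =
  (DESCRIPTION_LIST =
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = myserver.mydomain.com)(PORT = 1521))
    )
  )
```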
Ready to clone the GI home:
myserver-> cd /grid/product/11.2.0/grid/clone/bin
myserver-> /grid/product/11.2.0/grid/perl/bin/perl clone.pl ORACLE_BASE=/oracle ORACLE_HOME=/grid/product/11.2.0/grid OSDBA_GROUP=dba OSOPER_GROUP=dba ORACLE_HOME_NAME=Ora11g_gridinfrahome1 CRS=TRUE

./runInstaller -clone -waitForCompletion  "ORACLE_BASE=/oracle" "ORACLE_HOME=/grid/product/11.2.0/grid" "oracle_install_OSDBA=dba" "oracle_install_OSOPER=dba" "ORACLE_HOME_NAME=Ora11g_gridinfrahome1" "CRS=TRUE" -silent -noConfig -nowait 
Starting Oracle Universal Installer...
 
Checking swap space: must be greater than 500 MB.   Actual 4095 MB    Passed
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2013-11-05_08-00-37AM. Please wait ...Oracle Universal Installer, Version 11.2.0.3.0 Production
Copyright (C) 1999, 2011, Oracle. All rights reserved.
 
You can find the log of this install session at:
 /oracle/oraInventory/logs/cloneActions2013-11-05_08-00-37AM.log
.................................................................................................... 100% Done
.
 
 
Could not backup file /grid/product/11.2.0/grid/rootupgrade.sh to /grid/product/11.2.0/grid/rootupgrade.sh.ouibak
Could not backup file /grid/product/11.2.0/grid/root.sh to /grid/product/11.2.0/grid/root.sh.ouibak
 
Installation in progress (Tuesday, November 5, 2013 8:00:46 AM EST)
........................................................................                             72% Done.
Install successful
 
Linking in progress (Tuesday, November 5, 2013 8:00:48 AM EST)
Link successful
 
Setup in progress (Tuesday, November 5, 2013 8:01:07 AM EST)
.................                                               100% Done.
Setup successful
 
End of install phases.(Tuesday, November 5, 2013 8:01:28 AM EST)
WARNING:
The following configuration scripts need to be executed as the "root" user.
/grid/product/11.2.0/grid/root.sh
To execute the configuration scripts:
    1. Open a terminal window
    2. Log in as "root"
    3. Run the scripts
    
Run the script on the local node.
The cloning of Ora11g_gridinfrahome1 was successful.
Please check '/oracle/oraInventory/logs/cloneActions2013-11-05_08-00-37AM.log' for more details.
Following configuration scripts need to be executed as the "root" user.
myserver-> /grid/product/11.2.0/grid/root.sh

Check /grid/product/11.2.0/grid/install/root_myserver.mydomain.com_2013-11-05_08-32-20.log for the output of root script

myserver-> /grid/product/11.2.0/grid/perl/bin/perl -I/grid/product/11.2.0/grid/perl/lib -I/grid/product/11.2.0/grid/crs/install /grid/product/11.2.0/grid/crs/install/roothas.pl

Using configuration parameter file: /grid/product/11.2.0/grid/crs/install/crsconfig_params
LOCAL ADD MODE
Creating OCR keys for user 'oracle', privgrp 'dba'..
Operation successful.
LOCAL ONLY MODE
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
CRS-4664: Node myserver successfully pinned.
Adding Clusterware entries to upstart
 
myserver     2013/11/05 08:34:12     /grid/product/11.2.0/grid/cdata/myserver/backup_20131105_083412.olr
Successfully configured Oracle Grid Infrastructure for a Standalone Server

Now ready to create the ASM instance.  Use the oracleasm command to scan disks and list the names of the disks available to you.

myserver-> oracleasm scandisks
myserver-> oracleasm listdisks

Run the asmca command, replacing the disk names with the output of the oracleasm listdisks command, and adjust the AU_SIZE accordingly.

myserver-> cd /grid/product/11.2.0/grid/bin
myserver-> ./asmca -silent -configureASM -sysAsmPassword password -asmsnmpPassword password -diskString 'ORCL:*' -diskGroupName DG_DATA_01 -diskList 'ORCL:DISK1,ORCL:DISK2,ORCL:DISK3,ORCL:DISK4' -redundancy EXTERNAL -au_size 4 -compatible.asm 11.2 -compatible.rdbms 11.2
 
ASM created and started successfully.
 
Disk Group DG_DATA_01 created successfully.
If you have Huge Pages configured, disable AMM.  Run the following SQL in the ASM instance and restart it to disable AMM:

alter system set memory_target=0 scope=spfile; 
alter system set memory_max_target=0 scope=spfile; 
alter system set sga_target=272M scope=spfile;

Add the listener to CRS:
srvctl add listener 
srvctl start listener 
srvctl status listener

Cloning DB Home


Detach the Oracle home from the inventory:

myserver-> /oracle/product/11.2.0/dbhome_1/oui/bin/runInstaller -detachHome ORACLE_HOME=/oracle/product/11.2.0/dbhome_1
Starting Oracle Universal Installer...
 
Checking swap space: must be greater than 500 MB.   Actual 4095 MB    Passed
The inventory pointer is located at /etc/oraInst.loc
The inventory is located at /oracle/oraInventory
'DetachHome' was successful.

Clone Oracle Home

myserver-> cd /oracle/product/11.2.0/dbhome_1/clone/bin
myserver-> /oracle/product/11.2.0/dbhome_1/perl/bin/perl clone.pl ORACLE_BASE=/oracle ORACLE_HOME=/oracle/product/11.2.0/dbhome_1 OSDBA_GROUP=dba OSOPER_GROUP=dba ORACLE_HOME_NAME=OraDb11g_home1
./runInstaller -clone -waitForCompletion  "ORACLE_BASE=/oracle" "ORACLE_HOME=/oracle/product/11.2.0/dbhome_1" 
"oracle_install_OSDBA=dba" "oracle_install_OSOPER=dba" "ORACLE_HOME_NAME=OraDb11g_home1" -silent -noConfig -nowait 
Starting Oracle Universal Installer...
 
Checking swap space: must be greater than 500 MB.   Actual 4095 MB    Passed
Preparing to launch Oracle Universal Installer from /tmp/OraInstall2013-11-05_09-26-43AM. Please wait ...Oracle Universal Installer, Version 11.2.0.3.0 Production
Copyright (C) 1999, 2011, Oracle. All rights reserved.
 
You can find the log of this install session at:
 /oracle/oraInventory/logs/cloneActions2013-11-05_09-26-43AM.log
.................................................................................................... 100% Done
.
 
Installation in progress (Tuesday, November 5, 2013 9:26:53 AM EST)
..............................................................................               79% Done.
Install successful
 
Linking in progress (Tuesday, November 5, 2013 9:26:58 AM EST)
Link successful
 
Setup in progress (Tuesday, November 5, 2013 9:27:23 AM EST)
Setup successful
 
End of install phases.(Tuesday, November 5, 2013 9:27:46 AM EST)
WARNING:
The following configuration scripts need to be executed as the "root" user.
/oracle/product/11.2.0/dbhome_1/root.sh
To execute the configuration scripts:
    1. Open a terminal window
    2. Log in as "root"
    3. Run the scripts
    
The cloning of OraDb11g_home1 was successful.
Please check '/oracle/oraInventory/logs/cloneActions2013-11-05_09-26-43AM.log' for more details.
Run the following as ROOT user:
/oracle/product/11.2.0/dbhome_1/root.sh
Ready to create a database, either using dbca or scripts.

Sunday, November 10, 2013

Optimize Import Data Pump

Oracle Import Data Pump (impdp) is no doubt much faster than the deprecated imp utility.  However, it has some flaws that make it slower than it could be.

For example, if you import data_only for a partitioned table, impdp does a serial import (see MOS ID 1327029.1).

Also, when you have a schema with many indexes, the import can take a while because impdp creates one index at a time.  Each index is created with the parallelism you specify in the par file or at the command line, but impdp does not create multiple indexes at the same time.

Here are the general steps I use to speed up the import process using impdp.

data_only import


  1. Put the database in NOARCHIVELOG mode.
  2. Disable constraints (FK and PK)
  3. Change all indexes in the schema(s) to unusable
  4. Generate grant statements for sequences
  5. Disable all triggers
  6. Drop sequences
  7. Import data using following options
    1. content=data_only
    2. TABLE_EXISTS_ACTION=TRUNCATE
    3. parallel=4 (if your machine can handle more, increase this number)
  8. Import sequences
    1. content=metadata_only
    2. INCLUDE=SEQUENCE
    3. parallel=4
  9. Apply the grants generated in step 4.
  10. Rebuild indexes using parallel 20 in multiple sessions.
    1. generate ALTER INDEX owner.index_name REBUILD PARALLEL 20; into a file
    2. use the Unix split utility to split the file into 10 separate files.
    3. use nohup and & to run the sqlplus sessions in the background, starting all 10 sessions.
  11. Enable foreign and primary constraints
  12. Change indexes back to NOPARALLEL, or back to whatever degree they had before.
  13. Enable triggers
  14. Update Stats
  15. Put database back in ARCHIVELOG mode
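The index-rebuild fan-out in step 10 can be sketched in shell.  This is only a sketch: the function name and file names are hypothetical, and the command that executes each chunk is passed as a parameter (in practice it would be a small wrapper that runs sqlplus -s "/ as sysdba" @chunk_file), so the splitting logic can be exercised without a database:

```shell
# Sketch of steps 10.2-10.3: split a file containing one
# ALTER INDEX ... REBUILD PARALLEL 20; statement per line into ~10 chunks,
# then run each chunk in its own background session.
# Usage: fan_out <statement_file> <runner command...>
fan_out() {
    stmt_file=$1; shift                 # remaining args = command to run one chunk
    total=$(wc -l < "$stmt_file")
    per_file=$(( (total + 9) / 10 ))    # ceiling division: at most 10 chunks
    split -l "$per_file" "$stmt_file" chunk_
    for f in chunk_*; do
        # one background session per chunk; each gets its own log
        nohup "$@" "$f" < /dev/null > "$f.log" 2>&1 &
    done
    wait                                # block until every session finishes
}
```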
Using the method above, I was able to import about 511GB in 28 minutes for a database running in NOARCHIVELOG mode, and rebuilt 2,089 indexes in 40 minutes.  I created a Korn shell script for the entire import process, and it took about 1 hour and 20 minutes to complete.


Import everything

  1. Put the database in NOARCHIVELOG mode
  2. Set the db_block_checksum parameter to false to improve import speed: ALTER SYSTEM SET db_block_checksum=FALSE SCOPE=BOTH;
  3. Generate scripts or save info for following:
    1. grants
    2. parallelism for indexes
    3. password
    4. synonyms
    5. DDL for foreign and primary constraints
    6. Set index parallel to 20 and generate DDL for indexes into 10 files
  4. Drop the schema if it already exists
  5. Import using following parameters
    1. content=ALL
    2. exclude=constraint
    3. exclude=INDEX
  6. Create indexes using nohup and &, starting all 10 sessions in the background.
  7. Create constraints
  8. Put indexes back to their original parallelism
  9. Apply grants
  10. Create synonyms
  11. Set the passwords
  12. Recompile all invalid objects
  13. Update stats
  14. Set the db_block_checksum parameter back to typical: ALTER SYSTEM SET db_block_checksum=TYPICAL SCOPE=BOTH;
  15. Put the database back in ARCHIVELOG mode.
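As a reference for step 5, an impdp parameter file along these lines might look like this.  The directory, dumpfile, and schema names are hypothetical placeholders, not values from the original run:

```
# import_all.par - sketch of the parameters used in step 5
directory=DATA_PUMP_DIR
dumpfile=full_exp_%U.dmp
schemas=APPUSER
content=ALL
exclude=CONSTRAINT
exclude=INDEX
parallel=4
logfile=import_all.log
```

It would then be run as: impdp system parfile=import_all.par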
I used the above steps to import about 2.75TB of data (multiple schemas) in about 14 hours.
I hope these steps help anyone who is looking to speed up the import process.  I didn't include the exact SQL and Korn shell commands I use in my scripts; please contact me if you would like them.

If anyone does this differently, please let me know.  I am interested to see how other DBAs import large amounts of data when Transportable Tablespace (TTS) is not an option.