Sunday 29 May 2011

How to boost the speed of the export/import process

I was given the task of migrating some databases for a client from Oracle 9i to Oracle 11g, and also moving them from the Sun Solaris platform to the IBM AIX platform.

The client wanted the migration to be done quickly, as we were bound by space constraints and the number of databases to be migrated was around 30. Normally, such a migration is approached in the following phases:

- PHASE 1 - Move the database from one platform to another on the same version. But since the version we had was 9i, and Solaris and AIX are not of the same endianness, we could not use the transportable tablespace feature.

- PHASE 2 - Upgrade the database after the migration. In our situation, we would have to upgrade the database first and then migrate it to the other platform in order to use the cross-platform transportable tablespace feature. The risk here is that if we hit any issues during the upgrade phase, rolling back would take some more time, as the old binaries and the database backup would need to be restored.



Taking into account all the constraints we had, we decided to use the good old exp/imp utilities. However, we all know that if we don't use these tools optimally, we can be in big trouble, as they may take a very long time to run. So the plan for the migration was as follows:

STEP 1 - export the database

STEP 2 - import data only

STEP 3 - import constraints only

STEP 4 - import the rest (example commands for these steps are sketched below)
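
For illustration, the three import phases might look roughly like this with the classic imp utility; the connect string, dump file name and the exact parameter mix here are assumptions, not the exact commands we ran:

STEP 2: imp username/password@db file=dump.dmp log=imp_rows.log full=y rows=y constraints=n indexes=n grants=n ignore=y commit=y buffer=10485760

STEP 3: imp username/password@db file=dump.dmp log=imp_cons.log full=y rows=n constraints=y indexes=n grants=n ignore=y

STEP 4: imp username/password@db file=dump.dmp log=imp_rest.log full=y rows=n constraints=n indexes=y grants=y ignore=y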



Now, to shift into the highest gear, here are the tricks I used to make it all happen in much less time than usual. We first tested with the default settings and found that it took approx. 10 hours to import 40GB of data. However, we managed to bring this down to 4 hours simply by applying the following tricks throughout the process:

Assuming a full export, use direct=y if you're not using any predicates (i.e. no QUERY clause) in the export. Set your buffer to be big (at least 10MB). Also, try not to collect statistics during the export; this is something we can do after the import.

For example: exp username/password@db file=dump.dmp log=logfile.log full=y buffer=10485760 statistics=none

Alter all your constraints to NOVALIDATE prior to the export, if you can bring the application down and take a consistent export. This helps in reinstating the constraints instantly on import, without forcing Oracle to validate constraints on massive tables.
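
For example, a single constraint can be switched like this, and a small query can generate the statements for a whole schema (the table and constraint names below are just placeholders):

ALTER TABLE orders MODIFY CONSTRAINT orders_fk ENABLE NOVALIDATE;

SELECT 'ALTER TABLE ' || table_name || ' MODIFY CONSTRAINT ' || constraint_name || ' ENABLE NOVALIDATE;' FROM user_constraints WHERE constraint_type IN ('C', 'R') AND status = 'ENABLED';
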
I set workarea_size_policy=manual. This ensures that manual PGA memory management is used. Since my import session would be the only session connected to the database, I think this is the simplest way.

Set sort_area_size=4GB (or more). Actually, the AIX server only allowed us to set this parameter to a maximum value of 1GB, but even that helped us.
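
For example, something along these lines (sort_area_size is shown at the 1GB we ended up with; DEFERRED means it only takes effect for new sessions, such as the imp session):

ALTER SYSTEM SET workarea_size_policy = manual;
ALTER SYSTEM SET sort_area_size = 1073741824 DEFERRED;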

Set up massive online redo logs: 2GB each, 2 members per group, 6 groups.
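
For example, the extra groups could be added roughly like this (group numbers and file paths are placeholders for your own layout):

ALTER DATABASE ADD LOGFILE GROUP 4 ('/u01/oradata/db/redo04a.log', '/u02/oradata/db/redo04b.log') SIZE 2G;
ALTER DATABASE ADD LOGFILE GROUP 5 ('/u01/oradata/db/redo05a.log', '/u02/oradata/db/redo05b.log') SIZE 2G;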

To elaborate on why we introduced step 4 in the process: there seems to be a bug in Oracle whereby, if the constraints and indexes are imported together, the underlying index of a constraint is not dropped when the constraint is dropped. To work around this, I always recommend importing the constraints before you import the indexes.

After the import, I reset everything back to “normal” (a rough sketch of those resets is shown below). At the end of the day, the client was happy and so was I. A very happy ending...
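
For reference, the “back to normal” part might look roughly like this: switching PGA management back to automatic, re-validating the constraints (names below are placeholders), and gathering the statistics we skipped during the export:

ALTER SYSTEM SET workarea_size_policy = auto;
ALTER TABLE orders MODIFY CONSTRAINT orders_fk VALIDATE;
EXEC DBMS_STATS.GATHER_DATABASE_STATS;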
