Monday 14 November 2011

Compressed Tables Improve Query Performance

I performed the following test to see how much performance improvement I could get if I compress a table and run a full scan on it.

1.  Ran the following to get the initial response time of a full scan on my table:

SQL> select count(*) from test;

Elapsed: 00:01:33.19

Execution Plan

-------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Cost (%CPU)| Time     |
-------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |     1 | 72313   (1)| 00:14:28 |
|   1 |  SORT AGGREGATE    |      |     1 |            |          |
|   2 |   TABLE ACCESS FULL| TEST |    24M| 72313   (1)| 00:14:28 |
-------------------------------------------------------------------
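If you want to reproduce the test, any large table will do. A minimal sketch of one way to build a table of roughly this scale (about 24M rows); the seed source here is just an example:

-- Example setup: seed from ALL_OBJECTS, then keep doubling the table
-- with direct-path inserts until it reaches roughly 24M rows
create table test as select * from all_objects;
insert /*+ append */ into test select * from test;
commit;
-- repeat the insert/commit pair until the table is big enough
exec dbms_stats.gather_table_stats(user, 'TEST')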

2.  Ran the following PL/SQL block against the table to estimate its compression ratio. For this table the ratio came out to 2.82.
set serveroutput on
declare
  pragma autonomous_transaction;
  -- sampling percentage, grown each pass until the sample is big enough
  pct     number := 0.000099;
  -- block count of the uncompressed sample
  blkcnt  number := 0;
  -- block count of the compressed copy
  blkcntc number;
begin
  execute immediate 'create table TEMP_UNCOMPRESSED pctfree 0
    as select * from &&tabname where rownum < 1';
  -- enlarge the sample until it spans at least 1000 blocks
  -- (or until we are sampling the whole table)
  while ((pct < 100) and (blkcnt < 1000)) loop
    execute immediate 'truncate table TEMP_UNCOMPRESSED';
    execute immediate 'insert into TEMP_UNCOMPRESSED
      select * from &&tabname sample block (' || pct || ',10)';
    execute immediate 'select
      count(distinct(dbms_rowid.rowid_block_number(rowid)))
      from TEMP_UNCOMPRESSED' into blkcnt;
    pct := pct * 10;
  end loop;
  -- build a compressed copy of the sample and compare block counts
  execute immediate 'create table TEMP_COMPRESSED compress as
    select * from TEMP_UNCOMPRESSED';
  execute immediate 'select
    count(distinct(dbms_rowid.rowid_block_number(rowid)))
    from TEMP_COMPRESSED' into blkcntc;
  execute immediate 'drop table TEMP_COMPRESSED';
  execute immediate 'drop table TEMP_UNCOMPRESSED';
  dbms_output.put_line('Compression Ratio for &&tabname: '||round(blkcnt/blkcntc,2));
end;
/
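When run in SQL*Plus the block prompts for the table name (via &&tabname). A run looks roughly like this, assuming the script is saved as compression_ratio.sql (the filename is just an example; the 2.82 is the figure quoted above):

SQL> @compression_ratio
Enter value for tabname: TEST

Compression Ratio for TEST: 2.82

PL/SQL procedure successfully completed.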

3.  Compressed the table:

alter table test move compress;
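Note that ALTER TABLE ... MOVE relocates every row, so any indexes on the table are left UNUSABLE and need a rebuild afterwards. It is also worth confirming in the dictionary that compression is now enabled; the index name below is just a placeholder:

-- confirm compression is enabled on the moved table
select table_name, compression from user_tables where table_name = 'TEST';

-- MOVE leaves indexes UNUSABLE; find and rebuild them
select index_name, status from user_indexes where table_name = 'TEST';
alter index test_pk rebuild;   -- placeholder index name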

4.  Ran the full scan on the compressed table:

SQL> select count(*) from test;

Elapsed: 00:00:28.16

Execution Plan

--------------------------------------------------------------------
| Id  | Operation          | Name  | Rows  | Cost (%CPU)| Time     |
--------------------------------------------------------------------
|   0 | SELECT STATEMENT   |       |     1 | 22964   (1)| 00:04:36 |
|   1 |  SORT AGGREGATE    |       |     1 |            |          |
|   2 |   TABLE ACCESS FULL| TEST  |    25M| 22964   (1)| 00:04:36 |
--------------------------------------------------------------------
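The optimizer cost dropped from 72313 to 22964, about a factor of 3.15, which tracks the 2.82 compression ratio. To see the same effect in terms of space, one can compare the segment size before and after the move:

select segment_name, blocks, bytes/1024/1024 as mb
from   user_segments
where  segment_name = 'TEST';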


With the above test, it is quite clear that compressing a table with a high compression factor will definitely improve response time: the full scan dropped from 1:33 to 28 seconds, roughly a 3.3x improvement, simply because the scan now reads about a third as many blocks.
Hence, I have a golden rule: compress tables whose compression factor is more than 2 and which are quite static. This is very helpful in a data warehouse environment.
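One caveat on the "quite static" qualifier: basic table compression (the kind used here) only compresses blocks written via direct-path operations such as CTAS, ALTER TABLE ... MOVE, and INSERT /*+ APPEND */; rows added through conventional DML are stored uncompressed. So any periodic loads into such a table should use direct path, for example (the staging table name is a placeholder):

insert /*+ append */ into test
select * from test_staging;   -- placeholder staging table
commit;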