Hello Forum,
I am running into a performance problem trying to update a large table (440 million records). I have never worked on such a big table before. I ran the following update statement; it had not finished after 48 hours, so I killed it:
update /*+ index( r idx_hcpcs) */ cms.rev_cntr r
set (r.si, r.apc) =
    (select x.si, x.apc
       from cms.xwalk02 x
      where r.hcpcs_cd = x.cpt);
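For reference, a variant I was also considering (untested) restricts the update to rows that actually have a match in the crosswalk, so the remaining rows are not touched at all. Note it is not equivalent to the statement above: the original would set si and apc to NULL for codes with no match in xwalk02, while this one leaves those rows alone:

```sql
update /*+ index( r idx_hcpcs) */ cms.rev_cntr r
set (r.si, r.apc) =
    (select x.si, x.apc
       from cms.xwalk02 x
      where x.cpt = r.hcpcs_cd)
where exists
    (select 1
       from cms.xwalk02 x
      where x.cpt = r.hcpcs_cd);
```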
There was no way of knowing how fast this thing was running, so I decided to write a little script that times the process and commits every 100,000 records. Here is what I ran next:
DECLARE
   CURSOR cur_all IS
      SELECT rowid, hcpcs_cd
        FROM cms.rev_cntr;

   CURSOR cur_xwalk (p_hcpcs_cd_in IN xwalk02.cpt%TYPE) IS
      SELECT si, apc
        FROM cms.xwalk02
       WHERE cpt = p_hcpcs_cd_in;

   lv_counter NUMBER := 0;
   lv_si      xwalk02.si%TYPE;
   lv_apc     xwalk02.apc%TYPE;
BEGIN
   EXECUTE IMMEDIATE 'truncate table cms.log_commit';

   FOR cur_all_rec IN cur_all LOOP
      -- process only rows where hcpcs_cd is NOT NULL
      IF cur_all_rec.hcpcs_cd IS NOT NULL THEN
         -- reset before the lookup so a missed match doesn't reuse stale values
         lv_si  := NULL;
         lv_apc := NULL;

         FOR cur_xwalk_rec IN cur_xwalk(cur_all_rec.hcpcs_cd) LOOP
            lv_si  := cur_xwalk_rec.si;
            lv_apc := cur_xwalk_rec.apc;
         END LOOP;

         UPDATE cms.rev_cntr
            SET si  = lv_si,
                apc = lv_apc
          WHERE rowid = cur_all_rec.rowid;

         -- commit every 100,000 updates
         lv_counter := lv_counter + 1;
         IF lv_counter = 100000 THEN
            INSERT INTO cms.log_commit
            VALUES ('Rev_Cntr', 100000, SYSDATE);
            COMMIT;
            lv_counter := 0;
         END IF;
      END IF;  -- end of NULL check for hcpcs_cd
   END LOOP;

   COMMIT;  -- issued at the end to commit the final batch of < 100,000 records
END;
/
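One thing I was thinking of trying next is a BULK COLLECT / FORALL version, joining to the crosswalk in a single cursor to cut down the per-row overhead. Here is a rough sketch (untested; I am not even sure the LIMIT clause on BULK COLLECT is supported on our 8i release), and like the EXISTS variant it only touches rows that have a match:

```sql
DECLARE
   TYPE t_rowid IS TABLE OF ROWID            INDEX BY BINARY_INTEGER;
   TYPE t_si    IS TABLE OF xwalk02.si%TYPE  INDEX BY BINARY_INTEGER;
   TYPE t_apc   IS TABLE OF xwalk02.apc%TYPE INDEX BY BINARY_INTEGER;

   lt_rowid t_rowid;
   lt_si    t_si;
   lt_apc   t_apc;

   -- join once instead of doing a lookup per row
   CURSOR cur_all IS
      SELECT r.rowid, x.si, x.apc
        FROM cms.rev_cntr r, cms.xwalk02 x
       WHERE r.hcpcs_cd = x.cpt;
BEGIN
   OPEN cur_all;
   LOOP
      FETCH cur_all BULK COLLECT INTO lt_rowid, lt_si, lt_apc LIMIT 100000;
      EXIT WHEN lt_rowid.COUNT = 0;

      -- one array update per 100,000-row batch
      FORALL i IN 1 .. lt_rowid.COUNT
         UPDATE cms.rev_cntr
            SET si  = lt_si(i),
                apc = lt_apc(i)
          WHERE rowid = lt_rowid(i);

      COMMIT;
   END LOOP;
   CLOSE cur_all;
END;
/
```

Would something like this be worth pursuing, or is the frequent committing itself part of the problem?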
I ran the same script on our test DB with 400,000 records and it finished in 3 minutes, but on production it has taken 7 hours to do 700,000 records. At that rate it will take about 15 days. Here are some facts about this process and the database:
1. We are running Oracle 8i on a Unix box (E450) with dual processors and 2 GB of RAM.
2. There are 12 disks with a total of 360 GB of space.
3. The data is mirrored, meaning the same data is duplicated across 2 disks.
4. There is a bitmap index on the rev_cntr.hcpcs_cd column, which has 14,000 distinct values.
5. The xwalk table is small (12,000 records) and has an index on both columns.
6. We are seeing high I/O but only 2% CPU utilization.
7. PCTFREE on rev_cntr is 10 and PCTUSED is 40.
Here are my questions:
1. Can Oracle and this Unix box handle such a volume?
2. Is the bitmap index on the rev_cntr table appropriate, or should I use a regular index?
3. What other suggestions do you have to speed up this process?
4. Would it be better to break the table up into 4 or 5 smaller tables?
I appreciate any help you can offer. I am sure there are many experts here who can help me. Thanks for your time.
Mason