SAN Read Performance

khalidaaa

Hi All,

I'm running into a debate with our DBAs about placing data on the SAN. One of the senior DBAs suggested spreading LUNs across physical SAN disks, as he thinks this will improve read performance! But I have always thought that in a SAN, no matter where you place the LUN, reads are just as fast!

We both agree that writes go to the SAN cache first, so that's OK, but what about reads?

Currently, I'm assigning LUNs to a partition (which is facing some performance problems) from one SAN disk only!

Regards,
Khalid
 
More disks = better performance.

Think about it: if you make two requests to the same disk, the head cannot be in two places at once, so it has to service the first request, and then you have to wait while the head moves into position and the disk comes round again for the second request.

Two requests to two disks will be serviced much more quickly, both about as fast as the first request to a single disk.

Now think of many - 100 or more - requests to one disk versus many requests to many disks.

As a rule of thumb, the more (disk) spindles the faster the response.

As databases tend to do random access, the head will never be in the right place on a single disk, so you will always incur the seek delay; but with many disks the system can send multiple requests and then service the replies, out of order, as and when they come back.
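
If you want to see this effect on AIX, iostat per hdisk is a quick check; a %tm_act pinned near 100 on one hdisk while the others sit idle is exactly the single-spindle queueing described above (the hdisk names below are just examples):

iostat -d hdisk2 hdisk3 5 3    # per-disk %tm_act, tps and Kbps, every 5 seconds, 3 samples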
 
Thanks DukeSSD for your valuable input.

In that case I would have to create only one LUN on each disk! Then what's the use of the rest of the space on the disks?

I know what you are getting at! It's a trade-off between performance and space! But is it worth doing?

Regards,
Khalid
 
Just to add to the above, I'm using RAID10 for this LUN's partition. The array consists of 4 disks (2 striped and mirrored over the other 2). So shouldn't this boost read performance?

Regards,
Khalid
 
RAID1 and RAID10 should have better read performance than no RAID (one LUN per disk), RAID0 or RAID5.

So if you're having performance issues, I would look in another direction.

What is this LPAR running? Oracle? In that case I would recommend looking into the AIX tunables: number of processes, AIO servers, etc...
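
For example, on AIX 5.x with legacy AIO it would look something like this (the maxservers value is only illustrative, not a recommendation):

lsattr -El aio0                        # current minservers/maxservers/maxreqs
ps -k | grep aioserver | wc -l         # how many aioserver kernel processes are running
chdev -l aio0 -a maxservers=10 -P      # example change; takes effect at next reboot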
 
Thanks MoreFeo for the input.

The LPAR is running a legacy application from Indus called EMPAC. It uses Oracle 8i.

Although the CPU is always high, I have doubts about the disks, as they get busy when running such queries! Also, per our DBAs, splitting the data, indexes, archive and redo logs will ease administration and better distribute the DB, as we are using one file system for everything for the time being!
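
As a rough sketch of the separation they're suggesting (the volume group, LV and mount point names below are made up, and jfs2 assumes AIX 5.1 or later):

mklv -y oradatalv -t jfs2 datavg 128 hdisk2    # data files on one set of disks
mklv -y oraredolv -t jfs2 datavg 16 hdisk3     # redo logs on a different disk
crfs -v jfs2 -d oradatalv -m /oradata -A yes
crfs -v jfs2 -d oraredolv -m /oraredo -A yes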

Are there any best practices for optimizing the SAN for Oracle DBs?

Regards,
Khalid
 
Khalid,

Your DBA's argument holds true for traditional non-RAID disks. But with RAID storage subsystems you have multiple heads and a cache much larger than a single disk drive's.

With SAN storage, especially where one RAID array is shared by multiple hosts, tuning the queue depth can be of great benefit. Have a look at the IBM System Storage DS4000 and Storage Manager V10.30 Redbook, chapter 5, where they discuss how to set the queue depth correctly.
 
Is the SAN an IBM SAN? If not, you may want to check with the manufacturer about recommended settings, especially queue depth, for AIX and the SAN. One of our markets had a 570 with an Oracle 9 application attached to a Xiotech SAN. Their queue depth was set to something like 2 for each hdisk. This caused extremely slow response times, and backup times of over four hours for the data. Once they corrected this setting, and some settings on the fibre cards, response times greatly improved and the backup dropped to a 30-minute run time.
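
For anyone checking their own setup, the relevant attributes can be inspected and changed like this on AIX (the hdisk/fcs numbers and the value 16 are just examples; the right value depends on your storage vendor's guidance):

lsattr -El hdisk4 -a queue_depth                     # current per-LUN queue depth
lsattr -El fcs0 -a num_cmd_elems -a max_xfer_size    # FC adapter command queue settings
chdev -l hdisk4 -a queue_depth=16 -P                 # -P applies the change at next reboot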
 
itsp1965, abubasim, kenwalters,

Thank you very much for your input.

Abubasim,

I will read the recommended manual and will come back to you.

Regards,
Khalid
 
Thanks to all.

The problem was solved by backing up the database and restoring it into a new file system coming from 2 arrays in the SAN with 10 disks in total (as opposed to the one array of 4 disks initially).

Stars for all.

Regards,
Khalid
 
At least for the DS4000, you can find configuration recommendations for Oracle here:


There is a queue depth rule in the document:

chdev -l hdiskn -a queue_depth=X -P

The formula for the correct queue depth for an hdisk is:

X = 2048 / (number of hosts * hdisks per host)

If using HACMP, only count the number of active hdisks.

(The default queue_depth for the DS4000 is set to 10 in AIX.)
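
As a worked example (numbers invented): with 4 hosts each seeing 8 hdisks on the same DS4000, X = 2048 / (4 * 8) = 64, so:

chdev -l hdisk4 -a queue_depth=64 -P    # hdisk4 is just a placeholder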
 
Thanks for the star.

Just one question: when you do a backup and restore of the database, does this process 'reorg' the DB?

I'm thinking that the improvement you're experiencing may come from this reorganization of the DB, rather than from the two LUNs.

It's just a thought I've had, but it could be interesting to settle this point.
 
I didn't read this post before today and I have some general things to say.

Regardless of what disk system you have attached to your SAN, you will in some cases end up in a situation where the number of HDAs (head disk assemblies) is a factor.

But when you talk about reads, you have to take into account that a good disk system (like the HDS USP V and EMC DMX-4) has a cache hit rate of more than 90%. So you will end up in a situation where less than 10% of your database I/O has to wait for access to the HDAs.

Some database systems are harder on your cache than others, and a system like Adabas can be a pest on a disk cache, especially if you don't use high-end disk systems with a lot of cache memory.

When you talk about disk writes, please remember that disk systems like the HDS USP V and EMC DMX-4 handle writes very differently than small disk systems (like the DS4000) do. Small disk systems have a very limited write cache.

In our installation we would normally never attach a big UNIX server to a midrange disk system like EMC CLARiiON or HDS AMS.

/johnny

 
Thanks ogniemi, MoreFeo & johnny99 for your input.

MoreFeo,

You are welcome :) you deserved it!

When we backed up and restored the DB, we treated it just like any big file to be restored. I mean we didn't reorganize the DB internally (tables, indexes, etc...). My conclusion is that the same DB is now spread over more disks with less fragmentation!

Regards,
Khalid
 
Well, I was thinking you did an export-import of the DB, but if you just did a backup of the DB files then my question doesn't apply.
 
MoreFeo,

Hold on for a second! Yes we restored the database from an exported DB!

Regards,
Khalid
 
So what you are saying right now is that when I exported and imported the DB, I did reorganize it! I thought reorganizing the DB was a separate task involving reorganizing the tables, indexes and objects within!
 
Well, I'm not sure because I'm not a DBA, but I think an export-import does some kind of reorg or defrag that can improve performance. Any Oracle DBA here to confirm it?
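
For reference, the export-import being discussed would look roughly like this with the Oracle 8i exp/imp utilities (the credentials and the EMPAC schema name are placeholders). Because imp recreates the tables and inserts the rows back contiguously, it does have a defragmenting effect:

exp system/manager owner=EMPAC file=empac.dmp log=exp.log
imp system/manager file=empac.dmp fromuser=EMPAC touser=EMPAC log=imp.log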
 