
Need help with Disk setup (RAID1 vs RAID10)


jmcahren

Programmer
Dec 30, 2004
I am an application developer in an organization where we have no server experts.

My application (Essbase OLAP Server), on a PE6300, is experiencing performance problems, and the bottleneck is the disk (RAID10). I theorize that splitting the RAID10 set into three RAID1 sets will work better (faster) because more disk operations will be able to occur simultaneously. I want to isolate the big databases from one another. Currently, there are 40 databases on the single RAID10 set, and when running database calculations, around 4GB of data is read/written to the set. If only one database is calculated at a time, it is fast. If multiple databases are calculated at the same time, performance degrades severely.

Will splitting the RAID10 into three RAID1 sets improve performance for the concurrent read/writes? Do we need an additional or different raid controller? Backplane? Channels? How does all this work?

We have four 74GB 10k drives sitting on a shelf that we could install into this server too (there are unused drive bays in the case). Can I make them RAID1 sets too? Is there a limit to how many RAID1 sets I can install on a controller? Will having a mix of 10k and 15k drives (in independent RAID1 sets) pose a problem?

Why is one controller unused? (see below)


My PE6300 has two disk controllers:

Controller 0: Ultra 160 SCSI Controller 0
Controller 1: PERC 3/DC Controller 0

Controller 0 has no disks attached. Controller 1 has one RAID1 and one RAID10 disk set attached. The RAID10 set consists of six 36GB drives (Seagate ST336753LC SEAGATE 36.7GB SCSI U320 80PIN SCA 15K RPM 3.5LP CHEETAH). The RAID1 is a pair of the same model of disk.

 
For starters....

First, the PowerEdge 6300, according to the specs I could find, has a 64-bit 33 MHz PCI bus, which is strangling your throughput. Expecting any more performance out of this server is unrealistic: the PCI bus is the main bottleneck and is saturated as is, and the U160 RAID controller is the secondary bottleneck. All you can pass through the motherboard is about 133 MB/sec; a garden hose compared to a 2" pipe. As a guesstimate (I no longer have server clients using a setup like yours), a newer server with the right equipment would be >6 times faster.
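For a rough sense of the bus ceilings being discussed, theoretical peak throughput of a parallel PCI/PCI-X bus is just bus width times clock. A minimal sketch (these are generic spec figures, not measurements of this particular server; sustained real-world numbers are noticeably lower):

```python
# Theoretical peak throughput of a parallel PCI/PCI-X bus:
# width (bytes) x clock (MHz) ~= MB/s. Spec ceilings only;
# real-world sustained throughput is lower.
buses = [
    ("PCI 32-bit / 33 MHz",    32, 33.33),
    ("PCI 64-bit / 66 MHz",    64, 66.66),
    ("PCI-X 64-bit / 100 MHz", 64, 100.0),
    ("PCI-X 64-bit / 133 MHz", 64, 133.33),
]

for name, width_bits, clock_mhz in buses:
    peak_mb_s = (width_bits / 8) * clock_mhz
    print(f"{name:<24} ~{peak_mb_s:.0f} MB/s peak")
```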

PCI-X

PCI Express

To get more performance you need a server whose bus runs PCI-X at 133 MHz or PCI Express, and a RAID card that can feed the bus at that speed, such as the new PERC 4/DC, the LSI U320-2X, or the Intel SRCU42X; for a motherboard running PCI Express, an LSI U320-2E.

A PERC 3 is a U160 card, limited to 160 MB/sec per channel, and your SCSI bus is already saturated at that point. 15k drives will help little on this bus. Add to this that the co-processor and other electronics on a PERC 3 are much slower than on the newest RAID cards.

RAID 10 is very fast; RAID 1 sucks compared to it.
In your situation, beyond upgrading the server, I would use 15k drives in a larger stripe consisting of at least 10 drives, perhaps offloading temp and log files to a RAID 1.
 
Controller 0 is not a RAID controller. It's a simple SCSI controller for standard SCSI disks, tape decks, etc.

Controller 1 is your RAID. As I recall, the PERC 3 only has 2 channels on it. Each RAID set (whether it be RAID 1, 5, 10, whatever) requires its own channel. So, 2 channels times RAID1 on both = 4 disks in server max. Those are gonna have to be mighty big disks...
 
Thank you so much for the replies. I am a little disgusted, because we went to our tech-ops team (who procure and manage all company servers) and told them what we needed (massive disk speed), and they pointed us to this box/config. It's less than 12 months old, and it seems like they should have been aware that faster PCI bus speeds were around the corner (or already out), and we should have bought a different server. This server has 4 procs and 8GB of RAM, and is one of the most expensive boxes that we have bought to date. We don't have the budget to replace it for over 2 years.

Is it possible to install another disk controller? It sounds like the PCI bus wouldn't be able to provide the bandwidth needed to make it worthwhile anyway...

We are now buying HP servers with fiber connections to a big enterprise SAN. They are probably PCI-x, and I'm sure those new boxes are blowing this thing out of the water. Oh well.

Thanks again for your feedback.
 
Jim..

There is a limit to the number of RAID sets of any type, but it is related to the max number of drives allowed on a RAID controller. Technically you can have 15 RAID 1 arrays on a 2-channel SCSI RAID adapter (30-disk max). Not to put down your setup, but mixing drives with different speeds has little relevance given your present motherboard bus speed.

Lawnboy...
Each RAID set DOES NOT require its own channel, in any form of RAID. RAID performance is enhanced by dividing the drives of a RAID set over multiple channels. Placing all the drives of a large RAID set on a single channel is a poor idea, as a large number of drives will saturate the SCSI bus.

"So, 2 channels times RAID1 on both = 4 disks in server max"
Raid 1 is two disk, obviously not striped
Raid 1 becomes raid 10, when 4-30 disks are mirrored and striped. With a large number of disks, greater than 5 or 6 per channel (u320) or 3 or 4 with u160, a 4 channel or multiple raid adapters would be needed to combat raid adapter scsi saturation.
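To make the multiple-channel point concrete, here is a toy sketch (the disk names and two-channel layout are hypothetical, not the actual enclosure) that splits a six-disk RAID 10 so each mirrored pair has one member on each channel:

```python
# Toy layout: split a six-disk RAID 10 (three mirrored pairs, striped)
# so that each pair has one member on channel 0 and one on channel 1.
# Disk names are hypothetical placeholders.
disks = ["disk0", "disk1", "disk2", "disk3", "disk4", "disk5"]

# Pair consecutive disks into mirrors: (disk0, disk1), (disk2, disk3), ...
mirror_pairs = list(zip(disks[0::2], disks[1::2]))

channels = {0: [], 1: []}
for primary, secondary in mirror_pairs:
    channels[0].append(primary)    # primary member on channel 0
    channels[1].append(secondary)  # mirror member on channel 1

for ch, members in channels.items():
    print(f"channel {ch}: {members}")
# Each channel now carries only half the array's traffic, so the
# per-channel SCSI bus is far less likely to saturate.
```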
 
Sorry to be dense, so are you saying that adding another controller and dividing up my disks between them may indeed improve total disk throughput?
 
Jim..

Man I feel for you!

Unless I got the PCI bus spec wrong (I only found one spec sheet, stating 33 MHz PCI), there is little you can do, as this server came to market in 1999. I find it incredible they chose this server, as I have been setting up 64-bit, 66 MHz PCI bus servers for over 4 years. Basically, the guy who chose the server set up the person in charge of the database to be a target on a firing range. A 1999 server... surprised he did not go all the way and give you an Intel 286 at 12 MHz.

Adding another RAID controller will do nothing; it would perhaps slow things down a small bit due to the added IRQ requests of the second adapter. Basically, the RAID adapter you have has a higher throughput than the motherboard's PCI bus.

I would approach the company's main people and explain this situation. This present server will cost FAR more in productivity loss than a replacement. Over a 5-year period, the productivity loss would be astounding, even with only a 10-user base.
My original estimate of a new server's performance was >6 times; I now think it would be >9 times. I would shy away from 4-CPU machines from Dell, as they use older-technology motherboards; you're better off with a dual-CPU machine such as the Dell PowerEdge 2800, with PCI Express. A 4-CPU motherboard does not give a dramatic increase in speed over a 2-CPU motherboard when the technology is the same, and since the 4-CPU motherboards from Dell use older technology, the new 2-CPU boards would be much faster. The PE 6600 and 6800 have older technology.
 
I am SO SORRY, I made a big mistake. This is a 6600, not a 6300.

Technical Specifications
Dell™ PowerEdge™ 6600 Systems User's Guide
Microprocessor type
up to four Intel® Xeon™ microprocessors with an internal operating frequency of at least 1.40 GHz

Front-side bus (external) speed
400 MHz

Internal cache
at 1.4 GHz, 256-KB L2 and 512-KB L3 cache
at 1.5 GHz, 256-KB L2 and 512-KB L3 cache
at 1.6 GHz, 256-KB L2 and 1-MB L3 cache

Expansion Bus
Bus type
PCI and PCI-X

Expansion slots
ten full-length PCI and PCI-X slots (64-bit, 100-MHz) and one full-length PCI slot (32-bit, 33-MHz)

Memory
Architecture
72-bit, ECC, PC-1600 compliant, DDR SDRAM registered DIMMs, with 4-way interleaving, rated for 200-MHz operation

Memory module sockets
sixteen 72-bit wide, 168-pin DIMM sockets on two riser cards

Memory module capacities
128-, 256-, 512-MB, or 1-GB registered SDRAM DIMMs

 
Jim..
"Sorry to be dense, so are you saying that adding another controller and dividing up my disks between them may indeed improve total disk throughput? Yes and no...

Warning: the newest RAID adapters allow disks to be rearranged on the SAME RAID channel, but NEVER move disks set up on one channel onto another unless you plan on REBUILDING the server! Older RAID adapters do not allow movement of disks even on the same channel.

For U160 (a U160 RAID adapter, which is what you have), figure 38-52 MB/sec throughput per disk, according to Seagate. The 15k drives are U320, but they drop to U160 level because of the RAID adapter.
If all 6 disks of the RAID 10 are on one channel, at a rough average of 45 MB/sec per disk: 6 x 45 = 270 MB/sec, while U160 handles 160 MB/sec; definite bus saturation.
Divide them over two channels and, counting the other disks in the RAID 1, you have 8 disks total, 4 on each channel. You're above the saturation point (180 MB/sec), but not too far above, so you are OK compared to having all the RAID 10 disks on one channel. Will you get a dramatic increase in speed, considering the limitations of the motherboard PCI bus, which allows a maximum of 133 MB/sec across it? I doubt it, but it should be better than having all the RAID 10 disks on the same channel.
Is it worth a server rebuild? Because rearranging disks onto a different channel would require a SERVER REBUILD.


With a new server (a U320 system, U320 RAID, and U320 disks):
Throughput range 52-82 MB/sec per disk.
If all 6 disks are on one channel of the RAID, technically you could saturate the SCSI bus at very busy times. Roughly, each disk puts 60 MB/sec on the SCSI bus (a rough average when the server is fairly busy), with bursts to 80 MB/sec (very rare). 6 disks at 60 equals 360 MB/sec, while U320 handles 320 MB/sec: just above the saturation point. Divide the drives over two channels, add in the RAID 1 disks for 8 total, 4 on each channel, and you place 240 MB/sec on each channel, far below the 320 MB/sec saturation point. A PCI-X or PCI Express bus would allow about 1 GB/sec or more across the PCI bus.
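The arithmetic in the last two paragraphs boils down to a tiny saturation check. A rough sketch, using the ballpark per-disk MB/sec figures quoted above (not measured values):

```python
# Rough SCSI channel saturation check, using the per-disk throughput
# averages quoted above (u160-era ~45 MB/s, u320-era ~60 MB/s).
# Ballpark figures only, not measurements.
def channel_load(disks_on_channel, per_disk_mb_s, channel_limit_mb_s):
    load = disks_on_channel * per_disk_mb_s
    status = "SATURATED" if load > channel_limit_mb_s else "ok"
    return load, status

scenarios = [
    ("U160, 6-disk RAID 10 on one channel",            6, 45, 160),
    ("U160, 8 disks over two channels (per channel)",   4, 45, 160),
    ("U320, 6-disk RAID 10 on one channel",             6, 60, 320),
    ("U320, 8 disks over two channels (per channel)",   4, 60, 320),
]

for name, n, per_disk, limit in scenarios:
    load, status = channel_load(n, per_disk, limit)
    print(f"{name}: ~{load} MB/s vs {limit} MB/s limit -> {status}")
```

This just reproduces the 270-vs-160 and 360-vs-320 numbers from the posts above.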

One point I forgot to mention in the previous post: all the 15k drives, and maybe the 10k drives, could be used in a new server, which is a big chunk of a new server's cost.
 
Jim ...
Please, please, the next time you post, give the correct specs; this changes everything, including my last post.

If you have a PERC 3 RAID adapter in this server, I would definitely upgrade to a PERC 4 (LSI Logic U320-2; LSI supplies Dell). Better yet, if the U320-2X is compatible with your motherboard (I'm not sure it is), that is a better choice, as it has faster electronics. You can obtain this on eBay as a Dell RAID or an LSI Logic RAID. The PERC 4 is at least twice as fast as the PERC 3 on a 66 MHz PCI bus. If the LSI Logic U320-2X is compatible, it should be somewhat faster than the LSI U320-2, but the U320-2X only reaches its highest throughput on a PCI-X 133 MHz motherboard.

Next, I would definitely divide the RAID 10 array over the two channels, even though you would need to rebuild the server. I would not use the 10k drives in the server; I have used the setup I proposed, and the 10k drives will drag down the performance of both the RAID 1 and the RAID 10. If you want the highest throughput, you need to increase the number of drives in the RAID 10 array; 5 or 6 drives per channel would give you the maximum performance, and 5 per channel should do it. Remember, I am not a fan of RAID 1, due to its crappy throughput. If your OS was on the RAID 1, just use the RAID 10 with two partitions, one for the OS and one for everything else. If the RAID 1 was for database temp files, logs, etc., I think it will hurt performance. If you had a separate 4-disk RAID 10 for that purpose, it would be faster.


Lastly, databases can be faster with a different RAID stripe size than the default, which is 64k. I would only change this if you either benchmark the different sizes (32k, 64k, 128k), which is a load of work, or can get reliable info from someone with a similar setup involving the Essbase database and the same RAID card.
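If you do go the benchmarking route, the usual method is to rebuild the array at each candidate stripe size and rerun the same workload, timing it. As a very rough illustration only (the path below is a hypothetical placeholder, and a real test should use the actual Essbase calc or a dedicated I/O benchmark tool), this is the kind of timing harness you would run after each rebuild:

```python
# Very rough timing harness: run the same large sequential read after
# rebuilding the array at each candidate stripe size (32k/64k/128k) and
# compare wall-clock times. TEST_FILE is a placeholder path; use a file
# larger than RAM so the OS page cache doesn't mask disk speed.
import time

TEST_FILE = r"E:\essbase_test\bigfile.dat"   # hypothetical path
CHUNK = 64 * 1024                            # read in 64 KB requests

start = time.time()
total = 0
with open(TEST_FILE, "rb") as f:
    while True:
        data = f.read(CHUNK)
        if not data:
            break
        total += len(data)
elapsed = time.time() - start
print(f"Read {total / 2**20:.0f} MB in {elapsed:.1f} s "
      f"({total / 2**20 / elapsed:.1f} MB/s)")
```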


Very high priced on eBay; should be about $200.00 with battery and RAM (LSI U320-2).

LSI Logic U320-2, same as the PERC 4 (older classification).

Intel SRCU42X, same as the LSI U320-2X and the PERC 4 (newest version); check compatibility.
 
technome:
Not trying to argue, trying to learn... When I said "So, 2 channels times RAID1 on both = 4 disks in server max", was that inaccurate? You can place more than 1 RAID1 set on a single channel? Or can you place more than 2 RAID1 sets on 2 channels (each primary disk on one channel, each slave disk on another)?

Agreed, my wording was poor; it should say each RAID set requires at least one channel. Or am I still not getting it?
 
LawnBoy...

"You can place more than 1 RAID1 set on a single channel? Or can you place more than 2 RAID1 sets on 2 channels (each primary disk on one channel, each slave disk on another)?"

Not that I recommend it, but you can set up to 7 RAID 1s on a single channel: a channel has a max of 15 drives, 7 RAID 1s would take 14 drives, the last ID is moot as it would be a single drive, and the SCA backplane requires one of the SCSI IDs anyway.
On a two-channel RAID adapter, that makes 14 RAID 1 arrays, with an SCA backplane ID taken on each channel.
Again I will mention SCSI bus saturation: with this many drives on one or two channels, having all the array sets accessed at once would produce slow throughput. 5 to 6 drives per channel is really the max for maximum throughput on a U320 bus. For a data-archival array, bus saturation would not be that critical, as 640 MB/sec across two channels is fast enough.
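For reference, the SCSI ID arithmetic behind that 7-arrays-per-channel figure, as a quick sketch (it assumes, per the description above, one ID reserved for the adapter and one for the SCA backplane):

```python
# SCSI ID arithmetic behind the "7 RAID 1 sets per channel" figure.
# A wide SCSI channel has 16 IDs; one goes to the adapter itself and
# (as described above) one to the SCA backplane, leaving 14 drive IDs.
ids_per_channel = 16
reserved = 2                      # adapter ID + backplane ID (assumption)
usable_drives = ids_per_channel - reserved
raid1_sets_per_channel = usable_drives // 2   # two drives per mirror
print(usable_drives, "usable drives ->",
      raid1_sets_per_channel, "RAID 1 sets per channel")
```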

All my arrays are arranged with the primary on one channel and the secondary on the other; there is nothing dangerous about it for any RAID type. I always purchase two-or-more-channel RAID adapters, leaving expansion possibilities, and it is faster.

"Agreed, my wording was poor, should say each RAID set requires at least one channel. Or am I still not getting it?"
In case I was not clear, each array set does not require it's own channel, it is preferrable to have all arrays divided over multiple channels for performance. There is a slight increase in safety on a two channel adapter, though not all electronics are duplicated, many are, so some components could fail on one channel, and with some raid types you could get lucky..then again, the last raid adapter failure I had was in 1993 ( setup a hundred since).

I verified the max number of array sets on a channel with Lsilogic a few years back, when someone posed the same question.

 
I am really sorry for the server model confusion. Between dev/qa/uat/train environments for our various servers (db, web, etc.), we have 25 servers in our application group.

As far as procuring the controllers, I believe that our organization forces us to buy from Dell for warranty purposes.

We are already rebuilding this server in June for other reasons, so that will be a perfect time to redesign the drive subsystem. I have scheduled a meeting with our tech-ops group next week, and I will present this info.

Thanks again for your help, it is greatly appreciated!
 
Jim...

With 25 servers it is easy to get confused about specs; no problem.
Well, if the 6300 had been correct you would have had nowhere to go; with a 6600 you can gain some decent speed. Double-check that you have a PERC 3; it is a fairly old U160 adapter, and if you do have it, the PERC 4 will definitely speed you up; the technology took a good-sized jump between the PERC 3 and 4. It is hard to remember the exact speed difference between the PERC 3 and PERC 4, but given that the PERC 4 is U320 and you have U320 drives, my guesstimate is a minimum of 3 times the speed; you might get >4 times. Make sure you get U320 SCSI cables.

I have the PERC 4 (LSI U320-2X) on an Iwill DK8N dual 246 Opteron with a 133 MHz PCI-X bus in my lab machine, and it cooks with RAID 10. A couple of my clients have the PERC 4 of the older classification (LSI U320-2) running MS SQL in RAID 5 with only a 4-disk array, and it is pleasantly fast. If you do have the RAID 10 on a single channel, you will get a definite speed boost by dividing it over two channels.
I can understand the warranty concerns.

Good luck Jim..
sounds like Bones talking to Captain Kirk


Lawnboy..
You're welcome


 
We are procuring two more 37GB drives and a PERC4 controller. We are also moving the RAID1 pair for the OS into the RAID10 set. Please see the attached proposals for the new config.

One question: this is an unconventional system, in that database loads and calculations take place periodically during the day on one or more databases. The database data files are cycled through in their entirety (~4GB), i.e., they are read in from disk, calculated, and written back to disk block by block. This can take 30 minutes or more in many cases. Also, several of these loads and calcs can take place at the same time, creating a huge strain on the disks. This is why I have proposed approach #2, which I know is not optimal for bursts, but may be more efficient during the perfect storm when four or five databases are recalcing at the same time...??

Please see my proposed configs here:
 
I like your config #1. I assume the last disk on each channel in #1 is a hotspare? With a RAID this size I would use only one global spare to cover the two channels; a global spare will replace a failed disk on either channel. So for a total of 3 new disks, instead of two, you would have roughly the most throughput you can obtain from a RAID 10 array. The reads and writes will be really good; reads should be fantastic.
Let me know what the last two yellow disks represent.
How much motherboard RAM do you have?

I cannot see all the RAID 1 arrays in config #2 giving decent performance; all the arrays will be very slow. Yes, I know databases act differently on RAID, but even if you get a slowdown during the storm, the RAID 10 will more than make up for it all day long. Perhaps you could post at 2cpu.com for a second opinion, though there are many posters there who really do not have experience. Maybe Femmet will chime in.
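A toy way to see why the shared RAID 10 tends to win even during the "perfect storm" is to compare the spindle bandwidth available per calc under each layout. This ignores seek contention and controller limits entirely, so treat it as an illustration, not a prediction; the ~60 MB/sec per-disk figure is the same ballpark number used earlier in the thread:

```python
# Toy comparison (ignores seek contention and controller limits):
# config #1 = all calcs share one 10-disk RAID 10;
# config #2 = each calc gets a dedicated 2-disk RAID 1.
PER_DISK_MB_S = 60   # ballpark U320 15k figure used earlier (assumption)

def raid10_write_bw(total_disks):
    # Writes hit both members of each mirror, so usable write bandwidth
    # is roughly half the spindles' combined throughput.
    return (total_disks / 2) * PER_DISK_MB_S

raid1_pair_bw = raid10_write_bw(2)            # a RAID 1 pair is the 2-disk case

for jobs in (1, 3, 5):
    shared_per_job = raid10_write_bw(10) / jobs   # config #1: everyone shares
    print(f"{jobs} concurrent calcs: shared RAID 10 ~{shared_per_job:.0f} MB/s/job, "
          f"dedicated RAID 1 ~{raid1_pair_bw:.0f} MB/s/job")
```

Even at five concurrent calcs, the ten-disk RAID 10 gives each job roughly what a dedicated mirror would, and far more the rest of the day.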

Check out the links below..

Check out the server benchmarks. They are in Dutch, but the graphs are understandable. A guy named Femmet has done remarkable benchmarking.

Femmet also hangs out at 2cpu.com.
 
Yellow means new hardware that we have ordered. We were going to place all 10 disks in the RAID10 set. I'm not too worried about a hot spare because we have a 4hr warranty response, during which time the mirror should hold up.

Thanks again for your assistance. I was leaning toward config #1; I just wanted to make sure that I considered all alternatives.
 
I am a fan of hotspares, but with a fast response you're OK.
Once the server is up and running, play with the read-ahead and cached I/O settings; they do not affect the RAID array itself and can be safely toggled. Write-through or write-back should not have an effect on RAID 1 or RAID 10; it's probably best to set it to write-through (a safe toggle), which probably leaves more adapter cache RAM for data caching. Run a defragmentation program daily, as this gives about 5% better performance over a fragmented disk. Also get a program which does boot-time defragmentation; on my SQL servers, a boot-time defrag once every 2 months speeds SQL up. I have been using Diskeeper.
Post back after the rebuild.

Good luck
Paul
 