New Dell PowerEdge Server options


romlopez (IS-IT--Management) · Aug 27, 2005
Hello all,
I am about to buy two new Dell PowerEdge servers.
The servers will be running AD, DNS, DFS, IIS (a local intranet site), and NLB. They will also be used as print and file servers.
Choices: SATA or SCSI?
-SATA 3.0Gb/s interface transfer rate is twice as fast as SCSI
-SATA cache (16MB) is twice that of SCSI (8MB)
-SCSI seek and write times are faster than SATA 3.0Gb/s
-SATA is cheaper than SCSI, and SATA drives have more capacity
Do SATA's data transfer rate and cache compensate for the speed of SCSI?

Thank you for your help in this matter.
 
Any help would be greatly appreciated
 
SCSI drives spin at higher sustained revolutions per minute. Until SATA drives start spinning at 10K and 15K RPM, SCSI will always be faster; that's just the raw math. Aside from that, SATA always has a lower mean time between failures, so you're going to be replacing drives more often.
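For anyone curious about that raw math, here is a minimal sketch (Python) estimating random-I/O capability per drive from rotational speed and seek time. The seek-time figures are illustrative assumptions, not specs for any particular drive.

    # Rough random-IOPS estimate per drive: IOPS ~ 1 / (average seek + average rotational latency).
    # Average rotational latency is half a revolution: 60 / (2 * RPM) seconds.
    # The seek times below are illustrative assumptions, not vendor specs.
    drives = {
        "7.2K SATA": {"rpm": 7200, "avg_seek_ms": 8.5},
        "10K SCSI": {"rpm": 10000, "avg_seek_ms": 4.7},
        "15K SCSI": {"rpm": 15000, "avg_seek_ms": 3.8},
    }

    for name, d in drives.items():
        rotational_latency_ms = 60000.0 / (2 * d["rpm"])
        iops = 1000.0 / (d["avg_seek_ms"] + rotational_latency_ms)
        print(f"{name}: ~{iops:.0f} random IOPS per drive")

With those assumed seek times, the 10K and 15K drives come out roughly 1.5x to 2x the random IOPS of a 7.2K SATA drive, regardless of the interface's burst rate.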
 
Ok. I will be buying SCSI. Here are the specs for the servers.
***PowerEdge 2800 (two of them)***
-Dual Xeon processors 3.0GHz/2MB cache
-5GB DDR2 400MHz (1GB from Dell, the rest from an online store)
-Hot-pluggable Split backplane
-1 PERC4/DC, 2 Internal Channels, 0 External Channels controller
-Two RAID 5 arrays:
1) 3 x 73GB 10K RPM = OS
2) 4 x 300GB 10K RPM = data storage

Questions:
-Cluster or NLB on these servers? (for 100% fault tolerance)
-Two RAID controllers or just one? (one for each RAID 5)
-I am planning to buy another two servers later on which will be running MS Exchange, with the data located on these two servers. Is that a good idea?

Any help would be greatly appreciated, thanks!
 
A couple of thoughts: when you utilize a split backplane on a 2800, you can have two RAID 5 arrays, with a max of 4 disks each. So by utilizing this setup you have kind of "painted yourself into a corner", in that you can't expand the 4-disk array any further. I would also question why you need / want a 140GB+ OS partition / array. And if you're concerned about ultimate read/write performance, 15K SCSI disks far outperform their 10K counterparts, although they are definitely more expensive. You need to determine the kind of throughput you need.

My favorite way of configuring these is utilizing the media bay drive option for the OS. This lets you have a Raid1 mirror for the OS, in a special Hot-swap housing they slide in under the CD drive. Raid1 should be fine for your OS partition, as that should never grow as dynamically as your data. So you could buy a pair of 15k 36GB disks for the option bay, and then use the 2nd channel for a Raid5 data array using all 8 bays available on a 2800. You could possibly opt for 10k drives here to save some $$$, as disk speed will probably not be your bottleneck on data transfers.

On a side note- the 2800's are great bargains, IMO. They expanded the drive carrier from the 6-drive bay the 2500/2600's had. They are truly a bargain.
 
Good for you, you did not fall for paying loads extra for the ultimate CPU clock speed.

1) 73GB 10K x 3
Why RAID 5 for the OS? RAID 1 would be sufficient, and is safer than RAID 5 for the OS. I question the need for 73GB drives for the OS; 36GB would be plenty, with spare room for log/temp files, unless you have a special need for the extra space. Even so, if you have many programs, which generally install to c:\program files\, you can install them to a subdirectory on the data drive. With the OS on RAID 1, the coprocessor is not involved with parity, so it adds no stress to the coprocessor.
Two RAID cards are unnecessary, as 7 drives over two channels will not saturate the U320 channels. An added RAID card will add IRQ requests which will slow down both RAID cards; stick with one per server. A second RAID card is beneficial if you have many more drives than you have spec'ed out. An added advantage of RAID 1: I wrote this up, and it is long.


Bit obvious, but...
When you get the servers in, pull the RAM from one of the servers and place it in the other; you might as well keep the memory sticks all the same in each server, so place all the new memory in one server.

Plan on automating a consistency check weekly, if possible; RAID 5 arrays of your size are more prone to disk errors.

This is about raid 6, which your adapter will not do, but read the article and it will explain my reference to needing regular consistency checks.
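If it helps, here is a minimal sketch of one way to automate that weekly check on Windows. It assumes a hypothetical run_check.cmd wrapper that you would write around whatever consistency-check command your RAID management utility provides; only the standard schtasks switches are used.

    # Minimal sketch: register a weekly RAID consistency check with the Windows
    # Task Scheduler. "run_check.cmd" is a hypothetical wrapper around whatever
    # consistency-check command your RAID management utility provides.
    import subprocess

    subprocess.run(
        [
            "schtasks", "/create",
            "/tn", "Weekly RAID consistency check",  # task name
            "/tr", r"C:\admin\run_check.cmd",        # hypothetical wrapper script
            "/sc", "weekly",
            "/d", "SUN",                             # run on Sundays; add /st for a start time
        ],
        check=True,
    )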



Exchange data on these servers: it depends how much data will flow across the network; you would probably be better off keeping the data on the Exchange server.

PS: The PERC4 is a fast adapter with a 600MHz coprocessor. The 2800 has a pretty good rating as a server. Make sure you get the redundant power supplies. I cannot comment on clustering; just do not cluster the disk arrays. The 10K drive choice is good, as the only time you will see a difference is with an SQL database.

I do the same thing with Dell: I purchase the minimum and buy the rest online. I order the bare minimum number of drives (2) and purchase the rest online due to the 5-year warranty, though this does require you to have at least a hot spare or cold spare, as an RMA takes a couple of days.



 
Agree with Twwabw on the...
"My favorite way of configuring these is utilizing the media bay drive option for the OS. This lets you have a Raid1 mirror for the OS, in a special Hot-swap housing they slide in under the CD drive."

 
Thank you for all your help

I will use the media bay for the OS RAID1 (15K 73GB) and the 1x8 hot-plug SCSI hard drive backplane for data.
I will use one PERC4/DC with 2 internal channels.

Now you have me worried about using RAID5... since Dell only offers RAID 0, 1, 5, and 10 (I can't find RAID 6 anywhere), would you recommend using RAID10 instead of RAID5?

Also, would it be a little easier to use a couple of NAS or SAN units for storage? (even though I think DFS won't run with NAS or SAN)
Thank you, and sorry for all the trouble.
 
RAID10 is very nice, but expensive. Your maximum usable space is N/2, just like RAID1, and the minimum number of drives in the array is 4.

However, what RAID10 offers in return is performance--write performance is similar to RAID0, read performance is even better. It also (potentially!!!) offers much better survivability than other common RAID levels--because it's a stripe across mirrored pairs, as many as N/2 drives can fail while your array remains intact... of course, it's also possible to lose both drives in a single mirrored pair, and hence the entire array, too.
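To put a rough number on that survivability point, here is a simplified sketch (independent, equally likely failures assumed; not a full reliability model) of how exposed each level is to a second failure once one drive is already down:

    # After a first drive failure, what fraction of possible second failures kills the array?
    # RAID5 (N drives): any second failure is fatal.
    # RAID10 (N drives = N/2 mirrored pairs): only the failed drive's mirror partner is fatal,
    # i.e. 1 of the remaining N-1 drives.
    def fatal_second_failure_fraction(level, n_drives):
        if level == "raid5":
            return 1.0
        if level == "raid10":
            return 1.0 / (n_drives - 1)
        raise ValueError(level)

    for n in (4, 6, 8):
        print(f"{n} drives: RAID5 {fatal_second_failure_fraction('raid5', n):.0%}, "
              f"RAID10 {fatal_second_failure_fraction('raid10', n):.0%}")

So with 6 drives, only about 1 in 5 second failures would take out a RAID10, versus any second failure taking out a degraded RAID5.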
 
RAID10 surely is expensive!!! N/2!!!

But now I am more confused about what to buy...
Instead of having both servers with lots of storage (more money spent), and the upcoming Exchange servers as well,
buying a couple of NAS or SAN units would be better, copying one to another (DFS or clustering). BUT... according to what I know SAN is not based on IP but on DNS, and NAS is based on IP. For ex: \\fileserver (on SAN) \\192.168.1.1 (on NAS)
Please correct me if I am wrong (I am new at this)
We have some programs that won't run with an IP address in the share.
I know that SAN and NAS have a built-in OS; what I do not know is whether they provide fault tolerance like RAID5 or RAID10... help me please, thank you (my budget is not too limited ;) )
 
It also (potentially!!!) offers much better survivability than other common RAID levels--...agree.

Your worry about raid 5 for the data...
With the OS on the RAID 1 and regular backups of the RAID arrays, you're in a good position. The RAID 1 is inherently safer, as the chance of two physical disk failures at the same time is far less than with a RAID 5; there are fewer disks than in a RAID 5 for the OS, giving the RAID 1 a lower failure rate, since there is less chance of a disk bombing out due to read errors or physical failure... again, the larger the number of disks, the greater the chance of disk errors, as pointed out in the Intel document. Also, with less data (no parity info), there is less chance of a read error.
Again I will stress the need to automate a consistency check. This procedure checks the readability of every block of an array and corrects errors. Without regular checks, if blocks become unreadable, the errors are usually found only when the heads perform a read request; if too many errors are found at the same time, an array can fail, as in the case where an array fails in degraded mode due to a second disk failure.
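To illustrate "more disks, more chance of errors" with rough numbers: a quick sketch of the probability of hitting at least one unrecoverable read error when reading a whole array end to end, assuming the commonly quoted rate of one error per 10^14 bits read. That rate is an illustrative assumption, not a spec for these particular drives.

    # Probability of hitting at least one unrecoverable read error (URE) when
    # reading an array end to end (e.g. during a rebuild or consistency check).
    # Assumes independent errors at 1 per 1e14 bits read -- illustrative only.
    URE_RATE_PER_BIT = 1e-14

    def p_at_least_one_ure(total_bytes):
        bits = total_bytes * 8
        return 1 - (1 - URE_RATE_PER_BIT) ** bits

    GB = 10 ** 9
    for label, size_bytes in [("3 x 73GB OS array", 3 * 73 * GB),
                              ("4 x 300GB data array", 4 * 300 * GB)]:
        print(f"{label}: ~{p_at_least_one_ure(size_bytes):.1%} chance of a URE per full read")

Under that assumption the small OS array comes out around 2%, while the big data array is closer to 9% per full pass, which is why the regular checks matter more as the arrays get larger.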

The OS is super critical; if you lose it along with Active Directory, it can be a nightmare, so RAID 1 for the OS is a good move. If the RAID 5 data array should crash, the restore from tape will be simple and fairly fast, depending on your tape unit's speed... yes, you lose a day's work, but that is not terrible (if you do not have a redundant server).

If you have the resources....
Yes, the odds of failure with RAID 10, OVERALL, are much lower than with RAID 5, as the chance of losing two drives that mirror each other at the same time is low (though always a possibility; it is the worst-case scenario). With a minimum of one global spare, the danger is lower still. Performance of RAID 10 is greater than RAID 5 due to the lack of parity-creation overhead and, to a smaller extent, because parity requires extra disk space, which means greater head movement to find data. To duplicate your capacity in RAID 10 would require 6 drives for the data array.
Considering the large-capacity drives and the greater number of drives involved (a greater number of blocks), I would still go for one more drive as a global hot spare. Which brings up the subject of bus saturation: on a U320 SCSI channel, 5 drives per channel is about the maximum before the channel is saturated with data once the array is under load; any more than 5 drives on a channel will not increase an array's speed, and if anything will lower it slightly due to the SCSI overhead of the greater number of drives. So this would have an effect on a RAID 10 with all 6 drives (or 7 with a hot spare) on one channel, though probably not a large one.
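A rough way to sanity-check that 5-drives-per-channel figure is to compare aggregate sustained drive throughput against the 320MB/s bus; the per-drive rate below is an illustrative assumption for 10K drives of that era, not a measured spec.

    # Rough U320 channel saturation check: aggregate sustained drive throughput
    # versus the 320MB/s bus. 65MB/s sustained per 10K drive is an assumption.
    U320_BUS_MB_S = 320
    SUSTAINED_PER_DRIVE_MB_S = 65

    for drives_on_channel in range(3, 9):
        aggregate = drives_on_channel * SUSTAINED_PER_DRIVE_MB_S
        verdict = "saturated" if aggregate >= U320_BUS_MB_S else "headroom left"
        print(f"{drives_on_channel} drives: ~{aggregate} MB/s vs {U320_BUS_MB_S} MB/s bus -> {verdict}")

With that assumed per-drive rate, the channel tips over right around the fifth drive, which is consistent with the rule of thumb above.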

If I understand you correctly, you have the add-in card, not the embedded RAID controller. With the add-in card you can use the onboard SCSI interface for the tape backup; with the embedded RAID interface you would need to get an add-in SCSI card for a tape unit.

Last, once the server comes in, I would leave it on for a few days without data, and run the Dell diagnostics repeatedly on the array before transferring data.

"On a side note- the 2800's are great bargains, IMO. They expanded the drive carrier from the 6-drive bay the 2500/2600's had. They are truly a bargain."
Fully agree. I read a database server review, and the 2800s were in the same league as higher-priced competitors.

Can you explain why you need 73GB for the OS? Remember, fewer blocks, less chance of read failures. If you install all programs to the data partition (95% will go there), the OS would take <5GB. Installed apps mostly do reads, so the apps on RAID 5 would not hurt performance as long as the temp/log files are on the RAID 1.





 
what I know SAN is not based on IP but on DNS, and NAS is based on IP. For ex: \\fileserver (on SAN) \\192.168.1.1 (on NAS)

This is incorrect. NAS is, these days, just another file server (though one that will support pretty much any protocol you want out of the box with little to no configuration: SMB, Novell, NFS, etc.). Think of it as "a big disk in the sky that you can dump data to."

SAN is another animal entirely--it's a separate network (usually fibre based) dedicated entirely to storage. Your clients talk to your servers, and your servers talk to the disks and other data storage devices on the SAN. The main features offered are performance (fast!), centralized backup, and reduced overhead on your data network. Downsides are it's a cast iron ***** to set up unless you know what you're doing, and it's spendy. Really spendy. Like "entry level" gear costing upwards of $30k spendy.
 
Thank you all for your help!!!! I really appreciate it.

The servers will have a RAID1 of 36GB 15K drives for the OS and a RAID5 of 3 x 300GB for data. (In RAID10 usable space is n/2; what is it on RAID5? Thanks.)
No SAN, no NAS. I suppose NAS gives good OS savings but no hardware savings (except if you buy the SATA HDs), and SAN I do not even want to talk about...

When I am trying to set up the configuration on Dell's website it says:
-Primary controller: none, embedded, PERCs, Ultra3 SCSI LVD
-Secondary controller: none, PERCs, Ultra3 SCSI LVD

If I choose a PERC4/DC as the primary controller (the RAID1 and RAID5 go here, one on each internal channel, right?), I guess that means I do not have an embedded controller?
As technome said, I will use the embedded controller for tape backup.
If what I say is right, do I have to buy the Ultra3 ($199) as the second controller and run a cable from the inside to the outside through the expansion slots for the tape backup?

Quick question. The cables are sold like this:
External SCSI Cable, Rear
1M MULTI-MODE FC CABLE LC-LC,Tyco
3M MULTI-MODE FC CABLE LC-LC,Tyco
5M MULTI-MODE FC CABLE LC-LC,Tyco
10M MULTI-MODE FC CABLE LC-LC,Tyco
30M MULTI-MODE FC CABLE LC-LC,Tyco
50M MULTI-MODE FC CABLE LC-LC,Tyco
I suppose M means meters? Do those cables go internally? And for the tape backup, which one do I have to buy? The external one?


Thank you for sharing your wisdom! ( I am learning a lot )
 
If you specifically order a PERC4/DC, that is the add-in card.

Now, the PERC4e/DC is the card you want, not the PERC4/DC. It is a PCI Express card with a faster onboard coprocessor and cache. The PERC4/DC has a 400MHz coprocessor; the PERC4e has a 600MHz coprocessor. Sorry I did not say this before, but I thought your reference was to the newer card. Same price, faster.
The card you want, from the website reference...
PERC4eDC-PCI Express, 128MB Cache, 2-External Channels [add $500]
You want the external channel ports, just in case you need external enclosures at a later date. The card has two internal connectors and two external connectors. Shame on Dell for trying to unload the older PERC4/DC PCI-X cards at this point.

With the add-in PERC4e/DC, you will have two onboard U320 SCSI interfaces available. Just to make it clear, the embedded PERC4ei will give very similar performance to the add-in card, but would require an add-in SCSI card for tape use; also, you are basically locked into this motherboard if the RAID components fail, whereas the add-in card could be used in another machine.

Please explain where fibre cable comes in; do you have a fibre-interfaced tape backup? If you have a regular SCSI tape unit, you will be running a SCSI cable from the external SCSI interface connector on the rear of the machine. If you buy SCSI cables and terminators, I recommend getting them off the Adaptec site, which guarantees you get the correct types. M is for meter; MULTI-MODE FC = fibre cable.

 
PS: Double-check the PERC4e/DC with 2 external channels. These cards are made by LSI Logic (it is the LSI Logic U320-2E). I have not used the Dell version, only the LSI version; just check it is the same, with two internal and two external connectors. They should be the EXACT same card other than firmware and heat sink. Thinking about it, Dell possibly has a slight variant, but I doubt it; they never have before. It should look like this:





 
In RAID10 usable space is n/2; what is it on RAID5?

n-1.

Your 3 x 300GB drives will result in 600GB of usable space, with the equivalent of the other 300GB consumed by parity data.
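For quick reference, a minimal sketch of the usable-capacity rules mentioned in this thread, using the drive counts and sizes discussed above:

    # Usable-capacity rules mentioned in this thread:
    #   RAID1 / RAID10 -> half the total drive capacity (N/2)
    #   RAID5          -> all but one drive's capacity (N-1)
    def usable_gb(level, n_drives, drive_gb):
        if level in ("raid1", "raid10"):
            return n_drives * drive_gb / 2
        if level == "raid5":
            return (n_drives - 1) * drive_gb
        raise ValueError(level)

    print(usable_gb("raid1", 2, 36))    # OS mirror: 36.0 GB usable
    print(usable_gb("raid5", 3, 300))   # data array: 600 GB usable
    print(usable_gb("raid10", 6, 300))  # 6 x 300GB RAID10: 900.0 GB usable (matches a 4 x 300GB RAID5)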

As far as RAID cards go, you want the embedded one plus the add-in SCSI card. The reason is that the 2800 does not have an external SCSI connector, so you will have to go back and buy the add-in card anyway if you ever want to use an external autoloader, library, whatever.
 
jkupski said:
As far as RAID cards go, you want the embedded one plus the add-in SCSI card. The reason is that the 2800 does not have an external SCSI connector

This is not true: the 2800 comes with a dual-channel integrated LSI 1030 Ultra320 SCSI controller. This controller has an external 68-pin SCSI connector connected to one of the two channels. It is located on the rear panel, next to the 2nd power supply. This is the standard configuration. If you opt for the embedded PERC4ei RAID controller, it utilizes this embedded controller for RAID, and both SCSI channels are unavailable. Then, if you needed an external or additional SCSI device, you would need to opt for an additional internal SCSI adapter, like the optional Adaptec 39160 Ultra3 SCSI LVD controller card.

If you opt for an add-in Raid card instead of the embedded option, like the PERC4/DC, or PERC4eDC, then the internal embedded SCSI is available for other devices, including the external connector.
 
NP- just wanted to make sure we kept the original poster on the right track.
 