
2600 SBS Spec, RAID & Disks 4


bernie321 (Programmer) - Jan 7, 2004 - GB
Hi

We are purchasing a Dell server for our company; we currently have 18 users.

The server will run SBS 2003 Premium Edition, running SQL Server with an approximately 500MB database, Exchange, file sharing, an intranet, a print server & ISA Server.

It will also terminate up to 6 VPN tunnels.

We are expecting to take on 4 or so extra staff in the very near future.

This is the basic spec of the server - will it be appropriate and expandable for our needs?

My main concern is whether RAID 5 is appropriate, and whether the 10k disks will be fine (rather than the 15k disks).

PowerEdge 2600
2 x 2.8GHz Xeon CPU
2 GB RAM
3 x 73GB 10k hard drives - RAID 5
PERC Di controller

Many thanks for your help
 
Have a couple of clients with similar setups...

On those systems, both are 2.8GHz, single processor, 2GB RAM, 5 Seagate 36GB 15k drives, LSI U320-2 RAID adapter, SQL 2000. The servers are also used for general file serving, and both hold the Active Directory FSMO roles. Both servers are hot performers. Both clients use SQL 2000 for Great Plains Dynamics, very responsive.

The 15k drives would make a difference, but not a drastic one - supposedly 28%, but I believe it is more like 10-15%. If resources are not tight I would go with the 15k drives because of the SQL database. The PERC 3/Di, I believe, is the internal U160 adapter; most internal RAID adapters use more motherboard resources than add-in boards. I would definitely go for a U320 RAID interface and U320 drives. Dell uses the LSI U320-2 and U320-1 (PERC 4). Also, I would get one more drive as a hot spare. The LSI U320-2 is a dual-channel adapter, which allows you to divide the drives over both channels for better performance. As a note, I use the default Windows cluster size, the default 64k stripe size and write-back caching.
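
As a rough illustration of where that "supposedly 28%" figure comes from, and why the real-world gain is smaller, here is a back-of-the-envelope calculation (the average seek times are assumed typical figures for 10k/15k drives of that generation, not numbers from this thread):

```python
# Back-of-the-envelope comparison of 10k vs 15k drives.
# The seek times below are assumed "typical" figures, not numbers from this
# thread - treat this as a rough illustration only.

def avg_access_ms(rpm, avg_seek_ms):
    """Average access time = average seek + half a rotation (rotational latency)."""
    rotational_latency_ms = (60_000 / rpm) / 2  # half a revolution, in ms
    return avg_seek_ms + rotational_latency_ms

access_10k = avg_access_ms(10_000, 4.7)   # assumed ~4.7 ms average seek for a 10k drive
access_15k = avg_access_ms(15_000, 3.6)   # assumed ~3.6 ms average seek for a 15k drive

print(f"10k drive: {access_10k:.1f} ms per random access")
print(f"15k drive: {access_15k:.1f} ms per random access")
print(f"Improvement on random I/O: {(1 - access_15k / access_10k) * 100:.0f}%")
# On a mixed-use SBS box, CPU, RAM and the network dilute that figure further,
# which is why the real-world gain tends to land nearer 10-15%.
```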
 
Thanks for the info

The embedded PERC 4/Di dual-channel RAID controller runs at a bandwidth of 320MB/sec.

I don't think our budget can stretch to the LSI model above, which is not embedded, as it's 5 times the price - will this have a serious impact on performance?

Thanks
 
Don't know where I got the idea it was a PERC 3.
Had a hard time finding the specs on the PERC 4/Di. According to the specs, this is an LSI Logic embedded RAID, but I could not find out much about it. Dell does not release a great deal of info; I cruised around LSI's site and did not find anything. The quoted 532MB/sec spec is good; it is U320, another plus, and the processing chip appears to be the same as on the LSI Logic U320-2, so performance should be on par with the U320-2.



Having Exchange along with everything else will be taxing.

Please post your question on the following site, for more opinions
 
Whilst I'd never recommend running that many apps on a single server, the spec looks OK IMO. On the file servers we have here I've never noticed a difference between 10k and 15k drives (the network is the bottleneck), though you possibly would for SQL queries etc. You'd probably be better off with a 2x36GB RAID 1 for the system and a 3x72GB RAID 5 for the data, but it's not vital if it's beyond the budget.

One other thing, if you're running a SCSI tape drive inside the server then make sure you spec an Adaptec 39140 card with it. Dell recommend you just attach it to the PERC and make sure the PERC BIOS has that channel set in SCSI mode (rather than RAID mode) - it's supposed to effectively turn the PERC into two distinct controllers but this isn't the case in reality and a lot of Dell techs are annoyed at this recommendation.

Basically the tape controller and RAID array interfere with each other and that leads to stability problems - if you get a separate 39140 and connect the tape drive to that then you isolate them from each other.
 
Hi Guys

Sorry for the delay in getting back to you.

I don't really have much choice in running all the apps on one server; that's the problem with the SBS package - it's all on one.

I have been looking at getting the PERC 4/DC - it's not embedded, but I can't seem to find any more info on it.

Any ideas on this one?

Thanks B
 
Sorry, forgot to ask - is there any advantage in having an external channel?

I pretty much think that everything will be internal.
 
The PERC 4/DC is an LSI Logic product...

It's the same RAID adapter I have in the above-mentioned builds. The external channels are used if you have an external RAID enclosure versus internally mounted drives. From the info I posted previously, the Di and DC should have similar performance, though I have never used the embedded Di. I called LSI Logic and asked a tech if there is any performance difference between their embedded and add-in RAID adapters; no real answer, but he brought up the point of firmware and driver availability. The embedded will not be supported for as long as the add-in, so fewer driver/firmware releases are likely, and you can in fact use Dell's or LSI Logic's firmware/drivers on the PERC add-in cards, which I have done many times.
Cruise around this site, in the storage area...

For specific benchmarks on the PERC 4/DC (LSI Logic U320-2), look for posts by FemmeT...

As Nick mentions, the difference between 10k and 15k drives will mainly be felt in SQL speed (the server does all the processing) and in some server-based programs such as tape backup, where throughput will increase. Without SQL, there would be very little benefit of 15k over 10k drives. Distribute the drives over the two channels to keep the SCSI RAID channels unsaturated.
 
I'm speccing a new database app server atm and I was planning to have 2x36GB drives in RAID-1 for the OS and 3x36GB drives in RAID-5 for the DB (all 15k); the DB will only actually be about 10GB, so space isn't a concern. This is to go into a PE6650 server, so I'm capped at 5 drives internally, else I would have done RAID-10 for the DB.

I initially asked Dell for a PERC4/DC and a split backplane and was planning to run both RAID containers on different channels and on different parts of the backplane. However, the Dell server specialist said the bottleneck would actually be the RAID controller cache (128MB), so it would be pointless not only splitting the backplane but also using different channels for each container. The guy has given us some good advice in the past, so I've taken his word for it, got a 1xInt 1xExt version of the card in case we add external storage later, and will configure both containers on the same internal channel. I'm still surprised that the cache is the real bottleneck though - I don't suppose anyone has any performance testing experience of splitting containers between channels vs them sharing a channel?
 
Because of price changes over the past few months I have been able to do a few upgrades to our server:

2x 3.0GHz with 1MB cache
Dual hot-plug 750W power supplies
2GB DDR RAM (4x 512MB 266MHz)
4x 36GB, 15k, U320, RAID 5
PV100T DAT DDS4
PERC 4/DC
Dual-port Intel Gigabit NIC

From what everyone has told me, the DC model is worth the extra cost: because the embedded controller takes resources from the motherboard, there is no point in spending the money elsewhere without it.

Thanks B
 
Nick, nice to see other highly knowledgeable people online; thanks for the added info I left out...

Dell techs, bless them, try hard, but few are really RAID-knowledgeable, very few. LSI Logic techs will generally give you more in-depth info at their toll-free number 800-633-4545; just tell them you need a pre-sales question answered.

The RAID adapter's cache being a bottleneck - no way, unless RAID adapters were able to cache gigabytes as solid state drives do. The cache is most useful for sequential I/O but less so for random I/O, which is more likely on most servers. The cache is sufficient; 256MB would be better, but as you will see after installation, your U320 RAID adapter loafs along compared to a U160 RAID adapter. You brought up an excellent point about the network card being the main bottleneck in a general-use server. On small client networks the NICs are working hard while my U320 array drives are leisurely accessed. In the real world, cache sizes over 128MB produce about a 10% difference in specific situations, whereas tweaking the OS can produce 15-20% overall.
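
A quick sense check of why a bigger cache rarely matters much on a general-use array - this just compares the 128MB cache mentioned above with the usable space of the arrays discussed in this thread; the percentages are my own illustrative arithmetic:

```python
# Rough sense check of how much of an array a 128MB controller cache can hold.
# Array sizes come from the configurations discussed in this thread;
# the percentages are illustrative arithmetic only.

cache_mb = 128
arrays = {
    "3x73GB RAID 5 (2600 as first specced)": (3 - 1) * 73 * 1024,  # usable MB
    "3x36GB RAID 5 (DB array on the 6650)":  (3 - 1) * 36 * 1024,
}

for name, usable_mb in arrays.items():
    pct = cache_mb / usable_mb * 100
    print(f"{name}: cache covers about {pct:.2f}% of the usable space")
# With well under 1% coverage, random reads almost always go to the spindles;
# the cache mostly earns its keep on write-back bursts and sequential read-ahead,
# which is why a bigger cache rarely moves real-world numbers by much.
```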

On 2cpu.com, a user called FemmeT has done monumental RAID card testing; he has benchmark graphs of most of the RAID adapters on the market, and there are discussions of cache sizes... mind you, cache sizes do wonders for benchmarks. Well worth a search on his username for all his posts. In 10 years of searching for RAID info, this guy has shared more benchmark info than any one person.

Five 36GB drives should just about saturate a U320 channel; in the real world you probably would not see a difference between a single-channel and a two-channel setup, though I have not tested it. Figuring 60 to 70MB/sec on sustained requests, times 5 (drives) = 300 to 350MB/sec.
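
To spell out that last estimate, here is the same arithmetic as a tiny sketch (the 60-70MB/sec sustained per drive is the assumption from the paragraph above, not a measured figure):

```python
# The channel-saturation estimate from the post above, spelled out.
# 60-70MB/sec sustained per drive is the poster's assumption, not a measurement.

drives = 5
per_drive_low, per_drive_high = 60, 70   # MB/sec sustained, assumed
u320_channel = 320                        # MB/sec per U320 channel

low, high = drives * per_drive_low, drives * per_drive_high
print(f"{drives} drives: {low}-{high}MB/sec aggregate vs {u320_channel}MB/sec per channel")
print("Over one channel the bus is the limit; split 3+2 over two channels,"
      f" the worst case per channel is {3 * per_drive_high}MB/sec, well inside the bus.")
```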

Nick, a couple of years ago I tested RAID 10 and RAID 5 on a general-use server for 60 users (U160 10k drives). After benchmarking and actually using it with programs, I set it up as RAID 5; the performance difference was barely noticeable. As a dedicated DB or SQL server, I have not done any testing.

The only concern I have about your setup: if in the future you add more drives you will be hampered by bus saturation, whereas with a two-channel RAID adapter and a two-channel backplane you have scalable (I hate the word scalable) disk capacity.

Bernie...
Nice specs on the server; the PERC 4 is a kick-ass RAID adapter, at the very top of the performance range. Remember to save your RAID config to a floppy and to document every setting you make to the array setup. Number your disks (permanent marker) and mark your cables. Should anything happen, it is comforting to know you have not messed up drive sequences or cables. Also, I spray Teflon spray on all cables in a server, which allows easy cable removal/insertion without bending HD cable pins or putting major pull stress on power connectors; Remington "DriLube" or Elmer's Slide-All (non-conductive) is also good on mobo slots, and is available at gun and hardware stores, maybe locksmiths. I spray a couple of quick coats, allowing the solvent to evaporate.
 
Hmm, interesting stuff. It's a bit late for this server as I ordered it yesterday, heh, but I will bear that in mind for the next one and check the links you gave. At least with this server the DB shouldn't actually grow beyond 20GB, so there shouldn't be a need to upgrade disks.
 
Hi Guys

Thanks for the info everyone.

I forgot to mention that, as NickFerrar recommended, I have added an embedded Adaptec controller for the DAT drive.

I did ask Dell to put two drives on each channel - would there be any performance loss? I think it is likely that I will add drives in the future.

I currently have 4x 512MB RAM spec'd; would there be a performance difference if I got 2x 1GB instead, just in case I wanted to max out the RAM for Small Business Server (SBS - max 4GB, Dell 2600 - 6 DIMMs)?

Would anyone recommend adding anything extra now? Although we don't want to spend more than necessary, I wouldn't want our users to be waiting for the server to run processes because it is struggling.

Many thanks for everyone's help!

 
I spoke to Dell to make a few changes and to get them to split the drives across two channels, and the sales guy got a server specialist to call me.

He said that there would be parity problems... is this correct?

He recommended that I add 2 more drives in RAID 1 on the on-board controller for the system files, with the tape drive on the Adaptec controller and the 4 RAID 5 drives on the PERC 4/DC.

Is the on-board controller going to use much in the way of motherboard resources?

Many thanks for your help

B
 
I don't see where parity problems would come from, although your revised spec of 4x36GB drives split between controllers would only allow you RAID-0 or RAID-1: RAID-0 you could expand in the future but it offers no fault tolerance, and RAID-1 you wouldn't be able to expand. I would go for 5 disks, 2 in a RAID-1 config for the operating system/programs and 3 in a RAID-5 config for the data; it's pretty simple to add disks to a RAID-5 (although there would only be one drive slot free anyway in a 2600).
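
For reference, the usable capacities of the layouts being weighed up here work out as follows (a quick sketch using the standard RAID 1 and RAID 5 formulas; the drive counts are the ones discussed in this thread):

```python
# Usable capacity for the layouts being discussed, using the standard formulas:
# RAID 1 usable = size of one drive (mirrored pair); RAID 5 usable = (n - 1) * drive size.

def raid1_usable(drive_gb):
    return drive_gb                     # one mirrored copy

def raid5_usable(n_drives, drive_gb):
    return (n_drives - 1) * drive_gb    # one drive's worth lost to parity

layouts = {
    "2x36GB RAID 1 (OS/programs)":        raid1_usable(36),
    "3x36GB RAID 5 (data)":               raid5_usable(3, 36),
    "4x36GB RAID 5 (data, no hot spare)": raid5_usable(4, 36),
}

for name, gb in layouts.items():
    print(f"{name}: {gb}GB usable")
```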

I don't have any benchmarking to go off, but we use PERC 4/Di's as standard here (and have used the previous-generation motherboard PERCs in past servers); performance is good, but we're not pushing the system I/O to the limit.

I really can't see why you need a dedicated PERC per RAID controller; most if not all RAID controllers should happily support multiple RAID containers. I assume the tech is saying there is an issue with mixing RAID types on the same controller, but again the PERC should support this without problems. I'd ask him to explain the issue in a bit more detail, although I guess it isn't a huge cost to enable the internal PERC.
 
Bleh, no edit - in my last paragraph I meant "I really can't see why you need a dedicated PERC per RAID container".
 
The guy changed his mind when he sent the quote through - he's done as you said, Nick: put the system drives on one channel and the data drives on the other channel of the PERC.

But they have put the tape drive on the onboard controller; as the tape will only be in use during the night, will this be OK?

I have gone for 6 disks so that if one of the RAID 5 disks fails I can hot-swap the failed disk in the RAID 5 array.

Thanks B
 
Nick..
I would not worry, since expansion is not a problem.

Bernie...
My goodness, "a parity error" caused by dividing drives over two channels - Dell has just found a problem with RAID which never existed and never will exist, except for Dell techs. Dividing array drives over two channels is a STANDARD procedure...

I guess they are trying to dazzle you with bullsh*t.
Dell does not like to vary from its standard server setups, and this only confirms their techs' knowledge of RAID. Just for fun, call LSI Logic tech support if you want to verify this; they would like a good laugh too.

All devices use some motherboard resources. LSI Logic-based adapters have always been among the least resource-intensive. The resources used by RAID adapters are offset by the fact that the onboard RAID co-processor and other components relieve the motherboard CPU of the I/O functions of the disks. Basically the RAID adapter relieves the CPU of >20% of the work it would normally do in I/O-related functions; once you add the card, you have effectively freed up your CPU to do >20% more work. Some network cards are far more CPU-intensive than these RAID adapters will ever be. RAID cards use about 5% of motherboard resources.

Take two identical servers, both with RAID 1, one with hardware RAID, the other with software RAID. The software RAID 1 will work much harder: the heads will thrash excessively, the disks will run hot compared to the hardware RAID (mildly warm), and the software RAID will create mucho noise. Drive life decreases exponentially at temperatures over 75 degrees Fahrenheit.

Speaking for myself, I would have two disks in RAID 1 for the pagefile, logs, ntds.dit, SQL logs, temp files, etc. 18GB would do it, but 18GB drives will become difficult to find and expensive in the near future, so 36GB drives are the best choice. Use the RAID adapter, not the onboard Adaptec channels, for this.

Then I would create another array, a two-partition RAID 5 array, with the first partition large enough for the OS and programs - 8GB should be very adequate - and the remainder of the total capacity used for a data partition. I would include a hot spare in the RAID 5 array. The tape drive attached to the onboard Adaptec is OK, but never attach it to a RAID channel, as Nick stated.

RAM: I doubt you would see any difference going from 512MB to 1GB modules UNLESS the timings on the memory sticks are different, primarily the CAS timing (lower is better)... I would ask Dell before purchasing. As is, with (4) 512MB modules, the two empty slots could be populated with 1GB memory later. I would be tempted to use 4x 512MB sticks, which gives you the ability to test the RAM for problems. If you use 1GB sticks (unless you have 4 of them), you won't be able to remove a stick and still have the server run.
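
On the earlier RAM question, the slot arithmetic can be laid out like this (a sketch only - the 6-DIMM and 4GB SBS figures are the ones quoted earlier in the thread):

```python
# Quick look at how the PE2600's six DIMM slots can reach the SBS 2003 4GB ceiling.
# The 6-slot / 4GB limits come from earlier in the thread; the combinations are just arithmetic.

slots, max_gb = 6, 4

options = {
    "4x512MB now":                  [0.5, 0.5, 0.5, 0.5],
    "2x1GB now":                    [1, 1],
    "4x512MB now + 2x1GB later":    [0.5, 0.5, 0.5, 0.5, 1, 1],
    "2x1GB now + 2x1GB later":      [1, 1, 1, 1],
}

for name, dimms in options.items():
    total = sum(dimms)
    free = slots - len(dimms)
    status = "at" if total >= max_gb else "below"
    print(f"{name}: {total:g}GB installed, {free} slot(s) free, {status} the {max_gb}GB SBS limit")
```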
 
Guys, you posted as I was writing.

Nothing wrong with Nick's setup, and his point about a separate RAID adapter being unnecessary is a good one. If a separate adapter is added it just adds extra motherboard load: more IRQ requests, more CPU load (which will slow the overall RAID throughput). Every device added to a production server has an effect on server speed - roughly a 5-7% decrease per device, whether network cards or added RAID cards (as per benchmarks of RAID throughput).
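
For what it's worth, here is how that rough 5-7%-per-device estimate compounds if you keep adding cards (the percentages are the poster's figures; the compounding is just arithmetic):

```python
# Illustrating the "roughly 5-7% per added device" rule of thumb from the post above.
# The percentages are the poster's estimate; the compounding is just arithmetic.

baseline = 100.0   # arbitrary throughput units

for penalty in (0.05, 0.07):
    remaining = baseline
    for devices_added in range(1, 4):
        remaining *= (1 - penalty)
        print(f"{int(penalty * 100)}% per device, {devices_added} device(s) added: "
              f"{remaining:.0f}% of baseline throughput")
```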
 
So this spec would be ok:

2x 3.6GHz with 1MB cache
Dual hot-plug 750W power supplies
2GB DDR RAM (4x 512MB 266MHz)
On the PERC 4/DC:
4x 36GB, 15k, U320, RAID 5
2x 36GB, 15k, U320, RAID 1
On-board controller:
PV100T DAT DDS4
CD-ROM
Dual-port Intel Gigabit NIC

Dell don't want to put the system files on the RAID 5 drives. Like you say, they don't seem to like moving away from standard setups.

Thanks B
 