Does this sound right? FC or iSCSI SAN

cajuntank (IS-IT--Management)
Ok... I thought I had my mind made up on the new HP EVA4400 series SAN using Fibre Channel hard disks for online storage and FATA for nearline storage.

I talked to a storage engineer from CDW today and he was trying to point me toward the new MSA2000 using SAS drives and going iSCSI instead of Fibre Channel. I asked his reasoning for this and he told me that since I was purely a Windows OS shop, he saw no need for the added expense of the EVA. He said Windows just can't drive data that fast and iSCSI at 1GbE would perform just as fast in Windows as the EVA at 4Gb FC. He stated that if I had said I also have Unix, AS/400, etc., he would have definitely gone the EVA route.

He said a few other things and it made some sense. Can I get a second opinion from someone who's been through this? I know it's not always a black-or-white numbers game (1GbE vs. 4Gb FC), but I need some reassurance.
Thanks.
 
iSCSI has about 85% of the performance of FCP. It all depends on what kind of throughput you need. iSCSI has the advantage that you don't have to invest in FC HBAs, fibre switches, and so on. If you are going iSCSI and want good performance, use jumbo frames if you can (MTU=9000).
Filling up 1 Gbit means getting about 100 MB/sec of throughput on a Windows box. I doubt you will ever get there :)
But when using Linux, for instance, the bottleneck will be your 1 Gbit Ethernet (unless you can do link aggregation to make the pipe a little bigger). FC-AL is used for high-performance, minimal-response-time environments. And if your host suffers too much from the overhead of the iSCSI software initiator, you can still always put an iSCSI HBA in the box...
Regarding the drive types, SAS and FC-AL drives have about the same throughput; here the RAID layout will mostly determine the performance. How do the MSA and HP do their RAID?
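
To put rough numbers behind that, here's a quick Python sketch of the link ceilings. The overhead figures are my own simplification (38 bytes of Ethernet framing per frame, 40 bytes of TCP/IP headers, iSCSI PDU headers ignored), so treat it as a ballpark, not a benchmark:

def gige_payload_mb_per_s(mtu):
    """Approximate usable payload MB/s on a 1 Gbit/s link for a given MTU."""
    raw_bytes_per_s = 1_000_000_000 / 8      # 125 MB/s on the wire
    efficiency = (mtu - 40) / (mtu + 38)     # usable TCP payload vs. total bytes on the wire per frame
    return raw_bytes_per_s * efficiency / 1e6

for mtu in (1500, 9000):
    print(f"1GbE, MTU {mtu}: ~{gige_payload_mb_per_s(mtu):.0f} MB/s usable payload")
print("4Gb FC: ~400 MB/s usable (commonly quoted nominal rate)")

That lines up with the ~100 MB/sec figure above. Note that jumbo frames only buy a few percent of payload efficiency; their real win is fewer frames to process, so less CPU per megabyte moved.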

rgds,

R.
 
"How do the MSA and HP do their RAID?"... I'm not quite sure I know what you're asking, but I'll take a guess: I set the RAID level on the MSA2000 to, say, either RAID 5 or RAID 1+0. I'm going to be using the storage for the Exchange 2007 information store, SQL 2005/2008 databases, general file storage, and SharePoint document storage.

The OS, logs, pagefile, etc., I'll keep on internal drives local to the servers.

So what the CDW guy said was pretty straight shooting about the performance and Windows?
I think I would go with dedicated iSCSI HBA(s) and a dedicated switch just for this. Does using jumbo frames help also?
 
Both Exchange 2007 and SQL 2005/2008 are small-block, write-intensive I/O applications. For these applications, iSCSI is on par with FCP. SharePoint runs on a SQL back end, so the same is true there.

What percentage of the workload is file storage, and what are your average file sizes? In these types of applications, FCP tends to have the advantage. If it's only a small percentage of your overall workload, then iSCSI is still a good choice.

Don't bother with the iSCSI HBA. The performance of the MS software iSCSI initiator is on par, at a cost of just a couple percent of CPU. I would go with a dedicated switch and enable jumbo frames end to end.
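
If it helps to see the small-block point in numbers, here's an illustrative Python sketch (the workload figures are made up for comparison, not measurements of your environment):

def required_mb_per_s(iops, block_kb):
    """Bandwidth a workload needs: IOPS times block size, ignoring protocol overhead."""
    return iops * block_kb / 1024

# Hypothetical workloads, purely for illustration
workloads = [
    ("Exchange/SQL, 8 KB blocks, 5,000 IOPS",   5_000,   8),
    ("Exchange/SQL, 8 KB blocks, 10,000 IOPS", 10_000,   8),
    ("File serving, 256 KB blocks, 800 IOPS",     800, 256),
]

USABLE_1GBE = 118  # approx. usable MB/s on a 1GbE link with standard framing

for name, iops, kb in workloads:
    mb = required_mb_per_s(iops, kb)
    verdict = "fits within" if mb < USABLE_1GBE else "exceeds"
    print(f"{name}: ~{mb:.0f} MB/s -> {verdict} 1GbE")

Small-block database traffic tends to run out of disk IOPS long before it fills the pipe; it's the large sequential transfers (file serving, backups) where the extra bandwidth of FCP or 10GbE starts to matter.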
 
File storage is only a small percentage in the entire scheme. I've been reading up on iSCSI and see its definite advantages, especially in cost compared to FC.

Do I need to budget for 10GbE connectivity now? I use HP for everything server-wise, and HP has 10Gb NICs with the iSCSI accelerator built in; I would look at a ProCurve 5400 with 10Gb CX4 copper ports. The 5400 can handle up to 24 10GbE ports, which would be more than enough for me.

I could back down to dual-port 1GbE cards for teaming (I guess you can team with iSCSI) and a standard 1GbE switch if need be.
 
I guess going 10GbE doesn't make sense if the storage medium itself can't do 10GbE. I don't see that HP has anything 10GbE in the StorageWorks line. I guess I'll call them Monday and see what my options are.
 
For small-block I/O applications like Exchange and SQL, you're wasting dollars on 10GbE. The IOPS may be relatively high, but the block size is small, and your actual throughput won't exceed 1GbE.

 
I would suggest you stay away from 10GbE for now; there are a lot of driver-related issues in the market. Give it another 1-2 months before you jump on 10GbE.

Also, if you are a Windows house, try going the WUDSS (Windows Unified Data Storage Server) SAN/iSCSI storage route. Just my 2 cents.
 
Does WUDSS give me anything more than a traditional storage appliance?

I have budgeted for the new HP MSA2000i storage appliance maxed out with 300GB SAS drives (I can add additional MSA storage enclosures to this). I am also going to do a DL180 G5 with 500GB or 750GB SATA drives (again, maxed out) for my backup server (this server will also be iSCSI connected). I'll do my daily backups to this disk array and have an MSL2024 tape library hanging off of it for monthly or bi-monthly archives. My plan is to have all of this connected to a ProCurve 2900 series switch with jumbo frames enabled (as per the suggestion), all on its own self-contained network.

Am I missing anything or would somebody do this differently?
 
What about my design... any thoughts or anything I'm missing?
 
Awesome! I did hear from a fellow co-worker that the issue with HP is you can't use off-the-shelf hard drives. Do check on that. He was unhappy about it, as they ended up charging an arm and a leg for them!
 
I don't understand the reason behind putting the logs locally. Why don't you put the logs on the SAN? If you had dual controllers, it would add more redundancy. If your entire server is gone, you can simply replace it with a new one and assign the volume on the SAN to it.
 
He said Windows just can't drive data that fast and iSCSI at 1GbE would perform just as fast in Windows as the EVA at 4Gb FC.

That statement is not true. It could be true if you were comparing a Windows server running on cheap PC-grade hardware... but enterprise-grade server hardware running Windows 2003 can and will drive data 'that' fast over FCP.

Important performance factors I haven't seen disclosed yet are the number of servers to be attached, the total number of LUNs, the total GB/TB needed, the type of workload, and the number of concurrent users... also, what backup software are you using?

Try not to get caught up in the iSCSI vs. FCP religious debate. If possible, get a solution that does both, and use what makes the most sense on a host-by-host basis.

iSCSI was not feasible for our large SQL servers, some of which have 20+ LUNs totaling 5 TB of data on a single Windows 2003/SQL 2005 server.
 
The number of servers would be no more than 8. The MSA has slots for 12 drives, which I'll max out with 300GB SAS drives. The main use of this storage will be Exchange 2007 and SQL 2005/2008. As far as LUNs go, I'm still researching this, but my first guess would be to RAID 1+0 the chassis and then carve out maybe 5 LUNs from that. I'll add additional chassis of hard drives as my needs grow. Thoughts?
 
What you propose will yield 1.8 TB of usable space.

The problem I see with that is that all the LUNs, for all 8 servers, will be sharing the same physical spindles... so your Exchange logs and data, SQL logs, SQL data, and SQL tempdb will all be on the same physical disks, which is against best practice for performance. If you're supporting a small number of users, this may be OK.

You could also consider making a 6-disk RAID 1+0 array and using the remaining 6 disks in a RAID 5 array. This would yield 2.4 TB of usable space and give you two isolated sets of spindles. I would use the RAID 1+0 LUNs for log drives and the RAID 5 LUNs for data drives... as an example.
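
To make the capacity math explicit, here's a small Python sketch of the two layouts (nominal drive sizes, no hot spares or formatting overhead figured in):

def raid10_usable_gb(drives, drive_gb):
    """RAID 1+0: half the drives hold mirror copies."""
    return drives // 2 * drive_gb

def raid5_usable_gb(drives, drive_gb):
    """RAID 5: one drive's worth of capacity goes to parity."""
    return (drives - 1) * drive_gb

DRIVE_GB = 300  # 300GB SAS drives, 12 slots in the MSA2000

# Option 1: one 12-disk RAID 1+0 array
print("12-disk RAID 1+0:", raid10_usable_gb(12, DRIVE_GB), "GB usable")   # 1800 GB

# Option 2: 6-disk RAID 1+0 for logs + 6-disk RAID 5 for data
option2 = raid10_usable_gb(6, DRIVE_GB) + raid5_usable_gb(6, DRIVE_GB)
print("6-disk RAID 1+0 + 6-disk RAID 5:", option2, "GB usable")           # 2400 GB

RAID 5 buys the extra space at the cost of a higher write penalty, which is why the logs stay on the RAID 1+0 set in that example.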

 
iSCSI was not feasible for our large SQL servers, some of which have 20+ LUNs totaling 5 TB of data on a single Windows 2003/SQL 2005 server.

The workload is the key determining factor. If it's small-block I/O, then even at high I/O rates you're not generating enough throughput to exceed the bandwidth of 1GbE. On Exchange 2007, it's not uncommon to have 101 LUNs per host, with storage totaling over 20 TB per host, connecting to storage via iSCSI. Although the space requirement is high, the block size is 8 KB and the overall IOPS requirement is under 10,000 IOPS per host.
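
(As a rough sanity check on those numbers: 10,000 IOPS x 8 KB is roughly 80 MB/s, comfortably under the ~118 MB/s of usable payload a single 1GbE link can carry.)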

 