
Need a recommendation for a NAS/SAN


acl03 (MIS) · Jun 13, 2005 · 1,077 posts · US
We currently have about five servers, each running the free VMware Server product with 3-4 virtual Windows servers on it.

We want to consolidate these onto a single ESX server, and possibly add a second one for disaster recovery.

Questions:

1) Is iSCSI fast enough to run a VM box, or should I get FC?
2) NAS or SAN? (Do NASes come with FC, or iSCSI only?)
3) What brand/model would be recommended? I like HP servers for the most part, but I am open to any option based on price, performance, and support.
4) This may be better suited for the VMware forum: does mirroring software (we currently use HP OpenView Storage Mirroring on our file server) work with VMware? We want two VM boxes, so if the first one fails, we can fail over to our DR box with little or no downtime/data loss.



Thanks,
Andrew
 
I have been researching this same issue recently myself. In a nutshell, here is what I've discovered:

In a Windows-only environment, unless you're running huge databases, iSCSI is the sweet spot: the performance you can actually use, at a nice price point compared to FC. FC is good for OSes and databases that can take advantage of the higher throughput, like UNIX, AS/400, Oracle, and very large SQL deployments (I was told tens to hundreds of TBs). Beyond that, Windows just can't push data out fast enough to justify it, so iSCSI is the better choice.
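To put rough numbers behind that comparison, here's a quick sketch of theoretical line rates for the common link types (these are raw link maximums I'm assuming for illustration, not figures from this thread; real throughput is lower after TCP/IP or FC framing overhead):

```python
# Rough line-rate comparison between iSCSI over GbE and Fibre Channel.
# Figures are theoretical link maximums; actual throughput is lower once
# protocol overhead (TCP/IP for iSCSI, FC framing) is subtracted.
LINKS_GBPS = {
    "1 GbE iSCSI": 1.0,
    "4 Gb FC": 4.0,
    "8 Gb FC": 8.0,
}

def mb_per_sec(gbps: float) -> float:
    """Convert gigabits/s to megabytes/s (8 bits per byte)."""
    return gbps * 1000 / 8

for name, gbps in LINKS_GBPS.items():
    print(f"{name}: ~{mb_per_sec(gbps):.0f} MB/s raw")
```

So a single GbE iSCSI path caps out around 125 MB/s raw, which is plenty for most Windows workloads; the FC advantage only matters once your workload can actually sustain several hundred MB/s.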

I would go SAN, not NAS. Because a NAS has its own OS, it's really no different from setting up another server to share out files. A SAN, on the other hand, is built for moving large blocks of data, which matters for applications like databases, imaging, and transaction processing.

I personally like HP's products and am going with the MSA2000i solution: SAS drives for online storage and SATA drives in another chassis (although you could mix them in the same chassis) for near-line storage, two dedicated ProCurve switches (the 2900 series) for failover, and NC380T dual 1 GbE NICs with the Accelerated iSCSI license for better performance. HP's website lists support for Windows 2003 R2, Red Hat, SUSE, and VMware.

I'd touch base with HP or a trusted vendor for more up-to-date support info than the website shows. I know Windows Server 2008 is supported, but the website doesn't reflect that yet.

Hope I could help.
 
For the MS OSes, Microsoft defines SAN as block-mode access to LUNs (iSCSI or FCP) and NAS as file-mode access to files (CIFS or NFS). I like that definition because of the clarity it brings. Given this, use SAN, either iSCSI or FCP. If you have an existing FCP infrastructure, use it. If not, consider iSCSI and the MS software initiator; it generally outperforms hardware HBAs at the cost of a few percent CPU utilization.

Many vendors provide unified storage: from the same pool of disks they can carve out space and present it as block-mode SAN or file-mode NAS. It is of course different areas of the same pool; you can't get both at the same time for the same data set.

You mention HP. Have you considered NetApp? They have solutions specifically tailored to VMware. How about EMC? For that matter, any other SAN vendors? It pays to shop around.


 
1) That depends on how much Ethernet bandwidth you plan to feed the ESX server. You would have to calculate the total bandwidth needed by the 20 VMs for both network and storage, and architect the ESX server with enough Ethernet to support it. Also, if your storage and ESX servers are on separate Ethernet switches, validate that the links between the switches can handle the estimated load.

2) As XMSRE said: SAN for ESX. Now, if you wanted to kill two birds with one stone and buy a storage solution that not only gives you iSCSI and FCP SAN access but can also do NAS and replace your file servers, check out the NetApp products.

3) Brand/Model of storage: How much space do you need? What type of performance?

4) ESX server failures can be mitigated with VMotion, which covers you if one of the ESX servers fails; however, that doesn't mirror your data. If you truly need to mirror the data for redundancy, are you also planning to buy a second storage system to mirror to? Or was the plan to mirror from one VM source to a different VM target, both residing on the same storage system?
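The sizing exercise in point 1 is just back-of-the-envelope arithmetic. As an illustration, here's one way to run the numbers (the per-VM traffic figures and headroom factor below are placeholder assumptions, not measurements from this thread; substitute your own monitoring data):

```python
# Back-of-the-envelope Ethernet sizing for a consolidated ESX host.
# Per-VM numbers are illustrative assumptions; measure your real workloads.
NUM_VMS = 20
NET_MBPS_PER_VM = 30      # assumed average network traffic per VM (Mbit/s)
STORAGE_MBPS_PER_VM = 80  # assumed average iSCSI storage traffic per VM (Mbit/s)
GBE_LINK_MBPS = 1000
HEADROOM = 0.7            # don't plan to run links past ~70% utilization

total_mbps = NUM_VMS * (NET_MBPS_PER_VM + STORAGE_MBPS_PER_VM)
usable_per_link = GBE_LINK_MBPS * HEADROOM
links_needed = -(-total_mbps // usable_per_link)  # ceiling division

print(f"Estimated aggregate load: {total_mbps} Mbit/s")
print(f"GbE links needed at {HEADROOM:.0%} utilization: {int(links_needed)}")
```

With those assumed numbers you'd be looking at roughly four GbE links, which is why people often split storage traffic onto its own dedicated NICs and switches rather than sharing with VM network traffic.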
 
Well, the mirroring is my main concern for now. How can VMotion mitigate server failures? I planned to buy two boxes (it's looking like a SAN).

So, I was thinking something like:
Code:
Site A (Main site)     Site B (DR Site)
------                 ------
ESX Server A           ESX Server B
FC Switch A            FC Switch B
SAN/Disk A             SAN/Disk B

I want my VMs running on ESX Server A, and all of the data (VMDK files) on SAN/Disk A.

Ideally, all of the VM files (VMX, VMDK, etc.) would be mirrored in real time to Site B. If Site A dies (server blows up, fire, etc.), we activate Site B, start up the VMs, and have very little downtime.
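In practice that mirroring is done by array-based replication or a host-level product like the OpenView tool mentioned earlier, working at the block level. Purely to illustrate the idea of keeping Site B's copy of the VM files current, here's a toy one-way sync sketch (paths and filenames are hypothetical; do not use file copies like this for live VMs, since a VMDK copied mid-write is not crash-consistent):

```python
import shutil
from pathlib import Path

def sync_vm_files(src: Path, dst: Path) -> list[str]:
    """Toy one-way mirror: copy VM files (.vmx/.vmdk) that are missing or
    newer on the source side. Real DR replication works at the block level
    and guarantees consistency; this is only an illustration of the goal."""
    copied = []
    dst.mkdir(parents=True, exist_ok=True)
    for f in src.rglob("*"):
        if f.suffix not in (".vmx", ".vmdk"):
            continue  # only mirror VM configuration and disk files
        target = dst / f.relative_to(src)
        if not target.exists() or f.stat().st_mtime > target.stat().st_mtime:
            target.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(f, target)  # copy2 preserves timestamps
            copied.append(str(f.relative_to(src)))
    return copied
```

The point of the sketch is the shape of the problem: Site B only ever needs the files that changed, which is exactly what block-level replication does far more efficiently and safely.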

I am new to ESX and SANs...am I way off here? Is this possible?



Thanks,
Andrew
 
FWIW - we use four ESX servers in an HA configuration and our servers have Qlogic iSCSI cards connected to an Equallogic SAN. The software capabilities of Equallogic along with its rock-solid reliability might be just what you are looking for.

I'd recommend Equallogic after looking at several others.
 
I second BobMCT's suggestion. We use pretty much the same setup: a 4-node ESX cluster, dual-port QLogic iSCSI HBAs in each node, two EqualLogic arrays, and a stack of Cisco 3750s. The ease of use is unreal, not to mention the performance, the relationship between EqualLogic and VMware (although a lot of different SAN/NAS vendors can claim this), and the built-in replication between arrays at no extra cost (whereas other solutions may charge huge dollars for it as an add-on). I could go on and on. Oh yeah, price: we have about 10 TB of disk space (configured in RAID 10) between the two arrays, and we probably spent $80k.

I hate all Uppercase... I don't want my groups to seem angry at me all the time! =)
- ColdFlame (vbscript forum)
 