
Storage Selection Question

Status
Not open for further replies.

Sparky369 (IS-IT--Management)
Nov 27, 2007 · CA
We are working on moving from local server storage to an iSCSI SAN environment for better storage utilization and flexibility.
I have done some looking and have narrowed it down to 2 options, and I wanted some feedback.
Option 1:
Dell MD3000i (iSCSI SAN Device)
Dual Controller
8 x 300GB 15K SAS
4 x 1Gb Ethernet

Option 2:
Storevault S500 (NAS and iSCSI SAN)
Single Controller
12 x 500GB 7200rpm SATA II
2 x 1Gb Ethernet

I like the fact that StoreVault is a division of NetApp and that the software and options look very mature, but the lack of network ports and my performance concerns make me less sure. The Dell has some snapshot limitations, but its extra NICs and dual controllers look pretty nice.
Any advice on this would be great.
If anyone has other options I should look at, here is a rundown of our environment:
VirtualIron with 2 nodes running 5 VM Servers
1 Backend Exchange server
1 SQL 2005 Server
1 AD Server / File Server

 
You don't have enough spindle count in either configuration for the network ports to become an issue.

If you assume an 8-drive RAID 5 on the Dell (closest in space to the StoreVault), then the performance, specifically the write performance, will be less than 25% of what you can achieve with the StoreVault in RAID-DP. I think your performance concerns are misplaced and should focus on the Dell solution. This will be very important with write-intensive and write-sensitive applications like Exchange and SQL.

The Dell snapshot limitations mean your storage is basically unusable with 3 or 4 snaps in place. The StoreVault can easily accommodate a couple of hundred snaps per volume. This is one of the major differentiators of NetApp's snapshot technology.

 
So you are saying that the 12-disk RAID-DP NetApp will beat out an 8-disk RAID 5 array on the Dell? So SAS vs. SATA would not make much of a difference? As the solution scales out and we add more drives, say 15 drives on the Dell, will the performance match up better?
 
There's no write penalty on NetApp. RAID 5 requires 4 disk operations for each write; the write penalty is 4.

The 15K SAS drives get about 115 IOPS/spindle. The 7200 SATA drives get about 60.


For RAID 5, write performance = P*(N-1)/4, or about 200 IOPS (where P is IOPS per spindle and N is the number of disks).

For NetApp in RAID-DP it's P*(N-2), or about 600 IOPS. You get space efficiency like RAID 6 and write performance like RAID 0.
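The arithmetic above can be sketched as a couple of helper functions. This is a minimal sketch assuming the per-spindle figures quoted in this thread (115 IOPS for 15K SAS, 60 for 7200rpm SATA), not vendor specs; the function names are mine, not from any product documentation.

```python
# Back-of-envelope RAID random-write IOPS, using the formulas from the thread.

def raid5_write_iops(per_spindle_iops, n_disks):
    """RAID 5 random-write estimate: each host write costs 4 disk operations
    (read data, read parity, write data, write parity), so P*(N-1)/4."""
    return per_spindle_iops * (n_disks - 1) / 4

def raid_dp_write_iops(per_spindle_iops, n_disks):
    """RAID-DP estimate per the poster: double parity for capacity purposes,
    but (by virtue of NetApp's write layout) no read-modify-write penalty,
    so writes scale with the data spindles: P*(N-2)."""
    return per_spindle_iops * (n_disks - 2)

# Dell MD3000i: 8 x 15K SAS at ~115 IOPS/spindle, RAID 5
dell = raid5_write_iops(115, 8)          # ~200 IOPS
# StoreVault S500: 12 x 7200rpm SATA at ~60 IOPS/spindle, RAID-DP
storevault = raid_dp_write_iops(60, 12)  # 600 IOPS

print(f"Dell RAID 5 write IOPS:      {dell:.0f}")
print(f"StoreVault RAID-DP write IOPS: {storevault:.0f}")
```

Plugging in the numbers shows why the slower SATA spindles still come out roughly 3x ahead on random writes: the factor-of-4 RAID 5 penalty dominates the per-spindle difference.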

 
I have been doing a little more reading and I have a few more questions. I have been looking at an appliance from iStor; they have created a board/ASIC purpose-built for iSCSI. My question: is NetApp the only company that has no write penalty for its RAID level? iStor also uses a virtual RAID layer, so would that have any performance advantage? The iStor array I am looking at has 15 SATA II drives at 7200 RPM.
If you don't mind explaining how the IOPS/spindle number is figured, I can do some calculations on my own.
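For what it's worth, a rough first-principles estimate of IOPS/spindle comes from the drive's mechanics: a random IO costs about one average seek plus half a rotation. The seek times below are ballpark assumptions of mine, not measured specs, and real-world figures (like the 115 and 60 quoted above) come out lower once transfer time and queueing are included:

```python
# Theoretical per-spindle random IOPS from seek time and rotational latency.

def spindle_iops(avg_seek_ms, rpm):
    """One random IO ~= average seek + half a revolution of rotational delay."""
    rotational_latency_ms = 60000.0 / rpm / 2  # half a rev, in milliseconds
    service_time_ms = avg_seek_ms + rotational_latency_ms
    return 1000.0 / service_time_ms

# 15K SAS with an assumed ~3.5 ms average seek:
print(round(spindle_iops(3.5, 15000)))  # ~182 theoretical
# 7200 rpm SATA with an assumed ~8.5 ms average seek:
print(round(spindle_iops(8.5, 7200)))   # ~79 theoretical
```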
 
The technology used by NetApp at the virtualization layer to eliminate the write penalty is patented.
 
So does that mean that NetApp will always perform 3 to 4 times better than other systems at the same spindle count?
Also, why do systems like the Dell MD3000i have 4 NIC ports if there is no real advantage to having that many?
 
It means that random write performance will be 3 to 4 times better than RAID 5 or RAID 6 based systems, and 2 times better than RAID 10. Read performance is comparable.

The channel bandwidth of a single 1GbE port is easily an order of magnitude higher than the IO that the spindle counts mentioned are capable of supporting. Disk will be a bottleneck long before the channel.

If you were to use a system like the MD3000i in a situation with few writes and many sequential reads (a file server serving large files, for instance) and double the spindle count, then you'd have a chance of saturating a 1GbE channel, and more ports would be preferable.

The applications specifically mentioned here are Exchange and SQL. They have a small IO size and a random workload with a low read/write ratio. You'll hit the disk bottleneck long before the channel. One interface is sufficient.
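The bottleneck claim is easy to sanity-check numerically. A minimal sketch, assuming an 8 KB random IO size and a ~95% usable fraction of the 1GbE link (both illustrative assumptions):

```python
# Compare disk-limited random-IO throughput against one 1GbE link.

GBE_MB_S = 1000 / 8 * 0.95  # ~119 MB/s usable on a single 1GbE link (assumed)

def random_io_throughput_mb_s(spindles, iops_per_spindle, block_kb):
    """Aggregate MB/s the spindles can deliver at a given random IO size."""
    return spindles * iops_per_spindle * block_kb / 1024

# 8 x 15K SAS spindles doing 8 KB random IO:
disk_mb_s = random_io_throughput_mb_s(8, 115, 8)
print(f"Disk-limited: {disk_mb_s:.1f} MB/s vs ~{GBE_MB_S:.0f} MB/s channel")
```

At roughly 7 MB/s of random IO against ~119 MB/s of channel, the disks saturate more than an order of magnitude before one interface does, which is the point being made above.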



 
So how would I figure out what my virtual environment needs in terms of IOPS and MB/s? Also, the spec sheet on the IBM system lists that it can do 64,000 IOPS to cache and 22,000 IOPS to disk; how do they get these numbers?
 
These are numbers measured by engineering in a lab setup where the conditions are most optimal (unlimited number of disks, an idle high-performance host, a multi-threaded workload, a specific block-size workload, ...); these are numbers that are not obtained in real life :)

rgds,

R.
 
That makes sense, but how do I go about sizing a SAN for a virtual environment?
 
I love that game. The numbers listed on spec sheets...

They are always presented in the most favorable light. If a vendor lists 64,000 IOPS, they don't mention it's for reading the same 512-byte sector from cache; yup, that works. 22,000 IOPS over a single 1GbE interface? What they didn't tell you is that those are 512-byte sequential IOs.

Ask them for throughput figures that match those IOPS numbers. Ask them for the numbers for 4K or 8K random IOs with a 50:50 mix of reads and writes over a 500GB dataset.

You need to compare apples to apples. The IO numbers need to be produced with a size, pattern, and range that approximate the application you intend to put on the storage device. Try running your own benchmarks on an eval system using Jetstress or SIO.
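The "ask for matching throughput" advice boils down to a one-line conversion. A quick sketch, assuming the 512-byte block size that headline IOPS numbers typically rely on (an assumption, since the spec sheet doesn't state it):

```python
# Convert a spec-sheet IOPS claim into the throughput it implies.

def iops_to_mb_s(iops, block_bytes):
    """Throughput in MB/s implied by an IOPS figure at a given block size."""
    return iops * block_bytes / 1024 / 1024

print(f"{iops_to_mb_s(64000, 512):.1f} MB/s")   # 64,000 IOPS at 512 B from cache
print(f"{iops_to_mb_s(22000, 512):.1f} MB/s")   # 22,000 IOPS at 512 B to disk
print(f"{iops_to_mb_s(22000, 8192):.1f} MB/s")  # same IOPS at 8 KB blocks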




 
I am still working on getting some demo units. I have one from Dell on the way, but StoreVault acts like Canada is some crazy country to ship to. Anyway, I have found a few reviews saying the StoreVault is slow on writes when using iSCSI, and StoreVault confirmed that, with the overhead of the S500 unit, this was the case.
 