
SAN performance

Status
Not open for further replies.

peterve

IS-IT--Management
Mar 19, 2000
1,348
0
0
NL
Hi,

I'm building a project plan to implement a SAN in our environment. I'm struggling with some theoretical exercises, such as determining the performance between clients and the disks in the SAN array. In fact, a couple of components are not very clear to me.
Suppose I want to use an iSCSI-based SAN over Gigabit Ethernet. What would be the effective performance between the server that is hosting the SAN volumes and the SAN disks themselves?
What are my options to make the SAN perform as well as servers with built-in disks?

Where can I find an overview of all of the SAN components and their performance ?
(disks, disk array backplane, SAN controller, switches, iSCSI adapters, ethernet vs FC, ...)

thanks

--------------------------------------------------------------------
How can I believe in God when just last week I got my tongue caught in the roller of an electric typewriter?
---------------------------------------------------------------------
 
iSCSI performance is mostly going to be governed by your current network performance. Where a traditional SAN is a Fibre Channel attached environment, iSCSI will be limited by the slowest component on your network.

For disk array performance, this will all depend on your RAID group layout, LUN layout, RAID emulation, stripe size, the data on the volume, and small-block vs. large-block I/O. As you can see, there are many different measures, not just one.
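One of those measures, the RAID write penalty, can be put into numbers. This is a back-of-the-envelope sketch with commonly quoted penalty factors; the exact behavior depends on your array, so treat the figures as illustrative, not vendor-specific.

```python
# Translate a host I/O mix into backend disk I/Os for a RAID group.
# Penalty factors are textbook values, not measured on any specific array.

RAID_WRITE_PENALTY = {0: 1, 1: 2, 5: 4, 10: 2}  # backend I/Os per host write

def backend_iops(host_iops, read_pct, raid_level):
    """Estimate backend disk I/Os generated by a given host workload."""
    reads = host_iops * read_pct
    writes = host_iops * (1 - read_pct)
    return reads + writes * RAID_WRITE_PENALTY[raid_level]

# Example: 2000 host IOPS, 70% reads, on RAID-5
print(round(backend_iops(2000, 0.70, 5)))  # 3800 backend IOPS
```

The same 2000 host IOPS would cost only about 2600 backend IOPS on RAID-10, which is why the RAID layout matters so much for write-heavy workloads.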

I would start by gathering the spec sheets for each component in the environment from your vendor.

An iSCSI adapter is just a standard Network Interface Card with an iSCSI driver on top.
 
Suppose I'm using dedicated cabling, a TOE or iSCSI HBA interface, and a single 1Gbit connection between the server that is hosting the SAN volume and the switch/controller.
What would be the effective throughput between the server and the disks holding the data?
60MByte/sec? 100MByte/sec? 300MByte/sec?

--------------------------------------------------------------------
How can I believe in God when just last week I got my tongue caught in the roller of an electric typewriter?
---------------------------------------------------------------------
 
Theoretically, 100MB/s but you must take off at least 20% of that for TCP overhead.
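That rule of thumb works out like this (the 20% overhead figure is the post's assumption, not a measured value; real overhead varies with frame size, jumbo frames, and the TCP stack):

```python
# Rough effective-throughput estimate for iSCSI over one Gigabit Ethernet link.

gige_bits_per_sec = 1_000_000_000
raw_mb_s = gige_bits_per_sec / 8 / 1_000_000   # 125 MB/s raw line rate
payload_mb_s = 100                             # the "theoretical 100MB/s" quoted above
tcp_overhead = 0.20                            # assumed minimum TCP/iSCSI overhead

effective_mb_s = payload_mb_s * (1 - tcp_overhead)
print(effective_mb_s)  # 80.0 MB/s best case on a single 1 Gbit link
```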
 
On top of iSCSI performance limitations, you have to look at your SAN volume performance as well. Disk arrays might not be able to fill up an iSCSI pipe, especially under certain configurations. Other arrays might easily be able to fill multiple iSCSI ports at full (effective) bandwidth.

The question is sort of like "How much power can my stereo provide? I'm running 14-gauge speaker wire," when we don't know anything about your amplifier or your speakers.

Assuming you use a real TOE or iSCSI HBA, you won't have that much of an issue at the host side. So figure 80MB/sec (as comtec17 replied) ~if~ your disk can serve it that fast. Concurrent multi-host access? How many iSCSI ports on the iSCSI array? How many hosts? etc.
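Those concurrency questions matter because the array's iSCSI ports are shared. As a simple fair-share sketch, with assumed port counts and the ~80MB/sec effective figure from above:

```python
# Fair-share bandwidth per host when several hosts share the array's iSCSI ports.
# Port count and per-port throughput are illustrative assumptions.

def per_host_mb_s(array_ports, port_mb_s, hosts):
    """Best-case per-host bandwidth if all hosts push I/O concurrently."""
    total = array_ports * port_mb_s
    return total / max(hosts, 1)

# 2 array ports at ~80 MB/s effective each, 8 hosts hitting them at once
print(per_host_mb_s(2, 80, 8))  # 20.0 MB/s per host, best case
```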

--
Bill Plein
a.k.a. squiddog
Contact me at
 
What would be typical throughput on the disk side?
Everybody talks about I/Os per second (for both the disks and the SAN controller), but nobody has mentioned the effective throughput inside the disk array, between the disk array and the controller, and from the controller to the switch.

--------------------------------------------------------------------
How can I believe in God when just last week I got my tongue caught in the roller of an electric typewriter?
---------------------------------------------------------------------
 
Throughput is only one side of the performance coin; IOPS is the other. Each array has a particular performance personality, or fingerprint.

But you can summarize: arrays can never ~sustain~ more IOPS or more throughput than the sum of their spindles, and it's all downhill from there.

Of course, that's oversimplifying it. My FC4700 can sustain over 100MB/sec on a single port, sequential reads, against an 8+1 RAID-5 group. It can write at 40-80MB/s sustained as well.

But most apps don't do sustained large block sequential reads, so you need to dig deeper.

Before you start trying to assess a total solution (SAN plus disk), you need to understand how to measure and assess the disk, because you can never exceed the speed of your array.
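The "sum of the spindles" ceiling can be sketched numerically. The per-disk figures below are assumed (a 10k-15k RPM drive of that era is often quoted around 150-180 random IOPS); plug in your own disks' spec-sheet numbers.

```python
# Upper bound on sustained array performance: the sum of what its spindles
# can deliver. Per-disk ratings here are illustrative assumptions.

def array_ceiling(spindles, iops_per_disk=180, mb_s_per_disk=40):
    """Best-case sustained random IOPS and sequential MB/s for an array;
    it can never sustain more than its disks collectively deliver."""
    return spindles * iops_per_disk, spindles * mb_s_per_disk

# An 8+1 RAID-5 group like the FC4700 example above: 9 spindles
iops, mb_s = array_ceiling(9)
print(iops, mb_s)  # 1620 IOPS and 360 MB/s upper bounds
```

Note the 360 MB/s figure is a backend ceiling only; a single iSCSI or FC front-end port would cap the host well below that, which is why the FC4700 example tops out around 100MB/sec per port.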

--
Bill Plein
a.k.a. squiddog
Contact me at
 
Fundamentally speaking, FC will outperform Ethernet; they are different protocols designed for different purposes.

Running storage over a LAN can be more attractive from a price point perspective, and it can work well depending on the situation. But IMO, this is a case of getting what you pay for.

Any disk vendor can supply you with numbers on the equipment they sell. Once you narrow down your choices, I'd ask each of them for customer referrals.

The most important thing, though, is to know what you need beforehand so that you can find a solution that fits your needs.
 
Have you looked at EqualLogic? We have seen impressive performance on both Exchange and file sharing, and we're hoping to look into SQL in the next few months. We have several hundred thousand mailboxes with no complaints. If I have time I will try to run some performance numbers, but I rarely ever touch those servers.
 