
network teaming vs disk performance formula

Status: Not open for further replies.

zenacomp (Technical User), Jan 15, 2010, US
Hey Guys,
I'm trying to determine how many network cards I want to use for link aggregation on a switch. The number of gigabit network cards I need depends on the speed of my RAID 5 array. The question is: what is the formula for array performance to determine how many network cards I should use in 802.3ad? The Cheetah 15k.7 maxes out at about 204 MB/s. With RAID 5, if I had 4 drives, I guess you could say I really only have 3 for reads/writes. That gives about 612 MB/s, so I'd need 5 gigabit cards (4.7 Gb/s). Thoughts? Is there a formula?
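As a rough sketch of that sizing logic (this is just the optimistic sequential-read estimate described above, not a vendor formula; real arrays and real LAG hashing will come in lower):

```python
import math

# Back-of-envelope sizing sketch. Assumptions: RAID 5 sequential reads
# stripe across n-1 data drives at roughly the per-drive rate, and each
# 802.3ad link carries close to a full 1 Gb/s of payload.

def nics_needed(drives, drive_mbps, link_gbps=1.0):
    """Estimate gigabit links needed to match a RAID 5 sequential read rate."""
    array_mbps = (drives - 1) * drive_mbps  # one drive's worth goes to parity
    array_gbps = array_mbps * 8 / 1000      # MB/s -> Gb/s (decimal units)
    return math.ceil(array_gbps / link_gbps)

print(nics_needed(4, 204))  # 3 * 204 MB/s = 612 MB/s ~ 4.9 Gb/s -> 5 links
```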
 
Network performance and RAID performance are independent of each other. I am assuming you are connecting to an iSCSI storage device with internal RAID adapters that let you manage your storage. The network cards you use to connect to the storage device have no bearing on RAID performance.
 
Not true. If I had a 66 MHz PCI bus, my array performance would suffer from the PCI bottleneck. What I'm looking at is disk performance sizing with link aggregation. It's not a SAN, it's just a RAID controller. If I have 8 network cards, the array needs to be able to utilize the bandwidth, or it's pointless to have 8 LAG connections.
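For what it's worth, the bus-bottleneck point checks out on paper (a quick sanity check, assuming the 64-bit/66 MHz variant of conventional PCI; the common 32-bit/33 MHz flavour would be a quarter of this):

```python
# Theoretical peak for a conventional PCI bus. Assumption: 64-bit-wide
# bus clocked at 66 MHz, one transfer per clock.
def pci_peak_mbps(width_bits=64, clock_mhz=66):
    # bytes per transfer * million transfers per second -> MB/s
    return width_bits / 8 * clock_mhz

print(pci_peak_mbps())  # 528.0 MB/s, below the ~612 MB/s array estimate
```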
 
Burst and sustained transfer rates are two different things. A 4-disk RAID 5 array isn't fast by any means, and definitely wouldn't fill 8 gigabit NICs.
 
I may need to rephrase my question. Does anyone have a formula for sizing RAID 5 and gigabit link aggregation?
 
No formula from me, but have you tried seeing what IOPS you're getting from the arrays at the moment?

Have you used Iometer to test throughput with 1, 2, 3, and 4 links?

What I am trying to say is that by using something like Iometer and increasing your available bandwidth to the SAN, you should be able to see at what point you stop getting any improvement, because your bottleneck at that point is either the RAID controller or the disks themselves.

Simon

The real world is not about exam scores, it's about ability.

 
Unfortunately, I'm looking at sizing for new builds, so I have to work with the specs from the manufacturers. Of course, everything has to match up to prevent any bottlenecks: the array controller, the slot it sits in, the drive specs, and even the RAID technology all impact performance. My original thought was this:
With RAID 5, if I had 4 drives, I guess you could say I really only have 3 for reads/writes. 3 Cheetah 15k.7 drives are about 612 MB/s combined, so I need about 5 gigabit cards (4.7 Gb/s) using 802.3ad (LAG) to match up.
 
What kind of workload do you anticipate? Is it a purely sequential workload, streaming video for instance, or a random workload like a database? It makes a big difference.

J
 
In a RAID 5 you cannot simply multiply the max performance of a drive by the number of drives. RAID 5 has an immense performance hit for writes, because for each write the parity must be updated. Read performance is OK, as you do not have that parity impact. Then there's the filesystem type you are going to use (journaled or not, e.g. ext2 vs. ext3 in Linux), which can have a huge impact on performance as well. And there's the TCP/IP overhead to consider on top of that.
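To put a number on that write penalty, here is an illustrative random-I/O estimate using the classic RAID 5 small-write penalty of 4 back-end I/Os per write (read data, read parity, write both back). The ~180 IOPS per 15k drive and the 70/30 read/write mix are ballpark assumptions, not measured figures:

```python
# Illustrative RAID 5 random-I/O estimate. Assumption: each small random
# write costs 4 back-end I/Os (read data, read parity, write data,
# write parity), while each read costs 1.

def raid5_effective_iops(drives, drive_iops, read_pct):
    """Rough front-end IOPS for a RAID 5 set; read_pct is 0.0-1.0."""
    raw = drives * drive_iops                    # total back-end IOPS
    write_pct = 1.0 - read_pct
    return raw / (read_pct * 1 + write_pct * 4)  # write penalty of 4

# 4 x 15k drives at ~180 IOPS each, 70% read / 30% write mix:
print(round(raid5_effective_iops(4, 180, 0.7)))  # ~379 IOPS front-end
```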

And regarding the network cards, you can also buy cards with 4 ports ;-), leaving you with a single card (or 2 dual-port cards for redundancy).

rgds,

R.

NetApp Certified NCDA/NCIE-SAN
 
I have to ask, have you decided on vendors yet? If so, I would probably have a chat with their pre-sales tech guys and get some advice from them.

You may also have to consider a tiered approach to your storage needs: you would want nice fast disks for the DB and slower disks for the file side of things.


Simon

The real world is not about exam scores, it's about ability.

 
So far I haven't heard of a disk array sizing method for LAG. I'll have to see what my vendors say. My question was hypothetical; I'm just looking for the sizing method for now. Quad-port NICs are nice to have if your hardware has the speed to utilize them. Thanks, guys.
 
You also have to know about the hosts connecting to this array and how they will utilize those links. For example, VMware has different options for load balancing across links for iSCSI arrays.
 