
expected speed with Gbit ethernet?

Status
Not open for further replies.

jon1117

Technical User
Feb 10, 2003
27
US
Ok. Just upgraded my network to Gbit and I don't feel I'm getting anywhere near the speed I should. I have a 3Com 12-port Gbit switch. I have a WinNT4 server, a NAS cabinet and workstations connected using Cat6 cable, all with Gbit cards; the NAS runs dual cards. I recently had to back up the data from the NAS, and it took forever to copy the 350 GB of data from it to one of the workstations. I monitored the speed most of the time and was getting, at max, 14-15 MB/sec, making the transfer take almost 17 hours.
Now if I do the math, 1000 Mbit/sec = 125 MB/sec, and although I don't expect to get anywhere near the 125 MB/sec that's theoretically possible, I would still expect at least 50% of that. I was getting 10 MB/sec or so when connected at 10/100, and I didn't spend all this money to upgrade to Gbit for a mere 3-4 MB/sec boost. What kind of speeds should be expected from a Gbit setup? And can anyone give me an idea of what might be restricting me?
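For anyone who wants to sanity-check the arithmetic, here is a quick sketch of the conversions in plain Python (the function names are just illustrative):

```python
def mbit_to_mbyte(mbit_per_s):
    """Nominal line rate in Mbit/s converted to MB/s (8 bits per byte)."""
    return mbit_per_s / 8.0

def copy_hours(gigabytes, mbyte_per_s):
    """Hours needed to move a given amount of data at a sustained MB/s rate."""
    return gigabytes * 1000.0 / mbyte_per_s / 3600.0

print(mbit_to_mbyte(1000))            # 125.0 -- the theoretical gigabit ceiling
print(round(copy_hours(350, 15), 1))  # 6.5 -- the 350 GB copy at a sustained 15 MB/s
```

Worth noting: at a sustained 15 MB/sec the 350 GB copy would have finished in about 6.5 hours, so a 17-hour copy suggests the average rate was well below the 15 MB/sec peak.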
 
Well, let's see...

Cat6 isn't an issue.

12-port Gig switch from 3Com: is autonegotiation turned on? If so, turn it off! Check your switch ports one by one, and your workstation NICs one by one.

Then look at the NAS specs and the data transfer rate it supports: not the NIC, but the transfer rate leaving the NAS.

Just some thoughts.
 
Also take into account the speed the server and workstation can read and write the data to disk.
 
Using Iperf to measure throughput between Gig-attached devices, the best I have seen is 350 Mbit/s. However, I have never seen a disk subsystem that can keep up with that.

When doing large copies, 15 GB and above, I usually use xcopy. Windows Explorer sends many SMB commands prior to starting the file copy, where xcopy does not. I have seen significant improvements in transfer speeds by using the DOS command.

As mentioned, duplex could be an issue. Here is a copy of a posting I submitted on another thread that describes how to use Iperf to measure the throughput between your devices. If you are getting very low throughput, 14 Mbit/s and below, you could have a duplex issue.

Here is how you troubleshoot a network slowdown such as this:

1) Go out to
2) Download IPERF, it is a 100KB executable.

3) Put a copy of IPERF on the server. From the DOS prompt run iperf -s. This will put the server in listening mode.

4) Put a copy of IPERF on a workstation that is experiencing poor response times.

5) At the workstation's DOS prompt type iperf -c [server ip address]. IPERF on the workstation will connect to the IPERF on the server on TCP port 5001. For 10 seconds it will transfer as much data as it can. At the end of 10 seconds, it will report the throughput of the network. If this value is less than 80 megabits per second, you have problems.

6) If the throughput is good, reverse the test. Run iperf -s on the workstation and iperf -c [workstation ip address] on the server. If this throughput is bad, you have a duplex mismatch. When testing with IPERF, you will find that if a duplex mismatch is present, you will get good throughput one direction and poor throughput in the other.

7) If you get good throughput in both directions, your problem is not the network infrastructure.
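If you don't have Iperf handy, the same idea can be sketched in a few lines of Python. This is only a toy (a single TCP stream over loopback, whereas real Iperf listens on TCP 5001 and runs between two hosts); it is meant to show what the tool measures, not to replace testing across your actual switch:

```python
import socket
import threading
import time

CHUNK = 64 * 1024   # bytes per send call

def run_server(ready, port_box):
    """Listen on an ephemeral port, then read and discard all data."""
    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))
    srv.listen(1)
    port_box.append(srv.getsockname()[1])
    ready.set()
    conn, _ = srv.accept()
    while conn.recv(CHUNK):
        pass
    conn.close()
    srv.close()

def run_client(port, duration):
    """Send as fast as possible for `duration` seconds; return Mbit/s."""
    sock = socket.create_connection(("127.0.0.1", port))
    payload = b"\x00" * CHUNK
    sent = 0
    deadline = time.time() + duration
    while time.time() < deadline:
        sock.sendall(payload)
        sent += len(payload)
    sock.close()
    return sent * 8 / duration / 1e6

def measure(duration=1.0):
    """Pair a listener and a sender, like `iperf -s` / `iperf -c`."""
    ready, port_box = threading.Event(), []
    t = threading.Thread(target=run_server, args=(ready, port_box), daemon=True)
    t.start()
    ready.wait()
    mbps = run_client(port_box[0], duration)
    t.join()
    return mbps

print(f"loopback throughput: {measure():.0f} Mbit/s")
```

Over loopback this only exercises the TCP stack (expect a number far above gigabit); the value of Iperf is running the same client/server pair across the real switch, in both directions, as described in the steps above.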

Here is one other thing to check. During the file transfer run netstat -s on both the client and the server. Under the TCP section, check out the Segments Retransmitted. If this value is incrementing, you are losing frames between the client and the server. This can be another indication of a duplex problem, or bad cabling.

Happy troubleshooting!

mpennac
 
Are the 3Com ports 10/100/1000? If so, then you have to verify what speed each device is actually coming onto the network at; hopefully this is a managed switch so you can look at these things. As previous posts said, speed/duplex issues make big differences in transfer speeds. If devices are only connecting at 100 Mbit/s, then the max transfer speed will be about 12 MB per second, which you will never see due to overhead and the switch doing other functions.
 
Another point is that at higher speeds we SHOULD have larger packets, but the standard packet size is still near 1500 bytes. Jumbo frames can raise this to near 9000 bytes per packet, which helps reduce CPU consumption and increase speed.
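The saving is easy to put numbers on. As a rough sketch (assuming 20-byte IP and 20-byte TCP headers inside the MTU, plus 38 bytes of Ethernet framing, preamble and inter-frame gap per packet):

```python
def tcp_efficiency(mtu):
    """Fraction of on-wire bytes that is TCP payload for a given MTU.
    Assumes 40 bytes of IP+TCP headers inside the MTU and 38 bytes
    of per-frame Ethernet overhead (header, FCS, preamble, gap)."""
    return (mtu - 40) / (mtu + 38)

print(f"1500-byte MTU: {tcp_efficiency(1500):.1%} payload")  # ~94.9%
print(f"9000-byte MTU: {tcp_efficiency(9000):.1%} payload")  # ~99.1%
```

The wire-efficiency gain is modest; the bigger win is that a 9000-byte MTU means roughly one sixth as many frames for the same data, and therefore one sixth the per-packet interrupt and CPU load.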

I tried to remain child-like, all I achieved was childish.
 
jimbopalmer, and all:

Just make sure the client side, and not just the NIC, is set properly with MTU and RWIN for those jumbo frames. Otherwise a 10 Mbit/s half-duplex connection will appear fast in the settings.

This is something you will have to do by hand. The post by mpennac I starred above will give you the basis for setting these values. I use the freeware DrTCP utility to set client-side settings.
And this is an area where you really have to stay in touch with the manufacturer of GigE adapters and switches to be certain you have the latest patches, firmware and drivers.

There has been a lot of disappointment by end-users in GigE; it is decidedly not as Plug-and-Play as 10/100 FAST ethernet. You really need to test, adjust and fiddle.

And this is one of those hardware implementations where the more equipment you have from a sole vendor the better off you will be.

Finally, investigate the NAS in this implementation. It could well be a bus speed issue due to the configuration of the hard drives/controllers. It would make sense, if possible, to split the drives onto separate controllers. It is not clear from the original posting whether the NAS is using SCSI, SATA, IDE or USB on the bus side of the hard disks, or whether any RAID setup is involved. The bottleneck might be at the hard disk bus and not the GigE at all.
 
Hi Jon,
some good posts. But not all are relevant. Here are 2 more cents from a networking guy.

1) Make sure that you have 1 Gbit/s full duplex enabled on both sides. If necessary, disable autonegotiation.

2) Do not use jumbo frames, please. Sniffer tests have shown that you end up with compatibility problems and sometimes even lower speed than with the normal frame size. Problems are often caused by poor NIC drivers!

3) 14-15 MByte/s is not a bad rate.
The math goes like this:
- 15 MByte/s * 8 bits per byte = 120 Mbit/s.
- Depending on the protocol used for the transmission, you need to add 20%-30% overhead for protocol headers, acknowledgements, etc. Sometimes even more. So we end up with about 150 Mbit/s on the wire. Not bad at all.
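That estimate as a small Python helper, with the 25% overhead figure as an explicit (and adjustable) assumption:

```python
def wire_rate_mbit(payload_mbyte_per_s, overhead=0.25):
    """On-wire bit rate implied by a measured payload rate, padded
    by `overhead` for protocol headers and acknowledgements."""
    return payload_mbyte_per_s * 8 * (1 + overhead)

print(wire_rate_mbit(15))  # 150.0 Mbit/s for the observed 15 MByte/s
```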

You could max out an FDX gigabit link to 60-70% in theory. But
a) not all switches are fast enough.
b) your limitation could be either your NIC (which one did you buy?) or the PC itself: the internal PCI bus structure, and the operating system having to copy the data in. (Remember, it is Microsoft.)

Things to check for the bottleneck:
- Utilization of your NT4 server, memory usage
- TCP/IP driver (MS drivers do not have the best throughput). Try to find a driver that supports TCP window scaling, and check whether the NAS supports this too. This can boost your throughput a bit.
If that does not help, you need to accept that a nominally high bandwidth is not always the cure-all. It just opens up a new bottleneck, which in my opinion will be the machine running standard PC hardware and a Microsoft OS.

Regards
Matthias
 
Guys, good info...

I am trying to decide between Cat 5e and Cat 6 cabling. I am using the IPERF utility to test throughput between my machine and our file server.

My machine is GB and connects directly into a Cisco GB switch, which the file server also connects to with dual/teamed GB adapters. I was thinking that Cat 5e would give me about 300 Mbit/s throughput (from what I have read) and that Cat 6, with GB on both ends, would in theory give me the full GB throughput.

Transfer rates with both types of cable (5e and 6) are about 400 Mbit/s, which is very good, but why pay the money for Cat 6 if we can't utilize the GB performance? Are my thoughts accurate?

Thanks in advance!!
 
Just because you have a GB network card and a GB port doesn't necessarily mean you are going to connect at GB speed, correct? If I have GB on both ends but Cat 5 cable, a Cat 5 network drop and a Cat 5 patch panel, speed will step down to 100 Mbit/s per the Cat 5 spec, correct?
 
bdoub1eu:
correct.
Not correct. It will depend on the hardware. It might fall back to a workable speed, and this could be 100 Mbit/s, but it could be faster or slower. It could also try to talk at GB and just have lots of errors.
 
So how do you really know if you need Cat 6 or not since you really can't test unless you go ahead and upgrade the hardware?

I was using the IPERF utility as well as robocopy to see how fast I could copy and write 4 GB of data to our file server. I was also testing by connecting my laptop via Cat 6 cable straight into our GB backbone switch, and comparing that to being plugged into a Cat 5 jack going through a GB switch in another building. Tough to come up with definite results. If we can only read/write as fast as our disk subsystem will let us, 100 Mbit/s would probably work fine. We did upgrade the switch in the remote building I was testing in to GB, and we have GB fiber modules on both ends, so the pipe between that building and us is 10x what it was. I just don't see a lot of performance benefit in going to 6 or even 5e. Any ideas?
 
Hi there.
First and foremost:
Your link speed settings on switch and PC have only very, very little to do with the quality of your cable.

Cat 5e and Cat 6 have different electrical characteristics.
Main difference, to make a long story short: you get better shielding against noise and less near-end crosstalk (NEXT) from the other pairs. This means better signal quality, which is beneficial when you run cables at their maximum length (100 m), especially in an EMI-polluted environment. Plus you can run higher frequencies across the cable once you wanna try 10Gig over copper.

BUT:
Gigabit Ethernet supports both cabling standards, Cat5e and Cat6!
And to be honest, there are not too many networks in the field that support Cat6 across all components (cables, patch panels).

Link Speed: --> Totally different story.

You can and should set the link speed manually, so that there won't be any negotiation between the end station and the switch. Choose 1000Mbit/s and Full-Duplex to ensure maximum throughput.

The autoneg process is always done between switch and end station to ensure the best possible settings for speed and duplex.

But by no means will either side test the quality of your cable!!!
There is no mechanism in place that lets your connection fall back to 100 Mbit/s because of a Cat5e cable or a Cat 5 patch panel!! That stuff is not tested in autoneg. For details you may wanna check the 802.3 spec at IEEE.

That means:
in a normal office environment, Cat5e or Cat6 won't bring you different throughput results.
Best Regards
Matt
 
Good info Matttheknife...

But I thought a big selling point for Cat 5e and 6 was the GB throughput...

I guess I just misunderstood the whole process... I thought that 5e and 6 were GB-capable, and that if you had GB connections on both ends but Cat 5 cabling and a Cat 5 patch panel/cable (I thought Cat 5 couldn't do GB), it would step down to 100 Mbit/s... Is that wrong?

Thanks for the info!
 
Also, how do you set the speed manually? In the properties for my NIC I don't see the option to set it at 1000 Mbit/s full duplex, nor do I have that option on my server.
 
Hi bdoub1eu,

regarding the process, it seems you misunderstood.
As I pointed out before, autonegotiation does not test the quality of the cable.
Both sides (PC and switch) simply check for the best possible parameters they can both enable.
They start at 1000-full; if this fails, then 1000-half, then 100-full, 100-half, etc...
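That priority walk can be sketched like this (a simplification of the real 802.3 resolution, which exchanges ability bit fields and also covers PHY types ignored here; the mode names are just labels for illustration):

```python
# Highest-priority mode first, per the fallback order described above.
PRIORITY = ["1000-full", "1000-half", "100-full", "100-half",
            "10-full", "10-half"]

def resolve(nic_modes, switch_modes):
    """Return the best mode advertised by both link partners."""
    common = set(nic_modes) & set(switch_modes)
    for mode in PRIORITY:
        if mode in common:
            return mode
    return None  # nothing in common: the link will not come up

# A gigabit NIC behind a Fast Ethernet switch lands on 100-full:
print(resolve(["1000-full", "100-full", "100-half"],
              ["100-full", "100-half"]))
```

Note that cable quality appears nowhere in this process, which is exactly the point: the partners only compare what they advertise.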

Where to set speed and duplex for the NIC?
Go to the Device Manager (My Computer --> Properties), choose your NIC there, and right-click for Properties. Under Advanced (Settings) you should find the settings for duplex and speed. The exact names of the options available depend on the actual driver of the NIC.

When you set speed and duplex manually, make sure to test that your link goes up and stays up.
(What kind of switch are you using?)
Let me know if you have any problems.

Cheers
Matt

 
Under the NIC in Device Manager, speed and duplex is set to auto, and if I change it I don't have an option for GB full duplex... just 10 half/full and 100 half/full.

We have Cisco switches, 3750's...
 
Correct, I do not think there IS a 1000BASE-T half duplex, not that anyone will miss it. Gig is always full duplex.

I tried to remain child-like, all I achieved was childish.
 