Hi,
I have two HP ML350 servers, as follows:
"S1" is an ML350 G4 with 3GHz CPU, 2GB RAM, Ultra320 15K disks in RAID 5 and built in gigabit NIC running SBS 2003 Premium with SP2.
"S2" is a brand new ML350 G5 with quad core CPU, 2GB RAM, SAS 10K disks in RAID 5 and built in gigabit NIC running Windows Server 2003 Standard R2 SP2.
I have been doing some testing on the network, and the performance is nowhere near what I would hope for or expect. I used the following configurations and tests:
CONFIG1: Both servers connected to unbranded 100Mbps switch at 100Mbps
CONFIG2: Both servers connected together with a cross-over cable at 1Gbps
CONFIG3: Both servers connected to a brand new Netgear 1Gbps switch at 1Gbps
TEST1: From the console of S1, use Windows Explorer to copy an i386 folder from S2 to S1.
TEST2: From the console of S2, use Windows Explorer to copy an i386 folder from S1 to S2.
CONFIG1 TEST1 throughput = 26 Mbps
CONFIG1 TEST2 throughput = 46 Mbps
CONFIG2 TEST1 throughput = 29 Mbps
CONFIG2 TEST2 throughput = 67 Mbps
CONFIG3 TEST1 throughput = 42 Mbps
CONFIG3 TEST2 throughput = 72 Mbps
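The figures above are effective throughput, i.e. bytes copied times eight divided by the wall-clock copy time. If anyone wants to reproduce the measurement without a stopwatch, here is a minimal sketch (assuming Python is installed; the UNC path and destination below are placeholders, not my real shares):

# Sketch: time a folder copy over the network and report effective Mbps.
# SRC and DST are placeholders, not my real share/paths.
import os, shutil, time

SRC = r"\\S2\share\i386"     # hypothetical UNC path to the remote i386 folder
DST = r"C:\temp\i386_copy"   # hypothetical local destination (must not exist yet)

start = time.time()
shutil.copytree(SRC, DST)
elapsed = time.time() - start

# Add up the bytes that actually landed on disk.
total = 0
for root, dirs, files in os.walk(DST):
    for name in files:
        total += os.path.getsize(os.path.join(root, name))

print("Copied %.1f MB in %.1f s = %.1f Mbps"
      % (total / 1e6, elapsed, total * 8 / elapsed / 1e6))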
The tests were performed after cold boots on both servers, with no other devices connected to the network. The TCP offload engine is disabled on S2. Both NICs and switch ports were set to "auto" for speed and duplex. With the gigabit switch and the crossover cable, both NICs reported that they were running at 1 Gbps.
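As an extra sanity check on the auto-negotiation, the driver-reported link speed can also be read programmatically; a rough sketch (the "wmi" Python package is an assumption on my part, and the value is only as trustworthy as the driver):

# Sketch: read the driver-reported link speed via WMI (assumes the third-party
# "wmi" package is installed). Speed is in bits per second and can be
# blank/None for some adapters, so treat it as a sanity check only.
import wmi

c = wmi.WMI()
for nic in c.Win32_NetworkAdapter():
    if nic.Speed:
        print("%s: %.0f Mbps" % (nic.Name, int(nic.Speed) / 1e6))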
I can live with the performance on the 100 Mbps switch, but I can't understand why increasing the speed of the network to 1 Gbps (a tenfold theoretical increase) yields such a miserable real-world improvement.
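For reference, 1 Gbps is roughly 125 MB/s of raw bandwidth, while my best result above (72 Mbps) works out to only about 9 MB/s. To take the disks and SMB out of the equation, I'm thinking of running a raw memory-to-memory TCP test between the two boxes, something along these lines (a rough sketch; the port is arbitrary and the host name is supplied on the command line):

# Sketch: memory-to-memory TCP throughput test (no disk involved).
# Run "python tcptest.py serve" on one server and
# "python tcptest.py send <host>" on the other.
import socket, sys, time

PORT = 5001                 # arbitrary test port
CHUNK = 64 * 1024           # 64 KB per send/recv
TOTAL = 256 * 1024 * 1024   # push 256 MB of zeros

def serve():
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind(("", PORT))
    srv.listen(1)
    conn, addr = srv.accept()
    received = 0
    start = time.time()
    while True:
        data = conn.recv(CHUNK)
        if not data:
            break
        received += len(data)
    elapsed = time.time() - start
    print("Received %.0f MB in %.1f s = %.1f Mbps"
          % (received / 1e6, elapsed, received * 8 / elapsed / 1e6))

def send(host):
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    sock.connect((host, PORT))
    payload = b"\0" * CHUNK
    sent = 0
    while sent < TOTAL:
        sock.sendall(payload)
        sent += CHUNK
    sock.close()

if __name__ == "__main__":
    if len(sys.argv) > 1 and sys.argv[1] == "send":
        send(sys.argv[2])
    else:
        serve()

If that also tops out well below gigabit rates, the bottleneck is presumably in the NICs, drivers or TCP stack rather than the disks; if it runs near wire speed, I'll look at the disks and SMB instead.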
Can anyone suggest where to start troubleshooting this?
Thanks,
Dave.