Hi,
I badly need a TCP/IP guru here.
Does anyone know whether there is any relation between how fast TCP throughput recovers after a packet loss and the loss rate of the client's ACKs?
For example, I have an environment with a peak throughput of, let's say, 10 Mbps between a server and a client. When a packet from the server to the client is lost, TCP immediately reduces the transmit rate and then gradually increases it until it reaches the maximum value again. However, in an almost identical environment, it takes much longer (about 10 times longer) for the throughput to recover to its maximum after a packet loss. One of the differences between the two environments is that the second one has a much higher loss rate on the uplink (client to server). I was wondering whether this could be related to the recovery speed, or whether I should look for other reasons.
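To make the question more concrete, here is the kind of rough model I have in mind (a minimal sketch, not a real TCP stack): I'm assuming Reno-style congestion avoidance where the window grows by 1/cwnd per ACK received, one ACK per data segment, independent ACK loss, and ignoring cumulative/delayed ACKs and retransmission timeouts. All window sizes and loss rates below are made-up illustration values.

```python
import random

def rtts_to_recover(cwnd_start, cwnd_target, ack_loss_rate, seed=0):
    """Count RTTs for cwnd (in segments) to grow from cwnd_start back to
    cwnd_target, when each ACK is lost independently with ack_loss_rate."""
    rng = random.Random(seed)
    cwnd = float(cwnd_start)
    rtts = 0
    while cwnd < cwnd_target:
        # One ACK per in-flight segment; some never make it back on the uplink.
        acks_received = sum(1 for _ in range(int(cwnd))
                            if rng.random() > ack_loss_rate)
        # Congestion avoidance: each surviving ACK adds 1/cwnd of a segment.
        cwnd += acks_received / cwnd
        rtts += 1
    return rtts

print(rtts_to_recover(50, 100, ack_loss_rate=0.0))   # clean uplink
print(rtts_to_recover(50, 100, ack_loss_rate=0.5))   # lossy uplink: roughly 2x slower
```

In this toy model the recovery time scales only with the fraction of ACKs that arrive, so a 50% ACK loss would give roughly 2x slower recovery, not the 10x I'm seeing. That's part of why I'm unsure whether ACK loss alone explains it or whether something else (e.g. timeouts triggered by runs of lost ACKs) is going on.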
Thanks,
Radu