
Calculating the optimum burst size


tcpnagy (Vendor), Sep 23, 2011
I am currently building a high-speed network running TCP/IP. There will be a requirement to police the end users' traffic.

Q) The question I have is: what size should the burst be? The information I have so far is:

Burst Rate = 2 x Round Trip Time x line speed

Q) My next question is: what impact does this have on TCP performance? I guess the idea of the 2 x RTT is to allow a complete window to be sent and an acknowledgement to be received by the sender. But what happens if the window size is 16Mbytes?

Q) I am also aware that the TCP window size is dynamic; however, the window size is regulated by the receiving device. I am not aware that intermediate devices such as routers and switches can influence this.

Q) Is there a mathematical formula to calculate the burst size that gives the optimum TCP throughput, derived from the physical line speed, the maximum contracted BE (best-effort) line rate and the round trip time?
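To make the rule of thumb quoted above concrete, here is a rough sketch of the arithmetic; the 100Mbps / 10ms figures are purely illustrative.

    # Rule of thumb quoted above: Burst = 2 x Round Trip Time x line speed.
    # The figures below are illustrative only.

    def burst_bytes(line_rate_bps, rtt_s):
        """Rule-of-thumb burst size in bytes (2 x RTT x line rate)."""
        burst_bits = 2 * rtt_s * line_rate_bps
        return burst_bits / 8

    # Example: 100 Mbps access line with a 10 ms round trip time
    print(burst_bytes(100e6, 0.010))  # 250000.0 bytes, i.e. about 250 KB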

 
Routers and switches CAN 'break' RWIN if the link speed changes.

If you have a gigabit network, then a 54Mbps wireless link (about 22Mbps of real throughput), then another gigabit link, packets will queue up in front of the wireless leg; if that buffer overflows you lose data that has to be resent.

Switch and router buffers that are too small can foil a plan to increase the burst size.
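To put a rough number on how quickly a buffer like that fills, a small sketch; the rates and buffer size are made up for illustration.

    # Time for a fixed buffer to overflow when traffic arrives faster than it drains.
    # Illustrative figures: 1 Gbps in, 22 Mbps out, 1 MB of buffer.

    def seconds_to_overflow(buffer_bytes, in_bps, out_bps):
        """How long a full-rate burst can last before the buffer overflows."""
        fill_rate_bps = in_bps - out_bps
        return (buffer_bytes * 8) / fill_rate_bps

    print(seconds_to_overflow(1_000_000, 1e9, 22e6))  # roughly 0.008 seconds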

I tried to remain child-like; all I achieved was childish.
 
For the purpose of this question, assume that you have more than enough bandwidth in the core.

Also, let's put some numbers on this. Let's say we have a physical access speed of 100Mbps, but we want to limit the user's access to 50Mbps.

The data will still be sent at line speed for a short period of time, and then the line will carry no data for a period of time. The split will be done in such a way as to ensure that the average is 50Mbps.

So my question is: how big are the chunks that can be sent, i.e. the burst?
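For what it's worth, this style of policing is usually modelled as a token bucket: tokens accumulate at the contracted rate (50Mbps here) and the bucket depth is exactly the burst being asked about. A minimal sketch, with made-up figures and names, just to illustrate the mechanics:

    import time

    # Toy single-rate token bucket policer (illustrative only).
    # rate_bps is the contracted average rate; burst_bytes is the bucket
    # depth, i.e. how much can be sent back-to-back at line rate.

    class TokenBucket:
        def __init__(self, rate_bps, burst_bytes):
            self.rate = rate_bps / 8.0          # refill rate in bytes per second
            self.capacity = float(burst_bytes)
            self.tokens = float(burst_bytes)
            self.last = time.monotonic()

        def allow(self, packet_bytes):
            now = time.monotonic()
            self.tokens = min(self.capacity,
                              self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= packet_bytes:
                self.tokens -= packet_bytes
                return True      # conforming: forward at line rate
            return False         # exceeding: drop (police) or queue (shape)

    # Example: 50 Mbps contracted rate with a 625,000 byte burst (0.1 s at 50 Mbps)
    policer = TokenBucket(rate_bps=50e6, burst_bytes=625_000)
    print(policer.allow(1500))   # True while tokens remain in the bucket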
 
Hi Tcp,

Burst rate is usually used in relation to contended networks. Taking your example above, the committed data rate for the user would be 50Mbps and the burst could be up to 100Mbps. However, in your description you explicitly say you want to limit them to 50Mbps max. Do you mean they should be allowed to go over 50Mbps for short periods (i.e. the burst)?

Whilst the TCP window size is dynamic as you mentioned, you may still need to set the maximum size if the pipe is short and fast.

i.e. for 50Mbps and an RTT of 10ms, the pipe can have 50,000,000 * 0.01 bits on it before a return bit is seen, i.e. 500,000 bits or 62,500 bytes. So the receive window should be at least 63Kbytes in size to allow a single TCP session to utilise the available bandwidth.
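The same bandwidth-delay arithmetic as a small sketch, in case you want to plug in your own figures:

    # Bandwidth-delay product: data in flight before the first ACK comes back.

    def bdp_bytes(rate_bps, rtt_s):
        """Bandwidth-delay product in bytes."""
        return rate_bps * rtt_s / 8

    print(bdp_bytes(50e6, 0.010))   # 62500.0 bytes, so an RWIN of at least ~63 KB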

There is another window to be aware of - the congestion window. Take a look at some TCP congestion control algorithms for the gory details, such as New Reno or CUBIC. Basically they scale the send window according to the ACKs. So if the RTT starts to increase (maybe due to queuing on a router), the sender will scale back the send window and throttle the connection to avoid excessive retransmissions.
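If it helps to picture the behaviour, here is a very crude AIMD-style sketch of a congestion window reacting to ACKs and losses. Real New Reno and CUBIC are considerably more involved (slow start, fast recovery, cubic growth), so treat this purely as an illustration:

    # Crude additive-increase / multiplicative-decrease illustration.
    # Not New Reno or CUBIC, just the basic idea behind them.

    MSS = 1460  # bytes per segment

    def on_ack(cwnd):
        return cwnd + MSS                  # congestion avoidance: grow slowly

    def on_loss(cwnd):
        return max(2 * MSS, cwnd // 2)     # back off hard when loss is detected

    cwnd = 10 * MSS
    for event in ["ack", "ack", "ack", "loss", "ack"]:
        cwnd = on_ack(cwnd) if event == "ack" else on_loss(cwnd)
        print(event, cwnd)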

Also, equipment such as network accelerators can transparently manipulate the RWIN sizes.

The question seems a little vague - did you want to know what mechanisms to use for limiting the data rate? If you are using a contracted line with committed rates and burst rates, then I would just calculate the RWIN sizes for the burst (maximum) and let the TCP congestion control handle the contention.

Cheers,
Scott
 
Thank you GeekyDeaks

Useful input.

Yes, you are correct in that the burst speed may well be up to a maximum of 100Mbps, while over a period of time the average must be 50Mbps.

For the RTT, I would assume 150ms. In this case you would need a larger buffer size.

But I still have a question about the TCP behaviour. I know that with the TCP window scale option you can potentially get a window size that is close to 1GB. For sure I will have a look at the algorithms that you mention.

I am aware that there are a number of different algorithms in use today. I would be interested to know which is most commonly used today, say in 80% or more of end systems.

If there are any working examples in use, I would be interested in knowing about them.

 
Hiya,

OK, for an RTT of 150ms and 50Mbps, the pipe holds approximately 7,500,000 bits. This is about 940Kbytes, or approximately 1MByte. For this kind of latency you need to make sure the O/S is configured to allow the receive window to grow to this size.
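Plugging those numbers into the same bandwidth-delay sketch as before:

    def bdp_bytes(rate_bps, rtt_s):        # same helper as in the earlier sketch
        return rate_bps * rtt_s / 8

    print(bdp_bytes(50e6, 0.150))   # 937500.0 bytes, i.e. roughly 940 KB (~1 MByte)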

I believe you are correct about the potential size of the receive window; however, the O/S usually puts a much lower limit on it. Where you set this depends on the O/S: for Windows it's a registry setting, for Linux it's a kernel parameter. The reason for this is that the receiver needs to allocate this memory for each socket established, which can become quite costly if you have a large number of 1MByte buffers. The maximum size is also only an issue when the session is active for more than a few seconds. Something like a web page access results in several short-lived TCP sessions, none of which will ramp the receive window up significantly.
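As an illustration of the per-socket side of this, an application can ask for a bigger receive buffer with SO_RCVBUF, but the O/S still caps it (on Linux the system-wide caps live in sysctls such as net.core.rmem_max and net.ipv4.tcp_rmem). A minimal sketch:

    import socket

    # Ask for roughly a 1 MByte receive buffer. The kernel may clamp this to its
    # own maximum (e.g. net.core.rmem_max on Linux), and explicitly setting
    # SO_RCVBUF normally disables the kernel's automatic buffer tuning for that
    # socket, so it should be done deliberately, and before connecting, so that
    # the window scale option is negotiated to match.

    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, 1_000_000)
    print(s.getsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF))  # value the kernel reports back
    s.close()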

I believe the majority of systems use New Reno for congestion control, but CUBIC seems to be the default for newer Linux kernels. I think Windows uses a hybrid algorithm with two congestion windows. At any rate, they tend to be fairly similar, but with slight tweaks for the newer technologies. To be honest, the point of the congestion control is to ensure a TCP session does not over-utilise the available bandwidth, so if you have two sessions active with receive windows large enough to max out the slowest link on the network, congestion control will cause both of them to back off to just under half the maximum speed.

If you are really interested in this, I would suggest you use Wireshark to take a trace of a typical TCP session and see how different O/S's handle the window resizing as they start receiving data.

HTH,
Scott
 