
t1 inbound congestion latency 1

Status
Not open for further replies.

jr0ck (IS-IT--Management) | Dec 23, 2007 | 7 | US
Hello all,
I am working with a Cisco 1720 router and T1 service. I noticed congestion latency of ~350 ms on both inbound and outbound traffic under heavy load. I enabled 'fair-queue' on the T1 serial interface (T1 WIC) and the *outbound* congestion latency went way down, to ~5 ms. However, any heavy *inbound* traffic that saturates the T1 still gives me the ~350 ms of latency. Enabling fair queueing or WRED on the FE0 interface that is connected to the switch does nothing. I have also tried the 'traffic-shape rate' command on FE0 with some different parameters, and it does help, but not much.
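For reference, this is roughly what I have configured (interface names are mine; the traffic-shape parameters are examples of what I tried, not exact values):

-----------------------------
! Fair queueing on the T1 serial interface -- this fixed the *outbound* latency
interface Serial0
 fair-queue
!
! Shaping attempt on the LAN-side interface (parameters illustrative)
interface FastEthernet0
 traffic-shape rate 1536000 38400 38400 1000
-----------------------------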

Can somebody explain to me how to reduce this inbound congestion latency I am seeing? Do newer service routers have better capabilities for dealing with this? I will post more details if needed.

Thank you! ,j
 
Unless you have control over the other side of the link, there's nothing you can do about it. As you've discovered, the tools you have available are for outbound traffic only. If you want to reduce the amount of traffic inbound on the T1, you have to control the other side.
 
Thanks for the reply..
The other side of the link is my ISP. I know they offer class-of-service options for a cost, but it seems that is really not what I want or need. I simply want some sort of congestion avoidance on the inbound, regardless of traffic type, as I am doing with traffic being sent. Should I be able to request that fair queueing be turned on on their end? What can be done?
Thanks again for the reply / replies !
,j
 
If anything, you probably want them to leave fair queueing on if the link is that busy. Otherwise, your low bandwidth traffic will be squeezed out by any high bandwidth flows and some of your applications/users will see even worse performance.

If this is a consistent problem then I think you really just need to upgrade to a faster link or add another T-1. They probably can turn on some congestion avoidance stuff like WRED, but I don't think that's a good solution. If the T-1 is this busy most of the time, that's a very good sign that you just need more bandwidth than you have available.
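If they do turn on congestion avoidance, WRED on their serial interface facing you would look roughly like this (a sketch only; the interface name and thresholds are made up, not their actual config):

-----------------------------
interface Serial0/0
 ! Drop packets probabilistically as the queue builds, instead of tail drop
 random-detect
 ! precedence 0 traffic: min threshold 20, max threshold 40, drop 1 in 10 at max
 random-detect precedence 0 20 40 10
-----------------------------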
 
Thanks for another reply, jn. It seems to me that on their end they are using FIFO, because of the latency seen during inbound congestion. What I was wondering is whether I should request that fair queueing be enabled on their end, and if it's common to do so. I respect your opinion that we may need more bandwidth, but it seems to me what I really want to do is smooth out the latency from bursty traffic. Even if I had more bandwidth, any machine or device capable of pulling data fast enough would end up saturating the link.
Thanks again for the ideas. ,j
 
Can you please post a 'sho int s1/0', or whatever interface the T1 is attached to...
 
Thanks for the reply,
Here is the 'sho int s0' output (below).

As described above, I already have flow-based WFQ enabled on the s0 interface, and outbound traffic behaves nicely with it enabled. It is the inbound traffic I am having congestion latency problems with. I am trying to have my ISP enable WFQ on their end, or let me know what options I have. I also have a trouble ticket open to address "dribbling" errors on the circuit. Although I am having this fixed, it is not the root of the problem; the root is FIFO queueing on their end. Thanks for any help or advice. ,j

-----------------------------
Serial0 is up, line protocol is up
Hardware is PQUICC with Fractional T1 CSU/DSU
Internet address is
MTU 1500 bytes, BW 1536 Kbit, DLY 20000 usec,
reliability 255/255, txload 8/255, rxload 55/255
Encapsulation PPP, loopback not set
Keepalive set (10 sec)
LCP Open
Listen: CDPCP
Open: IPCP
Last input 00:00:00, output 00:00:00, output hang never
Last clearing of "show interface" counters 5d11h
Input queue: 0/75/2714/0 (size/max/drops/flushes); Total output drops: 6
Queueing strategy: weighted fair
Output queue: 0/1000/64/0 (size/max total/threshold/drops)
Conversations 0/17/256 (active/max active/max total)
Reserved Conversations 0/0 (allocated/max allocated)
5 minute input rate 334000 bits/sec, 37 packets/sec
5 minute output rate 51000 bits/sec, 25 packets/sec
2461677 packets input, 1869651994 bytes, 0 no buffer
Received 0 broadcasts, 0 runts, 2 giants, 0 throttles
605232 input errors, 33537 CRC, 260014 frame, 0 overrun, 0 ignored, 311681 abort
1968425 packets output, 483229984 bytes, 0 underruns
0 output errors, 0 collisions, 138 interface resets
0 output buffer failures, 0 output buffers swapped out
25 carrier transitions
DCD=up DSR=up DTR=up RTS=up CTS=up
 
That T1 is filthy dirty.
Do you think queueing will be the long-term solution?
Maybe it's time to actually add more bandwidth if you're maxing the link out a lot of the time?
 
You have a physical problem with that circuit that needs to be corrected. That is a really high error rate, as plshlpme mentioned. Fixing that should be your first priority.
 
Thanks for the replies..
Yes, as I stated, there were 'dribbling' errors found on the circuit during testing. This was fixed late last week (one bad pair, and a replaced telco interface card). I recently did a 'clear count' and I'm not taking any errors now. I knew after posting that sho int that it was a bad idea, with all the errors shown, as everybody would think that was the problem. However, it wasn't / isn't.

Yes, I believe what I am still after is having the queueing method changed on the opposite end of the link from FIFO to WFQ. This all has to do with latency seen during inbound bandwidth saturation. Adding more DS1s will only give double the throughput, with the same congestion latency seen with inbound traffic.
I'll post any updates to the situation. Any other ideas feel free to post, thanks.
,j
 
You won't get the same latency if the DS1s are bonded somehow. If they're terminating on the same equipment, you can at least get your provider to use per-packet CEF.
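For example, with two parallel T1s terminating on the same routers and CEF enabled on both ends, per-packet load sharing is just an interface command (interface names assumed):

-----------------------------
! On each of the parallel serial interfaces, on both ends of the link
interface Serial0
 ip load-sharing per-packet
!
interface Serial1
 ip load-sharing per-packet
-----------------------------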
 
Thanks again for the replies.

From my understanding of load-balancing and CEF, this will still not give me the low-latency queueing that I am aiming for. What I am after is not allowing the high-bandwidth flows to congest the queue buffer. Hence my insistence that the queueing method is the real problem.

This is an Internet link, not point-to-point.

,j
 
Look, I've already told you how to fix this. If you don't have enough bandwidth, queueing is only going to help so much. Have your provider turn on WFQ, if that is an option. At some point, you're probably just going to have to throw more bandwidth at the problem. Queueing is not going to magically provide you with more bandwidth. If you end up needing more bandwidth, add another T1 and use per-packet CEF or some other bonding mechanism.

I'm not sure what else you want us to tell you. Sorry if this is abrupt, but I'm extremely tired and I feel like we're going around in circles. I promise to get some more caffeine in a moment, followed by a nap.
 
Don't get me wrong, jneil, I appreciate and value your posts and feedback. Sorry to make you aggravated. I am discovering things as I work with this and am posting what I have discovered; I don't mean to sound like I'm disregarding what you guys are saying. Hopefully this will enlighten somebody else with the same issue.

Regarding more bandwidth, I understand your logic; however, it's not that I want or even need more bandwidth / higher throughput. What I really want has to do with QoS: low-latency queueing, or LLQ. On a serial link or bonded serial links of any kind, I would need proper queueing on both ends to really get what I am after. I have requested that my ISP turn on WFQ; haven't heard back yet.
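To be clear about what I mean by LLQ, here is a rough MQC-style sketch (the class name, the DSCP match, and the 256 kbps priority value are just illustrative, and I haven't verified that my 1720's IOS supports all of this):

-----------------------------
class-map match-any INTERACTIVE
 match ip dscp ef
!
policy-map WAN-OUT
 ! Interactive traffic gets a strict-priority queue, limited to 256 kbps
 class INTERACTIVE
  priority 256
 ! Everything else shares the remaining bandwidth via fair queueing
 class class-default
  fair-queue
!
interface Serial0
 service-policy output WAN-OUT
-----------------------------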

Again, thanks for all the feedback and information. I appreciate it guys.
,j
 
Sorry about earlier. Like I said, I'm *really* sleepy and was very aggravated with some people here at work. :) You're on the right track. Your ISP should be able to offer some sort of weighted queueing that will allow your low-bandwidth flows to co-exist peacefully with your high-bandwidth flows.

Good luck, and let us know how it turns out!
 
