I've read a bunch of things claiming that you can expect packet loss with smaller packet sizes (for instance, SmartBits tests starting at 56 bytes and scaling up to 1280 bytes). But what I haven't seen is a plausible explanation as to why.
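To put rough numbers on what such a test is doing: at a fixed line rate, smaller frames mean far more frames per second. This is just a back-of-the-envelope sketch; the gigabit line rate and the 20-byte per-frame wire overhead are my own assumptions, not something from the sources I read:

```python
# Rough packets-per-second at line rate for a range of frame sizes.
# Assumes a 1 Gbit/s Ethernet link and 20 bytes of per-frame wire
# overhead (preamble + start-of-frame delimiter + inter-frame gap).
LINE_RATE_BPS = 1_000_000_000
WIRE_OVERHEAD = 20  # bytes per frame on the wire

for frame_size in (64, 128, 256, 512, 1024, 1280):
    wire_bits = (frame_size + WIRE_OVERHEAD) * 8
    pps = LINE_RATE_BPS / wire_bits
    print(f"{frame_size:>5}-byte frames: {pps:>12,.0f} packets/sec")
```

So the tester pushes well over a million frames per second at the small end of the sweep, versus around a hundred thousand at the large end.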
Off the top of my head, the only obvious reason would be that a smaller packet is easier to retransmit than a large one, since it requires less buffer space on the sending host.
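As a rough illustration of the buffer-space point, assuming a sender that holds a fixed window of unacknowledged packets for possible retransmission (the window size here is invented for the example):

```python
# Hypothetical illustration: retransmit buffer needed to hold a fixed
# window of unacknowledged packets. The window size is an assumption.
WINDOW_PACKETS = 64  # packets outstanding, awaiting acknowledgement

for packet_size in (56, 512, 1280):  # bytes
    buffer_bytes = WINDOW_PACKETS * packet_size
    print(f"{packet_size:>5}-byte packets: {buffer_bytes:>8,} bytes buffered")
```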
I can't recall ever hearing of that criterion being used, but it shouldn't be hard to implement. I have frequently heard of packets being discarded based on protocol, though. Is there some specific source or product that you are citing?
Can you point us to your "bunch of things" so we can get some context for your question?
Personally, I don't see any reason for it.
Perhaps somebody is confused about the latency/jitter sensitivity of voice traffic, which results in packets being discarded at the application level by means of the jitter-buffer setting.
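A minimal sketch of what I mean, with an assumed 50 ms jitter-buffer depth and made-up timestamps (not any specific product's behaviour):

```python
# Illustrative sketch: a voice application discards packets that arrive
# after their playout deadline. Buffer depth and timings are assumptions.
JITTER_BUFFER_MS = 50  # configured jitter-buffer depth

def handle_packet(seq: int, sent_ms: float, received_ms: float) -> bool:
    """Return True if the packet is queued for playout, False if discarded."""
    # Playout deadline: send time plus the jitter-buffer depth.
    deadline_ms = sent_ms + JITTER_BUFFER_MS
    if received_ms > deadline_ms:
        # Arrived too late to be played out: the application drops it,
        # and monitoring then counts it as "lost" even though the network
        # actually delivered it.
        return False
    return True

# Example: a packet delayed 70 ms against a 50 ms buffer gets discarded.
print(handle_packet(seq=1, sent_ms=0.0, received_ms=70.0))  # False
```

That kind of application-level discard can easily be mistaken for packet loss in the network itself.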