<div class="gmail_quote">On Mon, Feb 28, 2011 at 7:29 PM, Fred Baker <span dir="ltr"><<a href="mailto:fred@cisco.com">fred@cisco.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex;">
<div class="im"><br>
On Feb 28, 2011, at 2:26 PM, Justin McCann wrote:<br>
<br>
> This may be well-known, but how does one retrieve the current outgoing and incoming queue sizes from the link-layer interfaces? I mean the number of packets/bytes in the various queues, not the size of the buffers (potential queue size). I've mucked around with ethtool and netlink to grab other statistics, but haven't found any way to pull the current queue sizes into userspace.<br>
<br>
</div>When the SNMP MIB was first developed, it originally specified ifOutQLen as "The length of the output packet queue (in packets)." One problem: Cisco implemented that as "how many packets can I put into the queue in the deepest possible case," while the outfit I worked for before Cisco implemented it as "what is the current depth of the output queue." So much for interoperable implementations...<br>
<br>
<br>
There are a few issues with the "current" model. Suppose you are in this network<br>... </blockquote><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex;">and you ask the target what the length of its Ethernet queue is. When you get the answer, what is the length of its Ethernet queue?<br>
... </blockquote><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex;">At the time the NMS receives the response, there's a reasonable chance that the queue is absolutely empty. Or that it is twice as full as it was before. Or some other number. All you can really say is that at the time the agent answered the question, the queue had ifOutQLen packets in it.<br>
<br>
The object got deprecated on the grounds that it didn't make a lot of sense.<br></blockquote><div><br></div><div>Thanks for your response. This is more research-related: I'm trying to detect which parts of the stack on an end host are exhibiting and/or causing network stalls a few RTTs or more in duration. I'm also watching the number of bytes and packets sent/received, and when activity stops altogether, looking at the queue sizes shows where things are getting held up. I don't think the approach would be as useful for a middlebox that is just doing best-effort forwarding, but it would probably work if the box was acting as a TCP proxy. So it's not bufferbloat-related per se, but I figure having the information doesn't hurt, as long as it isn't misused in the way you mention.</div>
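<div><br></div><div>For what it's worth, one way to get at the current occupancy from userspace is iproute2's "tc -s qdisc show dev eth0", whose "backlog" line reports the bytes and packets presently queued in the qdisc. A minimal parsing sketch follows; the device name and the sample output are illustrative, not captured from a real box:</div>

```python
import re

# Illustrative "tc -s qdisc show dev eth0" output; the "backlog" line is the
# qdisc's current queue occupancy (bytes and packets), not the buffer capacity.
sample = """\
qdisc pfifo_fast 0: root refcnt 2 bands 3 priomap 1 2 2 2 1 2 0 0 1 1 1 1 1 1 1 1
 Sent 104263 bytes 1302 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 1514b 1p requeues 0
"""

def parse_backlog(text):
    """Return (bytes_queued, packets_queued), or None if no backlog line."""
    m = re.search(r"backlog\s+(\d+)b\s+(\d+)p", text)
    return (int(m.group(1)), int(m.group(2))) if m else None

print(parse_backlog(sample))  # -> (1514, 1)
```

<div>Note this only sees the qdisc backlog; packets already handed to the driver/NIC ring are not counted.</div>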
<div><br></div><div> Justin</div></div>