* Getting current interface queue sizes
From: Justin McCann
Date: 2011-02-28 22:26 UTC
To: bloat

This may be well-known, but how does one retrieve the current outgoing and
incoming queue sizes from the link-layer interfaces? I mean the number of
packets/bytes in the various queues, not the size of the buffers (potential
queue size). I've mucked around with ethtool and netlink to grab other
statistics, but haven't found any way to pull the current queue sizes into
userspace.

If there isn't any way to do it, I'll work on it if someone points me in the
right direction. I'm mostly interested in the e1000 and bnx2 drivers at
present.

Thanks,
   Justin
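A partial route to this from userspace, sketched under the assumption that
iproute2's "tc -s qdisc show dev <dev>" prints a per-qdisc "backlog" figure in
bytes and packets: it covers the queueing discipline only, not the driver's
own TX ring, and the device name and parsing below are just illustrative.

/* Minimal sketch: report the qdisc-level backlog for one interface by
 * scraping `tc -s qdisc` output. This shows the queueing-discipline
 * backlog only, not packets already handed to the driver's TX ring. */
#include <stdio.h>
#include <string.h>

int main(int argc, char **argv)
{
    const char *dev = (argc > 1) ? argv[1] : "eth0"; /* illustrative default */
    char cmd[128], line[256];

    snprintf(cmd, sizeof(cmd), "tc -s qdisc show dev %s", dev);
    FILE *fp = popen(cmd, "r");
    if (!fp) {
        perror("popen");
        return 1;
    }

    while (fgets(line, sizeof(line), fp)) {
        /* Statistics lines look roughly like: " backlog 12340b 8p requeues 0" */
        char *p = strstr(line, "backlog");
        if (p)
            printf("%s: %s", dev, p);
    }
    pclose(fp);
    return 0;
}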
* Re: Getting current interface queue sizes
From: Fred Baker
Date: 2011-03-01  0:29 UTC
To: Justin McCann; +Cc: bloat

On Feb 28, 2011, at 2:26 PM, Justin McCann wrote:

> This may be well-known, but how does one retrieve the current outgoing and
> incoming queue sizes from the link-layer interfaces? I mean the number of
> packets/bytes in the various queues, not the size of the buffers (potential
> queue size). I've mucked around with ethtool and netlink to grab other
> statistics, but haven't found any way to pull the current queue sizes into
> userspace.

When the SNMP MIB was first developed, they originally specified ifOutQLen as
"The length of the output packet queue (in packets)." One problem: Cisco
implements that as "how many packets can I put into the queue in the deepest
possible case". The outfit I worked for before Cisco implemented it as "what
is the current depth of the output queue". Interoperable implementation...

There are a few issues with the "current" model. Suppose you are in this
network

   +---+                +------+
   |NMS|                |Target|
   +-+-+                +---+--+
     |                      |
  ---+----------------------+----

and you ask the target what the length of its Ethernet queue is. When you get
the answer, what is the length of its Ethernet queue? To determine the answer,
think through the process:

 - NMS sends a question to Target
 - Target thinks about it, concocts an answer, and puts it in the Ethernet queue
 - Packets ahead of the response in the Ethernet queue drain
 - The response packet is transmitted
 - NMS receives the response

At the time the NMS receives the response, there's a reasonable chance that
the queue is absolutely empty. Or that it is twice as full as it was before.
Or some other number. All you can really say is that at the time the agent
answered the question, the queue had ifOutQLen packets in it.

The object got deprecated on the grounds that it didn't make a lot of sense.

When I try to measure queue lengths, I do so by trying to pass a message
through the queue. That lets me measure the impact of the queue on delay,
which is actually far more interesting.
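A minimal sketch of that pass-a-message-through-the-queue idea (the target
address is a placeholder, and the parsing assumes an iputils-style ping that
prints "time=<ms> ms" for each reply): push a short burst of probes through
the path and compare the best and worst RTTs; the spread is a rough measure
of the queueing delay the probes ran into.

/* Rough sketch: estimate queueing delay by pushing probes through the
 * path and comparing best-case vs. worst-case RTT. Assumes an
 * iputils-style ping whose per-reply lines contain "time=<ms> ms". */
#include <stdio.h>
#include <string.h>

int main(int argc, char **argv)
{
    const char *target = (argc > 1) ? argv[1] : "192.0.2.1"; /* placeholder */
    char cmd[256], line[512];
    double rtt, min_rtt = 1e9, max_rtt = 0.0;

    snprintf(cmd, sizeof(cmd), "ping -c 20 -i 0.2 %s", target);
    FILE *fp = popen(cmd, "r");
    if (!fp) {
        perror("popen");
        return 1;
    }

    while (fgets(line, sizeof(line), fp)) {
        char *p = strstr(line, "time=");
        if (p && sscanf(p, "time=%lf", &rtt) == 1) {
            if (rtt < min_rtt) min_rtt = rtt;
            if (rtt > max_rtt) max_rtt = rtt;
        }
    }
    pclose(fp);

    if (max_rtt > 0.0)
        printf("base RTT ~%.1f ms, worst %.1f ms, queueing ~%.1f ms\n",
               min_rtt, max_rtt, max_rtt - min_rtt);
    return 0;
}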
* Re: Getting current interface queue sizes
From: Justin McCann
Date: 2011-03-06 22:31 UTC
To: Fred Baker; +Cc: bloat

On Mon, Feb 28, 2011 at 7:29 PM, Fred Baker <fred@cisco.com> wrote:
>
> On Feb 28, 2011, at 2:26 PM, Justin McCann wrote:
>
> > This may be well-known, but how does one retrieve the current outgoing
> > and incoming queue sizes from the link-layer interfaces? I mean the number
> > of packets/bytes in the various queues, not the size of the buffers
> > (potential queue size). I've mucked around with ethtool and netlink to
> > grab other statistics, but haven't found any way to pull the current
> > queue sizes into userspace.
>
> When the SNMP MIB was first developed, they originally specified ifOutQLen
> as "The length of the output packet queue (in packets)." One problem: Cisco
> implements that as "how many packets can I put into the queue in the deepest
> possible case". The outfit I worked for before Cisco implemented it as "what
> is the current depth of the output queue". Interoperable implementation...
>
> There are a few issues with the "current" model. Suppose you are in this
> network
> ... and you ask the target what the length of its Ethernet queue is. When
> you get the answer, what is the length of its Ethernet queue?
> ... At the time the NMS receives the response, there's a reasonable chance
> that the queue is absolutely empty. Or that it is twice as full as it was
> before. Or some other number. All you can really say is that at the time the
> agent answered the question, the queue had ifOutQLen packets in it.
>
> The object got deprecated on the grounds that it didn't make a lot of sense.

Thanks for your response. This is more research-related, trying to detect
what parts of the stack on an end host are exhibiting and/or causing network
stalls a few RTTs or more in duration. I'm also watching the number of bytes
and packets sent/received, and when activity stops altogether, looking at the
queue sizes shows where things are getting held up. I don't think the
approach would be as useful for a middlebox that is just doing best-effort
forwarding, but it would probably work if the box was acting as a TCP proxy.

So, it's not bufferbloat-related per se, but I figure having the information
doesn't hurt, as long as it's not misused like you mention.

   Justin
* Re: Getting current interface queue sizes
From: Fred Baker
Date: 2011-03-07  7:21 UTC
To: Justin McCann; +Cc: bloat

On Mar 6, 2011, at 2:31 PM, Justin McCann wrote:

> Thanks for your response. This is more research-related, trying to detect
> what parts of the stack on an end host are exhibiting and/or causing network
> stalls a few RTTs or more in duration. I'm also watching the number of bytes
> and packets sent/received, and when activity stops altogether, looking at
> the queue sizes shows where things are getting held up. I don't think the
> approach would be as useful for a middlebox that is just doing best-effort
> forwarding, but it would probably work if the box was acting as a TCP proxy.
> So, it's not bufferbloat-related per se, but I figure having the information
> doesn't hurt, as long as it's not misused like you mention.

No doubt. But I think you'll find that Cisco equipment tells you the maximum
queue depth, not the current queue depth, or doesn't implement the object.
* Re: Getting current interface queue sizes
From: Jim Gettys
Date: 2011-03-07 18:28 UTC
To: bloat-devel

On 03/07/2011 02:21 AM, Fred Baker wrote:
>
> On Mar 6, 2011, at 2:31 PM, Justin McCann wrote:
>
>> Thanks for your response. This is more research-related, trying to detect
>> what parts of the stack on an end host are exhibiting and/or causing
>> network stalls a few RTTs or more in duration. I'm also watching the number
>> of bytes and packets sent/received, and when activity stops altogether,
>> looking at the queue sizes shows where things are getting held up. I don't
>> think the approach would be as useful for a middlebox that is just doing
>> best-effort forwarding, but it would probably work if the box was acting as
>> a TCP proxy. So, it's not bufferbloat-related per se, but I figure having
>> the information doesn't hurt, as long as it's not misused like you mention.
>
> No doubt. But I think you'll find that Cisco equipment tells you the maximum
> queue depth, not the current queue depth, or doesn't implement the object.
>

Cisco is far from unique. I found it impossible to get this information from
Linux. Dunno about other operating systems. It's one of the things we need to
fix in general.

Exactly what the right metric(s) is (are), is interesting, of course. The
problem with only providing instantaneous queue depth is that while it tells
you you are currently suffering, it won't really help you detect transient
bufferbloat due to web traffic, etc., unless you sample at a very high rate.
I really care about those frequent 100-200ms impulses I see in my traffic.
So a bit of additional information would be goodness.
                   - Jim
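One hypothetical shape for that "bit of additional information", sketched
purely as an illustration (none of this is an existing kernel interface):
keep a high-water mark and a decaying average alongside the instantaneous
depth, so a poller running every second still sees a 100-200 ms burst.

/* Hypothetical sketch: alongside the instantaneous depth, track a
 * high-water mark since the last poll and a decaying average, so brief
 * bursts survive until the next read. Not an existing kernel counter;
 * just the shape such a counter could take. */
#include <stdint.h>
#include <stdio.h>

struct queue_stats {
    uint32_t cur_depth;     /* instantaneous packets queued    */
    uint32_t max_depth;     /* high-water mark since last poll */
    uint32_t avg_depth_x16; /* EWMA of depth, scaled by 16     */
};

/* Called on every enqueue/dequeue with the new depth. */
static void queue_stats_update(struct queue_stats *qs, uint32_t depth)
{
    qs->cur_depth = depth;
    if (depth > qs->max_depth)
        qs->max_depth = depth;
    /* avg += (depth - avg) / 16, in fixed point */
    qs->avg_depth_x16 += depth - (qs->avg_depth_x16 >> 4);
}

/* Called by a poller: return the high-water mark seen since the previous
 * poll and restart the window. */
static uint32_t queue_stats_poll_max(struct queue_stats *qs)
{
    uint32_t m = qs->max_depth;
    qs->max_depth = qs->cur_depth;
    return m;
}

int main(void)
{
    struct queue_stats qs = {0};
    uint32_t trace[] = {0, 2, 40, 35, 3, 0, 0, 1}; /* a brief burst */
    for (unsigned i = 0; i < sizeof(trace) / sizeof(trace[0]); i++)
        queue_stats_update(&qs, trace[i]);
    printf("now=%u max-since-poll=%u avg~=%u\n",
           (unsigned)qs.cur_depth,
           (unsigned)queue_stats_poll_max(&qs),
           (unsigned)(qs.avg_depth_x16 >> 4));
    return 0;
}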
* Re: Getting current interface queue sizes
From: Justin McCann
Date: 2011-03-07 21:18 UTC
To: Jim Gettys; +Cc: bloat-devel

On Mon, Mar 7, 2011 at 1:28 PM, Jim Gettys <jg@freedesktop.org> wrote:
> Cisco is far from unique. I found it impossible to get this information
> from Linux. Dunno about other operating systems.
> It's one of the things we need to fix in general.

So I'm not the only one. :) I'm looking to get this for Linux, and am willing
to implement it if necessary, and was looking for the One True Way. I assume
reporting back through netlink is the way to go.

> Exactly what the right metric(s) is (are), is interesting, of course. The
> problem with only providing instantaneous queue depth is that while it tells
> you you are currently suffering, it won't really help you detect transient
> bufferbloat due to web traffic, etc., unless you sample at a very high rate.
> I really care about those frequent 100-200ms impulses I see in my traffic.
> So a bit of additional information would be goodness.

My PhD research is focused on automatically diagnosing these sorts of hiccups
on a local host. I collect a common set of statistics across the entire local
stack every 100ms, then run a diagnosis algorithm to detect which parts of
the stack (connections, applications, interfaces) aren't doing their job
sending/receiving packets. Among the research questions: What stats are
necessary/sufficient for this kind of diagnosis? What should their semantics
be? What's the largest useful sample interval?

It turns out that when send/recv stops altogether, the queue lengths indicate
where things are being held up, leading to this discussion. I have them for
TCP (via web100), but since my diagnosis rules are generic, I'd like to get
them for the interfaces as well. I don't expect that the Ethernet driver
would stop transmitting for a few hundred ms at a time, but a wireless driver
might have to.

   Justin
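A minimal sketch of that kind of 100 ms poller, using only the per-interface
counters under /sys/class/net/<dev>/statistics/ (the interface name and the
"stalled?" heuristic are illustrative, and this reports send/receive activity
per interval, not queue occupancy):

/* Sketch of a 100 ms sampler: read per-interface packet counters from
 * sysfs and report the delta per interval, so intervals with no progress
 * stand out. Assumes /sys/class/net/<dev>/statistics/ exists. */
#include <stdio.h>
#include <time.h>

static unsigned long long read_counter(const char *dev, const char *name)
{
    char path[256];
    unsigned long long val = 0;
    snprintf(path, sizeof(path), "/sys/class/net/%s/statistics/%s", dev, name);
    FILE *fp = fopen(path, "r");
    if (fp) {
        if (fscanf(fp, "%llu", &val) != 1)
            val = 0;
        fclose(fp);
    }
    return val;
}

int main(int argc, char **argv)
{
    const char *dev = (argc > 1) ? argv[1] : "eth0"; /* illustrative default */
    struct timespec interval = { 0, 100 * 1000 * 1000 }; /* 100 ms */
    unsigned long long tx_prev = read_counter(dev, "tx_packets");
    unsigned long long rx_prev = read_counter(dev, "rx_packets");

    for (;;) {
        nanosleep(&interval, NULL);
        unsigned long long tx = read_counter(dev, "tx_packets");
        unsigned long long rx = read_counter(dev, "rx_packets");
        printf("%s: +%llu tx, +%llu rx in last 100 ms%s\n",
               dev, tx - tx_prev, rx - rx_prev,
               (tx == tx_prev && rx == rx_prev) ? "  <-- stalled?" : "");
        tx_prev = tx;
        rx_prev = rx;
    }
    return 0;
}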