* monitoring queue length
From: Azin Neishaboori @ 2018-11-30 22:05 UTC
To: bloat-devel
Hi
Pardon me if this is too elementary a question, but I am not sure how to
see the length of a queue on an interface. I have changed the qdisc
configuration from the default pfifo_fast to pfifo in order to get
statistics, but all I see is a report on backlog.
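For reference, the commands I used were along these lines (eth0 stands in
for my actual interface name):

  # replace the default pfifo_fast root qdisc with a plain pfifo
  tc qdisc replace dev eth0 root pfifo limit 1000
  # sample the qdisc statistics
  tc -s qdisc show dev eth0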
The output I see is like this:
Sent 938609238 bytes 780809 pkt (dropped 0, overlimits 0 requeues 0)
backlog 0b 0p requeues 0
I do not know if this backlog is indeed the queue length or not. It seems
strange that it would be, because even when I flood the network with UDP
at rates above capacity, it still shows 0b of backlog. Am I looking at
the wrong parameter? If so, could you please point me to a tool that shows
the instantaneous queue length?
Thanks a lot
* Re: monitoring queue length
From: Jonathan Morton @ 2018-12-01 3:13 UTC
To: Azin Neishaboori; +Cc: bloat-devel
> On 1 Dec, 2018, at 12:05 am, Azin Neishaboori <azin.neishaboori@gmail.com> wrote:
>
> I do not know if this backlog is indeed the queue length or not. It seems strange that it would be, because even when I flood the network with UDP at rates above capacity, it still shows 0b of backlog. Am I looking at the wrong parameter? If so, could you please point me to a tool that shows the instantaneous queue length?
There are two potential points of confusion here:
1: Linux throttles both TCP and UDP at the application socket level to prevent queues building up in the qdiscs and HW devices. If it's your machine producing the packets, that's probably the effect you're seeing; there'll be a few packets queued in the HW (invisibly) and none in the qdisc. That's approximately true regardless of which qdisc is in use, though with a shaping qdisc you might see a few packets collect there instead of in the HW. One way to check is sketched below.
2: If your traffic is coming from outside, it won't be queued upon receipt unless you introduce an artificial bottleneck. There are ways of doing that.
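To make both points concrete, a rough sketch - the interface name eth0 and
the 10mbit rate are placeholders, not anything from your setup:

  # 1: for a local sender, the backlog sits in the socket buffers, not the
  # qdisc; compare the Send-Q column here against the qdisc backlog counter
  ss -tun
  tc -s qdisc show dev eth0

  # 2: a token-bucket filter introduces an artificial bottleneck, behind
  # which a queue can build up where tc can actually see it
  tc qdisc add dev eth0 root tbf rate 10mbit burst 32kbit latency 400ms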
For queuing experiments, we normally set up a "dumbbell" topology in which two different machines act as source and drain of traffic, and a third machine in the middle acts as a network emulator with artificial delays, losses and bandwidth limits. That middlebox is where you would then observe the queuing.
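On the middlebox that could look roughly like this with netem (the delay,
loss and rate figures are purely illustrative):

  # eth1 faces the traffic drain; add delay, light loss and a rate limit
  tc qdisc add dev eth1 root netem delay 20ms loss 0.1% rate 10mbit
  # the queue then builds here, visible in the backlog counter
  tc -s qdisc show dev eth1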
- Jonathan Morton
* Re: monitoring queue length
From: Azin Neishaboori @ 2018-12-01 7:37 UTC
To: Jonathan Morton; +Cc: bloat-devel
Hi Jonathan
Thank you for your response. I think I did not describe my setup well in
the first email, so here it is:
I have an ubuntu virtual machine on my laptop that sends/receives data from
remote iperf servers. My laptop is connected by a 100Mbps Ethernet cable to
a router, and the router has two cellular antennas and an LTE SIM. The
router carries all traffic between the VM and the remote iperf servers
over the cellular link. The cellular link is thus my bottleneck, and I am
looking for queue buildup at the router's egress interface, i.e., the
cellular interface. I am mostly interested in the uplink; the LTE uplink
rate is both limited and error-prone. The uplink rate I see is around
10 Mbps under good received-signal-strength conditions.
So based on the dumbbell topology you described, I should see queue buildup
at the egress cellular interface of the router, right?
But when I periodically (once every second) run tc -s qdisc ls, the
backlog is consistently zero. Am I looking at the wrong information? Or am
I missing something even bigger?
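Concretely, what I run on the router is roughly this (wwan0 is only my
guess at a typical cellular-side interface name; mine may differ):

  # print qdisc statistics, including backlog, once per second
  watch -n 1 tc -s qdisc show dev wwan0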
Thanks a lot
Azin
* Re: monitoring queue length
From: Jonathan Morton @ 2018-12-01 7:47 UTC
To: Azin Neishaboori; +Cc: bloat-devel
> On 1 Dec, 2018, at 9:37 am, Azin Neishaboori <azin.neishaboori@gmail.com> wrote:
>
> So based on the dumbbell topology you described, I should see queue buildup at the egress cellular interface of the router, right?
Yes - but the actual cellular interface is on the far side of a translation device, and so its queue is hidden from Linux. That's unfortunately true of *every* 3G or LTE interface I've yet seen. Older devices have a virtual serial PPP interface to the translator, newer ones pretend to be Ethernet devices on the near side of the translator - in both cases with much more bandwidth than the cellular interface itself.
This is actually quite a serious problem for people trying to improve the quality of cellular Internet connections. All of the low-level stuff that would be useful to experiment with is deliberately and thoroughly hidden.
If you put in an artificial bottleneck of 10Mbps on the outgoing interface, you should be able to develop a queue there. You can use HTB or HFSC, with the qdisc of your choice as a child on which the actual queuing occurs.
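A sketch of that arrangement with HTB and fq_codel as the child - the
device name and rate are placeholders to adapt:

  # HTB root with a single 10 Mbit/s class; unclassified traffic defaults to it
  tc qdisc replace dev eth0 root handle 1: htb default 1
  tc class add dev eth0 parent 1: classid 1:1 htb rate 10mbit
  # attach the qdisc of your choice as the child, where the queuing occurs
  tc qdisc add dev eth0 parent 1:1 handle 10: fq_codel
  # under load, the backlog counter on the child should now grow
  tc -s qdisc show dev eth0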
A better way to measure the impact of queuing in the raw device is to observe the increase of latency when the link is loaded versus when it is idle. I recommend using the Flent tool for that.
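For example, Flent's rrul test loads the link in both directions while
measuring latency; an invocation would look something like this (the
server hostname is a placeholder for a netperf server you have access to):

  flent rrul -p all_scaled -l 60 -H netserver.example.net -t lte-uplink -o rrul.png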
- Jonathan Morton
* Re: monitoring queue length
From: Azin Neishaboori @ 2018-12-01 8:04 UTC
To: Jonathan Morton; +Cc: bloat-devel
Hi Jonathan
Thank you for your very helpful insight.
I can see the effect of bufferbloat in the increased RTT, but when I tried
to back that data up with queue sizes, I kept getting the zero-backlog
readings, which were very confusing to me. So now I know :)
Thanks a lot for taking the time to read my messages and provide helpful
insight.
Best
Azin