From azin.neishaboori at gmail.com  Sat Dec  1 02:37:33 2018
From: azin.neishaboori at gmail.com (Azin Neishaboori)
Date: Sat, 1 Dec 2018 02:37:33 -0500
Subject: monitoring queue length
In-Reply-To: <3D0D58C0-B601-413B-B3EA-80C396E6187B@gmail.com>
References: <3D0D58C0-B601-413B-B3EA-80C396E6187B@gmail.com>
Message-ID:

Hi Jonathan

Thank you for your response. I don't think I described my setup well in the
first email, so here it goes:

I have an Ubuntu virtual machine on my laptop that sends/receives data
to/from remote iperf servers. My laptop is connected by a 100 Mbps Ethernet
cable to a router, and the router has two cellular antennas and an LTE SIM.
The router routes all traffic between the VM and the remote iperf servers
over the cellular link. The cellular link is thus my bottleneck link, and I
am looking for queue buildup at the router's egress interface, i.e., the
cellular interface. I am mostly looking at the uplink; the LTE uplink rate
is both limited and error-prone, and the rate I see is around 10 Mbps under
good received-signal-strength conditions.

So based on the dumbbell topology you described, I should see queue buildup
at the egress cellular interface of the router, right? But when I
periodically (once every second) run tc -s qdisc ls, the backlog is
consistently zero. Am I looking at the wrong information? Or am I missing
something even bigger?

Thanks a lot
Azin

On Fri, Nov 30, 2018 at 10:14 PM Jonathan Morton wrote:

> > On 1 Dec, 2018, at 12:05 am, Azin Neishaboori
> > <azin.neishaboori at gmail.com> wrote:
> >
> > I do not know if this backlog is indeed queue length or not. It seems
> > strange to be queue length, because even when I flood the network with
> > high UDP data rates over capacity, it still shows 0b of backlog. Am I
> > looking at the wrong parameter? If so, could you please point out the
> > tool that shows the instantaneous queue length?
>
> There are two potential points of confusion here:
>
> 1: Linux throttles both TCP and UDP at the application socket level to
> prevent queues building up in the qdiscs and HW devices.  If it's your
> machine producing the packets, that's probably the effect you're seeing;
> there'll be a few packets queued in the HW (invisibly) and none in the
> qdisc.  That's approximately true regardless of which qdisc is in use,
> though with a shaping qdisc you might see a few packets collect there
> instead of in the HW.
>
> 2: If your traffic is coming from outside, it won't be queued upon receipt
> unless you introduce an artificial bottleneck.  There are ways of doing
> that.
>
> For queuing experiments, we normally set up a "dumbbell" topology in which
> two different machines act as source and drain of traffic, and a third
> machine in the middle acts as a network emulator with artificial delays,
> losses and bandwidth limits.  That middlebox is where you would then
> observe the queuing.
>
>  - Jonathan Morton
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From chromatix99 at gmail.com  Sat Dec  1 02:47:15 2018
From: chromatix99 at gmail.com (Jonathan Morton)
Date: Sat, 1 Dec 2018 09:47:15 +0200
Subject: monitoring queue length
In-Reply-To:
References: <3D0D58C0-B601-413B-B3EA-80C396E6187B@gmail.com>
Message-ID: <7ACB9F40-A42B-4807-85EF-10500F11E437@gmail.com>

> On 1 Dec, 2018, at 9:37 am, Azin Neishaboori wrote:
>
> So based on the dumbbell topology you described, I should see queue
> buildup at the egress cellular interface of the router, right?

Yes - but the actual cellular interface is on the far side of a
translation device, and so its queue is hidden from Linux.  That's
unfortunately true of *every* 3G or LTE interface I've yet seen.
Older devices have a virtual serial PPP interface to the translator; newer
ones pretend to be Ethernet devices on the near side of the translator - in
both cases with much more bandwidth than the cellular interface itself.

This is actually quite a serious problem for people trying to improve the
quality of cellular Internet connections.  All of the low-level stuff that
would be useful to experiment with is deliberately and thoroughly hidden.

If you put in an artificial bottleneck of 10 Mbps on the outgoing interface,
you should be able to develop a queue there.  You can use HTB or HFSC, with
the qdisc of your choice as a child on which the actual queuing occurs.

A better way to measure the impact of queuing in the raw device is to
observe the increase of latency when the link is loaded versus when it is
idle.  I recommend using the Flent tool for that.

 - Jonathan Morton

From azin.neishaboori at gmail.com  Sat Dec  1 03:04:51 2018
From: azin.neishaboori at gmail.com (Azin Neishaboori)
Date: Sat, 1 Dec 2018 03:04:51 -0500
Subject: monitoring queue length
In-Reply-To: <7ACB9F40-A42B-4807-85EF-10500F11E437@gmail.com>
References: <3D0D58C0-B601-413B-B3EA-80C396E6187B@gmail.com>
	<7ACB9F40-A42B-4807-85EF-10500F11E437@gmail.com>
Message-ID:

Hi Jonathan

Thank you for your very helpful insight. I can see the effect of
bufferbloat in increased RTT, but when I tried to further support that data
with the queue size, I encountered the zero-backlog readings, which were
very confusing to me. So now I know :)

Thanks a lot for taking the time to read my message and provide helpful
insight.

Best
Azin

On Sat, Dec 1, 2018 at 2:47 AM Jonathan Morton wrote:

> > On 1 Dec, 2018, at 9:37 am, Azin Neishaboori wrote:
> >
> > So based on the dumbbell topology you described, I should see queue
> > buildup at the egress cellular interface of the router, right?
>
> Yes - but the actual cellular interface is on the far side of a
> translation device, and so its queue is hidden from Linux.
> That's unfortunately true of *every* 3G or LTE interface I've yet seen.
> Older devices have a virtual serial PPP interface to the translator;
> newer ones pretend to be Ethernet devices on the near side of the
> translator - in both cases with much more bandwidth than the cellular
> interface itself.
>
> This is actually quite a serious problem for people trying to improve the
> quality of cellular Internet connections.  All of the low-level stuff
> that would be useful to experiment with is deliberately and thoroughly
> hidden.
>
> If you put in an artificial bottleneck of 10 Mbps on the outgoing
> interface, you should be able to develop a queue there.  You can use HTB
> or HFSC, with the qdisc of your choice as a child on which the actual
> queuing occurs.
>
> A better way to measure the impact of queuing in the raw device is to
> observe the increase of latency when the link is loaded versus when it is
> idle.  I recommend using the Flent tool for that.
>
>  - Jonathan Morton
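The once-per-second polling of "tc -s qdisc ls" discussed in this thread can
be automated by parsing the per-qdisc backlog counters out of the command's
output. A minimal Python sketch, assuming the usual layout of tc's
statistics output; the sample text, interface, and fq_codel handle below are
illustrative, not captured from the router in this thread:

```python
import re

def parse_backlog(tc_output):
    """Return (qdisc kind, handle, backlog bytes, backlog packets) tuples
    from the output of `tc -s qdisc show`."""
    results = []
    current = None
    for line in tc_output.splitlines():
        line = line.strip()
        if line.startswith("qdisc "):
            # e.g. "qdisc fq_codel 8001: root ..." -> ("fq_codel", "8001:")
            parts = line.split()
            current = (parts[1], parts[2])
        m = re.match(r"backlog (\d+)b (\d+)p", line)
        if m and current is not None:
            results.append((current[0], current[1],
                            int(m.group(1)), int(m.group(2))))
    return results

# Illustrative sample resembling `tc -s qdisc show dev eth0` output;
# in practice you would poll via subprocess.run(["tc", "-s", "qdisc",
# "show", "dev", "eth0"], capture_output=True, text=True).
sample = """\
qdisc fq_codel 8001: root refcnt 2 limit 10240p flows 1024 quantum 1514
 Sent 123456 bytes 987 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
"""

print(parse_backlog(sample))  # -> [('fq_codel', '8001:', 0, 0)]
```

These counters only become non-zero once queuing happens in a Linux qdisc
rather than in the hidden cellular translator; the artificial bottleneck
suggested above would be set up with something like "tc qdisc add dev eth0
root handle 1: htb default 1", "tc class add dev eth0 parent 1: classid 1:1
htb rate 10mbit", and "tc qdisc add dev eth0 parent 1:1 fq_codel" (the
interface name and rate here are assumptions to match the 10 Mbps uplink
described in the thread).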