Subject: Re: [Cake] Ubiquity (Unifi ) Smart Queues
From: Sebastian Moeller <moeller0@gmx.de>
Date: Tue, 2 Jan 2024 22:24:18 +0100
To: dave seddon
Cc: Cake List <cake@lists.bufferbloat.net>

Hi Dave,

> On Jan 2, 2024, at 22:15, dave seddon wrote:
> 
> Thanks Sebastian!
> 
> Now I see the rates!!
> 
> I actually reduced the rates to ensure this device is the bottleneck: 80/10 Mb/s
> 
> root@USG-Pro-4:~# tc -d class show dev eth2
> class htb 1:10 root leaf 100: prio 0 quantum 118750 rate 9500Kbit ceil 9500Kbit burst 1598b/1 mpu 0b overhead 0b cburst 1598b/1 mpu 0b overhead 0b level 0
> class fq_codel 100:12c parent 100:
> class fq_codel 100:213 parent 100:
> class fq_codel 100:22e parent 100:
> 
> root@USG-Pro-4:~# tc -d class show dev ifb_eth2
> class htb 1:10 root leaf 100: prio 0 quantum 200000 rate 76000Kbit ceil 76000Kbit burst 1596b/1 mpu 0b overhead 0b cburst 1596b/1 mpu 0b overhead 0b level 0
> class fq_codel 100:2c8 parent 100:
> class fq_codel 100:3df parent 100:

[SM] They apparently have a rather loose translation from the 80/10 GUI rates to the shaper settings (derated by 5%) and, my personal hobby horse, they seem to ignore the overhead question more or less entirely**... bad Ubiquiti... if you want to be a "good boy" email me ;)

**) If that 5% derating is supposed to handle overhead, oh my... per-packet overhead is a constant number of bytes per packet while the payload scales with packet size, so the relative overhead scales inversely with packet size, and hence a static derating for per-packet overhead is, to stay polite, approximate at best.
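
To put rough numbers on that (back of the envelope, with an assumed overhead value, not Spectrum's actual figure): with, say, 38 bytes of per-packet overhead on the wire that the kernel's IP-level accounting does not see, a full 1500 byte packet costs ~1538 bytes on the link, about 2.5% extra, while a 150 byte packet costs ~188 bytes, about 20% extra; no single derating percentage can be right for both. The clean fix is to tell the shaper about the overhead explicitly, roughly the way sqm-scripts does it with a size table, something along the lines of

	tc qdisc add dev eth2 root handle 1: stab linklayer ethernet overhead 38 htb default 10

(untested on that box, and the 38 is a placeholder, not a vetted number for a DOCSIS link).
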
Also, what is up with the burst values? Why do they differ by 32 octets?

Sorry for bothering you with this, these are mostly questions for Ubiquiti and they are unlikely to monitor this list ;)

Regards
	Sebastian

> On Tue, Jan 2, 2024 at 12:53 PM Sebastian Moeller wrote:
> Hi Dave.
> 
> just a few comments from the peanut gallery...
> 
> > On Jan 2, 2024, at 19:59, dave seddon via Cake wrote:
> > 
> > G'day,
> > 
> > Happy new year y'all
> 
> +1
> 
> > 
> > I thought people might be interested to see what Ubiquiti/Unifi is doing with "Smart Queues" on their devices. The documentation on their website is not very informative.
> > 
> > Hopefully, this is vaguely interesting because Ubiquiti is widely deployed and apparently they have a market cap of >$8 billion, so you would hope they do a "good job" (... Seems like they might be a target customer for libreqos )
> > 
> > https://finance.yahoo.com/quote/ui/
> > 
> > ( I use Unifi because their wifi stuff seems ok, and the switching/routing/wifi is all integrated into a single GUI control system. Also, honestly, I'm not sure I know how to do the prefix delegation stuff on Linux by hand. )
> > 
> > Network diagram
> > 
> > Spectrum Cable Internets <----------> Eth2 [ USG-Pro-4 ] Eth0 <---> [Switches] <----> Access points
> > 
> > "Smart Queue" Configuration
> > Ubiquiti doesn't have many knobs, you just enable "smart queues" and set the bandwidth.
> > 
> > "Smart Queue" Implementation
> > 
> > Looks like they only apply tc qdiscs to Eth2, and sadly this is NOT cake, but fq_codel.
> > 
> > And cake isn't available :(
> > 
> > root@USG-Pro-4:~# tc qdisc replace dev eth0 cake bandwidth 100m rtt 20ms
> > Unknown qdisc "cake", hence option "bandwidth" is unparsable
> > 
> > Outbound eth2
> > 
> > root@USG-Pro-4:~# tc -p -s -d qdisc show dev eth2
> > qdisc htb 1: root refcnt 2 r2q 10 default 10 direct_packets_stat 0 ver 3.17
> >  Sent 1071636465 bytes 5624944 pkt (dropped 0, overlimits 523078 requeues 0) <---- OVERLIMITS?
> >  backlog 0b 0p requeues 0
> > qdisc fq_codel 100: parent 1:10 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn
> >  Sent 1071636465 bytes 5624944 pkt (dropped 2384, overlimits 0 requeues 0) <----- DROPS
> >  backlog 0b 0p requeues 0
> >  maxpacket 1514 drop_overlimit 0 new_flow_count 1244991 ecn_mark 0
> >  new_flows_len 1 old_flows_len 1
> > qdisc ingress ffff: parent ffff:fff1 ----------------
> >  Sent 12636045136 bytes 29199533 pkt (dropped 0, overlimits 0 requeues 0)
> >  backlog 0b 0p requeues 0
> > 
> > 	• target 5.0ms is the default ( https://www.man7.org/linux/man-pages/man8/tc-fq_codel.8.html ). I wonder if they did much testing on this hardware?
> 
> [SM] Not sure whether playing with target in isolation would be much use; in codel theory target should be 5-10% of interval, and interval should be in the order of magnitude of the RTTs to be handled (the default of 100ms works reasonably well even across the Atlantic, but you probably knew all that).
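> 
> [SM] If one really wanted to experiment, the knobs can be changed on the live qdisc, something along the lines of
> 
> 	tc qdisc change dev eth2 parent 1:10 handle 100: fq_codel target 10ms interval 200ms
> 
> with parent and handle taken from your dump above; untested on that 3.10 kernel though, and meant only to show where the knobs live, not as a recommendation.
> 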
> 
> > 	• ( I actually have a spare "wan" ethernet port, so I guess I could hook up a PC and perform a flent test. )
> > 	• It's unclear to me what the "htb" is doing, because I would have expected the download/upload rates to be configured here, but they appear not to be
> 
> [SM] Likely because HTB does not reveal this when asked for qdiscs with the `-s` option; try asking for classes instead (so maybe `tc -d class show dev eth2`).
> 
> > 	• I'm not really sure what "overlimits" means or what that does, and I tried looking it up, but I guess the kernel source is likely the "best" documentation for this. Maybe this means it's dropping? Or is it ECN?
> 
> I think this text about TBF explains this reasonably well (HTB is essentially a hierarchical version of TBF):
> 
> see: https://tldp.org/HOWTO/Adv-Routing-HOWTO/lartc.qdisc.classless.html
> 
> 9.2.2. Token Bucket Filter
> 
> The Token Bucket Filter (TBF) is a simple qdisc that only passes packets arriving at a rate which is not exceeding some administratively set rate, but with the possibility to allow short bursts in excess of this rate.
> 
> TBF is very precise, network- and processor friendly. It should be your first choice if you simply want to slow an interface down!
> 
> The TBF implementation consists of a buffer (bucket), constantly filled by some virtual pieces of information called tokens, at a specific rate (token rate). The most important parameter of the bucket is its size, that is the number of tokens it can store.
> 
> Each arriving token collects one incoming data packet from the data queue and is then deleted from the bucket. Associating this algorithm with the two flows -- token and data -- gives us three possible scenarios:
> 
> 	• The data arrives in TBF at a rate that's equal to the rate of incoming tokens. In this case each incoming packet has its matching token and passes the queue without delay.
> 
> 	• The data arrives in TBF at a rate that's smaller than the token rate. Only a part of the tokens are deleted at output of each data packet that's sent out the queue, so the tokens accumulate, up to the bucket size. The unused tokens can then be used to send data at a speed that's exceeding the standard token rate, in case short data bursts occur.
> 
> 	• The data arrives in TBF at a rate bigger than the token rate. This means that the bucket will soon be devoid of tokens, which causes the TBF to throttle itself for a while. This is called an 'overlimit situation'. If packets keep coming in, packets will start to get dropped.
> 
> The last scenario is very important, because it allows you to administratively shape the bandwidth available to data that's passing the filter.
> 
> The accumulation of tokens allows a short burst of overlimit data to be still passed without loss, but any lasting overload will cause packets to be constantly delayed, and then dropped.
> 
> Please note that in the actual implementation, tokens correspond to bytes, not packets.
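> 
> [SM] The HOWTO's own sample configuration makes that concrete (numbers illustrative, not tuned for your link):
> 
> 	tc qdisc add dev ppp0 root tbf rate 220kbit latency 50ms burst 1540
> 
> rate is the token rate, burst the bucket size, and latency how long a packet may wait for tokens before it is dropped. As far as I understand the counters, HTB's "overlimits" merely counts how often a class was over its rate and had to wait for tokens; it does not by itself imply drops (and indeed your htb line shows dropped 0).
> 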
> 
> > 
> > Inbound eth2 via ifb
> > 
> > root@USG-Pro-4:~# tc -p -s -d qdisc show dev ifb_eth2
> > qdisc htb 1: root refcnt 2 r2q 10 default 10 direct_packets_stat 0 ver 3.17
> >  Sent 13029810569 bytes 29185742 pkt (dropped 0, overlimits 14774339 requeues 0) <---- OVERLIMITS?
> >  backlog 0b 0p requeues 0
> > qdisc fq_codel 100: parent 1:10 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms ecn
> >  Sent 13029810569 bytes 29185742 pkt (dropped 10688, overlimits 0 requeues 0) <---- WOW. DROPS!!
> >  backlog 0b 0p requeues 0
> >  maxpacket 1514 drop_overlimit 0 new_flow_count 2256895 ecn_mark 0
> >  new_flows_len 0 old_flows_len 2
> > 
> > Apparently, rather than applying the tc qdisc on the outbound path on the LAN side ( eth0 ), they are applying it inbound on eth2 via ifb_eth2.
> 
> [SM] Same approach that sqm-scripts takes: if you attach the ingress shaper to the LAN port's egress, all internet traffic not traversing that interface will not be shaped, e.g. traffic to/from the router itself or WiFi traffic. If you are sure that such by-pass traffic does not exist, putting the ingress shaper on LAN egress can save the cost of the ifb indirection, but for full WiFi routers that is generally not true.
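> 
> [SM] For reference, the usual ifb recipe looks roughly like this (from memory of what sqm-scripts does, so a sketch of the idea, not necessarily what Ubiquiti's scripts run):
> 
> 	ip link add name ifb_eth2 type ifb
> 	ip link set dev ifb_eth2 up
> 	tc qdisc add dev eth2 handle ffff: ingress
> 	tc filter add dev eth2 parent ffff: protocol all u32 match u32 0 0 action mirred egress redirect dev ifb_eth2
> 	tc qdisc add dev ifb_eth2 root handle 1: htb default 10
> 
> i.e. everything received on eth2 is redirected to ifb_eth2 and then shaped on that pseudo-device's egress, which is why the "inbound" shaper shows up as a root qdisc on ifb_eth2.
> 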
> 
> > 
> > Initially, I was pretty surprised to see so many drops on the inbound path, but maybe this is actually normal?
> 
> [SM] Depends on your traffic and whether ECN is used or not. In your case it appears ECN is not used, and then DROPS are the only way fq_codel can tell a flow to step on the brakes....
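> 
> [SM] (Side note: the fq_codel instances are configured with "ecn" yet show ecn_mark 0, so the flows themselves apparently do not negotiate ECN. On Linux endpoints that is governed by the net.ipv4.tcp_ecn sysctl; the usual default of 2 only accepts ECN when the peer requests it, while
> 
> 	sysctl -w net.ipv4.tcp_ecn=1
> 
> also requests it on outgoing connections. An endpoint knob, though, nothing the USG itself could change.)
> 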
> 
> > 
> > I could imagine the upstream CDNs pushing pretty hard with low RTTs, but I would probably have expected the bottlenecks to form at the access points, e.g. it's gigabit all the way until it reaches the air interface of the access points. .... Or do I have a problem in my LAN network?
> 
> [SM] The idea is to create an artificial bottleneck (using HTB) so the most relevant queue is under AQM control...
> 
> > 
> > I wonder if I can log into the access points to look at them too?....
> > 
> > ( BTW - to get to root on these devices you can SSH in as an "admin" user, and then just "sudo su" )
> > 
> > ifconfig
> > 
> > root@USG-Pro-4:~# ifconfig -a
> > eth0      Link encap:Ethernet  HWaddr fc:ec:da:d1:1b:9f
> >           inet addr:172.16.50.1  Bcast:172.16.50.255  Mask:255.255.255.0
> >           inet6 addr: [SNIP]:feec:daff:fed1:1b9f/64 Scope:Global
> >           inet6 addr: fe80::feec:daff:fed1:1b9f/64 Scope:Link
> >           UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
> >           RX packets:11343139 errors:0 dropped:0 overruns:0 frame:0
> >           TX packets:21614272 errors:0 dropped:0 overruns:0 carrier:0
> >           collisions:0 txqueuelen:0          <---- queue len 0? Maybe this is a driver issue?
> >           RX bytes:2047750597 (1.9 GiB)  TX bytes:23484692545 (21.8 GiB)
> > 
> > eth1      Link encap:Ethernet  HWaddr fc:ec:da:d1:1b:a0
> >           inet addr:172.16.51.1  Bcast:172.16.51.255  Mask:255.255.255.0
> >           inet6 addr: fe80::feec:daff:fed1:1ba0/64 Scope:Link
> >           inet6 addr: [SNIP]:daff:fed1:1ba0/64 Scope:Global
> >           UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
> >           RX packets:154930 errors:0 dropped:0 overruns:0 frame:0
> >           TX packets:233294 errors:0 dropped:0 overruns:0 carrier:0
> >           collisions:0 txqueuelen:0
> >           RX bytes:32255162 (30.7 MiB)  TX bytes:116504400 (111.1 MiB)
> > 
> > eth2      Link encap:Ethernet  HWaddr fc:ec:da:d1:1b:a1
> >           inet addr:172.88.[SNIP]  Bcast:255.255.255.255  Mask:255.255.240.0
> >           inet6 addr: [SNIP]:d474:3d71/128 Scope:Global
> >           inet6 addr: fe80::feec:daff:fed1:1ba1/64 Scope:Link
> >           UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
> >           RX packets:60912335 errors:0 dropped:0 overruns:0 frame:0
> >           TX packets:10546508 errors:0 dropped:0 overruns:0 carrier:0
> >           collisions:0 txqueuelen:0
> >           RX bytes:26087920038 (24.2 GiB)  TX bytes:1892854725 (1.7 GiB)
> > 
> > eth3      Link encap:Ethernet  HWaddr fc:ec:da:d1:1b:a2
> >           BROADCAST MULTICAST  MTU:1500  Metric:1
> >           RX packets:0 errors:0 dropped:0 overruns:0 frame:0
> >           TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
> >           collisions:0 txqueuelen:0
> >           RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
> > 
> > eth0.20   Link encap:Ethernet  HWaddr fc:ec:da:d1:1b:9f
> >           inet addr:172.16.60.1  Bcast:172.16.60.255  Mask:255.255.255.0
> >           inet6 addr: [SNIP]:daff:fed1:1b9f/64 Scope:Global
> >           inet6 addr: fe80::feec:daff:fed1:1b9f/64 Scope:Link
> >           UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
> >           RX packets:782123 errors:0 dropped:0 overruns:0 frame:0
> >           TX packets:480343 errors:0 dropped:0 overruns:0 carrier:0
> >           collisions:0 txqueuelen:0
> >           RX bytes:60600161 (57.7 MiB)  TX bytes:108372413 (103.3 MiB)
> > 
> > eth0.40   Link encap:Ethernet  HWaddr fc:ec:da:d1:1b:9f
> >           inet addr:172.16.40.1  Bcast:172.16.40.255  Mask:255.255.255.0
> >           inet6 addr: [SNIP]:daff:fed1:1b9f/64 Scope:Global
> >           inet6 addr: fe80::feec:daff:fed1:1b9f/64 Scope:Link
> >           UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
> >           RX packets:2695 errors:0 dropped:0 overruns:0 frame:0
> >           TX packets:194291 errors:0 dropped:0 overruns:0 carrier:0
> >           collisions:0 txqueuelen:0
> >           RX bytes:123970 (121.0 KiB)  TX bytes:42370172 (40.4 MiB)
> > 
> > ifb_eth2  Link encap:Ethernet  HWaddr de:ed:87:85:80:27
> >           inet6 addr: fe80::dced:87ff:fe85:8027/64 Scope:Link
> >           UP BROADCAST RUNNING NOARP  MTU:1500  Metric:1
> >           RX packets:29656324 errors:0 dropped:2531 overruns:0 frame:0
> >           TX packets:29653793 errors:0 dropped:0 overruns:0 carrier:0
> >           collisions:0 txqueuelen:32          <----- queue len 32? Curious
> >           RX bytes:13086765284 (12.1 GiB)  TX bytes:13086264146 (12.1 GiB)
> > 
> > System info
> > 
> > This has a prehistoric kernel, I guess because they have some stuff that taints the kernel
> > 
> > root@USG-Pro-4:~# uname -a
> > Linux USG-Pro-4 3.10.107-UBNT #1 SMP Thu Jan 12 08:30:03 UTC 2023 mips64 GNU/Linux
> 
> [SM] I remember the time we felt great about using a series 3 kernel instead of the old series 2 gunk, but upstream is at series 6 by now (though it also started to increase major numbers more aggressively after series 2)
> 
> > 
> > root@USG-Pro-4:~# cat /var/log/dmesg | grep taint
> > ubnt_platform: module license 'Proprietary' taints kernel.
> > Disabling lock debugging due to kernel taint
> > 
> > I also notice this module, but I'm not sure it is in use.
> > /lib/modules/3.10.107-UBNT/kernel/net/netfilter/xt_rateest.ko
> > 
> > root@USG-Pro-4:~# cat /proc/cpuinfo
> > system type             : UBNT_E220
> > machine                 : Unknown
> > processor               : 0
> > cpu model               : Cavium Octeon II V0.1
> > BogoMIPS                : 2000.00
> > wait instruction        : yes
> > microsecond timers      : yes
> > tlb_entries             : 128
> > extra interrupt vector  : yes
> > hardware watchpoint     : yes, count: 2, address/irw mask: [0x0ffc, 0x0ffb]
> > isa                     : mips1 mips2 mips3 mips4 mips5 mips64r2
> > ASEs implemented        :
> > shadow register sets    : 1
> > kscratch registers      : 3
> > core                    : 0
> > VCED exceptions         : not available
> > VCEI exceptions         : not available
> > 
> > processor               : 1
> > cpu model               : Cavium Octeon II V0.1
> > BogoMIPS                : 2000.00
> > wait instruction        : yes
> > microsecond timers      : yes
> > tlb_entries             : 128
> > extra interrupt vector  : yes
> > hardware watchpoint     : yes, count: 2, address/irw mask: [0x0ffc, 0x0ffb]
> > isa                     : mips1 mips2 mips3 mips4 mips5 mips64r2
> > ASEs implemented        :
> > shadow register sets    : 1
> > kscratch registers      : 3
> > core                    : 1
> > VCED exceptions         : not available
> > VCEI exceptions         : not available
> > 
> > root@USG-Pro-4:~# ethtool -i eth2
> > driver: octeon-ethernet
> > version: 2.0
> > firmware-version:
> > bus-info: Builtin
> > supports-statistics: no
> > supports-test: no
> > supports-eeprom-access: no
> > supports-register-dump: no
> > supports-priv-flags: no
> > 
> > root@USG-Pro-4:~# ethtool -S eth2
> > no stats available
> > 
> > ( Oh great! Thanks guys! )
> > 
> > root@USG-Pro-4:~# netstat -ia
> > Kernel Interface table
> > Iface      MTU Met    RX-OK RX-ERR RX-DRP RX-OVR    TX-OK TX-ERR TX-DRP TX-OVR Flg
> > eth0      1500   0 11343139      0      0      0 21614272      0      0      0 BMRU
> > eth1      1500   0   154902      0      0      0   233236      0      0      0 BMRU
> > eth2      1500   0 60898610      0      0      0 10544414      0      0      0 BMRU
> > eth3      1500   0        0      0      0      0        0      0      0      0 BM
> > eth0.20   1500   0   781992      0      0      0   480214      0      0      0 BMRU
> > eth0.40   1500   0     2695      0      0      0   194260      0      0      0 BMRU
> > ifb_eth2  1500   0 29642598      0   2530      0 29640068      0      0      0 BORU  <---- RX drops?
> > imq0     16000   0        0      0      0      0        0      0      0      0 ORU
> > lo       65536   0     9255      0      0      0     9255      0      0      0 LRU
> > loop0     1500   0        0      0      0      0        0      0      0      0 BM
> > loop1     1500   0        0      0      0      0        0      0      0      0 BM
> > loop2     1500   0        0      0      0      0        0      0      0      0 BM
> > loop3     1500   0        0      0      0      0        0      0      0      0 BM
> > npi0      1500   0        0      0      0      0        0      0      0      0 BM
> > npi1      1500   0        0      0      0      0        0      0      0      0 BM
> > npi2      1500   0        0      0      0      0        0      0      0      0 BM
> > npi3      1500   0        0      0      0      0        0      0      0      0 BM
> > 
> > root@USG-Pro-4:/opt/vyatta/etc# cat version
> > Version: v4.4.57.5578372.230112.0824
> > 
> > --
> > Regards,
> > Dave Seddon
> > _______________________________________________
> > Cake mailing list
> > Cake@lists.bufferbloat.net
> > https://lists.bufferbloat.net/listinfo/cake
> 
> 
> --
> Regards,
> Dave Seddon
> +1 415 857 5102