General list for discussing Bufferbloat
* [Bloat] fq_codel on bridge multiple subnets?
@ 2019-01-03 17:32 Dev
  2019-01-03 18:12 ` Toke Høiland-Jørgensen
  2019-01-03 21:23 ` [Bloat] realtime buffer monitoring? Dev
  0 siblings, 2 replies; 15+ messages in thread
From: Dev @ 2019-01-03 17:32 UTC (permalink / raw)
  To: bloat

I’m trying to create a bridge on eth1 and eth2, with a management interface on eth0, then enable fq_codel on the bridge. My bridge interface looks like:

#>: cat /etc/network/interfaces

…
iface eth1 inet manual

iface eth2 inet manual

# Bridge setup
iface br0 inet static
	bridge_ports eth1 eth2
	#bridge_stp on
		address 192.168.3.75
		broadcast 192.168.3.255
		netmask 255.255.255.0
		gateway 192.168.3.1

#>: tc qdisc add dev br0 root fq_codel

#>: ip a

6: br0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:e0:67:0f:4d:d0 brd ff:ff:ff:ff:ff:ff
    inet 192.168.3.75/24 brd 192.168.3.255 scope global br0
       valid_lft forever preferred_lft forever
    inet6 fe80::2e0:67ff:fe0f:4dd0/64 scope link
       valid_lft forever preferred_lft forever

I want fq_codel to manage the buffer for multiple subnets using these two interfaces as a bridge, will that work, or only for the 192.168.3.75/24 that’s configured on br0?

Also, is there a command to watch what the buffer is doing real time once I run traffic across it?

- Dev

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [Bloat] fq_codel on bridge multiple subnets?
  2019-01-03 17:32 [Bloat] fq_codel on bridge multiple subnets? Dev
@ 2019-01-03 18:12 ` Toke Høiland-Jørgensen
  2019-01-03 18:54   ` Pete Heist
  2019-01-03 21:23 ` [Bloat] realtime buffer monitoring? Dev
  1 sibling, 1 reply; 15+ messages in thread
From: Toke Høiland-Jørgensen @ 2019-01-03 18:12 UTC (permalink / raw)
  To: Dev, bloat

Dev <dev@logicalwebhost.com> writes:

> I’m trying to create a bridge on eth1 and eth2, with a management
> interface on eth0, then enable fq_codel on the bridge. My bridge
> interface looks like:

You'll probably want to put FQ-CoDel on the underlying physical
interfaces, as those are the ones actually queueing the traffic...
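
For example, a minimal sketch (assuming eth1 and eth2 are the bridge member
ports from your config):

tc qdisc replace dev eth1 root fq_codel
tc qdisc replace dev eth2 root fq_codel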

-Toke

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [Bloat] fq_codel on bridge multiple subnets?
  2019-01-03 18:12 ` Toke Høiland-Jørgensen
@ 2019-01-03 18:54   ` Pete Heist
  2019-01-04  5:22     ` Dev
  0 siblings, 1 reply; 15+ messages in thread
From: Pete Heist @ 2019-01-03 18:54 UTC (permalink / raw)
  To: Dev; +Cc: bloat


> On Jan 3, 2019, at 7:12 PM, Toke Høiland-Jørgensen <toke@toke.dk> wrote:
> 
> Dev <dev@logicalwebhost.com> writes:
> 
>> I’m trying to create a bridge on eth1 and eth2, with a management
>> interface on eth0, then enable fq_codel on the bridge. My bridge
>> interface looks like:
> 
> You'll probably want to put FQ-CoDel on the underlying physical
> interfaces, as those are the ones actually queueing the traffic...

I can confirm that. I'm currently using a bridge on my home router. eth3 and eth4 are bridged, eth4 is connected to the CPE device which goes out to the Internet, eth4 is where queue management is applied, and this works. It does not work to add this to br0…


^ permalink raw reply	[flat|nested] 15+ messages in thread

* [Bloat] realtime buffer monitoring?
  2019-01-03 17:32 [Bloat] fq_codel on bridge multiple subnets? Dev
  2019-01-03 18:12 ` Toke Høiland-Jørgensen
@ 2019-01-03 21:23 ` Dev
  2019-01-03 21:51   ` Pete Heist
  1 sibling, 1 reply; 15+ messages in thread
From: Dev @ 2019-01-03 21:23 UTC (permalink / raw)
  To: bloat

 Is there a command to watch what the buffer is doing real time once I run traffic across it, or some way to know what it’s doing?

I’m experimenting with something like:

watch -n 1 tc -g -s class show dev eth0

But I gotta guess there’s a better way to do this?

- Dev


^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [Bloat] realtime buffer monitoring?
  2019-01-03 21:23 ` [Bloat] realtime buffer monitoring? Dev
@ 2019-01-03 21:51   ` Pete Heist
  2019-01-03 22:31     ` Dave Taht
  0 siblings, 1 reply; 15+ messages in thread
From: Pete Heist @ 2019-01-03 21:51 UTC (permalink / raw)
  To: Dev; +Cc: bloat

[-- Attachment #1: Type: text/plain, Size: 855 bytes --]

Since it looks like you have bql, this should work for the hardware queues: https://github.com/dtaht/bqlmon

It might be nice to have a realtime monitor for the qdisc stats, but these can also change very quickly so looking at drops and marks, or other stats, can be useful as well.
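
For example, a rough sketch that refreshes the per-qdisc counters (drops, ECN marks and backlog all appear there):

watch -n 1 'tc -s qdisc show dev eth0'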

> On Jan 3, 2019, at 10:23 PM, Dev <dev@logicalwebhost.com> wrote:
> 
> Is there a command to watch what the buffer is doing real time once I run traffic across it, or some way to know what it’s doing?
> 
> I’m experimenting with something like:
> 
> watch -n 1 tc -g -s class show dev eth0
> 
> But I gotta guess there’s a better way to do this?
> 
> - Dev
> 
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat


[-- Attachment #2: Type: text/html, Size: 1557 bytes --]

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [Bloat] realtime buffer monitoring?
  2019-01-03 21:51   ` Pete Heist
@ 2019-01-03 22:31     ` Dave Taht
  2019-01-04  2:06       ` Dev
  0 siblings, 1 reply; 15+ messages in thread
From: Dave Taht @ 2019-01-03 22:31 UTC (permalink / raw)
  To: Pete Heist; +Cc: Dev, bloat

https://github.com/ffainelli/bqlmon is the official repo.

On Thu, Jan 3, 2019 at 1:51 PM Pete Heist <pete@heistp.net> wrote:
>
> Since it looks like you have bql, this should work for the hardware queues: https://github.com/dtaht/bqlmon
>
> It might be nice to have a realtime monitor for the qdisc stats, but these can also change very quickly so looking at drops and marks, or other stats, can be useful as well.
>
> On Jan 3, 2019, at 10:23 PM, Dev <dev@logicalwebhost.com> wrote:
>
> Is there a command to watch what the buffer is doing real time once I run traffic across it, or some way to know what it’s doing?
>
> I’m experimenting with something like:
>
> watch -n 1 tc -g -s class show dev eth0
>
> But I gotta guess there’s a better way to do this?
>
> - Dev
>
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat
>
>
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat



-- 

Dave Täht
CTO, TekLibre, LLC
http://www.teklibre.com
Tel: 1-831-205-9740

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [Bloat] realtime buffer monitoring?
  2019-01-03 22:31     ` Dave Taht
@ 2019-01-04  2:06       ` Dev
  0 siblings, 0 replies; 15+ messages in thread
From: Dev @ 2019-01-04  2:06 UTC (permalink / raw)
  To: Dave Taht; +Cc: bloat

[-- Attachment #1: Type: text/plain, Size: 1627 bytes --]

Okay, in case it’s of use to anyone else, here’s a quick howto for this tool:

apt-get install libncurses5-dev build-essential

git clone https://github.com/ffainelli/bqlmon.git

cd bqlmon

make

./bqlmon -i eth0

HTH.

- Dev

> On Jan 3, 2019, at 2:31 PM, Dave Taht <dave.taht@gmail.com> wrote:
> 
> https://github.com/ffainelli/bqlmon is the official repo.
> 
> On Thu, Jan 3, 2019 at 1:51 PM Pete Heist <pete@heistp.net> wrote:
>> 
>> Since it looks like you have bql, this should work for the hardware queues: https://github.com/dtaht/bqlmon
>> 
>> It might be nice to have a realtime monitor for the qdisc stats, but these can also change very quickly so looking at drops and marks, or other stats, can be useful as well.
>> 
>> On Jan 3, 2019, at 10:23 PM, Dev <dev@logicalwebhost.com> wrote:
>> 
>> Is there a command to watch what the buffer is doing real time once I run traffic across it, or some way to know what it’s doing?
>> 
>> I’m experimenting with something like:
>> 
>> watch -n 1 tc -g -s class show dev eth0
>> 
>> But I gotta guess there’s a better way to do this?
>> 
>> - Dev
>> 
>> _______________________________________________
>> Bloat mailing list
>> Bloat@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/bloat
>> 
>> 
>> _______________________________________________
>> Bloat mailing list
>> Bloat@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/bloat
> 
> 
> 
> -- 
> 
> Dave Täht
> CTO, TekLibre, LLC
> http://www.teklibre.com
> Tel: 1-831-205-9740


[-- Attachment #2: Type: text/html, Size: 3148 bytes --]

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [Bloat] fq_codel on bridge multiple subnets?
  2019-01-03 18:54   ` Pete Heist
@ 2019-01-04  5:22     ` Dev
  2019-01-04  9:19       ` Pete Heist
  0 siblings, 1 reply; 15+ messages in thread
From: Dev @ 2019-01-04  5:22 UTC (permalink / raw)
  To: Pete Heist; +Cc: bloat

Okay, so this is what I have for /etc/network/interfaces (replaced eth0-2 with what Debian Buster actually calls them):

auto lo br0
iface lo inet loopback

allow-hotplug enp8s0
iface enp8s0 inet static
	address 192.168.10.200
	netmask 255.255.255.0
	gateway 192.168.10.1
	dns-nameservers 8.8.8.8

iface enp7s6 inet manual
	tc qdisc add dev enp7s6 root fq_codel

iface enp9s2 inet manual
	tc qdisc add dev enp9s2 root fq_codel

# Bridge setup
iface br0 inet static
	bridge_ports enp7s6 enp9s2
	#bridge_stp on
		address 192.168.3.50
		broadcast 192.168.3.255
		netmask 255.255.255.0
		gateway 192.168.3.1
		dns-nameservers 8.8.8.8

so my bridge interfaces now show:

>: tc -s qdisc show dev enp7s6
qdisc fq_codel 0: root refcnt 2 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms memory_limit 32Mb ecn
 Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0

and 

>: tc -s qdisc show dev enp9s2
qdisc fq_codel 0: root refcnt 2 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms memory_limit 32Mb ecn
 Sent 12212 bytes 80 pkt (dropped 0, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
  new_flows_len 0 old_flows_len 0

with my bridge like:

ip a 

5: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 00:04:5a:86:a2:84 brd ff:ff:ff:ff:ff:ff
    inet 192.168.3.50/24 brd 192.168.3.255 scope global br0
       valid_lft forever preferred_lft forever
    inet6 fe80::204:5aff:fe86:a284/64 scope link
       valid_lft forever preferred_lft forever

So do I have it configured right or should I change something? I haven’t gotten a chance to stress test it yet, but will try tomorrow.

- Dev

> On Jan 3, 2019, at 10:54 AM, Pete Heist <pete@heistp.net> wrote:
> 
> 
>> On Jan 3, 2019, at 7:12 PM, Toke Høiland-Jørgensen <toke@toke.dk> wrote:
>> 
>> Dev <dev@logicalwebhost.com> writes:
>> 
>>> I’m trying to create a bridge on eth1 and eth2, with a management
>>> interface on eth0, then enable fq_codel on the bridge. My bridge
>>> interface looks like:
>> 
>> You'll probably want to put FQ-CoDel on the underlying physical
>> interfaces, as those are the ones actually queueing the traffic...
> 
> I can confirm that. I'm currently using a bridge on my home router. eth3 and eth4 are bridged, eth4 is connected to the CPE device which goes out to the Internet, eth4 is where queue management is applied, and this works. It does not work to add this to br0…
> 


^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [Bloat] fq_codel on bridge multiple subnets?
  2019-01-04  5:22     ` Dev
@ 2019-01-04  9:19       ` Pete Heist
  2019-01-04 18:20         ` Dev
  0 siblings, 1 reply; 15+ messages in thread
From: Pete Heist @ 2019-01-04  9:19 UTC (permalink / raw)
  To: Dev; +Cc: bloat

It’s a little different for me in that I’m rate limiting on one of the physical interfaces, but otherwise, your setup should reduce latency under load when the Ethernet devices are being used at line rate.

If your WAN interface is enp8s0 and goes out to the Internet, you may want to shape there (htb+fq_codel or cake) depending on what upstream device is in use.

If enp7s6 and enp9s2 are only carrying LAN traffic, and not traffic that goes out to the Internet, fq_codel’s target and interval could be reduced.
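
For example, a hedged sketch for the WAN side (the 100mbit figure is a placeholder to set just below your real uplink rate, and cake needs sch_cake support in the kernel):

tc qdisc replace dev enp8s0 root cake bandwidth 100mbit

or, with htb + fq_codel:

tc qdisc replace dev enp8s0 root handle 1: htb default 1
tc class add dev enp8s0 parent 1: classid 1:1 htb rate 100mbit
tc qdisc add dev enp8s0 parent 1:1 fq_codel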

> On Jan 4, 2019, at 6:22 AM, Dev <dev@logicalwebhost.com> wrote:
> 
> Okay, so this is what I have for /etc/network/interfaces (replaced eth0-2 with what Debian Buster actually calls them):
> 
> auto lo br0
> iface lo inet loopback
> 
> allow-hotplug enp8s0
> iface enp8s0 inet static
> 	address 192.168.10.200
> 	netmask 255.255.255.0
> 	gateway 192.168.10.1
> 	dns-nameservers 8.8.8.8
> 
> iface enp7s6 inet manual
> 	tc qdisc add dev enp7s6 root fq_codel
> 
> iface enp9s2 inet manual
> 	tc qdisc add dev enp9s2 root fq_codel
> 
> # Bridge setup
> iface br0 inet static
> 	bridge_ports enp7s6 enp9s2
> 	#bridge_stp on
> 		address 192.168.3.50
> 		broadcast 192.168.3.255
> 		netmask 255.255.255.0
> 		gateway 192.168.3.1
> 		dns-nameservers 8.8.8.8
> 
> so my bridge interfaces now show:
> 
>> : tc -s qdisc show dev enp7s6
> qdisc fq_codel 0: root refcnt 2 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms memory_limit 32Mb ecn
> Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
> backlog 0b 0p requeues 0
>  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
>  new_flows_len 0 old_flows_len 0
> 
> and 
> 
>> : tc -s qdisc show dev enp9s2
> qdisc fq_codel 0: root refcnt 2 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms memory_limit 32Mb ecn
> Sent 12212 bytes 80 pkt (dropped 0, overlimits 0 requeues 0)
> backlog 0b 0p requeues 0
>  maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
>  new_flows_len 0 old_flows_len 0
> 
> with my bridge like:
> 
> ip a 
> 
> 5: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
>    link/ether 00:04:5a:86:a2:84 brd ff:ff:ff:ff:ff:ff
>    inet 192.168.3.50/24 brd 192.168.3.255 scope global br0
>       valid_lft forever preferred_lft forever
>    inet6 fe80::204:5aff:fe86:a284/64 scope link
>       valid_lft forever preferred_lft forever
> 
> So do I have it configured right or should I change something? I haven’t gotten a chance to stress test it yet, but will try tomorrow.
> 
> - Dev
> 
>> On Jan 3, 2019, at 10:54 AM, Pete Heist <pete@heistp.net> wrote:
>> 
>> 
>>> On Jan 3, 2019, at 7:12 PM, Toke Høiland-Jørgensen <toke@toke.dk> wrote:
>>> 
>>> Dev <dev@logicalwebhost.com> writes:
>>> 
>>>> I’m trying to create a bridge on eth1 and eth2, with a management
>>>> interface on eth0, then enable fq_codel on the bridge. My bridge
>>>> interface looks like:
>>> 
>>> You'll probably want to put FQ-CoDel on the underlying physical
>>> interfaces, as those are the ones actually queueing the traffic...
>> 
>> I can confirm that. I'm currently using a bridge on my home router. eth3 and eth4 are bridged, eth4 is connected to the CPE device which goes out to the Internet, eth4 is where queue management is applied, and this works. It does not work to add this to br0…
>> 
> 


^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [Bloat] fq_codel on bridge multiple subnets?
  2019-01-04  9:19       ` Pete Heist
@ 2019-01-04 18:20         ` Dev
  2019-01-04 18:57           ` Dave Taht
  0 siblings, 1 reply; 15+ messages in thread
From: Dev @ 2019-01-04 18:20 UTC (permalink / raw)
  To: Pete Heist; +Cc: bloat

I want to pass multiple hundreds of Mbps across this bridge very consistently, across multiple subnets to different enterprise gateways which then connect to the Internet. I’ll plan a little test to see how this does under load. Hopefully I don’t need special NICs to handle it?

> On Jan 4, 2019, at 1:19 AM, Pete Heist <pete@heistp.net> wrote:
> 
> It’s a little different for me in that I’m rate limiting on one of the physical interfaces, but otherwise, your setup should reduce latency under load when the Ethernet devices are being used at line rate.
> 
> If your WAN interface is enp8s0 and goes out to the Internet, you may want to shape there (htb+fq_codel or cake) depending on what upstream device is in use.
> 
> If enp7s6 and enp9s2 are only carrying LAN traffic, and not traffic that goes out to the Internet, fq_codel’s target and interval could be reduced.
> 
>> On Jan 4, 2019, at 6:22 AM, Dev <dev@logicalwebhost.com> wrote:
>> 
>> Okay, so this is what I have for /etc/network/interfaces (replaced eth0-2 with what Debian Buster actually calls them):
>> 
>> auto lo br0
>> iface lo inet loopback
>> 
>> allow-hotplug enp8s0
>> iface enp8s0 inet static
>> 	address 192.168.10.200
>> 	netmask 255.255.255.0
>> 	gateway 192.168.10.1
>> 	dns-nameservers 8.8.8.8
>> 
>> iface enp7s6 inet manual
>> 	tc qdisc add dev enp7s6 root fq_codel
>> 
>> iface enp9s2 inet manual
>> 	tc qdisc add dev enp9s2 root fq_codel
>> 
>> # Bridge setup
>> iface br0 inet static
>> 	bridge_ports enp7s6 enp9s2
>> 	#bridge_stp on
>> 		address 192.168.3.50
>> 		broadcast 192.168.3.255
>> 		netmask 255.255.255.0
>> 		gateway 192.168.3.1
>> 		dns-nameservers 8.8.8.8
>> 
>> so my bridge interfaces now show:
>> 
>>> : tc -s qdisc show dev enp7s6
>> qdisc fq_codel 0: root refcnt 2 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms memory_limit 32Mb ecn
>> Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
>> backlog 0b 0p requeues 0
>> maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
>> new_flows_len 0 old_flows_len 0
>> 
>> and 
>> 
>>> : tc -s qdisc show dev enp9s2
>> qdisc fq_codel 0: root refcnt 2 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms memory_limit 32Mb ecn
>> Sent 12212 bytes 80 pkt (dropped 0, overlimits 0 requeues 0)
>> backlog 0b 0p requeues 0
>> maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
>> new_flows_len 0 old_flows_len 0
>> 
>> with my bridge like:
>> 
>> ip a 
>> 
>> 5: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
>>   link/ether 00:04:5a:86:a2:84 brd ff:ff:ff:ff:ff:ff
>>   inet 192.168.3.50/24 brd 192.168.3.255 scope global br0
>>      valid_lft forever preferred_lft forever
>>   inet6 fe80::204:5aff:fe86:a284/64 scope link
>>      valid_lft forever preferred_lft forever
>> 
>> So do I have it configured right or should I change something? I haven’t gotten a chance to stress test it yet, but will try tomorrow.
>> 
>> - Dev
>> 
>>> On Jan 3, 2019, at 10:54 AM, Pete Heist <pete@heistp.net> wrote:
>>> 
>>> 
>>>> On Jan 3, 2019, at 7:12 PM, Toke Høiland-Jørgensen <toke@toke.dk> wrote:
>>>> 
>>>> Dev <dev@logicalwebhost.com> writes:
>>>> 
>>>>> I’m trying to create a bridge on eth1 and eth2, with a management
>>>>> interface on eth0, then enable fq_codel on the bridge. My bridge
>>>>> interface looks like:
>>>> 
>>>> You'll probably want to put FQ-CoDel on the underlying physical
>>>> interfaces, as those are the ones actually queueing the traffic...
>>> 
>>> I can confirm that. I'm currently using a bridge on my home router. eth3 and eth4 are bridged, eth4 is connected to the CPE device which goes out to the Internet, eth4 is where queue management is applied, and this works. It does not work to add this to br0…
>>> 
>> 
> 


^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [Bloat] fq_codel on bridge multiple subnets?
  2019-01-04 18:20         ` Dev
@ 2019-01-04 18:57           ` Dave Taht
  2019-01-04 20:33             ` [Bloat] fq_codel on bridge throughput test/config Dev
  0 siblings, 1 reply; 15+ messages in thread
From: Dave Taht @ 2019-01-04 18:57 UTC (permalink / raw)
  To: Dev; +Cc: Pete Heist, bloat

Well, good NICs are best. :) One reason the apu2 series is so
popular is that it uses the extremely good Intel chipset.

On Fri, Jan 4, 2019 at 10:20 AM Dev <dev@logicalwebhost.com> wrote:
>
> I want to pass multiple hundreds of Mbps across this bridge very consistently, and across multiple subnets to different enterprise gateways which then connect to the internet, will plan a little test to see how this does under load. Hopefully I don’t need special NIC’s to handle it?
>
> > On Jan 4, 2019, at 1:19 AM, Pete Heist <pete@heistp.net> wrote:
> >
> > It’s a little different for me in that I’m rate limiting on one of the physical interfaces, but otherwise, your setup should reduce latency under load when the Ethernet devices are being used at line rate.
> >
> > If your WAN interface is enp8s0 and goes out to the Internet, you may want to shape there (htb+fq_codel or cake) depending on what upstream device is in use.
> >
> > If enp7s6 and enp9s2 are only carrying LAN traffic, and not traffic that goes out to the Internet, fq_codel’s target and interval could be reduced.
> >
> >> On Jan 4, 2019, at 6:22 AM, Dev <dev@logicalwebhost.com> wrote:
> >>
> >> Okay, so this is what I have for /etc/network/interfaces (replaced eth0-2 with what Debian Buster actually calls them):
> >>
> >> auto lo br0
> >> iface lo inet loopback
> >>
> >> allow-hotplug enp8s0
> >> iface enp8s0 inet static
> >>      address 192.168.10.200
> >>      netmask 255.255.255.0
> >>      gateway 192.168.10.1
> >>      dns-nameservers 8.8.8.8
> >>
> >> iface enp7s6 inet manual
> >>      tc qdisc add dev enp7s6 root fq_codel
> >>
> >> iface enp9s2 inet manual
> >>      tc qdisc add dev enp9s2 root fq_codel
> >>
> >> # Bridge setup
> >> iface br0 inet static
> >>      bridge_ports enp7s6 enp9s2
> >>      #bridge_stp on
> >>              address 192.168.3.50
> >>              broadcast 192.168.3.255
> >>              netmask 255.255.255.0
> >>              gateway 192.168.3.1
> >>              dns-nameservers 8.8.8.8
> >>
> >> so my bridge interfaces now show:
> >>
> >>> : tc -s qdisc show dev enp7s6
> >> qdisc fq_codel 0: root refcnt 2 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms memory_limit 32Mb ecn
> >> Sent 0 bytes 0 pkt (dropped 0, overlimits 0 requeues 0)
> >> backlog 0b 0p requeues 0
> >> maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
> >> new_flows_len 0 old_flows_len 0
> >>
> >> and
> >>
> >>> : tc -s qdisc show dev enp9s2
> >> qdisc fq_codel 0: root refcnt 2 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms memory_limit 32Mb ecn
> >> Sent 12212 bytes 80 pkt (dropped 0, overlimits 0 requeues 0)
> >> backlog 0b 0p requeues 0
> >> maxpacket 0 drop_overlimit 0 new_flow_count 0 ecn_mark 0
> >> new_flows_len 0 old_flows_len 0
> >>
> >> with my bridge like:
> >>
> >> ip a
> >>
> >> 5: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
> >>   link/ether 00:04:5a:86:a2:84 brd ff:ff:ff:ff:ff:ff
> >>   inet 192.168.3.50/24 brd 192.168.3.255 scope global br0
> >>      valid_lft forever preferred_lft forever
> >>   inet6 fe80::204:5aff:fe86:a284/64 scope link
> >>      valid_lft forever preferred_lft forever
> >>
> >> So do I have it configured right or should I change something? I haven’t gotten a chance to stress test it yet, but will try tomorrow.
> >>
> >> - Dev
> >>
> >>> On Jan 3, 2019, at 10:54 AM, Pete Heist <pete@heistp.net> wrote:
> >>>
> >>>
> >>>> On Jan 3, 2019, at 7:12 PM, Toke Høiland-Jørgensen <toke@toke.dk> wrote:
> >>>>
> >>>> Dev <dev@logicalwebhost.com> writes:
> >>>>
> >>>>> I’m trying to create a bridge on eth1 and eth2, with a management
> >>>>> interface on eth0, then enable fq_codel on the bridge. My bridge
> >>>>> interface looks like:
> >>>>
> >>>> You'll probably want to put FQ-CoDel on the underlying physical
> >>>> interfaces, as those are the ones actually queueing the traffic...
> >>>
> >>> I can confirm that. I'm currently using a bridge on my home router. eth3 and eth4 are bridged, eth4 is connected to the CPE device which goes out to the Internet, eth4 is where queue management is applied, and this works. It does not work to add this to br0…
> >>>
> >>
> >
>
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat



-- 

Dave Täht
CTO, TekLibre, LLC
http://www.teklibre.com
Tel: 1-831-205-9740

^ permalink raw reply	[flat|nested] 15+ messages in thread

* [Bloat] fq_codel on bridge throughput test/config
  2019-01-04 18:57           ` Dave Taht
@ 2019-01-04 20:33             ` Dev
  2019-01-04 20:40               ` Toke Høiland-Jørgensen
  2019-01-05  0:17               ` Stephen Hemminger
  0 siblings, 2 replies; 15+ messages in thread
From: Dev @ 2019-01-04 20:33 UTC (permalink / raw)
  To: bloat

Okay, thanks to some help from the list, I’ve configured a transparent bridge running fq_codel which works for multiple subnet traffic. Here’s my setup:

Machine A ——— 192.168.10.200 — — bridge fq_codel machine B —— laptop C 192.168.10.150
Machine D ——— 192.168.3.50 — —| 

On Machine A:

straight gigE interface 192.168.10.200

Bridge Machine B: enp3s0 mgmt interface
				enp2s0 bridge interface 1
				enp1s0 bridge interface 2
				br0 bridge for 1 and 2
	
	# The loopback network interface 
	auto lo br0 
	iface lo inet loopback 

	# The primary network interface 
	allow-hotplug enp3s0 
	iface enp3s0 inet static 
		 address 172.16.0.5/24 
		 gateway 172.16.0.5 
		dns-nameservers 8.8.8.8

 	iface enp1s0 inet manual 
		 tc qdisc add dev enp1s0 root fq_codel 

	 iface enp2s0 inet manual 
		tc qdisc add dev enp2s0 root fq_codel 

	 # Bridge setup 
	iface br0 inet static 
		bridge_ports enp1s0 enp2s0 
		address 192.168.3.75 
		broadcast 192.168.3.255 
		netmask 255.255.255.0 
		gateway 192.168.3

note: I still have to run this command manually later; I’ll troubleshoot it at some point (unless you have suggestions to make it work):

tc qdisc add dev enp1s0 root fq_codel
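
A likely fix, sketched on the assumption that this is Debian ifupdown: a bare command in an iface stanza isn’t executed, so it needs a post-up prefix, e.g.:

	iface enp1s0 inet manual
		post-up tc qdisc replace dev enp1s0 root fq_codel

	iface enp2s0 inet manual
		post-up tc qdisc replace dev enp2s0 root fq_codel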

To start, my pings from Machine A to Laptop C were around 0.75 msec. Then I flooded the link from Machine A to Laptop C using:

dd if=/dev/urandom | ssh user@192.168.10.150 dd of=/dev/null

Then my pings went up to around 170 msec. Once I enabled fq_codel on the bridge machine B, my pings dropped to around 10 msec.
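
A side note, hedged: dd from /dev/urandom can be CPU-bound well below line rate, so a dedicated load generator such as iperf3 (assuming it’s installed on both ends) may stress the link harder:

iperf3 -s                        # on laptop C
iperf3 -c 192.168.10.150 -t 60   # on machine A, with ping running in another terminal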

Hope this helps someone else working on a similar setup.

- Dev


^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [Bloat] fq_codel on bridge throughput test/config
  2019-01-04 20:33             ` [Bloat] fq_codel on bridge throughput test/config Dev
@ 2019-01-04 20:40               ` Toke Høiland-Jørgensen
  2019-01-16  0:54                 ` Dev
  2019-01-05  0:17               ` Stephen Hemminger
  1 sibling, 1 reply; 15+ messages in thread
From: Toke Høiland-Jørgensen @ 2019-01-04 20:40 UTC (permalink / raw)
  To: Dev, bloat

Dev <dev@logicalwebhost.com> writes:

> Okay, thanks to some help from the list, I’ve configured a transparent bridge running fq_codel which works for multiple subnet traffic. Here’s my setup:
>
> Machine A ——— 192.168.10.200 — — bridge fq_codel machine B —— laptop C 192.168.10.150
> Machine D ——— 192.168.3.50 — —| 
>
> On Machine A:
>
> straight gigE interface 192.168.10.200
>
> Bridge Machine B: enp3s0 mgmt interface
> 				enp2s0 bridge interface 1
> 				enp1s0 bridge interface 2
> 				br0 bridge for 1 and 2
> 	
> 	# The loopback network interface 
> 	auto lo br0 
> 	iface lo inet loopback 
>
> 	# The primary network interface 
> 	allow-hotplug enp3s0 
> 	iface enp3s0 inet static 
> 		 address 172.16.0.5/24 
> 		 gateway 172.16.0.5 
> 		dns-nameservers 8.8.8.8
>
>  	iface enp1s0 inet manual 
> 		 tc qdisc add dev enp1s0 root fq_codel 
>
> 	 iface enp2s0 inet manual 
> 		tc qdisc add dev enp2s0 root fq_codel 
>
> 	 # Bridge setup 
> 	iface br0 inet static 
> 		bridge_ports enp1s0 enp2s0 
> 		address 192.168.3.75 
> 		broadcast 192.168.3.255 
> 		netmask 255.255.255.0 
> 		gateway 192.168.3
>
> note: I still have to run this command later, will troubleshoot at
> some point (unless you have suggestions to make it work):

You can try setting the default qdisc to fq_codel; put:

net.core.default_qdisc=fq_codel

in /etc/sysctl.conf
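
To apply it without a reboot, a minimal sketch (as root):

sysctl -w net.core.default_qdisc=fq_codel

Note this only affects qdiscs attached afterwards, e.g. when an interface comes up or its root qdisc is deleted and re-created.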

-Toke

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [Bloat] fq_codel on bridge throughput test/config
  2019-01-04 20:33             ` [Bloat] fq_codel on bridge throughput test/config Dev
  2019-01-04 20:40               ` Toke Høiland-Jørgensen
@ 2019-01-05  0:17               ` Stephen Hemminger
  1 sibling, 0 replies; 15+ messages in thread
From: Stephen Hemminger @ 2019-01-05  0:17 UTC (permalink / raw)
  To: Dev; +Cc: bloat

On Fri, 4 Jan 2019 12:33:28 -0800
Dev <dev@logicalwebhost.com> wrote:

> Okay, thanks to some help from the list, I’ve configured a transparent bridge running fq_codel which works for multiple subnet traffic. Here’s my setup:
> 
> Machine A ——— 192.168.10.200 — — bridge fq_codel machine B —— laptop C 192.168.10.150
> Machine D ——— 192.168.3.50 — —| 
> 
> On Machine A:
> 
> straight gigE interface 192.168.10.200
> 
> Bridge Machine B: enp3s0 mgmt interface
> 				enp2s0 bridge interface 1
> 				enp1s0 bridge interface 2
> 				br0 bridge for 1 and 2
> 	
> 	# The loopback network interface 
> 	auto lo br0 
> 	iface lo inet loopback 
> 
> 	# The primary network interface 
> 	allow-hotplug enp3s0 
> 	iface enp3s0 inet static 
> 		 address 172.16.0.5/24 
> 		 gateway 172.16.0.5 
> 		dns-nameservers 8.8.8.8
> 
>  	iface enp1s0 inet manual 
> 		 tc qdisc add dev enp1s0 root fq_codel 
> 
> 	 iface enp2s0 inet manual 
> 		tc qdisc add dev enp2s0 root fq_codel 
> 
> 	 # Bridge setup 
> 	iface br0 inet static 
> 		bridge_ports enp1s0 enp2s0 
> 		address 192.168.3.75 
> 		broadcast 192.168.3.255 
> 		netmask 255.255.255.0 
> 		gateway 192.168.3
> 
> note: I still have to run this command later, will troubleshoot at some point (unless you have suggestions to make it work):
> 
> tc qdisc add dev enp1s0 root fq_codel
> 
> To start, my pings from Machine A to Laptop C were around 0.75 msec, then I flooded the link from Machine A to Laptop C using:
> 
> dd if=/dev/urandom | ssh user@192.168.10.150 dd of=/dev/null
> 
> Then my pings went up to around 170 msec. Once I enabled fq_codel on the bridge machine B, my pings dropped to around 10 msec.
> 
> Hope this helps someone else working on a similar setup.
> 
> - Dev
> 
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat

Applying a qdisc to a bridge device only affects the local traffic
addressed to that bridge (i.e. br0). It has no effect on traffic transiting
through the bridge. Since the bridge pseudo-device is normally queueless,
putting a qdisc on br0 has no effect: packets transmitted
on br0 go directly to the underlying device, so even if you put a
qdisc on br0 it isn't going to do what you expect (unless you layer some
rate control into the stack).
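
A quick way to see this on the box itself (a sketch, using the interface names from earlier in the thread):

ip link show br0              # a bridge normally reports "qdisc noqueue" here
tc -s qdisc show dev enp1s0   # forwarded traffic shows up in the member port's counters instead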

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [Bloat] fq_codel on bridge throughput test/config
  2019-01-04 20:40               ` Toke Høiland-Jørgensen
@ 2019-01-16  0:54                 ` Dev
  0 siblings, 0 replies; 15+ messages in thread
From: Dev @ 2019-01-16  0:54 UTC (permalink / raw)
  To: Toke Høiland-Jørgensen; +Cc: bloat

[-- Attachment #1: Type: text/plain, Size: 2604 bytes --]

Okay, here’s what I have to get fq_codel enabled on the bridge ports at boot:

crontab -e 

# m h  dom mon dow   command
@reboot /usr/src/start.fq_codel.sh

vi /usr/src/start.fq_codel.sh
  #!/bin/bash

  /sbin/tc qdisc add dev enp1s0 root fq_codel
  /sbin/tc qdisc add dev enp2s0 root fq_codel

chmod 755 /usr/src/start.fq_codel.sh

It seems to be working under load (so far), though load will increase in the next 24 hours, so I’ll watch it. Here’s what I get:

tc -s qdisc show dev enp2s0
  qdisc fq_codel 8002: root refcnt 2 limit 10240p flows 1024 quantum 1514 target 5.0ms interval 100.0ms memory_limit 32Mb ecn
  Sent 4273930406 bytes 3059139 pkt (dropped 0, overlimits 0 requeues 1269)
  backlog 0b 0p requeues 1269
  maxpacket 54504 drop_overlimit 0 new_flow_count 401 ecn_mark 0
  new_flows_len 0 old_flows_len 0

Should I change any settings? Where can I learn more about what my requeues should be, or really any of the other settings?

- Dev 

> On Jan 4, 2019, at 12:40 PM, Toke Høiland-Jørgensen <toke@toke.dk> wrote:
> 
> Dev <dev@logicalwebhost.com> writes:
> 
>> Okay, thanks to some help from the list, I’ve configured a transparent bridge running fq_codel which works for multiple subnet traffic. Here’s my setup:
>> 
>> Machine A ——— 192.168.10.200 — — bridge fq_codel machine B —— laptop C 192.168.10.150
>> Machine D ——— 192.168.3.50 — —| 
>> 
>> On Machine A:
>> 
>> straight gigE interface 192.168.10.200
>> 
>> Bridge Machine B: enp3s0 mgmt interface
>> 				enp2s0 bridge interface 1
>> 				enp1s0 bridge interface 2
>> 				br0 bridge for 1 and 2
>> 	
>> 	# The loopback network interface 
>> 	auto lo br0 
>> 	iface lo inet loopback 
>> 
>> 	# The primary network interface 
>> 	allow-hotplug enp3s0 
>> 	iface enp3s0 inet static 
>> 		 address 172.16.0.5/24 
>> 		 gateway 172.16.0.5 
>> 		dns-nameservers 8.8.8.8
>> 
>> 	iface enp1s0 inet manual 
>> 		 tc qdisc add dev enp1s0 root fq_codel 
>> 
>> 	 iface enp2s0 inet manual 
>> 		tc qdisc add dev enp2s0 root fq_codel 
>> 
>> 	 # Bridge setup 
>> 	iface br0 inet static 
>> 		bridge_ports enp1s0 enp2s0 
>> 		address 192.168.3.75 
>> 		broadcast 192.168.3.255 
>> 		netmask 255.255.255.0 
>> 		gateway 192.168.3
>> 
>> note: I still have to run this command later, will troubleshoot at
>> some point (unless you have suggestions to make it work):
> 
> You can try setting the default qdisc to fq_codel; put:
> 
> net.core.default_qdisc=fq_codel
> 
> in /etc/sysctl.conf
> 
> -Toke


[-- Attachment #2: Type: text/html, Size: 13845 bytes --]

^ permalink raw reply	[flat|nested] 15+ messages in thread

end of thread, other threads:[~2019-01-16  0:54 UTC | newest]

Thread overview: 15+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2019-01-03 17:32 [Bloat] fq_codel on bridge multiple subnets? Dev
2019-01-03 18:12 ` Toke Høiland-Jørgensen
2019-01-03 18:54   ` Pete Heist
2019-01-04  5:22     ` Dev
2019-01-04  9:19       ` Pete Heist
2019-01-04 18:20         ` Dev
2019-01-04 18:57           ` Dave Taht
2019-01-04 20:33             ` [Bloat] fq_codel on bridge throughput test/config Dev
2019-01-04 20:40               ` Toke Høiland-Jørgensen
2019-01-16  0:54                 ` Dev
2019-01-05  0:17               ` Stephen Hemminger
2019-01-03 21:23 ` [Bloat] realtime buffer monitoring? Dev
2019-01-03 21:51   ` Pete Heist
2019-01-03 22:31     ` Dave Taht
2019-01-04  2:06       ` Dev
