[Cerowrt-devel] [Bloat] talking at linux plumbers in portugal next week

Dave Taht dave.taht at gmail.com
Tue Sep 3 10:21:26 EDT 2019


On Tue, Sep 3, 2019 at 5:23 AM Mikael Abrahamsson <swmike at swm.pp.se> wrote:
>
> On Mon, 2 Sep 2019, Dave Taht wrote:
>
> > with copy-pasted parameters set in the 90s - openwrt's default, last I
> > looked, was 25/sec.
>
> -A syn_flood -p tcp -m tcp --tcp-flags FIN,SYN,RST,ACK SYN -m limit --limit 25/sec --limit-burst 50 -m comment --comment "!fw3" -j RETURN
> -A syn_flood -m comment --comment "!fw3" -j DROP
>
> Well, it's got a burst-size of 50. I agree that this is quite
> conservative.
>
> However, at least in my home we're not seeing drops:
>
> # iptables -nvL | grep -A 4 "Chain syn_flood"
> Chain syn_flood (1 references)
>   pkts bytes target     prot opt in     out     source               destination
>   2296  113K RETURN     tcp  --  *      *       0.0.0.0/0            0.0.0.0/0            tcp flags:0x17/0x02 limit: avg 25/sec burst 50 /* !fw3 */
>      0     0 DROP       all  --  *      *       0.0.0.0/0            0.0.0.0/0            /* !fw3 */
>
> But you might be right that in places with a lot more clients then this
> might indeed cause problems.

Well, *I* long ago upped those params by 10x and don't see syn
drops on my backbone either. But I rather suspect the rest of the
world just copy-pasted it. It should scale as a function of bandwidth,
I suppose, or get updated as a side effect of setting QoS - or just
get bumped up. File a bug with openwrt? Take a hard look at
other firewall designs?
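As a sketch of what a 10x bump would look like, assuming the synflood_* option names that recent fw3 releases expose in /etc/config/firewall (verify against your image before applying):

```shell
# Raise OpenWrt's fw3 SYN-flood limiter 10x over the stock 25/s rate
# and burst of 50. Option names assume a recent fw3 "defaults" section;
# check /etc/config/firewall on your own build first.
uci set firewall.@defaults[0].synflood_protect='1'
uci set firewall.@defaults[0].synflood_rate='250/s'
uci set firewall.@defaults[0].synflood_burst='500'
uci commit firewall
/etc/init.d/firewall restart
```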

Like I said, though, my big question is whether there's a browser stat,
or some other easily accessible stat, showing how often syns are
rejected. Another context for this was syn negotiation with ecn on,
and the fallback behavior when it fails.
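I don't know of a browser-exposed counter, but on Linux the TcpExt counters in /proc/net/netstat (also shown by nstat) give a rough proxy: TCPSynRetrans counts retransmitted SYNs, and ListenDrops counts SYNs dropped at full listen queues. A self-contained sketch parsing the two-line TcpExt format, with a captured sample inlined (read the real file on a live box):

```shell
# /proc/net/netstat pairs a header line of counter names with a line of
# values. Pair them up with awk; the sample below stands in for the file.
sample='TcpExt: SyncookiesSent ListenDrops TCPSynRetrans
TcpExt: 0 17 342'
echo "$sample" | awk '
  NR == 1 { for (i = 2; i <= NF; i++) name[i] = $i }
  NR == 2 { for (i = 2; i <= NF; i++) print name[i], $i }'
```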

Interestingly, I've also seen a pretty big uptick in ecn marking over
the last year or so. On one uplink (we do have a lot of guests that
run apple gear), ecn marks are now at over 10% of the drop ratio on
outbound.

This box is - I hope - the last cerowrt box running in the universe,
and the only reason it ever goes down is a long-duration power
failure. I've been meaning to replace it for ages...

root at lounge:~# uptime
 07:14:53 up 55 days, 17:14,  load average: 0.16, 0.09, 0.10

outbound:

qdisc fq_codel 120: parent 1:12 limit 1001p flows 1024 quantum 300 target 5.0ms interval 100.0ms ecn
 Sent 159378714029 bytes 1038654784 pkt (dropped 426065, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 1514 drop_overlimit 0 new_flow_count 213282954 ecn_mark 48220
  new_flows_len 0 old_flows_len 1
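For reference, working the "over 10%" figure from the outbound counters above, taking ecn marks as a share of all congestion signals (marks plus drops):

```shell
# ecn_mark and dropped come from the outbound fq_codel stats above.
dropped=426065
ecn_marked=48220
awk -v d="$dropped" -v m="$ecn_marked" \
  'BEGIN { printf "ecn share of congestion signals: %.1f%%\n", 100 * m / (m + d) }'
```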

inbound: (where comcast remarks most packets to CS1)

qdisc fq_codel 120: parent 1:12 limit 1001p flows 1024 quantum 1500 target 5.0ms interval 100.0ms ecn
 Sent 40391986695 bytes 34710741 pkt (dropped 420, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 1514 drop_overlimit 0 new_flow_count 5687382 ecn_mark 0
  new_flows_len 0 old_flows_len 2
qdisc fq_codel 130: parent 1:13 limit 1001p flows 1024 quantum 300 target 5.0ms interval 100.0ms ecn
 Sent 2285974845172 bytes 1748071724 pkt (dropped 61231, overlimits 0 requeues 0)
 backlog 0b 0p requeues 0
  maxpacket 1514 drop_overlimit 0 new_flow_count 229072930 ecn_mark 344
  new_flows_len 0 old_flows_len 1


>
> --
> Mikael Abrahamsson    email: swmike at swm.pp.se



-- 

Dave Täht
CTO, TekLibre, LLC
http://www.teklibre.com
Tel: 1-831-205-9740

