<p dir="ltr">This got mangled by my IP addr filter</p>
<p dir="ltr">On Jan 24, 2015 11:56 PM, "Dave Taht" <<a href="mailto:dave.taht@gmail.com">dave.taht@gmail.com</a>> wrote:<br>
><br>
> I want to make clear that I support dlang's design in the abstract... and am just arguing because it is a slow day.<br>
><br>
> On Sat, Jan 24, 2015 at 10:44 PM, David Lang <<a href="mailto:david@lang.hm">david@lang.hm</a>> wrote:<br>
> > On Sat, 24 Jan 2015, Dave Taht wrote:<br>
> ><br>
> >>>> A side comment, meant to discourage continuing to bridge rather than<br>
> >>>> route.<br>
> >>>><br>
> >>>> There's no reason that the APs cannot have different IP addresses, but<br>
> >>>> a<br>
> >>>> common ESSID. Roaming between them would be like roaming among mesh<br>
> >>>> subnets. Assuming you are securing your APs' air interfaces using<br>
> >>>> encryption<br>
> >>>> over the air, you are already re-authenticating as you move from AP to<br>
> >>>> AP.<br>
> >>>> So using routing rather than bridging is a good idea for all the reasons<br>
> >>>> that routing rather than bridging is better for mesh.<br>
> >>><br>
> >>><br>
> >>><br>
> >>> The problem with doing this is that all existing TCP connections will<br>
> >>> break when you move from one AP to another. While some apps will quickly<br>
> >>> notice this and establish new connections, many apps will not, and this<br>
> >>> will cause noticeable disruption to the user.<br>
> >><br>
> >><br>
> >> I am under the impression that network-manager and linux, at least,<br>
> >> tend to renegotiate<br>
> >> IPv6 addresses on a down/up, and preserve ipv4.<br>
> ><br>
> ><br>
> > It can't preserve the ipv4 address if you end up on a different network<br>
> > address range. Trying to have lots of separate networks with the same IP<br>
> > addresses would mean doing NAT at each network, and if you did that, then<br>
> > when you ended up on a different AP with the same IP address, the NAT<br>
> > tables would not have records of your connections and would terminate them<br>
> > when you tried to send the next packets.<br>
><br>
> Hmm? The first thing I ever do to a router is renumber it to a unique IP address range,<br>
> and rename the subnet in dns to something unique. The 3 sed lines for this are on a cerowrt web page somewhere. Adding ipv6 statically is a pita, but doable with care and a uci script, and mildly more doable as hnetd matures.<br>
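><br>
> (the uci equivalent is roughly this - the addresses and names below are only placeholders, not the actual cerowrt script):<br>
> # give this AP its own /24 out of the larger block, and its own subdomain<br>
> uci set network.lan.ipaddr='172.20.5.1'<br>
> uci set dhcp.@dnsmasq[0].domain='ap5.example.home'<br>
> uci set dhcp.@dnsmasq[0].local='/ap5.example.home/'<br>
> uci commit network; uci commit dhcp<br>
> /etc/init.d/network restart; /etc/init.d/dnsmasq restart<br>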
><br>
> I run local dns services on each in the hope that at least some lookups will be cached, and a local dhcp server to serve addresses out of that range. I turn off dhcp default route fetching on each router's external interface and use babel instead to find the right route(s) out of the system.<br>
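><br>
> (on openwrt that part is just a couple of dhcp options on the wan interface, something like the following, with babel then supplying the default routes):<br>
> uci set network.wan.defaultroute='0'   # don't take a default route from dhcp<br>
> uci set network.wan.peerdns='0'        # keep using the local dnsmasq<br>
> uci commit network<br>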
><br>
> On the NAT front, there is no nat on the internal routers, just a flat address space (a /14 in my case). I push all the nat to the main egress gateway(s), and in a case like yours would probably use multiple external IPs and dnat rather than masquerade the entire subnet on one, to free up port space. You rapidly run out of ports in a natted environment with that many users; otherwise I've had to turn down NAT timeouts for udp in particular to truly unreasonable levels (20 seconds in some cases).<br>
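><br>
> (the knobs for that look roughly like this - the interface name and external addresses below are made up):<br>
> sysctl -w net.netfilter.nf_conntrack_udp_timeout=20<br>
> sysctl -w net.netfilter.nf_conntrack_udp_timeout_stream=60<br>
> # pin each internal range to its own external address instead of<br>
> # masquerading the whole /14 out of one:<br>
> iptables -t nat -A POSTROUTING -s 10.64.0.0/16 -o eth0 -j SNAT --to-source 203.0.113.10<br>
> iptables -t nat -A POSTROUTING -s 10.65.0.0/16 -o eth0 -j SNAT --to-source 203.0.113.11<br>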
><br>
> Doing this I can get a quick status on what is up with "ip route", and by monitoring the activity on each ip range, see if traffic is actually being passed, whether a failure of a given gateway fails over to another, and so on. There are a couple of snmp hacks to do things like monitor active leases, and smokeping/mrtg to access other stats. There are a couple of beagles on wifi that I ping from some APs. The beagles have not been very reliable for me, so they switch on and off with digiloggers gear when they fail a local ping. In fact the main logging beagle failed entirely the other month, sigh.<br>
><br>
> I use the ad-hoc links on cerowrt as backups (if they lose ethernet connectivity) and extenders (if there is no ethernet connectivity), and (as I have 5 different comcast exit nodes spread throughout the network) use babel-pinger on each to see if they are up and insert default routes into the mix that are automatically the shortest "distance" between the node and exit gateway. If one gw goes down, all the traffic (usually) ends up switching over to the next nearest default gateway in 16 seconds or so, breaking all the nat associations for the net it was on (sigh), as well as native ipv6 stuff, but it's happened so often without me noticing it that it's nice not to have to worry.<br>
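><br>
> (the babeld.conf line that lets a node announce its default route into the mix is roughly the following; the metric is only an example)<br>
> redistribute ip 0.0.0.0/0 le 0 metric 128<br>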
><br>
> (I have a mostly failed attempt in play for doing better with ipv6 and hnetd on a couple of exit nodes, but that isn't solid enough to deploy as yet, so it's only sort of working in the yurtlab. I really wish I could buy PI space for ipv6 somehow)<br>
><br>
> (I have been fiddling with dns anycast to try to get more redundancy on the main dns gateways. That works pretty well)<br>
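><br>
> (the trick, roughly: put the same /32 on every dns box and let babel carry it - the address below is a placeholder)<br>
> ip addr add 10.100.0.53/32 dev lo   # same address on every dns server<br>
> # babeld announces the /32 from whichever boxes are alive (it may need a<br>
> # redistribute line for it); clients point at 10.100.0.53 and reach the nearest live one<br>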
><br>
> Now, your method is simpler! (although mine is mostly scripted) I imagine you bridge everything on a vlan, and use a central dhcp/dns server to serve up dhcp across (say) a 10/16 subnet. And by blocking local multicast/broadcast, in particular, this scales across the 3k user population. You've got a critical single point of failure in your gateway, but at least that's only one, and I imagine you have that duplicated.<br>
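><br>
> Something like this, I'd guess (the vlan/bridge name, range and lease time below are placeholders):<br>
> # on each AP: put the radio on the wifi bridge, don't forward client-to-client<br>
> uci set wireless.@wifi-iface[0].network='wifi2g'<br>
> uci set wireless.@wifi-iface[0].isolate='1'<br>
> uci commit wireless; wifi<br>
> # on the central dhcp/dns box, one big dnsmasq range:<br>
> dhcp-range=10.128.0.10,10.128.255.200,255.255.0.0,2h<br>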
><br>
> (In contrast my network is always broken somewhere, but unless two critical nodes break, it's pretty redundant and loss is confined to a single AP - my biggest problem is that I need to upgrade the firmware on about half the network - which involves climbing trees - and my plan was to deploy hnetd last year so I could roll out ipv6)<br>
><br>
> How do you deal with a dead AP that is not actually connecting with traffic?<br>
><br>
> >>> Bridging allows the connections to remain intact. The wifi stack<br>
> >>> re-negotiates the encryption, but the encapsulated IP packets don't<br>
> >>> change.<br>
> >><br>
> >><br>
> >> While I actually agree with dlang on having all the same ssid and<br>
> >> bridging, rather than routing, at a conference, as well as with the idea<br>
> >> of disabling broadcast (and, I assume, direct connectivity between two<br>
> >> people seated side by side), it is a pita:<br>
> >><br>
> >> More than once I've wanted to share a git tree with someone right next<br>
> >> to me. I try to hand them my ip to grab the tree, and they can't even<br>
> >> ping me, so I end up uploading it somewhere and he or she downloading it<br>
> >> from there. Similarly, breaking interconnectivity precludes sane usage<br>
> >> of in-conference<br>
> ><br>
> ><br>
> > True, it also blocks some abuse. People who really want direct connectivity<br>
> > can establish it as an ad-hoc network.<br>
><br>
> yes, I've often draped an ethernet cable between seats. :)<br>
><br>
> ><br>
> > For the normal user that we are trying to support at a conference, it's a<br>
> > win.<br>
> ><br>
> > I'll note that we also block streaming sites (which has the side effect of<br>
> > blocking some useful sites that share the same IPs, Amazon for example) to<br>
> > help make things better for everyone else, even at the cost of limiting what<br>
> > some people are able to do. Bandwidth is limited compared to the number of<br>
> > people we have, and we have to make choices.<br>
><br>
> Blocking ads is also effective.<br>
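><br>
> (easiest at the dns layer - e.g. a dnsmasq config full of lines like these, fed from whatever blocklist you prefer):<br>
> address=/doubleclick.net/0.0.0.0<br>
> address=/googlesyndication.com/0.0.0.0<br>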
><br>
> > We do provide a local mirror of the Debian-based distros so that people can<br>
> > do the updates that they always tend to do at the conference (we would do<br>
> > the same for Fedora, but they make it too hard to do so).<br>
> ><br>
> >> In my case, since choosing to live in a routed, rather than bridged<br>
> >> world, I have modified the nailed up tools I use to be more<br>
> >> connectionless. Instead of ssh (tcp), I use mosh-multipath (udp),<br>
> >> which is far superior for interactive shells in lousy wifi<br>
> >> environments. For vpns, I switched to tinc, which will attempt direct<br>
> >> connections over udp, and tcp on both ipv4 and ipv6. For access to<br>
> >> google, I adopted quic in my chrome browser. Since doing all these<br>
> >> things I rarely notice losing a nailed up connection or migrating from<br>
> >> AP to AP. Additionally I use babel (where I control the network) and<br>
> >> ad-hoc wifi to transparently migrate from AP to AP, and (often) from<br>
> >> AP to wired to AP to wired as I change locations, also with no loss in<br>
> >> connectivity.<br>
> >><br>
> >> I don't expect the SCaLE userbase to have made these adjustments in<br>
> >> behavior. :/<br>
> ><br>
> ><br>
> > :-)<br>
><br>
> It wouldn't hurt to recommend these tools (notably quic and mosh) to conference<br>
> participants. Both are pretty awesome.<br>
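><br>
> (mosh is a drop-in for ssh for interactive use - the host below is just a placeholder):<br>
> mosh user@shell.example.org                       # survives AP roams and address changes<br>
> mosh --ssh='ssh -p 2222' user@shell.example.org   # if sshd is on a nonstandard port<br>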
><br>
> ><br>
> >>><br>
> >>> I do this with the wifi on its own VLAN (actually separate VLANs for 2.4<br>
> >>> and 5GHz) and have the APs configured not to relay broadcast traffic from<br>
> >>> one wireless user to another. This cuts down a LOT on the problems of<br>
> >>> broadcasts.<br>
> >>><br>
> >>> In about a month I'm going to be running the wireless network for SCaLE<br>
> >>> again, and I would be happy to instrument the network to gather whatever<br>
> >>> info anyone is interested in. I will be using ~50 APs to handle the ~2800<br>
> >>> or<br>
> >><br>
> >><br>
> >> I will look into some tools bismark and others have.<br>
> >><br>
> >> Will you attempt to deploy ipv6?<br>
> ><br>
> ><br>
> > We have been offering IPv6 routable addresses for a few years.<br>
><br>
> How many do you get and from whom?<br>
><br>
> If I had time (doubtful) and budget (even more doubtful) I'd try to make it to SCaLE to observe and help out.<br>
><br>
> >>> so devices that show up, with the footprint of each AP roughly covering a<br>
> >>> small meeting room (larger rooms have 2 APs in them, the largest room has<br>
> >>> 3,<br>
> >>> and I'm adding APs this year to cover the hallways better because the<br>
> >>> ones<br>
> >>> in the rooms aren't doing well enough at the low power settings I'm<br>
> >>> using)<br>
> >><br>
> >><br>
> >> I am of course interested in how fq_codel performs on your ISP link. Are<br>
> >> you planning on running it for your wifi?<br>
> ><br>
> ><br>
> > I'm running OpenWRT on the APs but haven't done anything in particular to<br>
> > activate it.<br>
><br>
> fq_codel is on by default in Barrier Breaker and later on all interfaces. I note that it doesn't scale anywhere near as well as we would like under contention, but that work is only beginning in Chaos Calmer. A thought I've had for an environment such as yours would be to rate limit each AP's ingress/egress ethernet interface to, say, 20mbits, thus pushing all the potential bloat to sqm on ethernet and out of the wifi (which would generally run faster). You might even force uploads from the users lower still (say 10mbit). Or not, and just rely on people retaining low expectations. :)<br>
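><br>
> (with sqm-scripts that is just an /etc/config/sqm stanza per AP, roughly as below - the interface name and the 20/10mbit numbers are only illustrative):<br>
> config queue 'eth0'<br>
>         option interface 'eth0'<br>
>         option enabled '1'<br>
>         option download '20000'   # kbit, toward the wifi clients<br>
>         option upload '10000'     # kbit, from the wifi clients<br>
>         option qdisc 'fq_codel'<br>
>         option script 'simple.qos'<br>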
><br>
> Was it on openwrt last year?<br>
><br>
> > I'll check what we have on the firewall (a fairly up to date<br>
> > Debian build)<br>
><br>
> fq_codel has been a part of that for a long time.<br>
><br>
> I'd port over the sqm-scripts and use those; it's only a one-line change.<br>
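><br>
> (or, bare minimum on a debian box, put fq_codel on the egress interface by hand - eth0 below is a placeholder):<br>
> tc qdisc replace dev eth0 root fq_codel<br>
> tc -s qdisc show dev eth0   # verify, and watch drops/marks over time<br>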
><br>
> > What's the best way to monitor the queues?<br>
><br>
> On each router?<br>
><br>
> I tend to use pdsh a lot, setting up a /etc/genders file for them all so I can do a<br>
><br>
> pdsh -A 'tc qdisc show dev wlan0' # or uptime or cat /etc/dhcp.leases | wc -l or whatever<br>
><br>
> Been meaning to get around to something that used snmp instead for a while.<br>
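><br>
> (e.g., assuming pdsh is built with genders support, a one-attribute /etc/genders file and per-class queries look roughly like):<br>
> # /etc/genders - one line per node, e.g.:  ap01 ap<br>
> pdsh -g ap 'tc -s qdisc show dev wlan0'<br>
> pdsh -g ap uptime<br>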
><br>
> ><br>
> > David Lang<br>
><br>
> --<br>
> Dave Täht<br>
><br>
http://<a href="http://www.bufferbloat.net/projects/bloat/wiki/Upcoming_Talks">www.bufferbloat.net/projects/bloat/wiki/Upcoming_Talks</a></p>