We use RFC 6598 addressing and CGNAT on the edge unless someone pays for a public IP. We're very slowly getting our IPv6 implementation down; poor CRM/shaper support for it is the big struggle. Preseem+UISP, for example, requires that whatever is handling IPv6 prefix delegation (PD) be pollable from Preseem so it can grab the MAC:PD-prefix pairing and combine them. VISP has implemented this better recently, but I haven't fully deployed it yet; that's a goal for this month.
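To illustrate the kind of join the shaper needs to do there, here's a rough Python sketch (not Preseem's or VISP's actual API, and the lease-export format is made up): it reads a MAC + delegated-prefix table exported from whatever is doing DHCPv6-PD and builds the MAC -> PD-prefix mapping that would then be attached to the same subscriber/queue as their v4 address.

    import csv
    from ipaddress import IPv6Network

    def mac_to_pd_prefix(lease_csv_path):
        """Map CPE MAC -> delegated IPv6 prefix from a hypothetical lease export."""
        mapping = {}
        with open(lease_csv_path, newline="") as fh:
            # assumed columns: mac, pd_prefix (whatever your PD server can export)
            for row in csv.DictReader(fh):
                mapping[row["mac"].lower()] = IPv6Network(row["pd_prefix"])
        return mapping

    if __name__ == "__main__":
        for mac, prefix in mac_to_pd_prefix("pd-leases.csv").items():
            print(f"{mac} -> {prefix}")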
We don't "tunnel" in the MPLS/VPLS, SRv6, or EVPN sense right now. We technically do use GRE tunneling: Terragraph effectively runs as OSPFv3 on IPv6 over GRE tunnels.
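For anyone curious what that looks like in practice, here's a minimal sketch of standing up one GRE-over-IPv6 tunnel by wrapping standard iproute2 commands from Python. The interface name and addresses are placeholders (2001:db8::/32 documentation prefix); OSPFv3 would then just be enabled on the tunnel interface in whatever routing daemon you run (FRR, etc.).

    import subprocess

    def add_ip6gre_tunnel(name, local, remote, addr):
        """Create an ip6gre tunnel, bring it up, and assign an interface address."""
        for cmd in (
            ["ip", "-6", "tunnel", "add", name, "mode", "ip6gre",
             "local", local, "remote", remote],
            ["ip", "link", "set", name, "up"],
            ["ip", "-6", "addr", "add", addr, "dev", name],
        ):
            subprocess.run(cmd, check=True)

    if __name__ == "__main__":
        # placeholder endpoints and interface address
        add_ip6gre_tunnel("tg-gre0", "2001:db8:1::1", "2001:db8:2::1",
                          "2001:db8:ffff::1/64")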
Part of the reason not to tunnel is that we don't have a single headend; we have regional POPs and interconnections between them so we can reduce outages caused by any one POP. Fully routed seems to be the only non-convoluted way to handle this. I'm not interested in VRRP moving my tunnel endpoint addresses between regions; that's a mess. And so we route.
In my continued Rip Van Winkle, living-in-the-third-world (California)
way, I am curious as to how y'all are managing your
IPv4 address supply and whether you are deploying IPv6 to any extent?
In all this discussion of multi-gigabit fiber, my own direct experience
is that AT&T's fiber rollout had very flaky IPv6, and more and more
services (like Starlink) are appearing behind CGNATs, which have their
own capex and opex costs.
I see a lot of RFC 1918 space being used as the operational overlay
elsewhere, and tons of tunnels, too.
--
This song goes out to all the folk that thought Stadia would work:
https://www.linkedin.com/posts/dtaht_the-mushroom-song-activity-6981366665607352320-FXtz
Dave Täht CEO, TekLibre, LLC