[Cerowrt-devel] vpn fw question
Dave Taht
dave.taht at gmail.com
Thu Oct 2 23:38:40 EDT 2014
On Thu, Oct 2, 2014 at 8:05 PM, Eric S. Johansson <esj at eggo.org> wrote:
>
> On 10/2/2014 10:24 PM, Joel Wirāmu Pauling wrote:
>>
>> I.e. your topology looks like this:
>>
>> [(Remote LAN) - VPN Client]---[INTERNET]---(Local LAN)[WAN][LAN][REMOTE-LAN]
>>
>> Your Local LAN knows nothing about Remote LAN, and vice versa. There is
>> just a single interface/client that is a member of REMOTE-LAN.
>> So to get traffic from Local LAN to Remote LAN, all Local-LAN traffic
>> needs to be masqueraded to that single interface.
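In iptables terms that masquerade is typically a single NAT rule on the
tunnel interface. A minimal sketch, assuming the VPN interface is tun0
and the local LAN is 172.30.42.0/24 (substitute your own):

# Rewrite the source address of everything leaving via the tunnel, so
# the remote side only ever sees the VPN client's own address:
iptables -t nat -A POSTROUTING -s 172.30.42.0/24 -o tun0 -j MASQUERADE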
>
>
> ah, thanks for the clarification. my function-oriented topology looks like
> this:
>
> [ 34-38 target lan - vpn server - fw ] - - - [ I ] - +-( fw - vpn client - - - lan - - - workerbees(6) )
>                                                      +-( rw worker bee )
>                                                      +-( rw worker bee )
>                                                      +-( cerowrt worker bee ) ...
>
> I don't think the NATted form is going to work terribly well because all
> the WBs need access to all the target machines. Also, our routing tables
> are… significant:
Personally I find the output of
ip route show
to be much more readable and usable nowadays.
> Kernel IP routing table
> Destination      Gateway          Genmask          Flags MSS Window irtt Iface
> 0.0.0.0          73.38.246.1      0.0.0.0          UG    0   0      0    ge00
> 10.42.66.0       10.199.188.193   255.255.255.0    UG    0   0      0    tun0
> 10.43.1.0        10.199.188.193   255.255.255.0    UG    0   0      0    tun0
> 10.43.2.0        10.199.188.193   255.255.255.0    UG    0   0      0    tun0
> 10.43.3.0        10.199.188.193   255.255.255.0    UG    0   0      0    tun0
> 10.43.4.0        10.199.188.193   255.255.255.0    UG    0   0      0    tun0
> 10.43.5.0        10.199.188.193   255.255.255.0    UG    0   0      0    tun0
> 10.43.6.0        10.199.188.193   255.255.255.0    UG    0   0      0    tun0
> 10.43.7.0        10.199.188.193   255.255.255.0    UG    0   0      0    tun0
> 10.43.8.0        10.199.188.193   255.255.255.0    UG    0   0      0    tun0
> 10.43.9.0        10.199.188.193   255.255.255.0    UG    0   0      0    tun0
> 10.43.10.0       10.199.188.193   255.255.255.0    UG    0   0      0    tun0
> 10.43.11.0       10.199.188.193   255.255.255.0    UG    0   0      0    tun0
> 10.43.12.0       10.199.188.193   255.255.255.0    UG    0   0      0    tun0
> 10.43.13.0       10.199.188.193   255.255.255.0    UG    0   0      0    tun0
> 10.43.14.0       10.199.188.193   255.255.255.0    UG    0   0      0    tun0
> 10.43.15.0       10.199.188.193   255.255.255.0    UG    0   0      0    tun0
Ideally you should be able to shrink that 10.43 network into a single
10.43.0.0/20 route.
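Something along these lines (assuming everything in 10.43/20 really does
go via the same tun0 gateway; adjust to taste):

# One covering route instead of fifteen /24s:
ip route add 10.43.0.0/20 via 10.199.188.193 dev tun0

If your VPN server is the one pushing those routes, the cleaner fix is a
single aggregate route statement on the server side.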
> 10.199.188.0     10.199.188.193   255.255.255.0    UG    0   0      0    tun0
> 10.199.188.193   0.0.0.0          255.255.255.255  UH    0   0      0    tun0
> 73.38.246.0      0.0.0.0          255.255.254.0    U     0   0      0    ge00
> 172.30.42.0      0.0.0.0          255.255.255.224  U     0   0      0    se00
> 172.30.42.0      0.0.0.0          255.255.255.0    !     0   0      0    *
> 172.30.42.64     0.0.0.0          255.255.255.224  U     0   0      0    sw00
> 172.30.42.96     0.0.0.0          255.255.255.224  U     0   0      0    sw10
> 192.168.9.0      10.199.188.193   255.255.255.0    UG    0   0      0    tun0
>
> and WTH is this?
> 172.30.42.0      0.0.0.0          255.255.255.0    !     0   0      0    *
That is what is called a "covering route". The interfaces in cerowrt are
all /27s carved out of a single /24, much as the fifteen 10.43 routes
above could be a single 10.43.0.0/20.
So we export that single /24 via babel by creating an "unreachable" route
for it. The /24 is what is visible outside the router; inside the router,
the more-specific /27s override it, and they are never exported.
Clear as mud, right?
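In route commands the trick looks roughly like this (a sketch using
cerowrt's own 172.30.42.0/24; in practice the /27s appear automatically
when the interface addresses are configured, and babel does the export):

# Covering route: claim the whole /24, unreachable by default.
# This is the one route babel announces to the rest of the network.
ip route add unreachable 172.30.42.0/24
# The per-interface /27s are more specific, so they win locally,
# and they are simply never announced:
ip route add 172.30.42.0/27  dev se00
ip route add 172.30.42.64/27 dev sw00
ip route add 172.30.42.96/27 dev sw10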
Here is part of my route table. Old Cerowrt used to export 9 routes
visible to other routers:
172.21.0.0/27 dev ge00 proto kernel scope link src 172.21.0.18
172.21.0.64/27 via 172.21.0.1 dev ge00 proto babel onlink
172.21.0.96/27 via 172.21.0.1 dev ge00 proto babel onlink
172.21.0.128/27 via 172.21.0.1 dev ge00 proto babel onlink
... add the host gateway and the other 4 interfaces...
Toronto exports 1 (or 2) routes, depending on the alternate paths available.
The s+ and gw+ devices there are covered by:
172.21.18.0/24 via 172.21.0.7 dev ge00 proto babel onlink
Its ge00 device is on another network, covered by this route:
172.21.3.0/24 via 172.21.0.7 dev ge00 proto babel onlink
Fewer exported routes = smaller routing packets, smaller routing
tables, faster routing updates, fewer route lookups while transferring
data, better use of the distance-vector mechanisms, and so on.
In terms of scaling factors, this makes it feasible to route together at
least 700 boxes without too much fear of overwhelming anything. (But I
haven't got around to resimulating the results, like so many other things -
and the limit at least used to be some inefficient code in babeld, not any
inherent limit to the protocol.)
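The export side of the covering-route trick is a pair of filter rules in
babeld's configuration. A hedged sketch, not the shipped cerowrt config,
and worth checking against your babeld(8) man page:

# Announce the covering /24...
redistribute ip 172.30.42.0/24 eq 24 allow
# ...but never the more-specific /27s inside it:
redistribute ip 172.30.42.0/24 ge 25 deny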
>
> --- eric
--
Dave Täht
https://www.bufferbloat.net/projects/make-wifi-fast