[Cake] Configuring cake for VDSL2 bridged connection

moeller0 moeller0 at gmx.de
Tue Aug 23 10:27:10 EDT 2016


Hi techicist,


> On Aug 23, 2016, at 15:44 , techicist at gmail.com wrote:
> 
> I am using a TalkTalk (UK) VDSL2 connection via bridged PTM to my TP-LINK Archer C7 V2. I am running LEDE.
> 
> TalkTalk uses DHCP to obtain an IP address and not DHCP as most other ISPs do.

	I take it that one of the DHCPs should read PPPoE?

> 
> I am trying to configure cake and I see these options on bufferbloat.net:
> 
> There are eight new keywords which deal with the basic ADSL configurations. These switch on ATM cell-framing compensation, and set the overhead based on the raw IP packet as a baseline.
> 
> ipoa-vcmux (8)
> ipoa-llcsnap (16)
> bridged-vcmux (24)
> bridged-llcsnap (32)
> pppoa-vcmux (10)
> pppoa-llc (14)
> pppoe-vcmux (32)
> pppoe-llcsnap (40)
> 
> How do I go about using these with OpenWRT?

	(C) None of the above ;)

Really, all of the above keywords only deal with encapsulations used on ATM links, not PTM links. All of them will automatically enable ATM cell accounting and hence will not do the right thing on PTM links, even if the per-packet overhead should be correct. cake offers two PTM-specific keywords, pppoe-ptm and pppoe-bridged, that translate into 27 and 19 bytes respectively. It looks like they are not well enough documented though*. I would recommend simply ignoring these keywords wholesale and using the explicit “overhead 27” statement instead (but we are getting ahead of ourselves here).
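
For illustration, an untested sketch of what the explicit form could look like from the command line; the interface name and the shaper bandwidth are placeholders, and 27 is the PPPoE-over-PTM value, so take the number for your own link from the cheat sheet further down:

	# egress only; eth0 and 18Mbit are placeholders for your WAN interface and shaper rate
	tc qdisc replace dev eth0 root cake bandwidth 18Mbit overhead 27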

The first question is what the true bottleneck link is and what bandwidth and encapsulation are in use on that link. Often the VDSL2 link is the true bottleneck, but some ISPs like DTAG in Germany actually implement a shaper/policer at the BRAS/BNG level with lower thresholds than the VDSL2 link itself. Be that as it may, the first issue is figuring out the relevant bottleneck link bandwidth (we will just assume that your ISP has no shaper in use):

Look into the status page of your VDSL2-modem and write down the values of the actual synchronization bandwidth for uplink and downlink.

Multiply both by 64/65 = 0.984615384615, as VDSL2 uses a continuous 64/65 encapsulation that will eat roughly 1.6% of the sync bandwidth; these are now your reference values for what can actually be sent over that link. Often 85 to 90%** of that reference bandwidth works well for the downlink, while the uplink can work well up to 100% of the reference. I would initially recommend setting both uplink and downlink to 50% of the reference values and running a speedtest (e.g. the dslreports or the sourceforge one, which both also measure latency under load, or preferably flent against a well-connected netperf server) to get a feeling for the best-case latency-under-load increase (at 50% of line rate, neither a potential ISP shaper nor the real per-packet overhead will matter under real-world conditions).
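
To make that arithmetic concrete, a quick sketch (the sync rates below are made-up placeholders; plug in the numbers from your modem's status page):

	# placeholder sync rates from the modem status page, in kbit/s
	down_sync=80000 ; up_sync=20000
	awk -v d="$down_sync" -v u="$up_sync" 'BEGIN {
		printf "reference:      down %.0f, up %.0f kbit/s\n", d*64/65, u*64/65
		printf "50%% test rates: down %.0f, up %.0f kbit/s\n", d*32/65, u*32/65
	}'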

Next you need to figure out the per-packet overhead; here is my handy cheat sheet:

###ATM: see RFC2684 http://www.faqs.org/rfcs/rfc2684.html
ATM (ADSL2+ PPPoE):	
	2 Byte PPP + 6 Byte PPPoE + 6 destination MAC + 6 source MAC + 2 ethertype + 3 ATM LLC + 5 ATM SNAP + 2 ATM pad + 8 ATM AAL5 SAR = 40 Byte
ATM (ADSL2+ PPPoE VLAN): 
	2 Byte PPP + 6 Byte PPPoE + 4 Byte VLAN + 6 destination MAC + 6 source MAC + 2 ethertype + 3 ATM LLC + 5 ATM SNAP + 2 ATM pad + 8 ATM AAL5 SAR = 44 Byte

###VDSL2: see IEEE 802.3-2012, section 61.3 (relevant for VDSL2). Note that VDSL2 typically transports full ethernet frames including the FCS (shown as COMMON below)
VDSL2 (PPPoE VLAN)
	2 Byte PPP + 6 Byte PPPoE + 4 Byte VLAN + 1 Byte Start of Frame (S) + 1 Byte End of Frame (Ck) + 2 Byte TC-CRC (PTM-FCS) = 16 Byte
	COMMON: 4 Byte Frame Check Sequence (FCS) + 6 (dest MAC) + 6 (src MAC) + 2 (ethertype) = 18 byte
	total: 34 Byte
VDSL2 (your case?)
	1 Byte Start of Frame (S) + 1 Byte End of Frame (Ck) + 2 Byte TC-CRC (PTM-FCS) = 4 Byte
	COMMON: 4 Byte Frame Check Sequence (FCS) + 6 (dest MAC) + 6 (src MAC) + 2 (ethertype) = 18 byte
	total: 22 Byte

### Ethernet
FAST-ETHERNET (should also be valid for Gigabit Ethernet): 
	7 Byte Preamble + 1 Byte start of frame delimiter (SFD) + 12 Byte inter frame gap (IFG) = 20 Byte
	COMMON: 4 Byte Frame Check Sequence (FCS) + 6 (dest MAC) + 6 (src MAC) + 2 (ethertype) = 18 byte
	total: 38 Bytes worth of transmission time


So a per-packet overhead of 22 seems correct for your case. But the Linux kernel will already add 14 bytes (for the part of the ethernet header it actually sends to the device: the MACs and the ethertype) automatically for most interfaces, so in all likelihood (assuming you connect via ethernet from your router to the DSL modem) you should specify 22 - 14 = 8 bytes of per-packet overhead for SQM.
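
In LEDE's sqm-scripts that would end up looking roughly like the sketch below; the option names should match what luci-app-sqm writes to /etc/config/sqm, but the section name, the interface and the rates are assumptions that simply continue the made-up example numbers from above:

	# /etc/config/sqm (sketch only; section name, interface and rates are placeholders)
	config queue 'eth1'
		option enabled '1'
		option interface 'eth0.2'
		option qdisc 'cake'
		option script 'piece_of_cake.qos'
		# 50% of the reference values to start with, in kbit/s
		option download '39385'
		option upload '9846'
		# 22 byte on the wire minus the 14 the kernel already accounts for
		option linklayer 'ethernet'
		option overhead '8'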


Now redo the tests from before, but first keep the uplink at 50% and iteratively increase the downlink until you encounter too much latency-under-load increase (for your taste); then set the downlink back to 50% and iteratively increase the uplink until latency increases (which might only happen once you go above the uplink reference value calculated above). Finally set both values to the independently tested optimal values and test again. Please note that reaching 100% of reference on egress without bufferbloat showing up is a decent indicator that you did not underestimate the per-packet overhead. The opposite unfortunately is not true, as you might simply have run into an ISP shaper.
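
A single iteration step could look roughly like this (again only a sketch: the section name matches the config sketch above, the rate and the netperf host are placeholders, and flent should be run from a machine on your LAN rather than on the router itself):

	# bump the downlink one step while leaving the uplink alone, then re-measure
	uci set sqm.eth1.download='47000'
	uci commit sqm && /etc/init.d/sqm restart
	# on a LAN client: 60 second RRUL test against a netperf server of your choice
	flent rrul -l 60 -H netperf-eu.bufferbloat.net -t "down_47000"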


*) There is still an open discussion about how best to deal with the fact that there is considerable complexity in the different encapsulation schemes used on ATM and PTM links. I would prefer a table of encapsulation schemes with the resulting numerical overheads, and cake simply exposing the explicit numeric “overhead NN” argument. But others have argued that named symbolic keywords for common variants can be quite helpful. (My counter-argument is that without more explanation neither alternative is self-explanatory, and at least the numeric alternative is less work to maintain.) But please let us know your opinion as a user.

**) Downlink shaping is more approximate than uplink shaping, as sufficiently high incoming traffic rates will sort of back-spill into the buffers on your ISP's side of the link, and that will cause unwanted latency-under-load increases; these become less likely the larger the speed delta between the true bottleneck and the artificial bottleneck created by SQM actually is. In essence, downlink shaping requires larger margins than uplink shaping…


