[Cake] overhead and mpu

Sebastian Moeller moeller0 at gmx.de
Wed Sep 6 03:22:57 EDT 2017


Hi Dennis,

> On Sep 5, 2017, at 22:19, Dennis Fedtke <dennisfedtke at gmail.com> wrote:
> 
> Hi!
> 
> Thank you for all answers.
> But for me this still makes no sense.
> Assuming we have an Ethernet connection running over a DOCSIS line.
> DOCSIS is able to transmit full 1500 byte Ethernet packets.
> Let's say it is a 50 Mbit/s line. (I don't know exactly how DOCSIS works.)
> So to reach the 50 Mbit/s Ethernet speed the DOCSIS link rate needs to be higher: 50.6 Mbit/s (50*1518/1500 ??)

	Exactly, but that is completely hidden from the end-user, as DOCSIS per the standard offers speed limits assuming 18 bytes of Ethernet overhead. Now, unlike DSL ISPs, which often sell close to the physical maximum capacity of a link, DOCSIS is a shared medium with segment bandwidth >> single-link bandwidth, so DOCSIS ISPs get away with this (even though it raises fairness questions under congestion). And, as recommended by Cisco (and other CMTS manufacturers?), DOCSIS ISPs tend to over-provision the links so that in the typical case the measurable TCP/IPv4/HTTP goodput is equal to or larger than the contracted bandwidth. That has the issue that it misleads typical customers into thinking that the ISP actually guarantees goodput. Why do I say mislead? Well, if you look at the gross bandwidth (and let's restrict ourselves to the Ethernet overhead; diving into the DOCSIS details would only complicate matters) and the resulting goodput, or better, let's look at the gross bandwidth required for 100 Mbps of goodput with 1500 byte and 64 byte packets:

100 * ((1500+18)/(1500-20-20)) = 103.97 Mbps
100 * ((46+18)/(46-20-20)) = 1066.67 Mbps

And the resulting per-packet overhead cost as a percentage of the goodput (at equal goodput):
100*((1500+18)/(1500-20-20)) - 100 = 3.97 %
100*((46+18)/(46-20-20)) - 100 = 966.67 %

You quickly realize that there is roughly a factor of 10 between the required gross bandwidths. Now, without congestion a DOCSIS ISP might still grant that, but under congestion it seems clearly suboptimal to split the gross bandwidth on a goodput-fairness basis instead of a gross-bandwidth-share basis... For a DOCSIS system the numbers still work out, since the segment gross bandwidth seems sufficiently high to allow for 64 byte packets, but for xDSL links, where the gross bandwidth << 10 * the contracted bandwidth, it is obvious that guaranteeing goodput is a fool's errand.
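
If it helps, here is the same arithmetic as a small stand-alone C program (my own sketch, nothing cake-specific; the 18/20/20 byte figures are the Ethernet/IPv4/TCP overheads assumed above):

#include <stdio.h>

/* gross bandwidth needed on the wire for a given goodput, given the
 * IP packet size, the per-packet link overhead and the header bytes
 * that do not count as goodput */
static double gross_for_goodput(double goodput_mbps, int packet_len,
                                int overhead, int headers)
{
        /* every (packet_len - headers) bytes of goodput cost
         * (packet_len + overhead) bytes on the wire */
        return goodput_mbps * (double)(packet_len + overhead)
                            / (double)(packet_len - headers);
}

int main(void)
{
        printf("1500 byte packets: %.2f Mbps gross\n",
               gross_for_goodput(100.0, 1500, 18, 40)); /* 103.97 */
        printf("  46 byte packets: %.2f Mbps gross\n",
               gross_for_goodput(100.0, 46, 18, 40));   /* 1066.67 */
        return 0;
}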


> But when running a speedtest it will still not show the full speed, because of other overhead from underlying protocols (TCP/IP for example).

	Yes, there are a number of protocols that might or might not be taken into account, like VLAN tag(s), PPPoE, TCP, IPv4/IPv6, TCP options, IPv6 extension headers, and the HTTP headers typical for browser-based speedtests. Unfortunately, not all of those are actually visible to the endpoints of a speedtest; you really need to know the effective overhead of the bottleneck link.
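
For a sense of scale: a speedtest over TCP/IPv4 with timestamps carries at most 1500 - 20 - 20 - 12 = 1448 bytes of payload per 1518 byte Ethernet frame, i.e. roughly 95.4% of the gross rate, before HTTP headers or any PPPoE/VLAN encapsulation even come into play.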

> So the ISP will set the sync rate even higher to compensate for that.

	Maybe, maybe not; some ISPs over-provision, some do not. Personally, while I like getting more for the same price, I am a bit concerned if an ISP seems incapable of correctly stating the bandwidth to be expected...

> 
> But does this matter for the end user?

	As the 1500/64 numbers above hopefully show, it does, since the per-packet overhead is still data that needs to be transported and will "eat" into the available gross bandwidth pool.

> In the case of DOCSIS, does it make sense to account for 18 bytes of overhead?

	Empirically it does seem so. But please note that under speedtest conditions (or under any condition where the packet size in the loaded direction stays constant) not accounting for the per-packet overhead can be compensated for by setting the shaper bandwidth slightly lower, so the effect of the missing overhead can easily go unnoticed; it will, however, cause trouble once a new traffic pattern uses smaller packets...

> The user will enter 50 Mbit and it will work, if the ISP has provided a slightly higher sync rate.

	As I hope I made clear above, this will only work under fixed conditions in which the distribution of packet sizes on the link does not change.
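
For example, 18 bytes of unaccounted overhead costs an extra 1518/1500, about 1.2%, with 1500 byte packets, so shaping roughly 1.2% below the link rate hides the problem; with 150 byte packets the same 18 bytes cost 168/150 = 12% extra, and that small margin no longer suffices.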

> 
> And the mpu setting: I don't know how cake handles this in detail,
> or how the overhead gets added.
> Let's say I enter mpu 46.
> And in cake we set 18 as overhead.
> Will this result in mpu 46 or 64?

	This will result in an MPU of 46 (as that is what you requested), so an ICMP echo request (20 bytes IPv4 + 8 bytes ICMP = 28 bytes) plus 18 bytes of overhead = 46 bytes will be accounted as 46 bytes and not as the 64 it really takes up on the link. To be precise, cake will first add the overhead and only then compare packet_len+overhead to the mpu, i.e. it does max(mpu, (packet_len+overhead)).
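
For illustration, a small C sketch of that accounting order as I understand it (my own reading of the behaviour described above, not a verbatim copy of the sch_cake sources):

#include <stdio.h>

static unsigned int accounted_len(unsigned int packet_len,
                                  unsigned int overhead, unsigned int mpu)
{
        unsigned int len = packet_len + overhead;

        /* the mpu acts as a floor on the per-packet accounting,
         * applied after the overhead has been added */
        if (len < mpu)
                len = mpu;
        return len;
}

int main(void)
{
        /* 28 byte ICMP echo request (20 IP + 8 ICMP) with overhead 18:
         * 28 + 18 = 46 >= mpu 46, so it is accounted as 46 bytes ... */
        printf("mpu 46: %u bytes\n", accounted_len(28, 18, 46));
        /* ... while mpu 64 would account the same packet as 64 bytes */
        printf("mpu 64: %u bytes\n", accounted_len(28, 18, 64));
        return 0;
}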

> Can someone debug the code maybe please? :>
> I have the feeling that with mpu 46 my pages load a bit snappier, but could be placebo.

	Under most conditions this difference will not matter at all, since typically packet+overhead is larger than 64 bytes, so how did you measure to arrive at that feeling?

Best Regards
	Sebastian

> 
> Thank you.
> 
> 
> 
> 


