* [Bloat] Stab overhead calculation and mpu for ingress shaping.
@ 2017-03-01 18:37 Y
2017-03-01 21:36 ` Sebastian Moeller
0 siblings, 1 reply; 6+ messages in thread
From: Y @ 2017-03-01 18:37 UTC (permalink / raw)
To: bloat
Hi, all.
I set the root qdisc for traffic shaping like this:
egress
$tc qdisc add dev $ext_ingress root handle 1: stab overhead -4
linklayer atm mpu 60 mtu 2048 tsize 256 hfsc default 13
ingress
$tc qdisc add dev $ext root handle 1: stab overhead -4 mpu 53 linklayer
atm mtu 2048 tsize 256 hfsc default 26
($ expands correctly in the script)
My topology: PPPoA WAN - modem/router - Ethernet LAN - my PC (one interface).
The egress overhead of PPPoA seen via Ethernet is -4, with mpu 53.
This much is certain: at the stab level we can ignore the Ethernet padding.
My question is whether the same is also correct for ingress shaping or
not.
mpu for ingress in my script = 60,
because the minimal Ethernet packet size is 60 bytes (with padding, without the FCS).
overhead for ingress in my script = -4.
Because I want to shape the connection, I must set the speed setting to
rate * 48 / 53.
Is this correct or not?
Yuta.
For example, see the section "Extensive framing compensation (for DSL/ATM/PPPoE)" at
https://www.bufferbloat.net/projects/codel/wiki/Cake/
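For reference, the overhead value used in the commands above can be derived as follows (a sketch assuming PPPoA with VC/Mux encapsulation; the constant names are illustrative, not from any tool):

```python
# Per-packet overhead as `stab` sees it on an Ethernet interface.
# Assumption: PPPoA VC/Mux = 2 bytes of PPP header + 8 bytes of AAL5 trailer.
ETHERNET_HEADER = 14  # already included in the kernel's packet size on Ethernet
PPP_HEADER = 2
AAL5_TRAILER = 8

# Remove the Ethernet header the kernel counted, add the real link overhead:
stab_overhead = -ETHERNET_HEADER + PPP_HEADER + AAL5_TRAILER
print(stab_overhead)  # -4, matching `overhead -4` in the commands above
```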
^ permalink raw reply [flat|nested] 6+ messages in thread
* Re: [Bloat] Stab overhead calculation and mpu for ingress shaping.
2017-03-01 18:37 [Bloat] Stab overhead calculation and mpu for ingress shaping Y
@ 2017-03-01 21:36 ` Sebastian Moeller
2017-03-02 2:23 ` Y
` (2 more replies)
0 siblings, 3 replies; 6+ messages in thread
From: Sebastian Moeller @ 2017-03-01 21:36 UTC (permalink / raw)
To: Y; +Cc: bloat
Hi Yuta,
> On Mar 1, 2017, at 19:37, Y <intruder_tkyf@yahoo.fr> wrote:
>
> Hi , all.
>
> I set root qdisc for traffic shaping like this
>
> egress
> $tc qdisc add dev $ext_ingress root handle 1: stab overhead -4
> linklayer atm mpu 60 mtu 2048 tsize 256 hfsc default 13
>
> ingress
> $tc qdisc add dev $ext root handle 1: stab overhead -4 mpu 53 linklayer
> atm mtu 2048 tsize 256 hfsc default 26
> ($ works at script )
>
> PPPoA WAN - modem/router - Ethernet LAN - My pc 1 interface.
>
> egress overhead of PPPoA via Ethernet is -4
So that indicates PPPoA with VC/Mux encapsulation, correct? But note that -4 assumes the kernel has already silently added 14 bytes to the packet size, which it does for Ethernet interfaces. So is the traffic shaper running directly on the modem/router's ATM interface, or on your PC? I would typically try running https://github.com/moeller0/ATM_overhead_detector to empirically figure out the per-packet overhead (though, as far as I can remember, it has never been tested with PPPoA data).
> and mpu 53.
As far as I can tell, with the linklayer atm keyword the kernel will automatically account any runt packet as 53 bytes, so the mpu seems redundant. (linklayer atm multiplies the packet size by 53/48 and also rounds the packet size up to an integer multiple of the ATM cell size.)
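The cell arithmetic described here can be sketched as follows (an illustration of the 48/53 rounding, not actual kernel code):

```python
import math

ATM_CELL, ATM_PAYLOAD = 53, 48  # 5 bytes of header + 48 bytes of payload per cell

def atm_wire_size(size):
    """Bytes on the ATM wire: every packet occupies a whole number of cells."""
    return math.ceil(size / ATM_PAYLOAD) * ATM_CELL

# Even a runt packet is accounted as one full 53-byte cell, and the cost
# climbs in 53-byte steps at each 48-byte payload boundary:
for size in (1, 40, 48, 49, 96, 97):
    print(size, "->", atm_wire_size(size))
```

So any packet of 48 payload bytes or less already costs one full cell, which is why a separate mpu looks redundant here.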
> This is certain.
> We can think without Ethernet padding at stab.
No, I disagree: _if_ the PPPoA packets exclude the padding we can ignore it, but not otherwise…
>
> My question is whether the same is also correct for ingress shaping or
> not.
>
> mpu for ingress in my script = 60,
> because the minimal Ethernet packet size is 60 bytes (with padding, without the FCS).
I would assume this to be correct in your case.
>
> overhead for ingress in my script = -4.
> Because I want to shape the connection, I must set the speed setting to
> rate * 48 / 53.
No, that is what linklayer atm does for you (it also accounts for the padding required so that each user packet always occupies an integer number of ATM cells).
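A flat rate * 48/53 scaling only captures the best case; because the last cell of each packet is padded, the real expansion factor depends on packet size. A small illustration (same 48/53 assumption as above):

```python
import math

def atm_wire_size(size):
    # round up to whole 53-byte cells of 48 payload bytes each
    return math.ceil(size / 48) * 53

# Effective per-packet expansion factor; it only approaches
# 53/48 ≈ 1.104 when the last cell happens to be exactly full:
for size in (100, 1000, 1500):
    print(size, round(atm_wire_size(size) / size, 3))
```

For small packets the expansion is far worse than 53/48, which is why a fixed rate scaling cannot replace the per-packet cell accounting.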
Best Regards
>
> This is certain or not?
>
> Yuta.
>
> For example.
> see this section (Extensive framing compensation (for DSL/ATM/PPPoe))
> https://www.bufferbloat.net/projects/codel/wiki/Cake/
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat
* Re: [Bloat] Stab overhead calculation and mpu for ingress shaping.
2017-03-01 21:36 ` Sebastian Moeller
@ 2017-03-02 2:23 ` Y
2017-03-02 2:28 ` Y
2017-03-02 7:55 ` Jesper Dangaard Brouer
2 siblings, 0 replies; 6+ messages in thread
From: Y @ 2017-03-02 2:23 UTC (permalink / raw)
To: Sebastian Moeller; +Cc: bloat
Hi Moeller, and all.
I found this after I sent my mail :) So, about egress, I got it :)
https://www.spinics.net/lists/lartc/msg22399.html
Also, https://github.com/moeller0/ATM_overhead_detector suggests
stab mtu 2048 mpu 53 tsize 256
but I ran it from inside the LAN on my PC, and the situation is still the
same, so I have removed the mpu setting for now.
Finally, for ingress, like this:
$tc qdisc add dev $ext_ingress root handle 1: stab overhead -4
linklayer ethernet mpu 60 mtu 2048 tsize 256 hfsc default 13
Ingress shaping must account, in the size table, for
whole speed = (5 per packet) + (packet size + 10 bytes of PPPoA overhead =
n * 48).
I have to measure it more.
Thank you for the reply, Moeller :)
Yuta.
* Re: [Bloat] Stab overhead calculation and mpu for ingress shaping.
2017-03-01 21:36 ` Sebastian Moeller
2017-03-02 2:23 ` Y
@ 2017-03-02 2:28 ` Y
2017-03-02 7:55 ` Jesper Dangaard Brouer
2 siblings, 0 replies; 6+ messages in thread
From: Y @ 2017-03-02 2:28 UTC (permalink / raw)
To: Sebastian Moeller; +Cc: bloat
It seems I do not receive copies of my own mail from the list, omg.
And I found another mistake of mine:
> ingress shaping must think at table,
> whole speed =(5 per packets ) + (packet size + 10 overhead for PPPoA =
> n*48).
This should instead read:
Ingress shaping must account, in the size table, for
whole speed = (5 per packet) + (packet size + 10 bytes of PPPoA overhead -
14 bytes of Ethernet overhead = n * 48).
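The corrected accounting can be sketched as follows (an illustration assuming the shaper sees Ethernet-sized packets in front of a PPPoA VC/Mux link; the function name is made up for this example):

```python
import math

def ingress_wire_size(kernel_size):
    """Approximate bytes on the DSL line for a packet whose size the
    kernel reports on an Ethernet interface (PPPoA VC/Mux assumed)."""
    payload = kernel_size - 14 + 10  # drop the MAC header, add PPP + AAL5
    cells = math.ceil(payload / 48)  # AAL5 pads the payload to n * 48 bytes
    return cells * 53                # plus 5 bytes of header per cell

# A 64-byte IP packet reported as 78 bytes on Ethernet:
print(ingress_wire_size(78))  # 74 payload bytes -> 2 cells -> 106 bytes
```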
* Re: [Bloat] Stab overhead calculation and mpu for ingress shaping.
2017-03-01 21:36 ` Sebastian Moeller
2017-03-02 2:23 ` Y
2017-03-02 2:28 ` Y
@ 2017-03-02 7:55 ` Jesper Dangaard Brouer
2017-03-02 8:14 ` Sebastian Moeller
2 siblings, 1 reply; 6+ messages in thread
From: Jesper Dangaard Brouer @ 2017-03-02 7:55 UTC (permalink / raw)
To: Sebastian Moeller; +Cc: Y, bloat, brouer
On Wed, 1 Mar 2017 22:36:25 +0100
Sebastian Moeller <moeller0@gmx.de> wrote:
> I typically would try to run
> https://github.com/moeller0/ATM_overhead_detector to empirically
> figure out the per packet overhead (but I note that this has never
> been tested with PPPoA data as far as I can remember)
Cool project, you have an ATM_overhead_detector :-)
That is something I always missed, and I never got around to creating such
a tool myself. Thanks for doing this :-) (I don't have time atm to play
with it, but it looks cool.)
--
Best regards,
Jesper Dangaard Brouer
MSc.CS, Principal Kernel Engineer at Red Hat
LinkedIn: http://www.linkedin.com/in/brouer
* Re: [Bloat] Stab overhead calculation and mpu for ingress shaping.
2017-03-02 7:55 ` Jesper Dangaard Brouer
@ 2017-03-02 8:14 ` Sebastian Moeller
0 siblings, 0 replies; 6+ messages in thread
From: Sebastian Moeller @ 2017-03-02 8:14 UTC (permalink / raw)
To: Jesper Dangaard Brouer; +Cc: Y, bloat
Hi Jesper,
> On Mar 2, 2017, at 08:55, Jesper Dangaard Brouer <brouer@redhat.com> wrote:
>
> On Wed, 1 Mar 2017 22:36:25 +0100
> Sebastian Moeller <moeller0@gmx.de> wrote:
>
>> I typically would try to run
>> https://github.com/moeller0/ATM_overhead_detector to empirically
>> figure out the per packet overhead (but I note that this has never
>> been tested with PPPoA data as far as I can remember)
>
> Cool project, you have an ATM_overhead_detector :-)
Well, the name is a bit over-optimistic, but the principle is sound and seems to work quite well. It is basically, just as the README.md says, a fairly straightforward implementation of what you developed and demonstrated in your Master's thesis. While simply trying to illustrate the ATM cell staircase, it dawned on me that since each packet uses an integer number of ATM cells, it should be possible to estimate the "hidden" overhead simply by looking at how many unaccounted bytes are missing in the plot of the first ATM cell…
Naming it a detector is somewhat overplaying my cards, since it does not do a proper classification, but simply shows whether the ATM staircase fits the data better than a simple linear fit… It is relatively useful, though, on known ATM links, for empirically figuring out the actual per-packet overhead.
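The fitting idea can be sketched roughly like this (a toy reconstruction of the principle only, not the actual detector code; all names and the grouping-based residual are made up for this illustration):

```python
import math

def staircase_rtt(size, overhead, per_cell_ms, base_ms):
    # idealized delay of a packet that occupies whole 48-byte ATM cells
    return base_ms + math.ceil((size + overhead) / 48) * per_cell_ms

def estimate_overhead(samples, candidates=range(0, 48)):
    """Pick the overhead whose 48-byte staircase best explains RTT vs size.

    samples: (packet_size, rtt) pairs; the overhead is only identifiable
    modulo the 48-byte cell payload, hence the one-cell search window.
    """
    def residual(overhead):
        # with the right overhead, RTT is constant within each 48-byte step
        steps = {}
        for size, rtt in samples:
            steps.setdefault(math.ceil((size + overhead) / 48), []).append(rtt)
        return sum(max(r) - min(r) for r in steps.values())
    return min(candidates, key=residual)

# Synthetic, noise-free data generated with a true overhead of 10 bytes:
data = [(s, staircase_rtt(s, 10, 1.0, 20.0)) for s in range(16, 200)]
print(estimate_overhead(data))  # recovers 10
```

Real measurements are noisy, so the actual tool compares a staircase fit against a linear fit rather than assuming noise-free steps, but the per-cell quantization it exploits is the same.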
>
> Something I always missed, and I never got around to create such a
> tool. Thanks for doing this :-) (I don't have time atm to play with
> it, but it looks cool)
Thanks for the kind words. It was your and Russell Stuart's work that got me started in the first place, so that is quite flattering ;)
Best Regards
>
> --
> Best regards,
> Jesper Dangaard Brouer
> MSc.CS, Principal Kernel Engineer at Red Hat
> LinkedIn: http://www.linkedin.com/in/brouer
end of thread, other threads:[~2017-03-02 8:14 UTC | newest]
Thread overview: 6+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2017-03-01 18:37 [Bloat] Stab overhead calculation and mpu for ingress shaping Y
2017-03-01 21:36 ` Sebastian Moeller
2017-03-02 2:23 ` Y
2017-03-02 2:28 ` Y
2017-03-02 7:55 ` Jesper Dangaard Brouer
2017-03-02 8:14 ` Sebastian Moeller