From: Dave Taht <dave.taht@gmail.com>
To: Jonathan Morton <chromatix99@gmail.com>
Cc: "Toke Høiland-Jørgensen" <toke@toke.dk>,
"Cake List" <cake@lists.bufferbloat.net>
Subject: Re: [Cake] bufferbloat still misunderstood & ignored
Date: Wed, 28 Mar 2018 18:07:59 -0700
Message-ID: <CAA93jw4kJHT4p-Ry0pDdK9iCJLnWxMmVSocRewtXXkB7BU-a4A@mail.gmail.com>
In-Reply-To: <B02D8A8C-34B9-408B-A9C1-0040423F736F@gmail.com>
I so wish that the network neutrality debate included discussions such as these.
On Wed, Mar 28, 2018 at 5:53 PM, Jonathan Morton <chromatix99@gmail.com> wrote:
>> On 29 Mar, 2018, at 3:26 am, Dave Taht <dave.taht@gmail.com> wrote:
>>
>> A finicky bit would be who to penalize when the underlying medium
>> (shared cable) is oversubscribed.
>
> Two obvious reasonable solutions: share equally per subscriber, or share proportionately to provisioned bandwidth per subscriber. Either should be fairly straightforward to implement in an integrated qdisc, and either would penalise the (instantaneously) heaviest users before affecting normal or light users.
>
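For concreteness, here's a rough Python sketch of how either policy could fall out of a single deficit-round-robin pass over per-subscriber queues. This is not cake's actual code and all the names are made up; the point is that the two policies differ only in how each subscriber's per-round quantum is chosen:

from collections import deque

class Subscriber:
    def __init__(self, name, quantum):
        self.name = name
        self.quantum = quantum   # bytes credited to this queue each round
        self.deficit = 0
        self.queue = deque()     # backlog, stored as packet sizes in bytes

def drr_round(subs):
    # One deficit-round-robin pass: each backlogged subscriber gets its
    # quantum and dequeues whole packets while the deficit covers them.
    sent = []
    for s in subs:
        if not s.queue:
            continue
        s.deficit += s.quantum
        while s.queue and s.queue[0] <= s.deficit:
            pkt = s.queue.popleft()
            s.deficit -= pkt
            sent.append((s.name, pkt))
    return sent

def drain(subs, rounds=50):
    totals = {s.name: 0 for s in subs}
    for _ in range(rounds):
        for name, nbytes in drr_round(subs):
            totals[name] += nbytes
    return totals

MTU = 1500
# Equal sharing: every subscriber gets the same quantum regardless of tier.
equal = [Subscriber("light", MTU), Subscriber("heavy", MTU)]
# Proportional sharing: quantum scales with the provisioned rate
# (here "heavy" pays for 4x the bandwidth of "light").
proportional = [Subscriber("light", MTU), Subscriber("heavy", 4 * MTU)]

for s in equal + proportional:
    s.queue.extend([MTU] * 1000)   # everyone saturates the shared backhaul

print(drain(equal))         # roughly a 50/50 byte split
print(drain(proportional))  # roughly a 1:4 byte split

A real qdisc has to be more careful than this (per-packet overhead accounting, time-based scheduling, and so on), but the sharing policy really does reduce to how that one quantum is assigned per subscriber.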
> Equal sharing has the interesting side-effect that subscribers on lower tiers don't notice backhaul congestion at all until higher tiers have been forced down to their level. This potentially gives ISPs an incentive to avoid such extreme congestion (by upgrading backhaul to match demand), since rational customers won't pay for bandwidth they can't use. It also ensures that all subscribers retain a reasonable, basic level of service during abnormal congestion events.
>
> Conversely, proportional sharing might give a perverse incentive, since paying more gives a larger share of the pie, no matter how cramped it is. Artificial scarcity could then be used to aid up-selling in an anti-consumer manner, similar to what's been seen with Netflix. It would be naive to assume that ISPs won't do this, given the opportunity, so it would be better to build only the more consumer-friendly option into the software.
>
> Theoretically, a middle ground could be to assign a sharing weight separately from the provisioned bandwidth. This would permit, for example, subscribers provisioned at 100:1 bandwidths to receive 4:1 service under congested conditions. However, this would be under ISPs' control even if fully documented, and would therefore be a little too tempting to abuse.
>
> - Jonathan Morton
>
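To make that 100:1 -> 4:1 middle ground concrete: one way to get that kind of compression is to derive the scheduling weight from a sub-linear function of the provisioned rate rather than from the rate itself. The exponent below is picked purely to reproduce the example ratio (100 ** 0.3 is about 3.98) and isn't anything cake actually implements:

MTU = 1500

def quantum_for(provisioned_mbps, base_mbps=10, exponent=0.3):
    # Weight grows as (rate/base) ** exponent, so bigger tiers still get
    # more under congestion, but far less than a pro-rata share.
    weight = (provisioned_mbps / base_mbps) ** exponent
    return max(MTU, int(MTU * weight))

for rate in (10, 100, 1000):
    print(rate, "Mbit/s ->", quantum_for(rate), "byte quantum")
# 10 -> 1500, 100 -> ~2992, 1000 -> ~5971: a 100:1 provisioning spread
# collapses to roughly 4:1 service when the shared link is saturated.

Of course the caveat above still applies: whoever controls the exponent controls the compression, so this only stays consumer-friendly if it is fixed in the software rather than left as a knob.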
--
Dave Täht
CEO, TekLibre, LLC
http://www.teklibre.com
Tel: 1-669-226-2619