CoDel AQM discussions
 help / color / mirror / Atom feed
From: Kathleen Nichols <nichols@pollere.com>
To: codel@lists.bufferbloat.net
Subject: Re: [Codel] self tuning codel
Date: Thu, 19 Jul 2018 12:04:38 -0700	[thread overview]
Message-ID: <42959aa5-a266-bd75-a49b-38a726695f11@pollere.com> (raw)
In-Reply-To: <CAA93jw4JvAiDvEa3zAxNb9wyqmNyPKqGqW2H7h-fWnSPOj-x5g@mail.gmail.com>


On 7/18/18 3:42 PM, Dave Taht wrote:
> there's so much math in this that I cannot make heads or tails of it.
>
>
https://www.eee.hku.hk/~kcleung/papers/conferences/bufferbloat_multi-bottleneck:INFOCOM_2018/PID5170809.pdf
>

Dave,

I would never presume to critique math in a paper co-authored by Steve
Low, but I wonder about the model/premise of this work (quotes are taken
directly from the paper so I don't misrepresent anything):

> We consider TCP
> Reno in our model since it is one of the most popular TCP
> algorithms and our analysis in this paper mainly focuses on
> AQM algorithms.

I think the paper constructs bottlenecks as places where the arrival
rate exceeds the departure rate?

Then, there is this:

> The essential idea of Algorithm 1 is that every AQM router
> needs to know all the packet dropping probabilities at the
> bottleneck links in order to compute the window sizes of the
> flows from (5). In fact, Algorithm 1 is motivated by the widely
> used Open Shortest Path First (OSPF) routing protocol [17].
> In OSPF, there is a designated router (DR) for every network
> segment, which is responsible for gathering and forwarding
> link state updates. Thus, we let a specific AQM router serve
> as a centralized DR in the system. All routers send packet
> dropping probability updates to DR. DR then updates its list
> of packet dropping probabilities and sends it to all routers

Does this seem possible to anyone? There's an inherent assumption here
that "dropping probabilities" are constant over a period of time that is
long compared to the time to do the above. In what I've seen of traffic
and AQMs this seems unlikely but I'm certainly not as well-informed on
this as some people.

I have suggested that the setting of interval might be something to
examine (that is to be adaptable in some way) from the beginning. So
it's nice to see someone take a cut at this but I'm not sure the basic
notion, that the interval is supposed to be giving the end hosts enough
time to respond to a drop, has been retained here.

> From (34), we observe that the range of I0 is determined by
> several factors and will vary if these factors change over time.
> A static value of I0 cannot always guarantee the stability of
> the TCP/CoDel system. Thus, we propose Self-tuning CoDel
> to dynamically adjust I0 for CoDel at every bottleneck link
> based on network conditions, as shown in Algorithm 2.

Of course, the big problem is that the time to respond, or RTD, is going
to be different for every communicating pair and will increase with
congestion. This seems like a pretty hard problem to solve *optimally*
and I don't see how this contributes to a solution since its model is
pretty simple compared to what is seen in the Internet today.

> In this subsection, we present ns-2 simulation results to
> demonstrate that our proposed Self-tuning CoDel can effectively
> improve the system stability and performance. We use
> TCP Reno as the source algorithm, and compare Self-tuning
> CoDel with CoDel and PIE, which are two popular modern
> AQM algorithms, in terms of the stability of queueing delay

So that's all I could find about the simulation model. It *appears* (but
I won't put words in anyone's mouth, I sure hate it when folks do that
to me) these are a bunch of "forever" Reno TCP flows. There are a lot of
really bad ns-2 TCP models out there. And it is really hard to get
simulations to resemble the traffic patterns I've seen in packet traces
from the Real World. I had to look twice to see this was published in
2018. What?

I also wonder about the notion of "stability" of queue delays. I would
expect a healthy system to have spikes in delay but not persistent
delay. But I'm willing to entertain other viewpoints.

So, there IS a lot of math I don't understand and stuff I might
understand if I took the time, so I just look at the un-mathy stuff and
try to figure out what is the purpose of this work? How does it apply?

	Kathie



  reply	other threads:[~2018-07-19 19:04 UTC|newest]

Thread overview: 4+ messages / expand[flat|nested]  mbox.gz  Atom feed  top
2018-07-18 22:42 Dave Taht
2018-07-19 19:04 ` Kathleen Nichols [this message]
2018-07-19 19:16   ` Jonathan Morton
2018-07-20  6:00 ` Andrew McGregor

Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save the following mbox file, import it into your mail client,
  and reply-to-all from there: mbox

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

  List information: https://lists.bufferbloat.net/postorius/lists/codel.lists.bufferbloat.net/

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to=42959aa5-a266-bd75-a49b-38a726695f11@pollere.com \
    --to=nichols@pollere.com \
    --cc=codel@lists.bufferbloat.net \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html

* If your mail client supports setting the In-Reply-To header
  via mailto: links, try the mailto: link
Be sure your reply has a Subject: header at the top and a blank line before the message body.
This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox