From: Paolo Valente <paolo.valente@unimore.it>
To: MUSCARIELLO Luca OLNC/OLN <luca.muscariello@orange.com>
Cc: bloat <bloat@lists.bufferbloat.net>
Subject: Re: [Bloat] Fw: video about QFQ+ and DRR
Date: Fri, 09 Aug 2013 09:35:07 -0000
Message-ID: <CDFB557A-D4FB-4EB3-9E0E-C24509FBC771@unimore.it>
In-Reply-To: <5204ACCE.50006@orange.com>
On 9 Aug 2013, at 10:48, MUSCARIELLO Luca OLNC/OLN wrote:
> Hi,
>
> a few questions inline.
>
> Luca
>
>
> On 08/09/2013 09:58 AM, Paolo Valente wrote:
>> On 9 Aug 2013, at 08:45, MUSCARIELLO Luca OLNC/OLN wrote:
>>
>>> Hi,
>>>
>>> nice demo.
>>>
>> Thanks.
>>
>>> While I am not surprised by the good performance of QFQ+,
>>> I do not understand why DRR (I guess Linux SFQ, i.e. per-flow DRR+SQdrop)
>>> works so badly.
>>>
>>> If the two schedulers are serving the same kind of flow (IP 5-tuple), the level
>>> of protection of low-rate (< fair rate) flows should be the same (approx).
>>>
>> That 'approx' plays a critical role in the bad results with DRR. In particular, problems arise because of the following theoretical issue.
>> Consider the packet service time for a flow, i.e., the time to transmit one maximum-size packet of the flow at the rate reserved for the flow. For each flow, the worst-case packet delay/jitter guaranteed by QFQ+, with respect to packet completion times in an ideal, perfectly fair system, is equal to a few times the packet service time for the flow. In contrast, with DRR this delay/jitter is independent of the packet service time and grows linearly with the number of flows.
>> Hence, the shorter the packet service time is, the larger this delay becomes relative to the packet service time.
>>
>> In the test,
>> 1) the total number of flows N is equal to 501,
>> 2) the video-streaming server is reserved a bandwidth such that its packet service time complies with the frame playback period,
>> 3) the time to transmit 500 maximum-size packets at line rate is much higher than the packet service time for the video-streaming server, and hence much higher than the frame period (rough numbers are sketched below).
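To give a feel for the numbers, here is a back-of-the-envelope sketch in Python. Except for N = 501, every parameter below (link rate, packet size, video rate, frame rate) is an assumption for illustration, not the actual configuration of the demo:

# Back-of-the-envelope comparison of the two delay bounds.
# All parameters except N_FLOWS are assumed for illustration; the
# demo's actual link rate, packet size and frame rate are not stated
# above.

LINE_RATE    = 10e6      # link rate, bit/s (assumed)
MAX_PKT      = 1500 * 8  # maximum packet size, bits (assumed)
N_FLOWS      = 501       # total number of flows, as in the test
VIDEO_RATE   = 1e6       # rate reserved to the video flow, bit/s (assumed)
FRAME_PERIOD = 1 / 25    # video frame playback period, s (assumed)

# Packet service time: time to transmit one maximum-size packet at the
# rate reserved to the flow.
service_time = MAX_PKT / VIDEO_RATE

# QFQ+: worst-case deviation from an ideal, perfectly fair system is a
# few packet service times of the flow itself (independent of N).
qfq_delay = 3 * service_time  # "a few times" taken as 3 (assumed)

# DRR: in the worst case the flow waits for roughly one quantum (here,
# one maximum-size packet) from each of the other flows, at line rate.
drr_delay = (N_FLOWS - 1) * MAX_PKT / LINE_RATE

print(f"packet service time: {service_time * 1e3:6.1f} ms")
print(f"QFQ+ delay bound:    {qfq_delay * 1e3:6.1f} ms")
print(f"DRR delay bound:     {drr_delay * 1e3:6.1f} ms")
print(f"frame period:        {FRAME_PERIOD * 1e3:6.1f} ms")

With these assumed numbers the DRR bound (600 ms) exceeds the frame period (40 ms) by more than an order of magnitude, while the QFQ+ bound (36 ms) stays below it.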
>
> 1) AFAIK, sch_drr.c is class-based queuing and not per-flow queuing (correct me if I am wrong).
Certainly it is a classful queueing discipline. Just to sync with you, what do you mean by per-flow as opposed to class-based?
Do you mean that the scheduler internally and autonomously differentiates packets into distinct flows according to some internal classification rule (as is the case for SFQ)?
If so, then you are right.
> So, when you say N = 501 flows, do you mean that in your demo you configured class = flow (e.g. 5-tuple),
> or that you have multiple applications in the same class sharing a single (class) queue?
>
500 classes, one for each UDP flow, plus one class for the video stream.
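Just to fix ideas, here is a minimal sketch of how such a configuration can be expressed with tc; the interface name, the port-based classification and the weights are placeholders of mine, not the actual demo script:

# Sketch: build the 501-class QFQ+ setup by emitting tc commands.
# Interface, ports and weights are placeholders (assumptions), not the
# demo's actual configuration.

DEV = "eth0"  # assumed interface

cmds = [f"tc qdisc add dev {DEV} root handle 1: qfq"]

# Class 1:1 for the video stream, with a weight large enough that its
# fair share covers the video rate (weight value assumed).
cmds += [
    f"tc class add dev {DEV} parent 1: classid 1:1 qfq weight 100",
    f"tc filter add dev {DEV} parent 1: protocol ip prio 1 "
    f"u32 match ip dport 8080 0xffff flowid 1:1",
]

# One class per UDP flow (500 of them), all with the same weight; here
# each flow is identified by its destination port (assumed scheme).
for i in range(2, 502):
    cmds += [
        f"tc class add dev {DEV} parent 1: classid 1:{i:x} qfq weight 1",
        f"tc filter add dev {DEV} parent 1: protocol ip prio 1 "
        f"u32 match ip dport {5000 + i} 0xffff flowid 1:{i:x}",
    ]

print("\n".join(cmds))

The only point that matters here is that each UDP flow gets its own QFQ+ class, i.e., class = flow.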
> 2) do you mean that the class "video" gets a weight (per-class weight, with one single video flow in this case?)
> such that, on average, it gets a weighted fair share large enough for the video rate?
>
Yes
> 3) can you plug other qdiscs into QFQ+?
If you mean creating a hierarchy in which some of the classes scheduled by QFQ+ contain a different queueing discipline, plus further classes scheduled by that inner discipline, then yes.
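For example, something along these lines (device and handles are assumptions):

# Sketch: a QFQ+ class whose queue is managed by an inner qdisc (SFQ
# here); device and handles are assumptions.

DEV = "eth0"  # assumed interface

for cmd in [
    f"tc qdisc add dev {DEV} root handle 1: qfq",
    f"tc class add dev {DEV} parent 1: classid 1:1 qfq weight 1",
    # attach SFQ inside class 1:1 instead of the default FIFO
    f"tc qdisc add dev {DEV} parent 1:1 handle 10: sfq",
]:
    print(cmd)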
> Is QFQ+ a candidate to replace HTB (or DRR) in Linux?
Of course, in my position I can answer such a question only in principle. HTB does things that QFQ+ does not, such as rate limiting; in fact, in my demo I used HTB precisely to limit the rate.
As for DRR, its final goal perfectly matches that of QFQ+. There are, however, scenarios where DRR is faster than QFQ+ and may even provide better delay/jitter: basically, when all flows/classes have the same weight and the same packet length, i.e., exactly the cases where you do not need a fair-queueing scheduler more accurate than DRR.
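The hierarchy I used was roughly of the following shape; this is only a guess-level sketch with assumed device, rate and handles, not the actual demo script:

# Sketch: HTB limits the rate, QFQ+ schedules the flows within that
# rate. Device, rate and handles are assumptions, not the demo's
# actual values.

DEV = "eth0"     # assumed interface
RATE = "10mbit"  # assumed bottleneck rate enforced by HTB

for cmd in [
    f"tc qdisc add dev {DEV} root handle 1: htb default 1",
    # HTB class that caps the total rate
    f"tc class add dev {DEV} parent 1: classid 1:1 htb rate {RATE} ceil {RATE}",
    # QFQ+ attached below the rate limiter
    f"tc qdisc add dev {DEV} parent 1:1 handle 2: qfq",
]:
    print(cmd)

HTB creates the bottleneck; QFQ+ then decides the order of the packets that go through it.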
> or maybe HTB is already outdated and sch_drr.c might replace sch_htb.c in Linux 3.7.
>
>> As a consequence, when DRR incurs its inherent O(N) delay, the playback buffer on the client side runs out of frames.
>>
>>> Maybe Paolo said that in the talk and I might have missed something.
>>> Does QFQ+ work on a different definition of flow than DRR?
>> No, on the same.
> What is the definition of flow (or class) used?
>
flow = class in the test
Paolo
--
Paolo Valente
Algogroup
Dipartimento di Fisica, Informatica e Matematica
Via Campi, 213/B
41125 Modena - Italy
homepage: http://algogroup.unimore.it/people/paolo/