[Bloat] Fwd: Understanding HFSC

Dave Taht dave.taht at gmail.com
Sun Dec 4 01:55:03 EST 2011


As an example of how I'd like AQMs to be more deeply
understood, and perhaps the lartc list resurrected...

---------- Forwarded message ----------
From: John A. Sullivan III <jsullivan at opensourcedevel.com>
Date: Sun, Dec 4, 2011 at 5:57 AM
Subject: Understanding HFSC
To: netdev at vger.kernel.org


Hello, all.  I hope I am in the right place, as this seems to be where
questions formerly asked on lartc are now asked.  For the last three
days, I've been banging my head against the wall trying to understand
HFSC, and it's finally starting to crack (the wall, not my head,
although that's close, too!).  It seems to be wonderful, powerful,
mysterious, and poorly understood.

I'm not sure I understand it either, but much of what is written about
it seems to come from people who don't fully grasp it themselves: most
of it focuses on guaranteed bandwidth and hierarchical sharing, but
spends little time explaining the important concept of decoupling
latency requirements from bandwidth - the part most interesting to us.
So I'm hoping you'll indulge my questions and my attempt to articulate
my understanding, to see whether I get it or have completely missed
the plot!

One of the most confusing bits to me is this: does the m1 rate apply
to each flow handled by the class, or only to the entire class once it
becomes active?  In other words, suppose I want to ensure that my VoIP
packets jump in front of my FTP bulk transfers, as so fascinatingly
illustrated on page 4 of http://trash.net/~kaber/hfsc/SIGCOM97.pdf, so
I specify a steeper m1 slope for the first 10 ms while a dozen RTP
sessions are running.  Does that mean that only the sessions that
snuck a packet into the first 10 ms received the prioritized
treatment, with all the rest treated at the m2 rate, or is the 10 ms
acceleration in deadline time applied to every new RTP flow?  I'm
hoping the latter, but it didn't appear to be explicitly stated.
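
To make sure we're talking about the same curve, here is how I
currently picture the two-piece service curve and the deadline it
implies, as a small Python sketch.  The function names and the example
numbers (1280 and 128 kbit/s, a 160-byte RTP packet) are mine, purely
for illustration:

# A rough sketch of a two-piece (concave) HFSC service curve, as I
# read the SIGCOMM'97 paper.

def service_bits(t_ms, m1_kbit, d_ms, m2_kbit):
    """Bits of service the curve promises t_ms after it is started."""
    if t_ms <= d_ms:
        return m1_kbit * t_ms                        # steep m1 segment
    return m1_kbit * d_ms + m2_kbit * (t_ms - d_ms)  # long-term m2 slope

def deadline_ms(bits, m1_kbit, d_ms, m2_kbit):
    """Inverse of the curve: the latest time, measured from whenever
    the curve was (re)started, by which 'bits' must have been sent.
    Whether that restart happens per class or per flow is exactly my
    question."""
    knee = m1_kbit * d_ms              # bits covered by the m1 segment
    if bits <= knee:
        return bits / m1_kbit
    return d_ms + (bits - knee) / m2_kbit

# One 160-byte RTP packet against m1=1280 kbit/s for 10 ms, m2=128 kbit/s
# (kbit/s and bits/ms are numerically the same, so ms in, ms out):
print(deadline_ms(160 * 8, 1280, 10, 128))   # -> 1.0 ms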

Perhaps it is even better illustrated by an example posted at
https://calomel.org/pf_hfsc.html, which describes a web server serving
10KB of text and then some large data files.  Suppose I set
umax=80kbits and dmax=200ms, so that the first 10KB of text on a page
is delivered with no more than 200ms of delay and the rest of the
images, videos, etc. are sent at the m2 rate.  What happens with
multiple users?  The first user goes to the site, pulls down the 10KB
of text, and then starts on the 10MB video (assuming they are not
pipelining).  This puts the hfsc class firmly into m2.  A new user
comes in while the first user is still downloading the video.  Is the
first 10KB for the second user scheduled at the m2 rate, or does m1
kick in to determine the deadline and jump those text packets in front
of both the HTTP video download and any bulk file transfers that might
be happening at the same time?
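
For reference, this is how I understand the (umax, dmax) shorthand to
translate into an explicit (m1, d, m2) curve - my own reading, using a
made-up 100 kbit/s long-term rate for the class, so please correct me
if the mapping is wrong:

# Sketch of my understanding of how (umax, dmax, rate) maps onto an
# explicit two-piece (m1, d, m2) service curve.

def umax_dmax_to_m1_d_m2(umax_kbit, dmax_s, rate_kbit):
    """Return (m1 in kbit/s, d in seconds, m2 in kbit/s)."""
    burst_rate = umax_kbit / dmax_s  # rate needed to ship umax within dmax
    if burst_rate > rate_kbit:
        # concave curve: a steep m1 segment for the first dmax seconds
        return (burst_rate, dmax_s, rate_kbit)
    # convex curve: no burst is needed, service simply starts later
    return (0.0, dmax_s - umax_kbit / rate_kbit, rate_kbit)

# The calomel example: ~80 kbit (10 KB) of text within 200 ms, with a
# hypothetical 100 kbit/s long-term rate:
print(umax_dmax_to_m1_d_m2(80, 0.2, 100))   # -> (400.0, 0.2, 100)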

Second, what must go into the umax size?  Let's assume we want umax to
be a single maximum-sized packet on a non-jumbo-frame Ethernet network
(a quick tally of the candidates follows the list below).  Should umax
be:
1500
1514 (add Ethernet)
1518 (add CRC)
1526 (add preamble)
1538 (add interframe gap)?
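
Just to spell out where those candidate values come from, this is the
arithmetic I am doing (the constants are the usual untagged-Ethernet
framing overheads):

# Tally of the candidate umax values above for a standard, untagged
# Ethernet frame carrying a 1500-byte IP packet.

MTU            = 1500   # IP packet carried by the frame
ETH_HEADER     = 14     # dst MAC + src MAC + EtherType
FCS            = 4      # frame check sequence (CRC)
PREAMBLE_SFD   = 8      # preamble + start-of-frame delimiter
INTERFRAME_GAP = 12     # minimum gap, expressed in byte times

total = MTU
for label, overhead in [("payload only", 0),
                        ("+ Ethernet header", ETH_HEADER),
                        ("+ CRC", FCS),
                        ("+ preamble", PREAMBLE_SFD),
                        ("+ interframe gap", INTERFRAME_GAP)]:
    total += overhead
    print(f"{total:5d}  {label}")   # 1500, 1514, 1518, 1526, 1538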

To keep this email from growing any longer, I'll put the rest in a
separate email.  Thanks - John




-- 
Dave Täht
SKYPE: davetaht
US Tel: 1-239-829-5608
FR Tel: 0638645374
http://www.bufferbloat.net


