On Sat, May 21, 2011 at 10:33 AM, Srikanth Sundaresan <srikanth@gatech.edu> wrote:

On May 21, 2011, at 6:08 PM, Dave Taht wrote:

>
>
> On Sat, May 21, 2011 at 9:48 AM, Srikanth Sundaresan <srikanth@gatech.edu> wrote:
> I do not have a problem with it when we know the effective bandwidth. My question is, what about when we do not? We cannot rely on volunteers to give us reliable information on that.
>
> I say we turn it *on only while testing*, and only after we get an idea of each user's bandwidth. This is a feature that, in its current form, needs to be tailored to each user. It is not a good idea to give everyone a default setting - as I mentioned in my previous email, unless we hit the bull's-eye (unlikely), it is either crippling or useless.
>
> It could potentially seriously downgrade user experience.
>
>
> What part of 800 ms latencies under load without QoS isn't a degraded user experience?

If the cost in reduced upload speed - which could be as much as 30% (if the default is 340 kbps; the DSL connection here in my B&B gets up to 450 kbps) - can't be reduced, I think that is a higher price to pay than the latency reduction is worth.
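(For concreteness, using only the numbers above: capping a 450 kbps uplink at 340 kbps gives up (450 - 340) / 450 ≈ 24% of the raw rate, and with typical DSL framing overhead on top an effective loss approaching 30% is plausible. Neither figure is a new measurement.)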


I would urge you strongly to do more realistic testing, with more real users using the network...

...before making that call. I am assuming you are unleashing these devices on real users, with more than one person on the network?

RTTs will probably get even worse than 800 ms with multiple streams running. I have not tested that yet; I'll get to it.
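One quick way to check that, as a sketch rather than a prescription - the hostnames below are placeholders, and any reachable netperf server would do:

    # run a 60-second ping in the background while saturating the uplink
    ping -c 60 example.com > ping_under_load.txt &
    # drive a bulk TCP upload against a netperf server for the same 60 seconds
    netperf -H netperf.example.com -t TCP_STREAM -l 60
    # launch several netperf instances in parallel to simulate multiple streams

Comparing those RTTs against an idle-link baseline shows how much latency the load adds.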

Chats with network operators and cybercafe operators down there will certainly also prove fruitful.

If you could get out of the hotel and gather some information from the real world around you down there about the usage (or non-usage) of QoS techniques on local systems...

you might get some really good coffee and meet some interesting people.

Lastly, your results pointed to a knee in the curves that I was not aware of, at 256 kbit, which is perilously close to the speeds you are encountering down there. With the existing scripts set both to 24 Mbit and to 1000/100, I was getting about a 12% overall single-threaded performance loss and nearly flat utilization multi-threaded, while retaining good DNS performance. I was not aware of this knee until I got some data back from the field yesterday and had a chance to look it over this morning. I'm simulating it in the lab (or will be whenever I get there), and can hopefully do better.
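For anyone who hasn't looked at the scripts: the shaping under discussion is conventional Linux egress shaping, roughly like the sketch below. This is not the actual script - the interface name and rate are placeholders, and the rate has to be tailored to each link, which is exactly the problem:

    IFACE=eth1     # WAN interface - placeholder
    UPLINK=380     # kbit/s, ~85% of a nominal 450 kbit/s uplink - placeholder

    # Replace the root qdisc with an HTB shaper capped below the line rate,
    # so the queue builds here, where we control it, rather than in the modem.
    tc qdisc del dev $IFACE root 2>/dev/null
    tc qdisc add dev $IFACE root handle 1: htb default 10
    tc class add dev $IFACE parent 1: classid 1:10 htb rate ${UPLINK}kbit
    # Stochastic fair queuing within the class keeps a single bulk flow
    # from starving interactive traffic and DNS.
    tc qdisc add dev $IFACE parent 1:10 handle 10: sfq perturb 10

Set UPLINK too low and you cripple the link; set it too high and the modem's buffer refills and the shaper does nothing - hence the need for per-user tuning.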

See the ongoing bug:

http://www.bufferbloat.net/issues/171

- Srikanth




--
Dave Täht
SKYPE: davetaht
US Tel: 1-239-829-5608
http://the-edge.blogspot.com