A much longer test would be to download this
wget -4 http://gw.lab.bufferbloat.net/capetown/capetown-wndr3700v2/OpenWrt-ImageBuilder-ar71xx-for-Linux-x86_64.tar.bz2
while pinging somewhere fairly close by.
Over your wired and wireless connection(s)...
Over multiple downloads, too...
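Something like this is what I mean -- just a sketch, and swap "some.nearby.host" for whatever host is actually close to you:

# rough sketch: log ping in the background while the big download runs
ping some.nearby.host > ping-during-download.txt &
PINGPID=$!
wget -4 http://gw.lab.bufferbloat.net/capetown/capetown-wndr3700v2/OpenWrt-ImageBuilder-ar71xx-for-Linux-x86_64.tar.bz2
kill $PINGPID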
Sorry to make you work so hard. I am SO OVERJOYED to get some data on this problem, however.....
(yes, I'll change it to lab.bismarkproject.net as soon as DNS lands)
On Fri, May 20, 2011 at 4:35 PM, Nick Feamster <feamster@cc.gatech.edu> wrote:
Have run it 2x now.
On May 21, 2011, at 12:28 AM, Dave Taht wrote:
>
> And btw, what are the results of a speedtest from your location without QoS on?
>
I see about 850 kbps down and 400 kbps up.
The default upload setting is abusive.
Sorry. Nicaragua typically had 64k-128k up. Took my best guess.
> With QoS on, set to up/dl values within a few percentage points of that (and the overhead calculation disabled)
>
And with QoS set to say 840/380, with the overhead calculation disabled? Should be about 15% below that for a long bulk transfer. Can get closer...
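On the router that'd be something along these lines, assuming the stock qos-scripts /etc/config/qos layout (option names may differ on your build; values are in kbit/s):

# sketch only -- assumes qos-scripts' "wan" section in /etc/config/qos
uci set qos.wan.enabled=1
uci set qos.wan.download=840    # kbit/s
uci set qos.wan.upload=380      # kbit/s
uci set qos.wan.overhead=0      # leave the overhead calculation off
uci commit qos
/etc/init.d/qos restart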
Please note that setting these values on the basis of one datapoint in a majorish city will result in bad values deeper in the country....
Helps to also ping somewhere at the same time as the test to see your latencies start to go to heck as you get closer to the "edge". You'll see it start to jitter... then go wildly late... and at extreme values, TCP/IP will start to malfunction as per the bufferbloat diagrams...
I'd LOVE for a few tcpdumps of stuff like this, from where you are.....
> I can bake a better default into the next build, but I was figuring you'd be lucky to be getting 1000 down....
Yep yep... though I don't think PowerBoost is enabled over here. Although, we'll find out. :-)
>
> I want to note that according to your previous study, the first 30 seconds of data need to be discarded in order for a speedtest to be valid, and speedtest.net doesn't do that...
-Nick