From: dpreed@reed.com
To: "Greg White" <g.white@CableLabs.com>
Cc: bloat, aqm@ietf.org, cerowrt-devel@lists.bufferbloat.net, William Chan (陈智昌)
Date: Fri, 18 Apr 2014 14:48:08 -0400 (EDT)
Subject: Re: [Cerowrt-devel] [aqm] chrome web page benchmarker fixed

Why is the DNS PLR so high? 1% is pretty depressing.

Also, it seems odd to eliminate 19% of the content retrieval because the tail is fat and long rather than short. Wouldn't it be better to have 1000 servers?


On Friday, April 18, 2014 2:15pm, "Greg White" <g.white@CableLabs.com> said:

> Dave,
> 
> We used the 25k object size for a short time back in 2012 until we had
> resources to build a more advanced model (appendix A). I did a bunch of
> captures of real web pages back in 2011 and compared the object size
> statistics to models that I'd seen published. Lognormal didn't seem to be
> *exactly* right, but it wasn't a bad fit to what I saw. I've attached a
> CDF.
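
A rough way to experiment with the kind of lognormal object-size model Greg describes is sketched below. The median and sigma are illustrative placeholders, not the parameters fitted from the CableLabs captures, so substitute your own fit before drawing conclusions.

    import numpy as np

    # Illustrative lognormal object-size model. The median and sigma are
    # placeholders, NOT values fitted from the 2011 CableLabs captures.
    median_bytes = 10_000            # assumed median object size
    sigma = 1.5                      # assumed log-space standard deviation
    mu = np.log(median_bytes)        # lognormal mu that gives that median

    rng = np.random.default_rng(0)
    sizes = rng.lognormal(mean=mu, sigma=sigma, size=100)   # 100 objects per page

    # Empirical CDF of the sampled sizes, e.g. for plotting against a measured CDF.
    xs = np.sort(sizes)
    cdf = np.arange(1, len(xs) + 1) / len(xs)
    print(f"p50 = {np.percentile(sizes, 50):,.0f} B, p95 = {np.percentile(sizes, 95):,.0f} B")
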
> The choice of 4 servers was based somewhat on logistics, and also on a
> finding that across our data set, the average web page retrieved 81% of
> its resources from the top 4 servers. Increasing to 5 servers only
> increased that percentage to 84%.
> 
> The choice of RTTs also came from the web traffic captures. I saw
> RTTmin=16ms, RTTmean=53.8ms, RTTmax=134ms.
> 
> Much of this can be found in
> https://tools.ietf.org/html/draft-white-httpbis-spdy-analysis-00
> 
> In many of the cases that we've simulated, the packet drop probability is
> less than 1% for DNS packets. In our web model, there are a total of 4
> servers, so 4 DNS lookups assuming none of the addresses are cached. If
> PLR = 1%, there would be a 3.9% chance of losing one or more DNS packets
> (with a resulting ~5 second additional delay on load time). I've probably
> oversimplified this, but Kathie N. and I made the call that it would be
> significantly easier to just do this math than to build a dns
> implementation in ns2. We've open sourced the web model (it's on Kathie's
> web page and will be part of ns2.36) with an encouragement to the
> community to improve on it. If you'd like to port it to ns3 and add a dns
> model, that would be fantastic.
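
That 3.9% figure is just the complement of four independent successes at a 1% loss rate; a quick sketch of the arithmetic, using the same simplifications Greg describes (one packet per lookup, a ~5 s resolver retry on loss):

    # P(at least one DNS loss) across 4 independent lookups at PLR = 1%,
    # mirroring the simplification described above (one packet per lookup).
    plr = 0.01
    n_lookups = 4

    p_any_loss = 1 - (1 - plr) ** n_lookups
    print(f"P(at least one DNS loss) = {p_any_loss:.3%}")    # ~3.9%

    # Illustrative expected extra page-load delay if each loss costs a ~5 s
    # resolver retry timeout (a rough estimate, not ns2 output).
    retry_timeout_s = 5.0
    print(f"Expected added delay = {p_any_loss * retry_timeout_s:.2f} s")
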
> 
> -Greg
> 
> 
> On 4/17/14, 3:07 PM, "Dave Taht" <dave.taht@gmail.com> wrote:
> 
> >On Thu, Apr 17, 2014 at 12:01 PM, William Chan (陈智昌)
> ><willchan@chromium.org> wrote:
> >> Speaking as the primary Chromium developer in charge of this relevant
> >> code, I would like to caution putting too much trust in the numbers
> >> generated. Any statistical claims about the numbers are probably
> >> unreasonable to make.
> >
> >Sigh. Other benchmarks such as the apache ("ab") benchmark
> >are primarily designed as stress testers for web servers, not as realistic
> >traffic. Modern web traffic has such a high level of dynamism in it
> >that static web page loads along any distribution seem insufficient,
> >passive analysis of aggregated traffic "feels" incorrect relative to the
> >sorts of home and small business traffic I've seen, and so on.
> >
> >Famous papers, such as this one:
> >
> >http://ccr.sigcomm.org/archive/1995/jan95/ccr-9501-leland.pdf
> >
> >seem possibly irrelevant to draw conclusions from, given the kind
> >of data they analysed, and proceeding from an incorrect model or
> >gut feel for how the network behaves today seems foolish.
> >
> >Even the most basic of tools, such as httping, had three basic bugs
> >that I found in a few minutes of trying to come up with some basic
> >behaviors yesterday:
> >
> >https://lists.bufferbloat.net/pipermail/bloat/2014-April/001890.html
> >
> >Those are going to be a lot easier to fix than diving into the chromium
> >codebase!
> >
> >There are very few tools worth trusting, and I am always dubious
> >of papers that publish results with unavailable tools and data. The only
> >tools I have any faith in for network analysis are netperf,
> >netperf-wrapper, tcpdump and xplot.org, and to a large extent wireshark.
> >Toke and I have been tearing apart d-itg and I hope to one day be able
> >to add that to my trustable list... but better tools are needed!
> >
> >Tools that I don't have a lot of faith in include that, iperf, anything
> >written in java or other high level languages, speedtest.net, and things
> >like shaperprobe.
> >
> >I have very little faith in ns2, slightly more in ns3, and I've been
> >meaning to look over mininet and other simulators whenever I get some
> >spare time; the mininet results stanford gets seem pretty reasonable and
> >I adore their reproducing-results effort. Haven't explored ndt, keep
> >meaning to...
> >
> >> Reasons:
> >> * We don't actively maintain this code. It's behind the command line
> >> flags. They are broken. The fact that it still results in numbers on
> >> the benchmark extension is an example of where unmaintained code
> >> doesn't have the UI disabled, even though the internal workings of the
> >> code fail to guarantee correct operation. We haven't disabled it
> >> because, well, it's unmaintained.
> >
> >As I mentioned, I was gearing up for a hacking run...
> >
> >The vast majority of results I look at are actually obtained via
> >looking at packet captures. I mostly use benchmarks as abstractions
> >to see if they make some sense relative to the captures, and tend
> >to compare different benchmarks against each other.
> >
> >I realize others don't go into that level of detail, so you have given
> >fair warning! In our case we used the web page benchmarker as
> >a means to try and rapidly acquire some typical distributions of
> >get and tcp stream requests from things like the alexa top 1000,
> >and as a way to A/B different aqm/packet scheduling setups.
> >
> >... but the only easily publishable results were from the benchmark
> >itself, and we (reluctantly) only published one graph from all the work
> >that went into it 2+ years back and used it as a test driver for the
> >famous ietf video, comparing two identical boxes running it at the same
> >time under different network conditions:
> >
> >https://www.bufferbloat.net/projects/cerowrt/wiki/Bloat-videos#IETF-demo-side-by-side-of-a-normal-cable-modem-vs-fq_codel
> >
> >From what I fiddled with today, it is at least still useful for that?
> >
> >Moving on...
> >
> >The web model in the cablelabs work doesn't look much like my captures,
> >in addition to not modeling dns at all, and using a smaller IW than
> >google. It looks like this:
> >
> >>> Model single user web page download as follows:
> >
> >>> - Web page modeled as single HTML page + 100 objects spread evenly
> >>> across 4 servers. Web object sizes are currently fixed at 25 kB each,
> >>> whereas the initial HTML page is 100 kB. Appendix A provides an
> >>> alternative page model that may be explored in future work.
> >
> >Where what I see is a huge amount of stuff that fits into a single
> >iw10 slow start episode, and some level of pipelining on larger stuff,
> >so a large number of object sizes of less than 7k, with a lightly tailed
> >distribution outside of that, makes more sense.
> >
> >(I'm not staring at appendix A right now, I'm under the impression
> >it was better)
> >
> >I certainly would like more suggestions for models and types
> >of web traffic, as well as simulation of https + pfs traffic,
> >spdy, quic, etc....
> >
> >>> - Server RTTs set as follows (20 ms, 30 ms, 50 ms, 100 ms).
> >
> >Server RTTs from my own web history tend to be lower than 50ms.
> >
> >>> - Initial HTTP GET to retrieve a moderately sized object (100 kB HTML
> >>> page) from server 1.
> >
> >An initial GET to google fits into iw10 - it's about 7k.
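
To put the IW10 point in numbers: with a 10-segment initial window and a typical ~1448-byte MSS, roughly 14 kB fits in the first flight, so a ~7k response needs no extra round trips while a fixed 25 kB object needs one more. A small sketch under idealized slow start (no losses, no request or header overhead; the MSS value is an assumption):

    MSS = 1448   # assumed TCP payload bytes per segment
    IW = 10      # IW10: initial congestion window in segments

    def rtts_to_deliver(object_bytes, iw=IW, mss=MSS):
        # Round trips of data transfer under idealized slow start
        # (window doubles each RTT, no losses, no application overhead).
        segments = -(-object_bytes // mss)     # ceiling division
        cwnd, sent, rtts = iw, 0, 0
        while sent < segments:
            sent += cwnd
            cwnd *= 2
            rtts += 1
        return rtts

    for size in (7_000, 25_000, 100_000):
        print(f"{size:>7} B object: {rtts_to_deliver(size)} RTT(s)")
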
> >
> >>> - Once initial HTTP GET completes, initiate 24 simultaneous HTTP GETs
> >>> (via separate TCP connections), 6 connections each to 4 different
> >>> server nodes
> >
> >I usually don't see more than 15, and certainly not 25k sized objects.
> >
> >>> - Once each individual HTTP GET completes, initiate a subsequent GET
> >>> to the same server, until 25 objects have been retrieved from each
> >>> server.
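
Pulling the quoted model together in one place, here is a minimal, self-contained sketch of its fetch logic using the numbers above; the transfer-time function is a crude placeholder (fixed bottleneck rate, one RTT per GET, no TCP dynamics), not the open-sourced ns2 implementation:

    import math

    # Parameters quoted from the CableLabs page model above.
    HTML_BYTES = 100_000              # initial HTML page from server 1
    OBJ_BYTES = 25_000                # each of the 100 embedded objects
    OBJECTS_PER_SERVER = 25           # 100 objects spread across 4 servers
    CONNS_PER_SERVER = 6              # 24 simultaneous connections total
    SERVER_RTTS_S = [0.020, 0.030, 0.050, 0.100]

    LINK_BPS = 20e6                   # assumed bottleneck rate, illustrative only

    def get_time(size_bytes, rtt_s):
        # One RTT of request latency plus serialization time (placeholder model).
        return rtt_s + size_bytes * 8 / LINK_BPS

    def page_load_time():
        t = get_time(HTML_BYTES, SERVER_RTTS_S[0])          # initial HTML GET
        per_server = []
        for rtt in SERVER_RTTS_S:
            # Each of 6 connections fetches objects back-to-back until the
            # server's 25 objects are done.
            rounds = math.ceil(OBJECTS_PER_SERVER / CONNS_PER_SERVER)
            per_server.append(rounds * get_time(OBJ_BYTES, rtt))
        return t + max(per_server)    # done when the slowest server finishes

    print(f"Modeled page load time ~ {page_load_time():.2f} s")
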
> >
> >
> >> * We don't make sure to flush all the network state in between runs, so
> >> if you're using that option, don't trust it to work.
> >
> >The typical scenario we used was a run against dozens or hundreds of urls,
> >capturing traffic, while varying network conditions.
> >
> >We regarded the first run as the most interesting.
> >
> >You can exit the browser and restart after a run like that.
> >
> >At the moment I merely plan to use the tool primarily to survey various
> >web sites and load times while doing packet captures. The hope was
> >to get valid data from the network portion of the load, tho...
> >
> >> * If you have an advanced Chromium setup, this definitely does not
> >> work. I advise using the benchmark extension only with a separate
> >> Chromium profile for testing purposes. Our flushing of sockets, caches,
> >> etc does not actually work correctly when you use the Chromium
> >> multiprofile feature and also fails to flush lots of our other network
> >> caches.
> >
> >Noted.
> >
> >
> >> * No one on Chromium really believes the time to paint numbers that we
> >> output :) It's complicated. Our graphics stack is complicated. The time
> >> from
> >
> >I actually care only about time-to-full-layout as that's a core network
> >effect...
> >
> >> when Blink thinks it painted to when the GPU actually blits to the
> >> screen cannot currently be corroborated with any high degree of
> >> accuracy from within our code.
> >
> >> * It has not been maintained since 2010. It is quite likely there are
> >> many other subtle inaccuracies here.
> >
> >Grok.
> >
> >> In short, while you can expect it to give you a very high level
> >> understanding of performance issues, I advise against placing
> >> non-trivial confidence in the accuracy of the numbers generated by the
> >> benchmark extension. The fact that numbers are produced by the
> >> extension should not be treated as evidence that the extension actually
> >> functions correctly.
> >
> >OK, noted. Still delighted to be able to have a simple load generator
> >that exercises the browsers and generates some results, however
> >dubious.
> >
> >>
> >> Cheers.
> >>
> >>
> >> On Thu, Apr 17, 2014 at 10:49 AM, Dave Taht <dave.taht@gmail.com>
> >> wrote:
> >>>
> >>> Getting a grip on real web page load time behavior in an age of
> >>> sharded websites, dozens of dns lookups, javascript, and fairly
> >>> random behavior in ad services and cdns, against how a modern
> >>> browser behaves, is very, very hard.
> >>>
> >>> it turns out if you run
> >>>
> >>> google-chrome --enable-benchmarking --enable-net-benchmarking
> >>>
> >>> (Mac users have to embed these options in their startup script - see
> >>> http://www.chromium.org/developers/how-tos/run-chromium-with-flags )
> >>>
> >>> enable developer options and install and run the chrome web page
> >>> benchmarker,
> >>> (
> >>> https://chrome.google.com/webstore/detail/page-benchmarker/channimfdomahekjcahlbpccbgaopjll?hl=en
> >>> )
> >>>
> >>> that it works (at least for me, on a brief test of the latest chrome,
> >>> on linux. Can someone try windows and mac?)
> >>>
> >>> You can then feed in a list of urls to test against, and post-process
> >>> the resulting .csv file to your heart's content. We used to use this
> >>> benchmark a lot while trying to characterise typical web behaviors
> >>> under aqm and packet scheduling systems under load. Running
> >>> it simultaneously with a rrul test or one of the simpler tcp upload or
> >>> download tests in the rrul suite was often quite interesting.
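
For the "post-process the resulting .csv" step, a minimal sketch using only the standard library is below; the column names (url, load_time_ms) are assumptions for illustration, not the benchmarker's documented schema, so adjust them to whatever headers your export actually contains.

    import csv
    import statistics

    # Hypothetical column names -- inspect the exported .csv and adjust.
    URL_COL = "url"
    LOAD_MS_COL = "load_time_ms"

    def summarize(path):
        # Print per-URL mean/median load time from a benchmarker CSV export.
        by_url = {}
        with open(path, newline="") as fh:
            for row in csv.DictReader(fh):
                try:
                    by_url.setdefault(row[URL_COL], []).append(float(row[LOAD_MS_COL]))
                except (KeyError, ValueError):
                    continue            # skip rows missing the expected columns
        for url, times in sorted(by_url.items()):
            print(f"{url}: n={len(times)} mean={statistics.mean(times):.0f} ms"
                  f" median={statistics.median(times):.0f} ms")

    # summarize("benchmark_results.csv")
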
> >>>
> >>> It turned out the doc has been wrong for a while as to the name of the
> >>> second command line option. I was gearing up mentally for having to
> >>> look at the source....
> >>>
> >>> http://code.google.com/p/chromium/issues/detail?id=338705
> >>>
> >>> /me happy
> >>>
> >>> --
> >>> Dave Täht
> >>>
> >>> Heartbleed POC on wifi campus networks with EAP auth:
> >>> http://www.eduroam.edu.au/advisory.html
> >>>
> >>> _______________________________________________
> >>> aqm mailing list
> >>> aqm@ietf.org
> >>> https://www.ietf.org/mailman/listinfo/aqm
> >>
> >>
> >
> >
> >
> >--
> >Dave Täht
> >
> >NSFW:
> >https://w2.eff.org/Censorship/Internet_censorship_bills/russell_0296_indecent.article
> >
> >_______________________________________________
> >aqm mailing list
> >aqm@ietf.org
> >https://www.ietf.org/mailman/listinfo/aqm
> 
> _______________________________________________
> Cerowrt-devel mailing list
> Cerowrt-devel@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/cerowrt-devel
