From: "David P. Reed" <dpreed@reed.com>
Date: Fri, 14 Nov 2014 09:45:34 -0500
To: Aaron Wood, Mike "dave" Taht
Cc: Frank Horowitz, cerowrt-devel@lists.bufferbloat.net
Subject: Re: [Cerowrt-devel] High Performance (SSH) Data Transfers using fq_codel?

Filling intermediate buffers doesn't make the TCP congestion algorithms work. They just kick in when the buffers are full!
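To put a rough number on how long a full buffer hides congestion from the sender: the standing-queue delay is simply buffer size divided by the link's drain rate. A quick sketch of that arithmetic; the buffer size and link rate below are purely illustrative, not measurements of any real path:

```shell
# Delay added by a standing queue = queued bytes / link drain rate.
# Numbers are illustrative placeholders, not measured values.
BUF_BYTES=$((32 * 1024 * 1024))   # a 32 MB intermediate buffer
LINK_BITS_PER_S=1000000000        # a 1 Gbit/s bottleneck link
# milliseconds = bytes * 8 bits/byte * 1000 ms/s / (bits per second)
DELAY_MS=$(( BUF_BYTES * 8 * 1000 / LINK_BITS_PER_S ))
echo "a full buffer holds ~${DELAY_MS} ms of traffic before loss is signalled"
```

On those made-up numbers, a loss-based sender only hears about congestion roughly a quarter of a second after the queue filled, which is the control-loop lag being complained about here.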
And then you end up with a pile of packets that will be duplicated, which amplifies the pressure on buffers! If there could be no buffering, the big file transfers would home in on the available capacity more quickly and waste fewer retransmits - while being friendly to folks sharing the bottleneck! The HPC guys really don't understand a thing about control theory...

On Nov 13, 2014, Aaron Wood <woody77@gmail.com> wrote:
>I have a couple friends in that crowd, and they _also_ aren't using
>shared lines. So they don't worry in the slightest about congestion
>when they're trying to keep dedicated links fully saturated. Their big
>issue with dropped packets is that some of the TCP congestion-control
>algorithms kick in on a single dropped packet:
>http://fasterdata.es.net/network-tuning/tcp-issues-explained/packet-loss/
>
>I'm thinking that some forward-error-correction would make their lives
>much, much better.
>
>-Aaron
>
>On Thu, Nov 13, 2014 at 7:11 PM, Dave Taht <dave.taht@gmail.com> wrote:
>
>> One thing the HPC crowd has missed is that in their quest for big
>> buffers for continental distances, they hurt themselves on shorter
>> ones...
>>
>> ... and also that big buffers with FQ on them work just fine in the
>> general case.
>>
>> As always I recommend benchmarking - do a rrul test between the two
>> points, for example, with their recommendations.
>>
>>
>> On Fri, Oct 17, 2014 at 4:21 PM, Frank Horowitz <frank@horow.net> wrote:
>> > G’Day folks,
>> >
>> > Long time lurker. I’ve been using Cero for my home router for quite a
>> > while now, with reasonable results (modulo bloody OSX wifi stuffola).
>> >
>> > I’m running into issues doing zfs send/receive over ssh across a
>> > (mostly) internet2 backbone between Cornell (where I work) and West
>> > Virginia University (where we have a collaborator on a DOE sponsored
>> > project). Both ends are linux machines running fq_codel configured
>> > like so:
>> >         tc qdisc
>> >         qdisc fq_codel 0: dev eth0 root refcnt 2 limit 10240p flows 1024
>> >         quantum 1514 target 5.0ms interval 100.0ms ecn
>> >
>> > I stumbled across hpn-ssh <https://www.psc.edu/index.php/hpn-ssh> and
>> > — of particular interest to this group — their page on tuning TCP
>> > parameters:
>> >
>> > <http://www.psc.edu/index.php/networking/641-tcp-tune>
>> >
>> > N.B. their advice to increase buffer size…
>> >
>> > I’m curious, what part (if any) of that advice survives with fq_codel
>> > running on both ends?
>> >
>> > Any advice from the experts here would be gratefully received!
>> >
>> > (And thanks for all of your collective and individual efforts!)
>> >
>> > Cheers,
>> >         Frank Horowitz
>> >
>> > _______________________________________________
>> > Cerowrt-devel mailing list
>> > Cerowrt-devel@lists.bufferbloat.net
>> > https://lists.bufferbloat.net/listinfo/cerowrt-devel
>>
>> --
>> Dave Täht
>>
>> http://www.bufferbloat.net/projects/bloat/wiki/Upcoming_Talks

--
Sent from my Android device with K-@ Mail. Please excuse my brevity.
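For anyone wanting to connect Frank's question to Dave's suggestion: the buffer ceilings the PSC/fasterdata pages have you raise are sized against the path's bandwidth-delay product (BDP). Below is a sketch of that arithmetic plus the corresponding knobs; the RTT, rate, sysctl values and host name are all placeholders, not measurements of the Cornell/WVU path:

```shell
# Bandwidth-delay product: bytes in flight needed to keep the pipe full.
RTT_MS=30                   # assumed round-trip time (placeholder)
RATE_BITS_PER_S=1000000000  # assumed 1 Gbit/s bottleneck (placeholder)
BDP_BYTES=$(( RATE_BITS_PER_S / 8 * RTT_MS / 1000 ))
echo "BDP is ${BDP_BYTES} bytes on these assumed numbers"

# The tuning pages then raise the TCP buffer ceilings past the BDP,
# roughly in this style (Linux, needs root; values illustrative):
#   sysctl -w net.core.rmem_max=67108864
#   sysctl -w net.core.wmem_max=67108864
#   sysctl -w net.ipv4.tcp_rmem="4096 87380 67108864"
#   sysctl -w net.ipv4.tcp_wmem="4096 65536 67108864"

# Dave's suggested sanity check is an RRUL run between the two endpoints
# (rrul ships with netperf-wrapper, later renamed flent; the host name
# here is a placeholder):
#   netperf-wrapper rrul -l 60 -H far-end.example.edu
```

A before/after RRUL comparison would show directly whether the larger ceilings hurt latency for competing flows once fq_codel sits at the bottleneck, which seems to be the crux of Frank's question.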
