From: Bob McMahon
Date: Mon, 9 Oct 2017 14:04:35 -0700
To: David Reed
Cc: Dave Taht, make-wifi-fast@lists.bufferbloat.net, Johannes Berg
Subject: Re: [Make-wifi-fast] less latency, more filling... for wifi

Hi,

Not sure if this is helpful, but we've added end-to-end latency measurement for UDP traffic in iperf 2.0.10. It does require the clocks to be synchronized. I use a Spectracom TSync PCIe card with either an oven-controlled oscillator or a GPS-disciplined one, then use Precision Time Protocol to distribute the clock over IP multicast. For Linux, the traffic threads are set to real-time scheduling to minimize the latency added by thread scheduling.

I'm also in the process of implementing a very simple isochronous option, where the iperf client (tx) accepts a frames-per-second command-line value (e.g. 60) as well as a log-normal distribution for the input, to somewhat simulate variable bit rates. On the iperf receiver, I'm considering implementing an underflow/overflow counter against the expected frames per second.

Latency does seem to be a significant metric. So is power consumption.

Comments welcome.

Bob

On Mon, Oct 9, 2017 at 1:41 PM, <dpreed@reed.com> wrote:

> It's worth setting a stretch latency goal that is in principle achievable.
>
> I get the sense that the wireless group obsesses over maximum channel
> utilization rather than excellent latency. This is where it's important to
> put latency as a primary goal, and utilization as the secondary goal,
> rather than vice versa.
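[Aside on the measurement Bob describes above: the one-way latency number depends entirely on the sender embedding a transmit timestamp from a synchronized clock and the receiver differencing it against its own clock. A minimal sketch of that idea — the payload layout here is hypothetical, not iperf's actual wire format:]

```python
import socket
import struct
import time

# Hypothetical probe payload: transmit timestamp (seconds) + sequence number.
PROBE_FMT = "!dI"

def send_probe(sock: socket.socket, addr, seq: int) -> None:
    """Stamp the datagram with the sender's (PTP-disciplined) clock at tx time."""
    sock.sendto(struct.pack(PROBE_FMT, time.time(), seq), addr)

def recv_probe(sock: socket.socket):
    """One-way delay = receiver clock minus embedded sender timestamp.
    Only meaningful when both clocks are disciplined (e.g. GPS + PTP)."""
    data, _ = sock.recvfrom(2048)
    sent_ts, seq = struct.unpack(PROBE_FMT, data[: struct.calcsize(PROBE_FMT)])
    return seq, time.time() - sent_ts
```

[Without disciplined clocks the computed delay absorbs the full clock offset between the two hosts, which is why the oscillator and PTP distribution matter.]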
> It's easy to get at this by observing that the minimum latency on the
> shared channel is achieved by round-robin scheduling of packets that are of
> sufficient size that per-packet overhead doesn't dominate.
>
> So only aggregate when there are few contenders for the channel, or the
> packets are quite small compared to the per-packet overhead. When there are
> more contenders, still aggregate small packets, but only those that are
> actually waiting. But large packets shouldn't be aggregated.
>
> Multicast should be avoided by higher-level protocols for the most part,
> and the latency of multicast should be a non-issue. In wireless, it's kind
> of a dumb idea anyway, given that stations have widely varying propagation
> characteristics. Do just enough to support DHCP and so forth.
>
> It's so much fun for the hardware designers to throw in stuff that only
> helps in marketing benchmarks (like getting a few percent on throughput in
> lab conditions that never happen in the field) that it is tempting for OS
> driver writers to use those features (like deep queues and offload
> processing bells and whistles). But the real issue to be solved is the
> turn-taking "bloat" that comes from too much attempt to aggregate, to
> handle the "sole transmitter to dedicated receiver" case, etc.
>
> I use 10 GigE in my house. I don't use it because I want to do 10 Gig file
> transfers all day and measure them. I use it because (properly managed) it
> gives me *low latency*. That low latency is what matters, not throughput.
> My average load, if spread out across 24 hours, could be handled by 802.11b
> for the entire house.
>
> We are soon going to have 802.11ax in the home. That's approximately 10
> Gb/sec, but wireless. No TV streaming can fill it. It's not for continuous
> isochronous traffic at all.
>
> What it is for is *low latency*.
> So if the adapters and the drivers won't
> give me that low latency, what good is 10 Gb/sec at all? This is true for
> 802.11ac as well.
>
> We aren't building dragsters fueled with nitro, to run down a 1/4 mile of
> track but unable to steer.
>
> Instead, we want to be able to connect musical instruments in an
> electronic symphony, where timing is everything.
>
>
> On Monday, October 9, 2017 4:13pm, "Dave Taht" <dave.taht@gmail.com> said:
>
> > There were five ideas I'd wanted to pursue at some point. I'm not
> > presently on linux-wireless, nor do I have time to pay attention right
> > now - but I'm enjoying that thread passively.
> >
> > To get those ideas "out there" again:
> >
> > * adding a fixed-length fq'd queue for multicast.
> >
> > * Reducing retransmits at low rates
> >
> > See the recent paper:
> >
> > "Resolving Bufferbloat in TCP Communication over IEEE 802.11n WLAN by
> > Reducing MAC Retransmission Limit at Low Data Rate" (I'd paste a link
> > but for some reason that doesn't work well)
> >
> > Even with their simple bi-modal model it worked pretty well.
> >
> > It also reduces contention with "bad" stations more automagically.
> >
> > * Less buffering at the driver.
> >
> > Presently (ath9k) there are two to three aggregates stacked up at the
> > driver.
> >
> > With a good estimate for how long it will take to service one, forming
> > another within that deadline seems feasible, so you only need to have
> > one in the hardware itself.
> >
> > Simple example: you have data in the hardware projected to take a
> > minimum of 4 ms to transmit. Don't form a new aggregate and submit it
> > to the hardware for 3.5 ms.
> >
> > I know full well that a "good" estimate is hard, and things like
> > MU-MIMO complicate things. Still, I'd like to get below 20 ms of
> > latency within the driver, and this is one way to get there.
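[The hold-off rule Dave sketches above can be written as a tiny pacing function. This is illustrative only — the names and the 0.5 ms headroom are assumptions, not ath9k code:]

```python
def hold_off_ms(hw_queued_airtime_ms: float, headroom_ms: float = 0.5) -> float:
    """How long to wait before forming the next aggregate: let the hardware
    drain its queued airtime down to `headroom_ms`, so at most one aggregate
    sits in hardware while the next is built just in time.
    hw_queued_airtime_ms: estimated airtime remaining in the hardware queue."""
    return max(0.0, hw_queued_airtime_ms - headroom_ms)
```

[With 4 ms of airtime queued this yields a 3.5 ms hold-off, matching the example above; the hard part, as noted, is the airtime estimate itself.]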
> > * Reducing the size of a txop under contention
> >
> > If you have 5 stations getting blasted away at 5 ms each, and one that
> > only wants 1 ms worth of traffic, "soon", temporarily reducing the size
> > of the txop for everybody so you can service more stations faster
> > seems useful.
> >
> > * Merging ACs when sane to do so
> >
> > Sane aggregation in general works better than prioritizing does, as
> > shown in "Ending the Anomaly".
> >
> > --
> > Dave Täht
> > CEO, TekLibre, LLC
> > http://www.teklibre.com
> > Tel: 1-669-226-2619
> > _______________________________________________
> > Make-wifi-fast mailing list
> > Make-wifi-fast@lists.bufferbloat.net
> > https://lists.bufferbloat.net/listinfo/make-wifi-fast
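[A back-of-the-envelope model of the txop-shrinking idea quoted above, using the numbers from Dave's example; this ignores contention overhead and inter-frame spacing:]

```python
def worst_case_wait_ms(n_heavy_stations: int, txop_ms: float) -> float:
    """Worst-case delay before a newly arriving light station gets the
    channel, if every heavy station ahead of it holds a full txop
    (contention overhead and inter-frame spacing ignored)."""
    return n_heavy_stations * txop_ms

# Five stations holding 5 ms txops: the 1 ms station can wait 25 ms.
# Capping txops at 2 ms cuts that to 10 ms, at some aggregation-efficiency cost.
```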