[Make-wifi-fast] less latency, more filling... for wifi

Bob McMahon bob.mcmahon at broadcom.com
Wed Oct 11 16:03:34 EDT 2017


FYI, we're considering adding support for "--udp-triggers" in iperf
2.0.10+.  Setting this option will cause a "magic number" to be placed in
the UDP payload so that logic moving bytes through the system can be
triggered to append its own timestamps into the payload as it moves
through each subsystem.  This lets one analyze the latency path of a
single packet, for example.  Note: the standard iperf microsecond
timestamps come from the application level (on tx) and from SO_TIMESTAMP
on receive (assuming SO_TIMESTAMP is supported; otherwise it's a syscall()
after the socket receive).  Being able to instrument each logic path's
contribution to a single packet's latency can be helpful, at least for
driver/firmware/ucode engineers.

On the server side, we'll probably add a --histogram option so the latency
distributions can be displayed (per -i interval) and higher-level scripts
can produce PDFs, CDFs and CCDFs for latencies.
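
As a sketch of the post-processing such a higher-level script could do,
the following derives a PDF, CDF and CCDF from raw latency samples (the
binning is arbitrary, and the eventual --histogram output format is still
to be decided):

    import numpy as np

    def distributions(latencies_us, bins=100):
        counts, edges = np.histogram(latencies_us, bins=bins)
        pdf = counts / counts.sum()       # probability mass per bin
        cdf = np.cumsum(pdf)              # P(latency <= bin upper edge)
        ccdf = 1.0 - cdf                  # P(latency > edge): the tail
        return edges[1:], pdf, cdf, ccdf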

Let me know if generalizing this support in iperf is useful.

Bob

On Mon, Oct 9, 2017 at 3:02 PM, Bob McMahon <bob.mcmahon at broadcom.com>
wrote:

> Not sure how to determine when one-way latency is above the round trip.
> Iperf traffic for latency uses UDP, where nothing is coming back.  For TCP,
> the iperf client will report a sampled RTT per the network stack (on
> operating systems that support this).
>
> One idea: have two traffic streams, one TCP and one UDP, and use a
> higher-level script (e.g. via python
> <https://sourceforge.net/p/iperf2/code/ci/master/tree/flows/flows.py>) to
> poll data from each and perform the comparison.  Though I'm not sure this
> would give you what you're looking for.
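>
> Roughly, that polling could look like the sketch below ("testhost" is a
> placeholder, -e is iperf 2's enhanced-reports flag, and the actual
> comparison logic is left out):
>
>     import subprocess, threading
>
>     def run_client(cmd, out_lines):
>         # capture one iperf client's interval reports, line by line
>         p = subprocess.Popen(cmd, stdout=subprocess.PIPE, text=True)
>         for line in p.stdout:
>             out_lines.append(line.rstrip())
>
>     tcp_lines, udp_lines = [], []
>     threads = [
>         threading.Thread(target=run_client, args=(
>             ['iperf', '-c', 'testhost', '-e', '-i', '1'], tcp_lines)),
>         threading.Thread(target=run_client, args=(
>             ['iperf', '-c', 'testhost', '-u', '-e', '-i', '1'], udp_lines)),
>     ]
>     for t in threads:
>         t.start()
>     for t in threads:
>         t.join()
>     # ...then compare sampled TCP RTT against UDP one-way latency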
>
> Bob
>
> On Mon, Oct 9, 2017 at 2:44 PM, Simon Barber <simon at superduper.net> wrote:
>
>> Very nice - I’m using iperf 3.2 and always have to figure packets per
>> second by combining packet size and bandwidth. This will be much easier.
>> Also, direct reporting of one-way latency variance above the minimum round
>> trip would be very useful.
>>
>> Simon
>>
>> On Oct 9, 2017, at 2:04 PM, Bob McMahon <bob.mcmahon at broadcom.com> wrote:
>>
>> Hi,
>>
>> Not sure if this is helpful, but we've added end/end latency measurements
>> for UDP traffic in iperf 2.0.10
>> <https://sourceforge.net/projects/iperf2/>.  It does require the clocks
>> to be synced.  I use a Spectracom TSync PCIe card with either an oven
>> controlled oscillator or a GPS-disciplined one, then use Precision Time
>> Protocol to distribute the clock over IP multicast.  For Linux, the traffic
>> threads are set to realtime scheduling to minimize the latency added by
>> thread scheduling.
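>>
>> For reference, a minimal sketch of that realtime-scheduling step on
>> Linux (the priority value is illustrative; it needs root or
>> CAP_SYS_NICE):
>>
>>     import os
>>
>>     def make_realtime(priority=50):
>>         # move the calling process to SCHED_FIFO so traffic threads
>>         # aren't delayed by ordinary timesharing
>>         os.sched_setscheduler(0, os.SCHED_FIFO, os.sched_param(priority))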
>>
>> I'm also in the process of implementing a very simple isochronous option
>> where the iperf client (tx) accepts a frames-per-second command line value
>> (e.g. 60) as well as a log-normal distribution
>> <https://sourceforge.net/p/iperf2/code/ci/master/tree/src/pdfs.c> for
>> the input, to somewhat simulate variable bit rates.  On the iperf
>> receiver, I'm considering implementing an underflow/overflow counter
>> against the expected frames per second.
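>>
>> A sketch of the client-side pacing idea, assuming a fixed frame schedule
>> with log-normal frame sizes (the distribution parameters and the send()
>> callback are placeholders):
>>
>>     import time, random
>>
>>     def isoch_sender(send, fps=60, mu=10.0, sigma=0.5):
>>         period = 1.0 / fps
>>         next_tx = time.monotonic()
>>         while True:
>>             frame_bytes = int(random.lognormvariate(mu, sigma))
>>             send(frame_bytes)      # burst one frame's worth of UDP packets
>>             next_tx += period      # fixed schedule, independent of size
>>             time.sleep(max(0.0, next_tx - time.monotonic()))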
>>
>> Latency does seem to be a significant metric, as does power consumption.
>>
>> Comments welcome.
>>
>> Bob
>>
>> On Mon, Oct 9, 2017 at 1:41 PM, <dpreed at reed.com> wrote:
>>
>>> It's worth setting a stretch latency goal that is in principle
>>> achievable.
>>>
>>>
>>> I get the sense that the wireless group obsesses over maximum channel
>>> utilization rather than excellent latency.  This is where it's important to
>>> put latency as a primary goal, and utilization as the secondary goal,
>>> rather than vice versa.
>>>
>>>
>>> It's easy to get at this by observing that the minimum latency on the
>>> shared channel is achieved by round-robin scheduling of packets that are of
>>> sufficient size that per-packet overhead doesn't dominate.
>>>
>>>
>>> So only aggregate when there are few contenders for the channel, or the
>>> packets are quite small compared to the per-packet overhead. When there are
>>> more contenders, still aggregate small packets, but only those that are
>>> actually waiting. But large packets shouldn't be aggregated.
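>>>
>>> That policy could be sketched as a simple decision rule (the thresholds
>>> below are invented for illustration; real per-packet overhead varies
>>> with the PHY rate):
>>>
>>>     def should_aggregate(contenders, pkt_bytes, overhead_bytes=44):
>>>         if contenders <= 2:
>>>             return True    # idle channel: aggregation costs little latency
>>>         if pkt_bytes <= 4 * overhead_bytes:
>>>             return True    # small packet: per-packet overhead dominates
>>>         return False       # large packet under contention: send it alone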
>>>
>>>
>>> Multicast should be avoided by higher-level protocols for the most part,
>>> and the latency of multicast should be a non-issue. In wireless, it's kind
>>> of a dumb idea anyway, given that stations have widely varying propagation
>>> characteristics. Do just enough to support DHCP and so forth.
>>>
>>>
>>> It's so much fun for the hardware designers to throw in stuff that only
>>> helps in marketing benchmarks (like getting a few percent on throughput in
>>> lab conditions that never happen in the field) that it is tempting for OS
>>> driver writers to use those features (like deep queues and offload
>>> processing bells and whistles). But the real issue to be solved is the
>>> turn-taking "bloat" that comes from trying too hard to aggregate, to
>>> handle the "sole transmitter to dedicated receiver" case, etc.
>>>
>>>
>>> I use 10 GigE in my house. I don't use it because I want to do 10 Gig
>>> File Transfers all day and measure them. I use it because (properly
>>> managed) it gives me *low latency*. That low latency is what matters, not
>>> throughput. My average load, if spread out across 24 hours, could be
>>> handled by 802.11b for the entire house.
>>>
>>>
>>> We are soon going to have 802.11ax in the home. That's approximately 10
>>> Gb/sec, but wireless. No TV streaming can fill it. It's not for continuous
>>> isochronous traffic at all.
>>>
>>>
>>> What it is for is *low latency*. So if the adapters and the drivers
>>> won't give me that low latency, what good is 10 Gb/sec at all? This is
>>> true for 802.11ac as well.
>>>
>>>
>>> We aren't building Dragsters fueled with nitro, to run down 1/4 mile of
>>> track but unable to steer.
>>>
>>>
>>> Instead, we want to be able to connect musical instruments in an
>>> electronic symphony, where timing is everything.
>>>
>>>
>>>
>>>
>>> On Monday, October 9, 2017 4:13pm, "Dave Taht" <dave.taht at gmail.com>
>>> said:
>>>
>>> > There were five ideas I'd wanted to pursue at some point. I'm not
>>> > presently on linux-wireless, nor do I have time to pay attention right
>>> > now - but I'm enjoying that thread passively.
>>> >
>>> > To get those ideas "out there" again:
>>> >
>>> > * Adding a fixed-length FQ'd queue for multicast.
>>> >
>>> > * Reducing retransmits at low rates
>>> >
>>> > See the recent paper:
>>> >
>>> > "Resolving Bufferbloat in TCP Communication over IEEE 802.11 n WLAN by
>>> > Reducing MAC Retransmission Limit at Low Data Rate" (I'd paste a link
>>> > but for some reason that doesn't work well)
>>> >
>>> > Even with their simple bi-modal model it worked pretty well.
>>> >
>>> > It also reduces contention with "bad" stations more automagically.
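>>> >
>>> > A sketch of that bi-modal rule (the rate threshold and the two retry
>>> > limits are illustrative, not the paper's exact values):
>>> >
>>> >     def mac_retry_limit(phy_rate_mbps, threshold_mbps=24,
>>> >                         low_limit=2, high_limit=10):
>>> >         # each retry at a low PHY rate burns far more airtime, so
>>> >         # cap retransmissions aggressively below the threshold
>>> >         return low_limit if phy_rate_mbps < threshold_mbps else high_limit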
>>> >
>>> > * Less buffering at the driver.
>>> >
>>> > Presently (ath9k) there are two to three aggregates stacked up at the
>>> > driver.
>>> >
>>> > With a good estimate for how long it will take to service one, forming
>>> > another within that deadline seems feasible, so you only need to have
>>> > one in the hardware itself.
>>> >
>>> > Simple example: you have data in the hardware projected to take a
>>> > minimum of 4ms to transmit. Don't form a new aggregate and submit it
>>> > to the hardware for 3.5ms.
>>> >
>>> > I know full well that a "good" estimate is hard, and things like
>>> > MU-MIMO complicate things. Still, I'd like to get below 20ms of
>>> > latency within the driver, and this is one way to get there.
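>>> >
>>> > That rule could be sketched as follows, using the numbers from the
>>> > example above (the 0.5ms guard is illustrative):
>>> >
>>> >     import time
>>> >
>>> >     def ok_to_form_aggregate(hw_busy_until_s, guard_s=0.0005):
>>> >         # hw_busy_until_s: projected monotonic time at which the
>>> >         # hardware finishes what it already holds (e.g. now + 4ms);
>>> >         # hold off forming the next aggregate until 3.5ms have passed
>>> >         return time.monotonic() >= hw_busy_until_s - guard_s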
>>> >
>>> > * Reducing the size of a txop under contention
>>> >
>>> > If you have 5 stations getting blasted away at 5ms each, and one that
>>> > only wants 1ms worth of traffic "soon", temporarily reducing the size
>>> > of the txop for everybody so you can service more stations faster
>>> > seems useful.
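>>> >
>>> > One way to sketch that (the 10ms rotation target and the policy
>>> > itself are invented for illustration):
>>> >
>>> >     def txop_budget_ms(n_stations, default_ms=5.0, rotation_ms=10.0):
>>> >         # shrink each station's txop so that one full round-robin
>>> >         # pass over all active stations fits in rotation_ms
>>> >         return min(default_ms, rotation_ms / max(n_stations, 1))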
>>> >
>>> > * Merging ACs when sane to do so
>>> >
>>> > Sane aggregation in general works better than prioritizing does, as
>>> > shown in "Ending the Anomaly".
>>> >
>>> > --
>>> >
>>> > Dave Täht
>>> > CEO, TekLibre, LLC
>>> > http://www.teklibre.com
>>> > Tel: 1-669-226-2619

