[Cake] [Bloat] Little's Law mea culpa, but not invalidating my main point
Bob McMahon
bob.mcmahon at broadcom.com
Sun Jul 18 15:06:24 EDT 2021
Just an FYI,
iperf 2 uses a 4 usec delay for TCP and a 100 usec delay for UDP to fill the
token bucket. We thought about providing a knob for this but decided not to.
We figured a busy-wait CPU thread wasn't a big deal because of the trend
toward many CPU cores. The threaded design works well for this. We also
support fq-pacing and isochronous traffic, using clock_nanosleep() to
schedule the writes. We'll probably add Markov chain support, but that's not
critical and may not affect actionable engineering. We found isoch to be a
useful traffic profile, at least for our WiFi testing. I'm going to add
support for TCP_NOTSENT_LOWAT for select()/write() based transmissions. I'm
doubtful this is very useful, as event-based scheduling driven by times seems
better. We'll probably use it for unit testing WiFi aggregation and see if
it helps there or not. I'll see if it aligns with the OWD measurements.
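In case it's useful to anyone, a minimal sketch of that kind of
absolute-deadline pacing looks roughly like this; it's illustrative only,
not the iperf 2 source, and the socket setup and interval handling are
simplified:

/* Minimal sketch of pacing socket writes on absolute deadlines with
 * clock_nanosleep(). Illustrative only -- not the iperf 2 code. */
#include <errno.h>
#include <time.h>
#include <unistd.h>

static void timespec_add_ns(struct timespec *t, long ns)
{
    t->tv_nsec += ns;
    while (t->tv_nsec >= 1000000000L) {
        t->tv_nsec -= 1000000000L;
        t->tv_sec++;
    }
}

/* Issue 'count' writes of 'len' bytes, one every 'interval_ns' nanoseconds. */
int paced_writes(int sock, const char *buf, size_t len,
                 long interval_ns, int count)
{
    struct timespec next;
    clock_gettime(CLOCK_MONOTONIC, &next);
    for (int i = 0; i < count; i++) {
        /* Sleep to an absolute deadline so scheduling jitter doesn't accumulate. */
        while (clock_nanosleep(CLOCK_MONOTONIC, TIMER_ABSTIME, &next, NULL) == EINTR)
            ;
        if (write(sock, buf, len) < 0)
            return -1;
        timespec_add_ns(&next, interval_ns);
    }
    return 0;
}

The TCP_NOTSENT_LOWAT piece would just be a setsockopt(sock, IPPROTO_TCP,
TCP_NOTSENT_LOWAT, &lowat, sizeof(lowat)) before the loop, so that select()
only reports the socket writable once the unsent backlog drops below the
threshold.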
On queue depth, we use two techniques. The most obvious is to measure the
end-to-end delay and use rx histograms, getting all the samples without
averaging. The second, internal to us only, is using network telemetry and
mapping all the clock domains to the GPS domain. At any moment in time the
end-to-end path can be inspected to see where every packet is.
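For the first technique, the receive-side math is trivial once the clocks
share a domain. A hypothetical binning routine (the payload timestamp, bin
width, and names are assumptions for illustration, not iperf 2 internals):

/* Sketch: bin one-way delay into a fixed-width histogram on the receiver.
 * Assumes the sender stamped the payload with CLOCK_REALTIME and that both
 * hosts are synchronized (PTP/GPS). Names and sizes are illustrative. */
#include <stdint.h>
#include <time.h>

#define NBINS     1000
#define BIN_USECS 100                 /* 100 us per bin, 0..100 ms range */

static uint64_t owd_hist[NBINS + 1];  /* final bin catches outliers */

void record_owd(const struct timespec *tx)   /* tx taken from the payload */
{
    struct timespec rx;
    clock_gettime(CLOCK_REALTIME, &rx);
    int64_t owd_us = (int64_t)(rx.tv_sec - tx->tv_sec) * 1000000
                   + (rx.tv_nsec - tx->tv_nsec) / 1000;
    if (owd_us < 0)
        owd_us = 0;                   /* residual clock skew; clamp, don't drop */
    int64_t bin = owd_us / BIN_USECS;
    owd_hist[bin < NBINS ? bin : NBINS]++;
}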
Our automated testing is focused on unit tests and is used to statistically
monitor code changes (which come at a high rate and apply to a broad range
of chips), so the requirements can be very different from those of a network
or service provider.
Agreed that the number of knobs and reactive components is a challenge. And
one must assume non-linearity, which becomes obvious after a few direct
measurements (i.e., no averaging). The challenge of statistical
reproducibility is always there. We find Monte Carlo techniques can be useful
only when they are proven to be statistically reproducible.
Bob
On Sat, Jul 17, 2021 at 4:29 PM Aaron Wood <woody77 at gmail.com> wrote:
> On Mon, Jul 12, 2021 at 1:32 PM Ben Greear <greearb at candelatech.com>
> wrote:
>
>> UDP is better for getting actual packet latency, for sure. TCP is
>> typical-user-experience-latency though,
>> so it is also useful.
>>
>> I'm interested in the test and visualization side of this. If there were
>> a way to give engineers
>> a good real-time look at a complex real-world network, then they would have
>> something to go on while trying
>> to tune various knobs in their network to improve it.
>>
>
> I've always liked the smoke-ping visualization, although a single graph is
> only really useful for a single pair of endpoints (or a single segment,
> maybe). But I can see using a repeated set of graphs (Tufte has some
> examples) that can represent an overview of pairwise collections of
> latency+loss:
> https://www.edwardtufte.com/bboard/images/0003Cs-8047.GIF
> https://www.edwardtufte.com/tufte/psysvcs_p2
>
> These work for understanding because the tiled graphs are all identically
> constructed, and the reader first learns how to read a single tile, and
> then learns the pattern of which tiles represent which measurements.
>
> Further, they are opinionated. In the second link above, the y axis is
> not based on the measured data, but on standardized expected values, which (I
> think) is key to quick readability. You never need to read the axes. Much
> like setting up gauges such that "nominal" is always at the same indicator
> position for all graphs (e.g. straight up). At a glance, you can see if
> things are "correct" or not.
>
> That tiling arrangement wouldn't be great for showing interrelationships
> (although it may give you a good historical view of correlated behavior).
> One thought is to overlay a network graph diagram (graph of all network
> links) with small "sparkline" type graphs.
>
> For a more physical-based network graph, I could see visualizing the queue
> depth for each egress port (max value over a time of X, or percentage of
> time at max depth).
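If the switch or driver can be polled for depth samples, reducing them to
those two summaries is cheap. A rough sketch, with a hypothetical sampler
supplying the array:

/* Sketch: reduce periodic queue-depth samples for one egress port to
 * (a) the max depth seen in the window and (b) the fraction of samples
 * pegged at the queue limit. The sampler itself is hypothetical. */
#include <stddef.h>

struct depth_summary {
    unsigned max_depth;      /* in whatever units the sampler reports */
    double   frac_at_limit;  /* 0..1, "percentage of time at max depth" */
};

struct depth_summary summarize_depth(const unsigned *samples, size_t n,
                                     unsigned queue_limit)
{
    struct depth_summary s = { 0, 0.0 };
    size_t at_limit = 0;
    for (size_t i = 0; i < n; i++) {
        if (samples[i] > s.max_depth)
            s.max_depth = samples[i];
        if (samples[i] >= queue_limit)
            at_limit++;
    }
    if (n > 0)
        s.frac_at_limit = (double)at_limit / (double)n;
    return s;
}

The hard part, as noted below, is sampling fast enough that the transients
actually show up in the array at all.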
>
> Taken together, the timewise correlation could be useful (which peers are
> having problems communicating, and which ports between them are impacted?).
>
> I think getting good data about queue depth may be the hard part,
> especially catching transients and the duty cycle / pulse-width of the load
> (and then converting that to a number). Back when I uncovered, 5 years ago,
> that the iperf application-level pacing granularity was too high, I called
> them "millibursts", and maybe dtaht pointed out that link utilization is
> always 0% or 100%, and it's just a matter of the PWM of the packet rate
> that makes it look like something in between.
> https://burntchrome.blogspot.com/2016/09/iperf3-and-microbursts.html
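That PWM view is easy to make concrete: slice time into windows much shorter
than the burst interval, mark each window busy or idle, and report the busy
fraction. A small sketch (the window size and packet record layout are made
up for illustration):

/* Sketch: estimate a bursty sender's duty cycle by marking fixed windows
 * busy or idle from packet-capture timestamps. The window size and the
 * packet record layout are assumptions for illustration. */
#include <stddef.h>

struct pkt { double t_sec; unsigned len; };   /* capture time + frame size */

/* Fraction of 'win_sec' windows that carried any traffic at all. */
double duty_cycle(const struct pkt *p, size_t n, double win_sec)
{
    if (n == 0 || win_sec <= 0.0)
        return 0.0;
    double start = p[0].t_sec;
    size_t nwin = (size_t)((p[n - 1].t_sec - start) / win_sec) + 1;
    size_t busy = 0, i = 0;
    for (size_t w = 0; w < nwin; w++) {
        double wend = start + (double)(w + 1) * win_sec;
        size_t bytes = 0;
        while (i < n && p[i].t_sec < wend)
            bytes += p[i++].len;
        if (bytes > 0)
            busy++;
    }
    return (double)busy / (double)nwin;
}

As the window shrinks toward the serialization time, the busy fraction
converges to exactly the 0%/100% picture described above.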
>
>
>
> I'll let others try to figure out how to build and tune the knobs, but the
>> data acquisition and
>> visualization is something we might try to accomplish. I have a feeling
>> I'm not the
>> first person to think of this, however....probably someone already has
>> done such
>> a thing.
>>
>> Thanks,
>> Ben
>>
>> On 7/12/21 1:04 PM, Bob McMahon wrote:
>> > I believe end hosts' TCP stats are insufficient, as seen with the
>> > "failed" congestion control mechanisms over the last decades. I think
>> > Jaffe pointed this out in 1979, though he was using what's been deemed
>> > on this thread as "spherical cow queueing theory."
>> >
>> > "Flow control in store-and-forward computer networks is appropriate for
>> decentralized execution. A formal description of a class of "decentralized
>> flow control
>> > algorithms" is given. The feasibility of maximizing power with such
>> algorithms is investigated. On the assumption that communication links
>> behave like M/M/1
>> > servers it is shown that no "decentralized flow control algorithm" can
>> maximize network power. Power has been suggested in the literature as a
>> network
>> > performance objective. It is also shown that no objective based only on
>> the users' throughputs and average delay is decentralizable. Finally, a
>> restricted class
>> > of algorithms cannot even approximate power."
>> >
>> > https://ieeexplore.ieee.org/document/1095152
>> >
>> > Did Jaffe make a mistake?
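For readers without the paper handy, the "power" objective Jaffe analyzes is
the usual throughput-over-delay ratio; under the M/M/1 assumption in the
abstract it reduces to

  P(\lambda) = \frac{\text{throughput}}{\text{mean delay}}
             = \frac{\lambda}{1/(\mu - \lambda)} = \lambda(\mu - \lambda),

which is maximized at \lambda = \mu/2, i.e. 50% utilization, where Little's
law (L = \lambda W) puts the mean occupancy at exactly one packet in the
system. The theorem quoted above is that no decentralized flow control
algorithm can maximize that quantity network-wide.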
>> >
>> > Also, it's been observed that latency is non-parametric in its
>> > distributions, and computing Gaussians per the central limit theorem
>> > for OWD feedback loops isn't effective. How does one design a control
>> > loop around things that are non-parametric? It also begs the question,
>> > what are the feed-forward knobs that can actually help?
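One concrete alternative to fitting Gaussians is to drive the loop off order
statistics, e.g. pulling a tail quantile straight out of the delay histogram.
A hypothetical helper (the bin layout matches the earlier histogram sketch):

/* Sketch: approximate the q-th quantile (0 < q < 1) of delay from a
 * fixed-width histogram, instead of assuming a Gaussian. Returns usecs. */
#include <stdint.h>

int64_t hist_quantile(const uint64_t *hist, int nbins, int bin_usecs, double q)
{
    uint64_t total = 0;
    for (int i = 0; i < nbins; i++)
        total += hist[i];
    if (total == 0)
        return -1;
    uint64_t target = (uint64_t)(q * (double)total), cum = 0;
    for (int i = 0; i < nbins; i++) {
        cum += hist[i];
        if (cum > target)
            return (int64_t)(i + 1) * bin_usecs;   /* upper edge of the bin */
    }
    return (int64_t)nbins * bin_usecs;
}

Feeding a p99 or p99.9 from that into the controller sidesteps the
distributional assumption entirely.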
>> >
>> > Bob
>> >
>> > On Mon, Jul 12, 2021 at 12:07 PM Ben Greear <greearb at candelatech.com> wrote:
>> >
>> > Measuring one or a few links provides a bit of data, but seems like
>> if someone is trying to understand
>> > a large and real network, then the OWD between point A and B needs
>> to just be input into something much
>> > more grand. Assuming real-time OWD data exists between 100 and 1000
>> endpoint pairs, has anyone found a way
>> > to visualize this in a useful manner?
>> >
>> > Also, considering that something better than NTP may not really scale
>> > to 1000+ endpoints, maybe round-trip time is the only viable way to get
>> > this type of data. In that case, maybe clever logic could use things
>> > like trace-route to get some idea of how long it takes to get 'onto'
>> > the internet proper, and so estimate the last-mile latency. My
>> > assumption is that the last-mile latency is where most of the pervasive
>> > asymmetric network latencies would exist (or just ping 8.8.8.8 which is
>> > 20ms from everywhere due to $magic).
>> >
>> > Endpoints could also triangulate a bit if needed, using some anchor
>> points in the network
>> > under test.
>> >
>> > Thanks,
>> > Ben
>> >
>> > On 7/12/21 11:21 AM, Bob McMahon wrote:
>> > > iperf 2 supports OWD and gives full histograms for TCP write to
>> read, TCP connect times, latency of packets (with UDP), latency of "frames"
>> with
>> > > simulated video traffic (TCP and UDP), xfer times of bursts with
>> low duty cycle traffic, and TCP RTT (sampling based.) It also has support
>> for sampling (per
>> > > interval reports) down to 100 usecs if configured with
>> --enable-fastsampling, otherwise the fastest sampling is 5 ms. We've
>> released all this as open source.
>> > >
>> > > OWD only works if the end realtime clocks are synchronized using a
>> > > "machine level" protocol such as IEEE 1588 or PTP. Sadly, most data
>> > > centers don't provide a sufficient level of clock accuracy, nor a GPS
>> > > pulse per second, to colo and VM customers.
>> > >
>> > > https://iperf2.sourceforge.io/iperf-manpage.html
>> > >
>> > > Bob
>> > >
>> > > On Mon, Jul 12, 2021 at 10:40 AM David P. Reed <dpreed at deepplum.com> wrote:
>> > >
>> > >
>> > > On Monday, July 12, 2021 9:46am, "Livingood, Jason" <Jason_Livingood at comcast.com> said:
>> > >
>> > > > I think latency/delay is becoming seen to be as important,
>> > > > certainly, if not a more direct proxy for end user QoE. This is all
>> > > > still evolving and, I have to say, is a super interesting & fun
>> > > > thing to work on. :-)
>> > >
>> > > If I could manage to sell one idea to the management
>> hierarchy of communications industry CEOs (operators, vendors, ...) it is
>> this one:
>> > >
>> > > "It's the end-to-end latency, stupid!"
>> > >
>> > > And I mean, by end-to-end, latency to complete a task at a
>> relevant layer of abstraction.
>> > >
>> > > At the link level, it's packet send to packet receive
>> completion.
>> > >
>> > > But at the transport level including retransmission buffers,
>> it's datagram (or message) origination until the acknowledgement arrives
>> for that
>> > message being
>> > > delivered after whatever number of retransmissions, freeing
>> the retransmission buffer.
>> > >
>> > > At the WWW level, it's mouse click to display update
>> corresponding to completion of the request.
>> > >
>> > > What should be noted is that lower level latencies don't
>> directly predict the magnitude of higher-level latencies. But longer lower
>> level latencies
>> > almost
>> > > always amplify higher-level latencies. Often non-linearly.
>> > >
>> > > Throughput is very, very weakly related to these latencies,
>> in contrast.
>> > >
>> > > The amplification process has to do with the presence of
>> queueing. Queueing is ALWAYS bad for latency, and throughput only helps if
>> it is in exactly the
>> > > right place (the so-called input queue of the bottleneck
>> process, which is often a link, but not always).
>> > >
>> > > Can we get that slogan into Harvard Business Review? Can we
>> get it taught in Managerial Accounting at HBS? (which does address
>> logistics/supply chain
>> > queueing).
>> > >
>> > >
>> > >
>> > >
>> > >
>> > >
>> > >
>> >
>> >
>> > --
>> > Ben Greear <greearb at candelatech.com>
>> > Candela Technologies Inc http://www.candelatech.com
>> >
>> >
>>
>>
>> --
>> Ben Greear <greearb at candelatech.com>
>> Candela Technologies Inc http://www.candelatech.com
>>
>> _______________________________________________
>> Bloat mailing list
>> Bloat at lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/bloat
>>
>