* Re: [Bloat] TCP congestion detection - random thoughts
@ 2015-06-23 5:20 Ingemar Johansson S
0 siblings, 0 replies; 9+ messages in thread
From: Ingemar Johansson S @ 2015-06-23 5:20 UTC (permalink / raw)
To: bloat
Hi
FYI, SCReAM (Self-Clocked Rate Adaptation for Multimedia) is an offspring
of the LEDBAT algorithm:
http://tools.ietf.org/wg/rmcat/draft-ietf-rmcat-scream-cc/
The congestion control part of SCReAM is designed to be a tad more
opportunistic than LEDBAT; the reason for this is that the source in this
case is a rate-adaptive video encoder.
/Ingemar
> Message: 1
> Date: Mon, 22 Jun 2015 09:12:18 -0700
> From: Dave Taht <dave.taht@gmail.com>
> To: Juliusz Chroboczek <jch@pps.univ-paris-diderot.fr>
> Cc: bloat <bloat@lists.bufferbloat.net>
> Subject: Re: [Bloat] TCP congestion detection - random thoughts
> Message-ID: <CAA93jw4gyCR9SfXd9fiAQsqgc2WO1XgMCmPUNOGpBtTPG7+3WQ@mail.gmail.com>
> Content-Type: text/plain; charset=UTF-8
>
> On Mon, Jun 22, 2015 at 8:55 AM, Juliusz Chroboczek
> <jch@pps.univ-paris-diderot.fr> wrote:
> > To add to what my honourable prelocutors have said, µTP, which is used
> > by modern BitTorrent implementations, uses the LEDBAT congestion
> > control algorithm, which is based on delay. The fact that LEDBAT is
> > crowded out by Reno is a desirable feature in this case -- you do want
> > your BitTorrent traffic to be crowded out by HTTP and friends.
> >
> > https://en.wikipedia.org/wiki/LEDBAT
>
> Yep. I note that OWD is more desirable than RTT, particularly in modern
> asymmetric networks that have a ratio of up to down bandwidths of 1:10 or
> more.
>
> A lot of folk have treated that return path as inconsequential when it can
> actually be the biggest source of delay or be the most contested part of the
> path.
>
> After having much success in squashing torrent down to being invisible using
> classification in cake last week, I realized this morning that also putting the
> short acks into the same bin was perhaps not always the right thing, as
> that hurt download throughput... Perhaps
> stretch(ier) acks are feasible in ledbat/torrent? Or revisiting the packet size
> to shrink once again under contention? Reducing the number of flows?
>
> >
> > -- Juliusz
> >
> > _______________________________________________
> > Bloat mailing list
> > Bloat@lists.bufferbloat.net
> > https://lists.bufferbloat.net/listinfo/bloat
>
>
>
> --
> Dave Täht
> worldwide bufferbloat report:
> http://www.dslreports.com/speedtest/results/bufferbloat
> And:
> What will it take to vastly improve wifi for everyone?
> https://plus.google.com/u/0/explore/makewififast
>
>
> ------------------------------
>
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat
>
>
> End of Bloat Digest, Vol 54, Issue 39
> *************************************
* [Bloat] TCP congestion detection - random thoughts
@ 2015-06-21 16:19 Benjamin Cronce
2015-06-21 17:05 ` Alan Jenkins
` (3 more replies)
0 siblings, 4 replies; 9+ messages in thread
From: Benjamin Cronce @ 2015-06-21 16:19 UTC (permalink / raw)
To: bloat
Just a random Sunday morning thought that has probably been thought of
before, but I can't recall hearing it before.
My understanding of most TCP congestion control algorithms is that they
primarily watch for drops, but drops are signaled by the receiving party
via ACKs. The issue with this is that TCP keeps pushing more data into the
window until a drop is signaled, even if the received rate does not
increase. What if the sending TCP also monitored the received rate and
backed off from cramming more segments into the window when the received
rate stops increasing?
Two things could measure this: RTT, which is already part of TCP's
statistics, and the rate at which bytes are ACKed. If you double the number
of segments being sent but, within a time frame relative to the RTT, see no
meaningful increase in the rate at which bytes are being ACKed, you may
want to back off.
It just seems to me that if you have a 50 ms RTT and 10 seconds of
bufferbloat, TCP is cramming data down the path with no care in the world
about how quickly data is actually getting ACKed; it's just waiting for the
first segment to get dropped, which would never happen in an infinitely
buffered network.
TCP should be able to keep state that tracks the minimum RTT and the
maximum ACK rate. Between these two, it should never go over the maximum
path rate except when probing for a new max or min. Min RTT is probably a
good target because path latency should be relatively static; path free
bandwidth, however, is not. The desirable number of segments in flight
would need to change, but would be bounded by the max.
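As a minimal sketch (names, units, and the sampling scheme are invented
here for illustration), the bookkeeping might look like:

/* Track the best RTT and the best ACK rate seen so far; their product
 * estimates the path's bandwidth-delay product, which bounds how many
 * bytes should be in flight outside of deliberate probes. */
struct cc_state {
    double min_rtt;      /* seconds: lowest RTT observed */
    double max_ack_rate; /* bytes/sec: highest rate of ACKed data */
};

static void cc_sample(struct cc_state *s, double rtt, double ack_rate)
{
    if (s->min_rtt == 0 || rtt < s->min_rtt)
        s->min_rtt = rtt;
    if (ack_rate > s->max_ack_rate)
        s->max_ack_rate = ack_rate;
}

/* Cap on bytes in flight, exceeded only when probing for a new max
 * or min. */
static double cc_inflight_cap(const struct cc_state *s)
{
    return s->max_ack_rate * s->min_rtt;
}

For what it's worth, this min-RTT-times-max-delivery-rate operating point
is essentially what Google's BBR later shipped, with periodic probes to
refresh both estimates.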
Of course, Nagle-type algorithms and delayed ACKs can mess with this,
because when an ACK occurs is no longer based entirely on when a segment is
received, but also on some additional amount of time. If you assume the
receiver will coalesce N segments into a single ACK, then you need to add
to the RTT the time, at the current packet rate, until you expect the next
ACK, assuming N segments get coalesced. This would be even more important
for low-latency, low-bandwidth paths. Coalescing information could be
assumed, negotiated, or inferred; negotiated would be best.
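That adjustment can be stated in one line; a sketch, assuming the
coalescing factor N is known or negotiated:

/* Each ACK covers ~N segments, so it arrives late by the time needed
 * to deliver the other N-1 segments at the current packet rate;
 * budget that into the RTT before judging delay. */
double expected_ack_rtt(double measured_rtt, int n_coalesced,
                        double pkts_per_sec)
{
    return measured_rtt + (n_coalesced - 1) / pkts_per_sec;
}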
Anyway, just some random Sunday thoughts.
* Re: [Bloat] TCP congestion detection - random thoughts
2015-06-21 16:19 Benjamin Cronce
@ 2015-06-21 17:05 ` Alan Jenkins
2015-06-21 17:33 ` Jonathan Morton
2015-06-21 19:34 ` Benjamin Cronce
2015-06-21 17:53 ` G B
` (2 subsequent siblings)
3 siblings, 2 replies; 9+ messages in thread
From: Alan Jenkins @ 2015-06-21 17:05 UTC (permalink / raw)
To: Benjamin Cronce; +Cc: bloat
Hi Ben
Some possible Sunday reading relating to these thoughts :).
https://lwn.net/Articles/645115/ "Delay-gradient congestion control"
[2015, Linux partial implementation]
our Dave's reply to a comment:
https://lwn.net/Articles/647322/
Quote "there is a huge bias (based on the experimental evidence) that
classic delay based tcps lost out to loss based in an undeployable fashion"
- not to argue the quote either way. But there's some implicit
references there that are relevant. Primarily a well-documented result
on TCP Vegas. AIUI Vegas uses increased delay as well as loss/marks as
a congestion signal. As a result, it gets a lower share of the
bottleneck bandwidth when competing with other TCPs. Secondly uTP has a
latency (increase) target (of 100ms :p), _deliberately_ to de-prioritize
itself. (This is called LEDBAT and has also been implemented as a TCP).
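For concreteness, the heart of the LEDBAT controller (RFC 6817) is a
linear ramp toward that fixed queuing-delay target; a rough sketch:

#define TARGET 0.1 /* seconds: LEDBAT's 100 ms queuing-delay target */
#define GAIN   1.0 /* maximum cwnd ramp per RTT */

/* RFC 6817-style update: grow while the estimated queuing delay
 * (current delay minus base delay) is under TARGET, shrink
 * proportionally once it overshoots -- so LEDBAT yields as soon as
 * anything else builds a queue. */
double ledbat_cwnd(double cwnd, double mss, double bytes_acked,
                   double current_delay, double base_delay)
{
    double off_target = (TARGET - (current_delay - base_delay)) / TARGET;
    return cwnd + GAIN * off_target * bytes_acked * mss / cwnd;
}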
Alan
On 21/06/15 17:19, Benjamin Cronce wrote:
> Just a random Sunday morning thought that has probably been thought of
> before, but I can't recall hearing it before.
>
> My understanding of most TCP congestion control algorithms is that they
> primarily watch for drops, but drops are signaled by the receiving party
> via ACKs. The issue with this is that TCP keeps pushing more data into
> the window until a drop is signaled, even if the received rate does not
> increase. What if the sending TCP also monitored the received rate and
> backed off from cramming more segments into the window when the received
> rate stops increasing?
>
> Two things could measure this: RTT, which is already part of TCP's
> statistics, and the rate at which bytes are ACKed. If you double the
> number of segments being sent but, within a time frame relative to the
> RTT, see no meaningful increase in the rate at which bytes are being
> ACKed, you may want to back off.
>
> It just seems to me that if you have a 50 ms RTT and 10 seconds of
> bufferbloat, TCP is cramming data down the path with no care in the
> world about how quickly data is actually getting ACKed; it's just
> waiting for the first segment to get dropped, which would never happen
> in an infinitely buffered network.
>
> TCP should be able to keep state that tracks the minimum RTT and the
> maximum ACK rate. Between these two, it should never go over the maximum
> path rate except when probing for a new max or min. Min RTT is probably
> a good target because path latency should be relatively static; path
> free bandwidth, however, is not. The desirable number of segments in
> flight would need to change, but would be bounded by the max.
>
> Of course, Nagle-type algorithms and delayed ACKs can mess with this,
> because when an ACK occurs is no longer based entirely on when a segment
> is received, but also on some additional amount of time. If you assume
> the receiver will coalesce N segments into a single ACK, then you need
> to add to the RTT the time, at the current packet rate, until you expect
> the next ACK, assuming N segments get coalesced. This would be even more
> important for low-latency, low-bandwidth paths. Coalescing information
> could be assumed, negotiated, or inferred; negotiated would be best.
>
> Anyway, just some random Sunday thoughts.
>
>
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat
* Re: [Bloat] TCP congestion detection - random thoughts
2015-06-21 17:05 ` Alan Jenkins
@ 2015-06-21 17:33 ` Jonathan Morton
2015-06-21 19:34 ` Benjamin Cronce
1 sibling, 0 replies; 9+ messages in thread
From: Jonathan Morton @ 2015-06-21 17:33 UTC (permalink / raw)
To: Alan Jenkins; +Cc: bloat
There are also a couple of TCPs that are sensitive to the RTT signal in the
way you describe, but don't completely stop their window increases, in
order to avoid being permanently outcompeted. Illinois is in Linux, and
Compound TCP is a Microsoft thing.
Also noteworthy is Westwood+, which uses RTT and window size to compute
available bandwidth, then upon receiving a congestion signal it uses the
minimum RTT and the bandwidth to infer the correct new window size rather
than blindly halving it. This actually works pretty well with AQM's
preference for short queues, without sacrificing bandwidth; the main
shortcoming is that the smoothed bandwidth estimate tends to be an
underestimate during the early phases of the connection, i.e. at the critical
moment of exiting slow start.
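A sketch of that recovery rule (the real estimator low-pass filters ACK
arrivals; this shows only the window math):

/* Westwood+-style recovery: on a congestion signal (loss or ECN mark),
 * set ssthresh to the estimated bandwidth-delay product in segments
 * instead of blindly halving the window. */
double westwood_ssthresh(double bw_est,  /* bytes/sec, smoothed */
                         double rtt_min, /* seconds */
                         double mss)     /* bytes per segment */
{
    return (bw_est * rtt_min) / mss;
}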
- Jonathan Morton
* Re: [Bloat] TCP congestion detection - random thoughts
2015-06-21 17:05 ` Alan Jenkins
2015-06-21 17:33 ` Jonathan Morton
@ 2015-06-21 19:34 ` Benjamin Cronce
1 sibling, 0 replies; 9+ messages in thread
From: Benjamin Cronce @ 2015-06-21 19:34 UTC (permalink / raw)
To: Alan Jenkins; +Cc: bloat
I'll have to find some time to look at those links.
I guess I wasn't thinking of using latency to determine the rate, only to
nudge it: use it to decide to back off for a bit and let the buffer empty,
while maintaining the same rate overall. So if you have a 60 ms RTT but a
50 ms min RTT, keep your current rate but periodically skip one segment, or
make it a smaller segment, say half the max size, though not often enough
to make a large difference. Maybe allow latency to reduce the current
target rate by no more than 5% or so.
I guess the starting pattern could be something like this:
1) Build up: send 2x segments per ACK.
2) Make sure ACKed bytes increase at the same rate as bytes sent.
3) Once ACKed bytes stop increasing, reduce the segment size or skip a
packet until the current RTT is near the target RTT, but by no more than 5%.
Of course this is only good for discovering the current free bandwidth.
There needs to be a way to periodically probe for new free bandwidth. The
idea is that the detected max should rarely change, so don't be aggressive
when probing near the max; when below the max, attempt to find free
bandwidth by adding segments and seeing if the ACKed byte rate increases.
If it does, start growing.
I don't have an engineering background, just playing with thoughts.
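A sketch of the three-step pattern above (the 5% cap and the grow/trim
tests come from the description; the state machine and the 0.95 threshold
are invented):

enum phase { GROW, TRIM, HOLD };

/* GROW: add segments while ACKed bytes keep pace with sent bytes.
 * TRIM: shave the rate until the RTT nears the target (capped at a
 *       5% total reduction elsewhere in the rate logic).
 * HOLD: sit at the discovered rate until it is time to re-probe. */
enum phase next_phase(enum phase p, double sent_rate, double acked_rate,
                      double rtt, double target_rtt, int probe_due)
{
    switch (p) {
    case GROW: return acked_rate >= 0.95 * sent_rate ? GROW : TRIM;
    case TRIM: return rtt <= target_rtt ? HOLD : TRIM;
    case HOLD: return probe_due ? GROW : HOLD;
    }
    return HOLD;
}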
[Bloat] TCP congestion detection - random thoughts
>
> Alan Jenkins alan.christopher.jenkins at gmail.com
> Sun Jun 21 10:05:52 PDT 2015
> Hi Ben
>
> Some possible Sunday reading relating to these thoughts :).
>
> https://lwn.net/Articles/645115/ "Delay-gradient congestion control"
> [2015, Linux partial implementation]
>
> our Dave's reply to a comment:
>
> https://lwn.net/Articles/647322/
>
> Quote "there is a huge bias (based on the experimental evidence) that
> classic delay based tcps lost out to loss based in an undeployable
> fashion"
>
> - not to argue the quote either way. But there's some implicit
> references there that are relevant. Primarily a well-documented result
> on TCP Vegas. AIUI Vegas uses increased delay as well as loss/marks as
> a congestion signal. As a result, it gets a lower share of the
> bottleneck bandwidth when competing with other TCPs. Secondly uTP has a
> latency (increase) target (of 100ms :p), _deliberately_ to de-prioritize
> itself. (This is called LEDBAT and has also been implemented as a TCP).
>
> Alan
>
>
> On 21/06/15 17:19, Benjamin Cronce wrote:
> > Just a random Sunday morning thought that has probably been thought of
> > before, but I can't recall hearing it before.
> >
> > My understanding of most TCP congestion control algorithms is that they
> > primarily watch for drops, but drops are signaled by the receiving
> > party via ACKs. The issue with this is that TCP keeps pushing more data
> > into the window until a drop is signaled, even if the received rate
> > does not increase. What if the sending TCP also monitored the received
> > rate and backed off from cramming more segments into the window when
> > the received rate stops increasing?
> >
> > Two things could measure this: RTT, which is already part of TCP's
> > statistics, and the rate at which bytes are ACKed. If you double the
> > number of segments being sent but, within a time frame relative to the
> > RTT, see no meaningful increase in the rate at which bytes are being
> > ACKed, you may want to back off.
> >
> > It just seems to me that if you have a 50 ms RTT and 10 seconds of
> > bufferbloat, TCP is cramming data down the path with no care in the
> > world about how quickly data is actually getting ACKed; it's just
> > waiting for the first segment to get dropped, which would never happen
> > in an infinitely buffered network.
> >
> > TCP should be able to keep state that tracks the minimum RTT and the
> > maximum ACK rate. Between these two, it should never go over the
> > maximum path rate except when probing for a new max or min. Min RTT is
> > probably a good target because path latency should be relatively
> > static; path free bandwidth, however, is not. The desirable number of
> > segments in flight would need to change, but would be bounded by the
> > max.
> >
> > Of course, Nagle-type algorithms and delayed ACKs can mess with this,
> > because when an ACK occurs is no longer based entirely on when a
> > segment is received, but also on some additional amount of time. If you
> > assume the receiver will coalesce N segments into a single ACK, then
> > you need to add to the RTT the time, at the current packet rate, until
> > you expect the next ACK, assuming N segments get coalesced. This would
> > be even more important for low-latency, low-bandwidth paths. Coalescing
> > information could be assumed, negotiated, or inferred; negotiated would
> > be best.
> >
> > Anyway, just some random Sunday thoughts.
>
* Re: [Bloat] TCP congestion detection - random thoughts
2015-06-21 16:19 Benjamin Cronce
2015-06-21 17:05 ` Alan Jenkins
@ 2015-06-21 17:53 ` G B
2015-06-22 1:50 ` Stephen Hemminger
2015-06-22 15:55 ` Juliusz Chroboczek
3 siblings, 0 replies; 9+ messages in thread
From: G B @ 2015-06-21 17:53 UTC (permalink / raw)
To: Benjamin Cronce; +Cc: bloat
There should also be a way to track the "ack backlog". By that I mean, if
you can see that the packets being acked were sent 10 seconds ago and they
are consistently so, you should then be able to determine that you are
likely (10 seconds - real RTT - processing delay) deep in buffers
somewhere. If you back off on the number of packets in flight and that
ack backlog doesn't seem to change much, then the congestion is probably
not related to your specific flow. It is likely due to aggregate
congestion somewhere in the path. Could be a congested peering point, PoP,
busy distant end, whatever. But if the backing off DOES significantly
reduce the ack backlog (acks are now arriving for packets sent only 5
seconds ago rather than 10) then you have a notion that the flow is a
significant contributor to the total backlog. Exactly what one would do
with that information is the question, I guess.
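The measurement itself is cheap, since the sender already timestamps
segments for RTT estimation; a sketch (names invented):

/* "ACK backlog": how much older the data being ACKed now is than the
 * real path RTT explains -- this flow's view of queuing depth, in
 * seconds. If backing off shrinks it, the flow was feeding the queue;
 * if not, the congestion is someone else's. */
double ack_backlog(double now, double send_time_of_acked, double min_rtt)
{
    double excess = (now - send_time_of_acked) - min_rtt;
    return excess > 0 ? excess : 0;
}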
Is the backlog consistent across all flows or just one? If it is
consistent across all flows then the source of buffering is very close to
you. If it is wildly different, it is likely somewhere in the path of that
particular flow. And looking at the document linked concerning CDG, I see
they take that into account: if I back off but the RTT doesn't decrease,
then my flow is not a significant contributor to the delay. The problem
with the algorithm, to my mind, is that finding the size of "the queue" for
any particular flow is practically impossible, because each flow will have
its own specific amount of buffering along the path. It gets worse with
asymmetric routing, where the reply path might not be the same as the send
path (a multihomed transit provider, or an end node sending reply traffic
over a different peer than the one the other direction's traffic arrives
on), or (worse) where ECMP is done across peers on a packet-by-packet
rather than flow-based basis. At that point it is impossible to really
profile the path.
So if I were designing such an algorithm, I would try to determine: Is the
delay consistent across all flows? Is the delay consistent even within a
single flow? When I reduce my rate, does the backlog drop? Exactly what I
would do with that information would require more thought.
On Sun, Jun 21, 2015 at 9:19 AM, Benjamin Cronce <bcronce@gmail.com> wrote:
> Just a random Sunday morning thought that has probably been thought of
> before, but I can't recall hearing it before.
>
> My understanding of most TCP congestion control algorithms is that they
> primarily watch for drops, but drops are signaled by the receiving party
> via ACKs. The issue with this is that TCP keeps pushing more data into
> the window until a drop is signaled, even if the received rate does not
> increase. What if the sending TCP also monitored the received rate and
> backed off from cramming more segments into the window when the received
> rate stops increasing?
>
> Two things could measure this: RTT, which is already part of TCP's
> statistics, and the rate at which bytes are ACKed. If you double the
> number of segments being sent but, within a time frame relative to the
> RTT, see no meaningful increase in the rate at which bytes are being
> ACKed, you may want to back off.
>
> It just seems to me that if you have a 50 ms RTT and 10 seconds of
> bufferbloat, TCP is cramming data down the path with no care in the
> world about how quickly data is actually getting ACKed; it's just
> waiting for the first segment to get dropped, which would never happen
> in an infinitely buffered network.
>
> TCP should be able to keep state that tracks the minimum RTT and the
> maximum ACK rate. Between these two, it should never go over the maximum
> path rate except when probing for a new max or min. Min RTT is probably
> a good target because path latency should be relatively static; path
> free bandwidth, however, is not. The desirable number of segments in
> flight would need to change, but would be bounded by the max.
>
> Of course, Nagle-type algorithms and delayed ACKs can mess with this,
> because when an ACK occurs is no longer based entirely on when a segment
> is received, but also on some additional amount of time. If you assume
> the receiver will coalesce N segments into a single ACK, then you need
> to add to the RTT the time, at the current packet rate, until you expect
> the next ACK, assuming N segments get coalesced. This would be even more
> important for low-latency, low-bandwidth paths. Coalescing information
> could be assumed, negotiated, or inferred; negotiated would be best.
>
> Anyway, just some random Sunday thoughts.
>
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat
>
>
* Re: [Bloat] TCP congestion detection - random thoughts
2015-06-21 16:19 Benjamin Cronce
2015-06-21 17:05 ` Alan Jenkins
2015-06-21 17:53 ` G B
@ 2015-06-22 1:50 ` Stephen Hemminger
2015-06-22 15:55 ` Juliusz Chroboczek
3 siblings, 0 replies; 9+ messages in thread
From: Stephen Hemminger @ 2015-06-22 1:50 UTC (permalink / raw)
To: Benjamin Cronce; +Cc: bloat
You just reinvented delay-based congestion control.
This has been tried in many forms dating back to TCP Vegas.
https://en.wikipedia.org/wiki/TCP_Vegas
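For reference, the Vegas decision reduces to estimating how many of your
own segments are sitting in queues (a sketch; Linux's tcp_vegas uses
thresholds of roughly 2 and 4 segments):

/* Vegas: diff = (expected - actual throughput) * base_rtt
 *             = cwnd * (1 - base_rtt / rtt)
 * i.e. an estimate of this flow's segments queued in the path. */
double vegas_cwnd(double cwnd, double base_rtt, double rtt)
{
    const double alpha = 2, beta = 4; /* queued segments: low/high water */
    double diff = cwnd * (1 - base_rtt / rtt);
    if (diff < alpha)
        return cwnd + 1; /* queue nearly empty: grow */
    if (diff > beta)
        return cwnd - 1; /* building a queue: back off */
    return cwnd;
}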
Unfortunately, it often failed in practice (which no one ever wanted to
publish). Some of the reasons:
* Delay-based CC is sensitive to cross-traffic congestion, where the
perceived congestion event was not caused by that flow -- i.e., some
other elephant stomps on the ant.
* Delay-based CC is not aggressive enough to compete with loss-based CC;
Vegas flows lose to Reno.
* Delay-based CC requires careful tuning. One variant was FAST TCP,
which was highly tuned for 1G networks in research; it went proprietary,
and I never heard how well it works in modern networks.
* Delay-based CC was sensitive to middleboxes, polling intervals, and
other timing effects in the wild.
* RTT data has a poor signal-to-noise ratio; a flow has to maintain a
given rate consistently in order to get consistent feedback.
Google has some delay-based congestion control that is promised to be
released some time; I am waiting.
* Re: [Bloat] TCP congestion detection - random thoughts
2015-06-21 16:19 Benjamin Cronce
` (2 preceding siblings ...)
2015-06-22 1:50 ` Stephen Hemminger
@ 2015-06-22 15:55 ` Juliusz Chroboczek
2015-06-22 16:12 ` Dave Taht
3 siblings, 1 reply; 9+ messages in thread
From: Juliusz Chroboczek @ 2015-06-22 15:55 UTC (permalink / raw)
To: Benjamin Cronce; +Cc: bloat
To add to what my honourable prelocutors have said, µTP, which is used by
modern BitTorrent implementations, uses the LEDBAT congestion control
algorithm, which is based on delay. The fact that LEDBAT is crowded out
by Reno is a desirable feature in this case -- you do want your BitTorrent
traffic to be crowded out by HTTP and friends.
https://en.wikipedia.org/wiki/LEDBAT
-- Juliusz
* Re: [Bloat] TCP congestion detection - random thoughts
2015-06-22 15:55 ` Juliusz Chroboczek
@ 2015-06-22 16:12 ` Dave Taht
0 siblings, 0 replies; 9+ messages in thread
From: Dave Taht @ 2015-06-22 16:12 UTC (permalink / raw)
To: Juliusz Chroboczek; +Cc: bloat
On Mon, Jun 22, 2015 at 8:55 AM, Juliusz Chroboczek
<jch@pps.univ-paris-diderot.fr> wrote:
> To add to what my honourable prelocutors have said, µTP, which is used by
> modern BitTorrent implementations, uses the LEDBAT congestion control
> algorithm, which is based on delay. The fact that LEDBAT is crowded out by
> Reno is a desirable feature in this case -- you do want your BitTorrent
> traffic to be crowded out by HTTP and friends.
>
> https://en.wikipedia.org/wiki/LEDBAT
Yep. I note that OWD is more desirable than RTT, particularly in
modern asymmetric networks that have a ratio of up to down bandwidths
of 1:10 or more.
A lot of folk have treated that return path as inconsequential when it
can actually be the biggest source of delay or be the most contested
part of the path.
After having much success in squashing torrent down to being invisible
using classification in cake last week, I realized this morning that
also putting the short acks into the same bin was perhaps not always
the right thing, as that hurt download throughput... Perhaps
stretch(ier) acks are feasible in ledbat/torrent? Or revisiting the
packet size to shrink once again under contention? Reducing the number
of flows?
>
> -- Juliusz
>
> _______________________________________________
> Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat
--
Dave Täht
worldwide bufferbloat report:
http://www.dslreports.com/speedtest/results/bufferbloat
And:
What will it take to vastly improve wifi for everyone?
https://plus.google.com/u/0/explore/makewififast
end of thread, other threads:[~2015-06-23 5:21 UTC | newest]
Thread overview: 9+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2015-06-23 5:20 [Bloat] TCP congestion detection - random thoughts Ingemar Johansson S
-- strict thread matches above, loose matches on Subject: below --
2015-06-21 16:19 Benjamin Cronce
2015-06-21 17:05 ` Alan Jenkins
2015-06-21 17:33 ` Jonathan Morton
2015-06-21 19:34 ` Benjamin Cronce
2015-06-21 17:53 ` G B
2015-06-22 1:50 ` Stephen Hemminger
2015-06-22 15:55 ` Juliusz Chroboczek
2015-06-22 16:12 ` Dave Taht