* One-way delay measurement for netperf-wrapper
From: Toke Høiland-Jørgensen @ 2013-11-28 18:51 UTC (permalink / raw)
To: bloat-devel
So I've been thinking about adding one-way delay measurement to
netperf-wrapper, but thought I'd solicit some input on how best to do
that.
The obvious way would be to parse owamp[1] output, but since
that relies on clock synchronisation (and is another dependency),
perhaps it might be worth looking at other approaches.
The people at LINCS seem to have had some success with passive
measurements of induced queueing delay based on TCP timestamps[2];
would adding something like that to (e.g.) netperf be worthwhile? And is
it possible (as in, is there an API to get to the timestamp values) for
TCP? Other ideas?
-Toke
[1] http://www.infres.enst.fr/~drossi/index.php?n=Dataset.BufferbloatMethodology
[2] http://www.enst.fr/~drossi/dataset/bufferbloat-methodology
* Re: One-way delay measurement for netperf-wrapper
From: Eggert, Lars @ 2013-11-29 8:45 UTC (permalink / raw)
To: Toke Høiland-Jørgensen; +Cc: bloat-devel
Hi,
On 2013-11-28, at 19:51, Toke Høiland-Jørgensen <toke@toke.dk> wrote:
> So I've been thinking about adding one-way delay measurement to
> netperf-wrapper, but thought I'd solicit some input on how best to do
> that.
that would be a great addition. But I think it will require some fundamental change to the wrapper (actually, probably to netperf). Or at least a complete solution would.
If I remember correctly, the wrapper runs various netperf flows to generate load, and in parallel runs a ping session in order to measure latency.
That works fine if all these flows go through the same bottlenecks, but when they don't (because the switch does something funky for ICMP, etc.) it fails. It also doesn't account for stack-induced latencies.
I'd really like to see some measurement tool (netperf, flowgrind, etc.) grow support for measuring latencies based on the actual load-generating data flow. Ideally and assuming fully sync'ed clocks, I'd like to timestamp each byte of a TCP stream when an app does write(), and I'd like to timestamp it again when the receiving app read()s it. The difference between the two timestamps is the latency that byte saw end-to-end.
(Yes, you wouldn't want/need to do this for each byte, I'm just speaking conceptually.)
That measurement would include the stack/driver latencies which you don't currently capture with a parallel ping. For datacenter scenarios with very low RTTs, these sources of latency begin to matter.
I think that Stas' thrulay tool did measure latencies in this way, but it has accumulated some serious bitrot.
Lars
* Re: One-way delay measurement for netperf-wrapper
From: Toke Høiland-Jørgensen @ 2013-11-29 9:42 UTC (permalink / raw)
To: Eggert, Lars; +Cc: bloat-devel
"Eggert, Lars" <lars@netapp.com> writes:
> that would be a great addition. But I think it will require some
> fundamental change to the wrapper (actually, probably to netperf). Or
> at least a complete solution would.
Yeah, I was thinking of putting the functionality into netperf.
> I'd really like to see some measurement tool (netperf, flowgrind,
> etc.) grow support for measuring latencies based on the actual
> load-generating data flow. Ideally and assuming fully sync'ed clocks,
> I'd like to timestamp each byte of a TCP stream when an app does
> write(), and I'd like to timestamp it again when the receiving app
> read()s it. The difference between the two timestamps is the latency
> that byte saw end-to-end.
Well, what the LINCS people have done (the link in my previous mail) is
basically this: Sniff TCP packets that have timestamps on them (i.e.
with the TCP timestamp option enabled), and compute the delta between
the timestamps as a latency measure. Now this only gives an absolute
latency measure if the clocks are synchronised; however, if we're
interested in measuring queueing latency, i.e. induced *extra* latency,
this can be calculated as (latency - min-latency) where min-latency is
the minimum observed latency throughout the lifetime of the connection
(this is the same mechanism LEDBAT uses, btw).
In this case the unknown clock discrepancy cancels out (assuming no
clock drift over the course of the measurement, although there's
presumably a way to compensate for that, but I haven't been able to get
hold of the actual paper even though it's referenced in several
others...). The LINCS paper indicates that the estimates of queueing
latency from this method can be fairly accurate.
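The computation itself is trivial; something like this rough Python
sketch (the packet capture and the conversion of raw TCP timestamp
values to milliseconds are left out, and the names are made up):

    def queueing_delays(samples):
        """samples: (remote_ts, local_ts) pairs in milliseconds, where
        remote_ts is the sender's TCP timestamp value and local_ts the
        local arrival time. The raw delta includes the unknown clock
        offset; subtracting the minimum delta seen over the connection
        leaves the induced (queueing) latency, as in LEDBAT."""
        deltas = [local_ts - remote_ts for remote_ts, local_ts in samples]
        base = min(deltas)
        return [d - base for d in deltas]

(An online version would just track the running minimum instead.)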
So I guess my question is firstly whether this way of measuring OWD
would be worthwhile, and secondly if anyone has any idea whether it will
be possible to implement (it would require access to the raw timestamp
values of the TCP data packets).
Putting timestamps into the TCP stream and reading them out at the other
end might work; but is there a way to force each timestamp to be in a
separate packet?
> That measurement would include the stack/driver latencies which you
> don't currently capture with a parallel ping. For datacenter scenarios
> with very low RTTs, these sources of latency begin to matter.
Yeah, I'm aware of that issue and fixing it was one of the reasons I
wanted to do this... :)
> I think that Stas' thrulay tool did measure latencies in this way, but
> it has accumulated some serious bitrot.
Do you know how that worked more specifically and/or do you have a link
to the source code?
-Toke
* Re: One-way delay measurement for netperf-wrapper
From: Eggert, Lars @ 2013-11-29 10:20 UTC (permalink / raw)
To: Toke Høiland-Jørgensen; +Cc: Midori Kato, bloat-devel
Hi,
On 2013-11-29, at 10:42, Toke Høiland-Jørgensen <toke@toke.dk> wrote:
> Well, what the LINCS people have done (the link in my previous mail) is
> basically this: Sniff TCP packets that have timestamps on them (i.e.
> with the TCP timestamp option enabled), and compute the delta between
> the timestamps as a latency measure.
we tried this too. The TCP timestamps are too coarse-grained for datacenter latency measurements, I think under at least Linux and FreeBSD they get rounded up to 1ms or something. (Midori, do you remember the exact value?)
> Putting timestamps into the TCP stream and reading them out at the other
> end might work; but is there a way to force each timestamp to be in a
> separate packet?
No, but the sender and receiver can agree to embed them every X bytes in the stream. Yeah, sometimes that timestamp may be transmitted in two segments, but I guess that should be OK?
> Do you know how that worked more specifically and/or do you have a link
> to the source code?
http://e2epi.internet2.edu/thrulay/ is the original. There are several variants, but I think they also have been abandoned:
http://thrulay-hd.sourceforge.net/
http://thrulay-ng.sourceforge.net/
Lars
* Re: One-way delay measurement for netperf-wrapper
From: Toke Høiland-Jørgensen @ 2013-11-29 13:04 UTC (permalink / raw)
To: Eggert, Lars; +Cc: Midori Kato, bloat-devel
"Eggert, Lars" <lars@netapp.com> writes:
> we tried this too. The TCP timestamps are too coarse-grained for
> datacenter latency measurements, I think under at least Linux and
> FreeBSD they get rounded up to 1ms or something. (Midori, do you
> remember the exact value?)
Right. Well, now that you mention it, I do seem to recall having read
that Linux uses clock ticks (related to the kernel HZ value, i.e.
between 250 and 1000 Hz depending on configuration) as timestamp units.
I suppose FreeBSD is similar.
> No, but the sender and receiver can agree to embed them every X bytes
> in the stream. Yeah, sometimes that timestamp may be transmitted in
> two segments, but I guess that should be OK?
Right, so a protocol might be something like this (I'm still envisioning
this in the context of the netperf TCP_STREAM / TCP_MAERTS tests):
1. Insert a sufficiently accurate timestamp into the TCP bandwidth
measurement stream every X bytes (or maybe every X milliseconds?).
2. On the receiver side, look for these timestamps and each time one is
received, calculate the delay (also in a sufficiently accurate, i.e.
sub-millisecond, unit). Echo this calculated delay back to the
sender, probably with a fresh timestamp attached.
3. The sender receives the delay measurements and either just outputs it
straight away, or holds on to them until the end of the test and
normalises them to be deltas against the minimum observed delay.
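To make that concrete, here's a rough Python sketch of steps 1 and 2
(the record format, field sizes and names are all invented for
illustration; a real implementation would of course go into netperf
itself):

    import struct, time

    # Hypothetical in-stream record: a magic marker plus a sender timestamp
    # in microseconds, spliced into the payload every X bytes.
    MARKER = b"TSTP"
    FMT = "!4sq"                    # marker + signed 64-bit microseconds
    RECORD_LEN = struct.calcsize(FMT)

    def make_record():
        """Sender side (step 1): build a timestamp record for the stream."""
        return struct.pack(FMT, MARKER, int(time.time() * 1e6))

    def handle_record(record):
        """Receiver side (step 2): compute the delay and build the echo."""
        marker, sent_us = struct.unpack(FMT, record)
        if marker != MARKER:
            return None
        now_us = int(time.time() * 1e6)
        delay_us = now_us - sent_us  # absolute only with synced clocks;
                                     # otherwise normalise later (step 3)
        return struct.pack("!qq", delay_us, now_us)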
Now, some possible issues with this:
- Are we measuring the right thing? This will measure the time it takes
a message to get from the application level on one side to the
application level on another. There are a lot of things that could
impact this apart from queueing latency; the most obvious one is
packet loss and retransmissions which will give some spurious results
I suppose (?). Doing the measurement with UDP packets would alleviate
this, but then we're back to not being in-stream...
- As for point 3, not normalising the result and just outputting the
computed delay as-is means that the numbers will be meaningless
without very accurately synchronised clocks. On the other hand, not
processing the numbers before outputting them will allow people who
*do* have synchronised clocks to do something useful with them.
Perhaps a --assume-sync-clocks parameter?
- Echoing back the delay measurements causes traffic which may or may
not be significant; I'm thinking mostly in terms of running
bidirectional measurements. Is that significant? A solution could be
for the receiver to hold on to all the measurements until the end of
the test and then send them back on the control connection.
- Is clock drift something to worry about over the timescales of these
tests?
https://www.usenix.org/legacy/events/iptps10/tech/slides/cohen.pdf
seems to suggest it shouldn't be, as long as the tests only run for at
most a few minutes.
> http://e2epi.internet2.edu/thrulay/ is the original. There are several
> variants, but I think they also have been abandoned:
Thanks. From what I can tell, the measurement here basically works by
something akin to the above: for TCP, the timestamp is just echoed back
by the receiver, so round-trip time is measured. For UDP, the receiver
calculates the delay, so presumably clock synchronisation is a
prerequisite.
So anyway, thoughts? Is the above something worth pursuing?
-Toke
* Re: One-way delay measurement for netperf-wrapper
From: Eggert, Lars @ 2013-11-29 14:30 UTC (permalink / raw)
To: Toke Høiland-Jørgensen; +Cc: Midori Kato, bloat-devel
Hi,
On 2013-11-29, at 14:04, Toke Høiland-Jørgensen <toke@toke.dk> wrote:
> Now, some possible issues with this:
>
> - Are we measuring the right thing? This will measure the time it takes
> a message to get from the application level on one side to the
> application level on another. There are a lot of things that could
> impact this apart from queueing latency; the most obvious one is
> packet loss and retransmissions which will give some spurious results
> I suppose (?). Doing the measurement with UDP packets would alleviate
> this, but then we're back to not being in-stream...
right. I happen to be interested in that metric, but not everyone may be. Some folks may only care about latencies added in the network, in which case the TCP-timestamp-based approach might be better (if the kernel can be convinced to generate timestamps with sufficient resolution).
You could also combine the two approaches. That way, you might be able to account for sender, network and receiver latencies.
> - As for point 3, not normalising the result and just outputting the
> computed delay as-is means that the numbers will be meaningless
> without very accurately synchronised clocks. On the other hand, not
> processing the numbers before outputting them will allow people who
> *do* have synchronised clocks to do something useful with them.
> Perhaps a --assume-sync-clocks parameter?
Yep. Or you could check for the accuracy of the NTP synchronization, as suggested by Hal.
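A crude version of that check, as a Python sketch (assuming a local
ntpd and the ntpq utility; it just parses the offset ntpd reports, so
treat it as a sanity check rather than ground truth):

    import re, subprocess

    def ntp_offset_ms():
        """Return ntpd's reported offset in ms, or None if unavailable."""
        try:
            out = subprocess.check_output(["ntpq", "-c", "rv"],
                                          universal_newlines=True)
        except (OSError, subprocess.CalledProcessError):
            return None
        m = re.search(r"offset=(-?[\d.]+)", out)
        return float(m.group(1)) if m else None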
> - Echoing back the delay measurements causes traffic which may or may
> not be significant; I'm thinking mostly in terms of running
> bidirectional measurements. Is that significant? A solution could be
> for the receiver to hold on to all the measurements until the end of
> the test and then send them back on the control connection.
Depends on the measurement interval. I'm guessing it won't matter much if you e.g. only timestamp once a millisecond or something.
> - Is clock drift something to worry about over the timescales of these
> tests?
> https://www.usenix.org/legacy/events/iptps10/tech/slides/cohen.pdf
> seems to suggest it shouldn't be, as long as the tests only run for at
> most a few minutes.
Wouldn't think so.
> So anyway, thoughts? Is the above something worth pursuing?
I certainly would like this test. It may also be a good proposal for the IPPM metric.
Lars
* Re: One-way delay measurement for netperf-wrapper
From: Dave Taht @ 2013-11-29 16:55 UTC (permalink / raw)
To: Lars Eggert; +Cc: Midori Kato, bloat-devel
On Nov 29, 2013 6:30 AM, "Eggert, Lars" <lars@netapp.com> wrote:
>
> Hi,
>
> On 2013-11-29, at 14:04, Toke Høiland-Jørgensen <toke@toke.dk> wrote:
> > Now, some possible issues with this:
> >
> > - Are we measuring the right thing? This will measure the time it takes
> > a message to get from the application level on one side to the
> > application level on another. There are a lot of things that could
> > impact this apart from queueing latency; the most obvious one is
> > packet loss and retransmissions which will give some spurious results
> > I suppose (?). Doing the measurement with UDP packets would alleviate
> > this, but then we're back to not being in-stream...
>
> right. I happen to be interested in that metric, but not everyone may be.
> Some folks may only care about latencies added in the network, in which
> case the TCP-timestamp-based approach might be better (if the kernel can be
> convinced to generate timestamps with sufficient resolution).
>
> You could also combine the two approaches. That way, you might be able to
> account for sender, network and receiver latencies.
>
> > - As for point 3, not normalising the result and just outputting the
> > computed delay as-is means that the numbers will be meaningless
> > without very accurately synchronised clocks. On the other hand, not
> > processing the numbers before outputting them will allow people who
> > *do* have synchronised clocks to do something useful with them.
> > Perhaps a --assume-sync-clocks parameter?
>
> Yep. Or you could check for the accuracy of the NTP synchronization, as
> suggested by Hal.
>
> > - Echoing back the delay measurements causes traffic which may or may
> > not be significant; I'm thinking mostly in terms of running
> > bidirectional measurements. Is that significant? A solution could be
> > for the receiver to hold on to all the measurements until the end of
> > the test and then send them back on the control connection.
>
> Depends on the measurement interval. I'm guessing it won't matter much if
> you e.g. only timestamp once a millisecond or something.
>
> > - Is clock drift something to worry about over the timescales of these
> > tests?
> > https://www.usenix.org/legacy/events/iptps10/tech/slides/cohen.pdf
> > seems to suggest it shouldn't be, as long as the tests only run for at
> > most a few minutes.
>
> Wouldn't think so.
>
> > So anyway, thoughts? Is the above something worth pursuing?
>
> I certainly would like this test. It may also be a good proposal for the
> IPPM metric.
>
> Lars
>
The idea of creating a generic IPv6 header for timestamps was discussed at
the IETF. I am under the impression this is an old SNA feature.
http://www.ietf.org/proceedings/88/slides/slides-88-ippm-0.pdf
I liked the chart and I seem to recall there was an implementation.
* Re: One-way delay measurement for netperf-wrapper
From: Rick Jones @ 2013-12-02 18:11 UTC (permalink / raw)
To: Toke Høiland-Jørgensen, Eggert, Lars; +Cc: bloat-devel
On 11/29/2013 01:42 AM, Toke Høiland-Jørgensen wrote:
> Well, what the LINCS people have done (the link in my previous mail) is
> basically this: Sniff TCP packets that have timestamps on them (i.e.
> with the TCP timestamp option enabled), and compute the delta between
> the timestamps as a latency measure. Now this only gives an absolute
> latency measure if the clocks are synchronised; however, if we're
> interested in measuring queueing latency, i.e. induced *extra* latency,
> this can be calculated as (latency - min-latency) where min-latency is
> the minimum observed latency throughout the lifetime of the connection
> (this is the same mechanism LEDBAT uses, btw).
Those TCP timestamps are generated (iirc) when TCP transmits the data,
not when the application presents the data to TCP. Not a deal breaker
necessarily, but something to keep in mind.
rick jones
* Re: One-way delay measurement for netperf-wrapper
From: Toke Høiland-Jørgensen @ 2013-12-02 18:20 UTC (permalink / raw)
To: Rick Jones; +Cc: bloat-devel
Rick Jones <rick.jones2@hp.com> writes:
> Those TCP timestamps are generated (iirc) when TCP transmits the data,
> not when the application presents the data to TCP. Not a deal breaker
> necessarily, but something to keep in mind.
Yeah, I was counting on that, actually. But I get your point: it's
measuring two different things. I did not have the other use case in
mind (application layer to application layer) until Lars mentioned it. I
suppose doing both would be worthwhile, and probably some of the
time-keeping logic can be reused.
Still leaves the question of whether it's possible to get to the TCP
timestamps short of sniffing the packets, though... Anyone know?
-Toke
* Re: One-way delay measurement for netperf-wrapper
From: Toke Høiland-Jørgensen @ 2013-12-02 8:34 UTC (permalink / raw)
To: Hal Murray; +Cc: bloat-devel
Hal Murray <hmurray@megapathdsl.net> writes:
> How long are you running the tests?
Not longer than a few minutes at a time, normally.
> How accurate do you expect the timings to be?
As accurate as possible, I suppose...
> ntpd assumes the network delays are symmetric. It's easy to break that
> assumption by sending a lot of traffic in one direction. ntpd tries to
> avoid that problem. It remembers the last 8 packets to a server and
> uses the one with the shortest round trip time. If you run tests for
> long enough you will overflow that buffer. The time scale is somewhere
> between minutes and hours.
Right, well I'm not too worried then; especially not for the case where
there actually is a well-synchronised clock.
-Toke
* Re: One-way delay measurement for netperf-wrapper
From: Hal Murray @ 2013-12-02 5:45 UTC (permalink / raw)
To: bloat-devel
toke@toke.dk said:
> - Is clock drift something to worry about over the timescales of these
> tests?
It depends. :)
How long are you running the tests? How accurate do you expect the timings
to be?
ntpd assumes the network delays are symmetric. It's easy to break that
assumption by sending a lot of traffic in one direction. ntpd tries to avoid
that problem. It remembers the last 8 packets to a server and uses the one
with the shortest round trip time. If you run tests for long enough you will
overflow that buffer. The time scale is somewhere between minutes and hours.
--
These are my opinions. I hate spam.
* Re: One-way delay measurement for netperf-wrapper
From: Hal Murray @ 2013-11-28 20:30 UTC (permalink / raw)
To: bloat-devel
toke@toke.dk said:
> So I've been thinking about adding one-way delay measurement to
> netperf-wrapper, but thought I'd solicit some input on how best to do that.
I haven't worked with netperf in ages, but I have done a lot of one-way
measurements using ntp.
I think you have two choices. One is to assume that both clocks are
accurate. The other is to assume that the network delays are symmetric.
What sort of delays are you measuring and/or what sort of accuracy do you
want/expect?
If you assume the clocks are accurate, it's easy to make graphs and such.
You can get how-accurate info from ntpd's log files if you turn on enough
logging. It's not unreasonable to set up your own GPS receiver so you can be
sure your clock is accurate. (I'll say more if anybody wants.)
Suppose you don't trust the times. Here is the handwave recipe for measuring
the time offset assuming the network delays are symmetric:
Take a bunch of NTP like calibration measurements. Each measurement gives
you 3 time stamps: left here, got there, got back here. That's enough to
compute the round trip time. Graph the round trip time vs time-of-test.
There should be a band of points clustered around some minimum round trip
time. Those are the good ones. Everything else hit some queuing delays. If
you assume the network delays are symmetric you can compute the offset. If
you assume the offset is 0 (aka clocks are good) you can compute the one-way
delays.
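In code, the offset computation part of that recipe is only a few lines
(a Python sketch; collecting the timestamps and graphing them is the
real work):

    def estimate_offset(samples, slack=0.001):
        """samples: (t1, t2, t3) tuples in seconds: left here, got
        there, got back here. Keep only the exchanges close to the
        minimum round trip time (the ones that hit no queuing) and,
        assuming symmetric delays, return the remote clock offset."""
        rtts = [t3 - t1 for (t1, _t2, t3) in samples]
        best = min(rtts)
        good = [s for s, rtt in zip(samples, rtts) if rtt <= best + slack]
        # symmetric delays: t2 = t1 + rtt/2 + offset
        offsets = [t2 - (t1 + t3) / 2.0 for (t1, t2, t3) in good]
        return sum(offsets) / len(offsets)

If you instead assume the offset is 0 (clocks are good), the same three
timestamps give the one-way delays directly: t2 - t1 outbound and
t3 - t2 on the way back.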
If you are interested in that approach, I suggest writing some calibration
code, collecting some data and graphing it to get a feel for things.
If you know in advance the pairs of machines you are likely to use for
testing, you can set their ntpds to point to each other and turn on rawstats
logging and ntpd will collect calibration data for you. That will give you
an eye-ball level sanity check.
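The relevant ntp.conf fragment on each machine would be roughly this
(a sketch only; adjust the peer name and stats directory, and mirror it
on the other machine):

    # point at the other test machine and log raw per-packet measurements
    server other-test-machine.example.net iburst
    statsdir /var/log/ntpstats/
    statistics rawstats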
--
These are my opinions. I hate spam.