* [Starlink] CFP march 1 - network measurement conference
@ 2023-12-06 20:00 Dave Taht
2023-12-06 21:46 ` [Starlink] [NNagain] " Sauli Kiviranta
2024-01-14 16:54 ` [Starlink] " Dave Taht
0 siblings, 2 replies; 10+ messages in thread
From: Dave Taht @ 2023-12-06 20:00 UTC (permalink / raw)
To: Dave Taht via Starlink, bloat,
Network Neutrality is back! Let's make the technical
aspects heard this time!
This CFP looks pretty good to me: https://tma.ifip.org/2024/call-for-papers/
Because:
"To further encourage the results’ faithfulness and avoid publication
bias, the conference will particularly encourage negative results
revealed by novel measurement methods or vantage points. All regular
papers are hence encouraged to discuss the limitations of the
presented approaches and also mention which experiments did not work.
Additionally, TMA will also be open to accepting papers that
exclusively deal with negative results, especially when new
measurement methods or perspectives offer insight into the limitations
and challenges of network measurement in practice. Negative results
will be evaluated based on their impact (e.g. revealed in realistic
production networks) as well as the novelty of the vantage points
(e.g. scarce data source) or measurement techniques that revealed
them."
--
:( My old R&D campus is up for sale: https://tinyurl.com/yurtlab
Dave Täht CSO, LibreQos
^ permalink raw reply [flat|nested] 10+ messages in thread
* Re: [Starlink] [NNagain] CFP march 1 - network measurement conference
2023-12-06 20:00 [Starlink] CFP march 1 - network measurement conference Dave Taht
@ 2023-12-06 21:46 ` Sauli Kiviranta
2023-12-07 2:22 ` Bill Woodcock
2024-01-14 16:54 ` [Starlink] " Dave Taht
1 sibling, 1 reply; 10+ messages in thread
From: Sauli Kiviranta @ 2023-12-06 21:46 UTC (permalink / raw)
To: Network Neutrality is back! Let's make the technical
aspects heard this time!
Cc: Dave Taht via Starlink, bloat, Dave Taht
Thank you for sharing! This looks very promising!
What would be a comprehensive measurement? Should it cover all/most relevant areas?
Payload Size: The size of the data being transmitted.
Event Rate: The frequency at which payloads are transmitted.
Bitrate: The product of event rate and payload size transferred in a given test.
Bandwidth: The data transfer capacity available on the test path.
Throughput: The data transfer rate actually achieved on the test path.
Transfer Efficiency: The ratio of useful payload data to overhead data.
Round-Trip Time (RTT): The ping delay to the target server and back.
RTT Jitter: The variation in round-trip time.
Latency: The transmission delay to the target server and back,
excluding processing time at the far end.
Latency Jitter: The variation in latency.
Bit Error Rate: Corrupted bits as a percentage of the total transmitted data.
Packet Loss: The percentage of transmitted packets that were lost and
needed recovery.
Energy Efficiency: The amount of energy consumed to achieve the test result.
Did I overlook something? Too many dimensions to cover? Obviously some
of those are derived, so not part of the whole set as such.
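Several of those derived relationships can be made concrete in a few lines. A minimal sketch (the function and field names are mine, purely illustrative):

```python
def derived_metrics(payload_bytes: int, events_per_sec: float,
                    wire_bytes: int, duration_sec: float) -> dict:
    """Derive secondary metrics from raw per-test counters.

    payload_bytes  - size of each useful payload
    events_per_sec - payload transmission frequency
    wire_bytes     - total bytes actually put on the wire (payload + overhead)
    duration_sec   - test duration
    """
    payload_total = payload_bytes * events_per_sec * duration_sec  # useful bytes
    return {
        "bitrate_bps": payload_bytes * 8 * events_per_sec,     # offered load
        "throughput_bps": wire_bytes * 8 / duration_sec,       # achieved on path
        "transfer_efficiency": payload_total / wire_bytes,     # useful/total ratio
    }
```

So bitrate, throughput, and transfer efficiency need not be measured separately once the raw counters are in hand.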
Perhaps the next step would be different profiles in which any of these
parameters vary over the test run, e.g. profiles that model congested
base stations for mobile data, or use-case-specific payload profiles
such as GOP-structured video transfer?
Best regards,
Sauli
On 12/6/23, Dave Taht via Nnagain <nnagain@lists.bufferbloat.net> wrote:
> This CFP looks pretty good to me: https://tma.ifip.org/2024/call-for-papers/
> [...]
^ permalink raw reply [flat|nested] 10+ messages in thread
* Re: [Starlink] [NNagain] CFP march 1 - network measurement conference
2023-12-06 21:46 ` [Starlink] [NNagain] " Sauli Kiviranta
@ 2023-12-07 2:22 ` Bill Woodcock
2023-12-07 2:49 ` Ricky Mok
2023-12-08 6:03 ` rjmcmahon
0 siblings, 2 replies; 10+ messages in thread
From: Bill Woodcock @ 2023-12-07 2:22 UTC (permalink / raw)
To: Network Neutrality is back! Let's make the technical
aspects heard this time!
Cc: Sauli Kiviranta, Dave Taht via Starlink, bloat
[-- Attachment #1: Type: text/plain, Size: 3550 bytes --]
> On Dec 6, 2023, at 22:46, Sauli Kiviranta via Nnagain <nnagain@lists.bufferbloat.net> wrote:
> What would be a comprehensive measurement? Should cover all/most relevant areas?
It’s easy to specify a suite of measurements which is too heavy to be easily implemented or supported on the network. Also, as you point out, many things can be derived from raw data, so don’t necessarily require additional specific measurements.
> Payload Size: The size of data being transmitted.
> Event Rate: The frequency at which payloads are transmitted.
> Bitrate: The combination of rate and size transferred in a given test.
> Throughput: The data transfer capability achieved on the test path.
All of that can probably be derived from sufficiently finely-grained TCP data. i.e. if you had a PCAP of a TCP flow that constituted the measurement, you’d be able to derive all of the above.
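A toy illustration of that derivation, assuming the per-packet (timestamp, bytes) pairs have already been parsed out of the PCAP (the parsing itself is elided, and the function name is hypothetical):

```python
def flow_stats(packets):
    """Derive rate/size/throughput metrics from per-packet capture records.

    packets: list of (timestamp_sec, payload_bytes), in arrival order;
    assumes at least two packets so the duration is nonzero.
    """
    duration = packets[-1][0] - packets[0][0]
    total = sum(size for _, size in packets)
    return {
        "mean_payload_bytes": total / len(packets),        # Payload Size
        "event_rate_pps": (len(packets) - 1) / duration,   # Event Rate
        "throughput_bps": total * 8 / duration,            # Throughput
    }
```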
> Bandwidth: The data transfer capacity available on the test path.
Presumably the goal of a TCP transaction measurement would be to enable this calculation.
> Transfer Efficiency: The ratio of useful payload data to the overhead data.
This is a how-it’s-used property rather than a property of the network.  If there are network-inherent overheads, they’re likely not directly visible to endpoints, only inferable, and might require external knowledge of the network.  So I’d put this out of scope.
> Round-Trip Time (RTT): The ping delay time to the target server and back.
> RTT Jitter: The variation in the delay of round-trip time.
> Latency: The transmission delay time to the target server and back.
> Latency Jitter: The variation in delay of latency.
RTT is measurable. If Latency is RTT minus processing delay on the remote end, I’m not sure it’s really measurable, per se, without the remote end being able to accurately clock itself, or an independent vantage point adjacent to the remote end. This is the old one-way-delay measurement problem in different guise, I think. Anyway, I think RTT is easy and necessary, and I think latency is difficult and probably an anchor not worth attaching to anything we want to see done in the near term. Latency jitter likewise.
> Bit Error Rate: The corrupted bits as a percentage of the total
> transmitted data.
This seems like it can be derived from a PCAP, but doesn’t really constitute an independent measurement.
> Packet Loss: The percentage of packets lost that needed to be recovered.
Yep.
> Energy Efficiency: The amount of energy consumed to achieve the test result.
Not measurable.
> Did I overlook something?
Out-of-order delivery is the fourth classical quality criterion. There are folks who argue that it doesn’t matter anymore, and others who (more compellingly, to my mind) argue that it’s at least as relevant as ever.
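For completeness, out-of-order delivery is also observable from the same capture data. A sketch (my own illustration; "reordered" here simply means arriving behind an already-seen higher sequence number):

```python
def reordering_rate(seq_numbers):
    """Fraction of packets arriving behind an already-seen higher sequence number."""
    high = -1        # highest sequence number observed so far (no wraparound assumed)
    reordered = 0
    for s in seq_numbers:
        if s < high:
            reordered += 1   # arrived late relative to a packet already seen
        else:
            high = s
    return reordered / len(seq_numbers)
```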
Thus, for an actual measurement suite:
- A TCP transaction
…from which we can observe:
- Loss
- RTT (which I’ll just call “Latency” because that’s what people have called it in the past)
- out-of-order delivery
- Jitter in the above three, if the transaction continues long enough
…and we can calculate:
- Goodput
In addition to these, I think it’s necessary to also associate a traceroute (and, if available and reliable, a reverse-path traceroute) in order that it be clear what was measured, and a timestamp, and a digital signature over the whole thing, so we can know who’s attesting to the measurement.
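A minimal sketch of such an attested record (field names are hypothetical; a real deployment would use an asymmetric signature, as with the S/MIME signature on this very message, but a keyed HMAC stands in here to keep the example self-contained):

```python
import hashlib
import hmac
import json
import time

def attest(measurement: dict, traceroute: list, key: bytes) -> dict:
    """Bundle a measurement with its path and timestamp, then sign the bundle."""
    record = {
        "measurement": measurement,   # loss, RTT, reordering, jitter, goodput...
        "traceroute": traceroute,     # what path was actually measured
        "timestamp": time.time(),
    }
    blob = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(key, blob, hashlib.sha256).hexdigest()
    return record

def verify(record: dict, key: bytes) -> bool:
    """Check that the record was signed by the holder of `key` and is untampered."""
    blob = json.dumps({k: v for k, v in record.items() if k != "signature"},
                      sort_keys=True).encode()
    expected = hmac.new(key, blob, hashlib.sha256).hexdigest()
    return hmac.compare_digest(record["signature"], expected)
```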
-Bill
[-- Attachment #2: smime.p7s --]
[-- Type: application/pkcs7-signature, Size: 1463 bytes --]
^ permalink raw reply [flat|nested] 10+ messages in thread
* Re: [Starlink] [NNagain] CFP march 1 - network measurement conference
2023-12-07 2:22 ` Bill Woodcock
@ 2023-12-07 2:49 ` Ricky Mok
2023-12-07 11:43 ` Nitinder Mohan
2023-12-07 20:05 ` Sauli Kiviranta
2023-12-08 6:03 ` rjmcmahon
1 sibling, 2 replies; 10+ messages in thread
From: Ricky Mok @ 2023-12-07 2:49 UTC (permalink / raw)
To: starlink
How about applications? YouTube and Netflix?
(TPC of this conference this year)
Ricky
On 12/6/23 18:22, Bill Woodcock via Starlink wrote:
> [...]
^ permalink raw reply [flat|nested] 10+ messages in thread
* Re: [Starlink] [NNagain] CFP march 1 - network measurement conference
2023-12-07 2:49 ` Ricky Mok
@ 2023-12-07 11:43 ` Nitinder Mohan
2023-12-07 20:05 ` Sauli Kiviranta
1 sibling, 0 replies; 10+ messages in thread
From: Nitinder Mohan @ 2023-12-07 11:43 UTC (permalink / raw)
To: Ricky Mok, starlink
[-- Attachment #1: Type: text/plain, Size: 5392 bytes --]
Hi Dave (Ricky, all),
Thanks for sharing the conference's call for papers. I am one of the TPC chairs of the conference, and we are really looking forward to cool submissions. So please get the idea mill going :)
[Putting my TPC chair hat aside] Application/CDN measurements would be cool! Most CDNs would probably map a Starlink user to a CDN server based on anycast, but I am not sure whether the server selection would be close to the PoP or to the actual user location. Especially for EU users who are mapped to PoPs in other countries, would you get content for the PoP's country or for your own? What about offshore places that connect to GSes via long ISL chains (e.g. islands south of Africa)?
As many folks on this mailing list can attest, performing real application-workload experiments will be key here: performance under load is very different from the ping/traceroute-based results that the majority of publications in this space have relied on.
Thanks and Regards
Nitinder Mohan
Technical University Munich (TUM)
https://www.nitindermohan.com/
From: Ricky Mok via Starlink <starlink@lists.bufferbloat.net>
Reply: Ricky Mok <cskpmok@caida.org>
Date: 7. December 2023 at 03:49:54
To: starlink@lists.bufferbloat.net <starlink@lists.bufferbloat.net>
Subject: Re: [Starlink] [NNagain] CFP march 1 - network measurement conference
How about applications? youtube and netflix?
(TPC of this conference this year)
Ricky
On 12/6/23 18:22, Bill Woodcock via Starlink wrote:
> [...]
_______________________________________________
Starlink mailing list
Starlink@lists.bufferbloat.net
https://lists.bufferbloat.net/listinfo/starlink
[-- Attachment #2: Type: text/html, Size: 7654 bytes --]
^ permalink raw reply [flat|nested] 10+ messages in thread
* Re: [Starlink] [NNagain] CFP march 1 - network measurement conference
2023-12-07 2:49 ` Ricky Mok
2023-12-07 11:43 ` Nitinder Mohan
@ 2023-12-07 20:05 ` Sauli Kiviranta
1 sibling, 0 replies; 10+ messages in thread
From: Sauli Kiviranta @ 2023-12-07 20:05 UTC (permalink / raw)
To: Ricky Mok; +Cc: starlink
Thank you Jack, Bill and Ricky for your comments! (And everyone after!)
Jack:
"> amount (bytes, datagrams) presumed lost and re-transmitted by the sender"
I would consider those lost packets, just recovered by paying in time
complexity, e.g. retransmission with TCP (and that retransmission may
itself require another retransmission, etc.). The re-transmitted data
is then counted towards overhead.
The equivalent for UDP would be redundancy, i.e. paying for lost-data
recovery in space complexity. If the payload cannot be recovered, the
whole payload is counted as waste, in addition to the packet loss.
"> amount (bytes, datagrams) discarded at the receiver because they
were already received"
From my perspective this is simply overhead. Any extra cost in space
complexity over the original payload is inefficiency and should be
counted as such.
"> amount (bytes, datagrams) discarded at the receiver because they
arrived too late to be useful"
This is trickier, because we are taking a stance on the use case.
Better would be to characterize the time distribution as percentiles
at some specific useful intervals, e.g. RTT multiples or 100 ms, 500
ms, 1000 ms, etc., to give a rough estimate of how much time must
be paid to recover when a feedback loop is involved.
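That percentile-style characterization is cheap to compute once per-payload recovery times are available. A sketch, with the intervals from above (function and parameter names are purely illustrative):

```python
def recovery_time_profile(recovery_times_ms, thresholds_ms=(100, 500, 1000)):
    """Fraction of payloads recovered within each time budget.

    recovery_times_ms: per-payload time (ms) from first transmission to
    successful reconstruction at the receiver.
    """
    n = len(recovery_times_ms)
    return {t: sum(1 for r in recovery_times_ms if r <= t) / n
            for t in thresholds_ms}
```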
"> With such data, it would be possible to measure things like "useful
throughput", i.e., the data successfully delivered from source to
destination which was actually useful for the associated user's
application."
Here we have a few different components at play: 1. use-case-required
bandwidth vs. use-case-consumed bandwidth; 2. link saturation, i.e.
how much of the available path we are able to utilize effectively,
independent of any particular use case, assuming the use case's demand
is equal to or larger than the capacity (for example file transfer,
where there is no real-time budget from the use-case perspective).
Bill:
"> All of that can probably be derived from sufficiently
finely-grained TCP data. i.e. if you had a PCAP of a TCP flow that
constituted the measurement, you’d be able to derive all of the
above."
What I would say cannot be derived is the behavior of the transport
across these combinations. What was very interesting, and matches what
I have also experienced, was here:
https://www.youtube.com/watch?v=XHls8PvCVws&t=319s
Especially the second talk, about measurements at fine-grained payload
resolution: you see a very specific pattern in TCP where increasing
the payload size introduces an extra RTT.
Thus I think what is done in, for example, iperf is not an ideal
method, as there is just one payload size (?), and this result shows
that the payload size itself will determine your results. So we need
more dimensions in our tests.
"> Bandwidth: The data transfer capacity available on the test path.
Presumably the goal of a TCP transaction measurement would be to
enable this calculation."
This, in the light of the previous video, becomes e.g. "we know X is
available; how well was it utilized?"
"> Transfer Efficiency: The ratio of useful payload data to the overhead data.
This is a how-its-used rather than a property-of-the-network. If
there are network-inherent overheads, they’re likely to be not
directly visible to endpoints, only inferable, and might require
external knowledge of the network. So, I’d put this out-of-scope."
I think differently: this is an extremely important factor, and it is
getting more important all the time. How much waste is there relative
to useful data? Every single extra bit is waste. Whether it is
redundant data for recovery on transports without a feedback loop, or
ACKs driving retransmission, the goal is the same: we need to
reconstruct the payload, either by paying in the time budget (wait for
retransmission) or in the space budget (add enough redundancy to
recreate the data regardless of loss).
"> RTT is measurable. If Latency is RTT minus processing delay on the
remote end, I’m not sure it’s really measurable, per se, without the
remote end being able to accurately clock itself, or an independent
vantage point adjacent to the remote end. This is the old
one-way-delay measurement problem in different guise, I think.
Anyway, I think RTT is easy and necessary, and I think latency is
difficult and probably an anchor not worth attaching to anything we
want to see done in the near term. Latency jitter likewise."
Yes, this is difficult without time synchronization. That is why it
has to be done under laboratory conditions, or glass-to-glass across a
relay. It is very tricky to make sure the timings are correct,
otherwise you get garbage measurements!
For truthfully understanding performance across a gradient of use
cases we need to be able to distinguish the baseline network
fluctuation (e.g. Starlink) from the delays caused on top of that by
our transport. Thus I would really like to keep these two separate, as
that will allow us to distinguish whether we spent an extra 300 ms on
a few retransmission loops or saw an extra 5 ms of latency merely as
error from the gradient in the baseline.
"> This seems like it can be derived from a PCAP, but doesn’t really
constitute an independent measurement."
Agreed: if we condition the network for a lab test, it would be good
to be able to derive the conditioned loss, to make sure we know what
we are doing.
"> Energy Efficiency: The amount of energy consumed to achieve the test result.
Not measurable."
We should try! See the attached picture about QUIC from the previous
email; these things matter when we have constrained devices in IoT, in
space, on drones, etc. We can get artificially good performance if we
simply burn an enormous amount of energy. We should try to measure the
total energy spent on the overall transmission. Even if it is
difficult, we should absolutely worry about the cost not only in
bandwidth and time but also in energy, as a consequence of the
computational complexity of some of the approaches at our disposal,
even for error correction.
"> Did I overlook something?
Out-of-order delivery is the fourth classical quality criterion.
There are folks who argue that it doesn’t matter anymore, and others
who (more compellingly, to my mind) argue that it’s at least as
relevant as ever."
Good point. Is the concern receive-buffer bloat, if we need a jitter
buffer and must pay in time complexity to recover misordered data? My
perspective is mostly the application layer, so forgive me if I missed
your point.
"Thus, for an actual measurement suite:
- A TCP transaction
…from which we can observe:
- Loss
- RTT (which I’ll just call “Latency” because that’s what people have
called it in the past)
- out-of-order delivery
- Jitter in the above three, if the transaction continues long enough
…and we can calculate:
- Goodput"
I see. Yes, my suggestion would be to do all of that for
TCP/LTP/UDP/QUIC and so on, to finally have a good overview of where
we stand, as humanity, in our ability to transfer bits for an
arbitrary use case: a quadrant with data intensity on the X axis and
event rate on the Y axis, followed by environmental factors such as
latency and packet loss.
"In addition to these, I think it’s necessary to also associate a
traceroute (and, if available and reliable, a reverse-path traceroute)
in order that it be clear what was measured, and a timestamp, and a
digital signature over the whole thing, so we can know who’s attesting
to the measurement."
Yes, especially if we do tests in the wild; fully agreed!
Overall, all tests should be documented accurately enough that they
can be reproduced within error margins. If not, it's not really
science or even engineering, just artisanship with rules of thumb!
Best regards,
Sauli
On 12/7/23, Ricky Mok via Starlink <starlink@lists.bufferbloat.net> wrote:
> How about applications? youtube and netflix?
>
> (TPC of this conference this year)
>
> Ricky
>
> On 12/6/23 18:22, Bill Woodcock via Starlink wrote:
>> [...]
^ permalink raw reply [flat|nested] 10+ messages in thread
* Re: [Starlink] [NNagain] CFP march 1 - network measurement conference
2023-12-07 2:22 ` Bill Woodcock
2023-12-07 2:49 ` Ricky Mok
@ 2023-12-08 6:03 ` rjmcmahon
1 sibling, 0 replies; 10+ messages in thread
From: rjmcmahon @ 2023-12-08 6:03 UTC (permalink / raw)
To: Network Neutrality is back! Let's make the technical
aspects heard this time!
Cc: Bill Woodcock, Dave Taht via Starlink, bloat
iperf 2 supports OWD in multiple forms.
A Raspberry Pi 5 has a real-time clock, hardware PTP, and GPIO PPS.
The retail cost of a Pi 5 with a GPS atomic clock and active fan is
less than $150.
[rjmcmahon@fedora iperf2-code]$ src/iperf -c 192.168.1.35 --bounceback --trip-times --bounceback-period 0 -i 1 -t 4
------------------------------------------------------------
Client connecting to 192.168.1.35, TCP port 5001 with pid 48142 (1/0 flows/load)
Bounceback test (req/reply size = 100 Byte/ 100 Byte) (server hold req=0 usecs & tcp_quickack)
TCP congestion control using cubic
TOS set to 0x0 and nodelay (Nagle off)
TCP window size: 85.0 KByte (default)
Event based writes (pending queue watermark at 16384 bytes)
------------------------------------------------------------
[ 1] local 192.168.1.103%enp4s0 port 50558 connected with 192.168.1.35 port 5001 (prefetch=16384) (bb w/quickack req/reply/hold=100/100/0) (trip-times) (sock=3) (icwnd/mss/irtt=14/1448/541) (ct=0.59 ms) on 2023-12-07 22:01:39.240 (PST)
[ ID] Interval       Transfer    Bandwidth       BB cnt=avg/min/max/stdev    Rtry  Cwnd/RTT    RPS(avg)
[ 1] 0.00-1.00 sec  739 KBytes  6.05 Mbits/sec  7566=0.130/0.099/0.627/0.007 ms  0  14K/115 us  7666 rps
[ 1] 0.00-1.00 sec  OWD (ms) Cnt=7566 TX=0.072/0.038/0.163/0.002 RX=0.058/0.047/0.156/0.004 Asymmetry=0.015/0.001/0.103/0.004
[ 1] 1.00-2.00 sec  745 KBytes  6.10 Mbits/sec  7630=0.130/0.082/0.422/0.005 ms  0  14K/114 us  7722 rps
[ 1] 1.00-2.00 sec  OWD (ms) Cnt=7630 TX=0.073/0.027/0.364/0.004 RX=0.057/0.048/0.097/0.003 Asymmetry=0.016/0.000/0.306/0.005
[ 1] 2.00-3.00 sec  749 KBytes  6.14 Mbits/sec  7671=0.129/0.085/0.252/0.004 ms  0  14K/113 us  7756 rps
[ 1] 2.00-3.00 sec  OWD (ms) Cnt=7671 TX=0.073/0.031/0.193/0.003 RX=0.056/0.047/0.102/0.003 Asymmetry=0.017/0.000/0.134/0.004
[ 1] 3.00-4.00 sec  737 KBytes  6.04 Mbits/sec  7546=0.131/0.085/0.290/0.004 ms  0  14K/115 us  7629 rps
[ 1] 3.00-4.00 sec  OWD (ms) Cnt=7546 TX=0.073/0.030/0.231/0.003 RX=0.058/0.047/0.105/0.003 Asymmetry=0.015/0.000/0.172/0.004
[ 1] 0.00-4.00 sec  2.90 MBytes  6.08 Mbits/sec  30414=0.130/0.082/0.627/0.005 ms  0  14K/376 us  7693 rps
[ 1] 0.00-4.00 sec  OWD (ms) Cnt=30414 TX=0.073/0.027/0.364/0.003 RX=0.057/0.047/0.156/0.004 Asymmetry=0.016/0.000/0.306/0.004
[ 1] 0.00-4.00 sec  OWD-TX(f)-PDF: bin(w=100us):cnt(30414)=1:30393,2:19,3:1,4:1 (5.00/95.00/99.7%=1/1/1,Outliers=0,obl/obu=0/0)
[ 1] 0.00-4.00 sec  OWD-RX(f)-PDF: bin(w=100us):cnt(30414)=1:30400,2:14 (5.00/95.00/99.7%=1/1/1,Outliers=0,obl/obu=0/0)
[ 1] 0.00-4.00 sec  BB8(f)-PDF: bin(w=100us):cnt(30414)=1:6,2:30392,3:14,5:1,7:1 (5.00/95.00/99.7%=2/2/2,Outliers=16,obl/obu=0/0)
Bob
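The TX/RX/Asymmetry columns in the output above are simple timestamp
arithmetic. A minimal sketch of that split, assuming both ends' clocks are
synchronized (e.g. via PTP, as in the Pi 5 setup above); the timestamp
values in the example call are illustrative, not taken from the run:

```python
def owd_split(t_send, t_server_rx, t_server_tx, t_client_rx):
    """Split a bounceback exchange into one-way components.

    All four timestamps must come from clocks synchronized to each
    other; any residual clock offset shows up directly as a spurious
    asymmetry.
    """
    tx_owd = t_server_rx - t_send        # client -> server one-way delay
    rx_owd = t_client_rx - t_server_tx   # server -> client one-way delay
    # Round trip excluding the server hold time between rx and tx:
    rtt = (t_client_rx - t_send) - (t_server_tx - t_server_rx)
    return tx_owd, rx_owd, tx_owd - rx_owd, rtt

# Illustrative values in seconds: 72 us out, 1 us server hold, 58 us back.
tx, rx, asym, rtt = owd_split(0.0, 72e-6, 73e-6, 131e-6)
# tx = 72 us, rx = 58 us, asymmetry = 14 us, rtt = 130 us
```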
>> On Dec 6, 2023, at 22:46, Sauli Kiviranta via Nnagain
>> <nnagain@lists.bufferbloat.net> wrote:
>> What would be a comprehensive measurement? Should cover all/most
>> relevant areas?
>
> It’s easy to specify a suite of measurements which is too heavy to be
> easily implemented or supported on the network. Also, as you point
> out, many things can be derived from raw data, so don’t necessarily
> require additional specific measurements.
>
>> Payload Size: The size of data being transmitted.
>> Event Rate: The frequency at which payloads are transmitted.
>> Bitrate: The combination of rate and size transferred in a given test.
>> Throughput: The data transfer capability achieved on the test path.
>
> All of that can probably be derived from sufficiently finely-grained
> TCP data. i.e. if you had a PCAP of a TCP flow that constituted the
> measurement, you’d be able to derive all of the above.
>
>> Bandwidth: The data transfer capacity available on the test path.
>
> Presumably the goal of a TCP transaction measurement would be to
> enable this calculation.
>
>> Transfer Efficiency: The ratio of useful payload data to the overhead
>> data.
>
> This is a how-it's-used rather than a property-of-the-network. If
> there are network-inherent overheads, they’re likely to be not
> directly visible to endpoints, only inferable, and might require
> external knowledge of the network. So, I’d put this out-of-scope.
>
>> Round-Trip Time (RTT): The ping delay time to the target server and
>> back.
>> RTT Jitter: The variation in the delay of round-trip time.
>> Latency: The transmission delay time to the target server and back.
>> Latency Jitter: The variation in delay of latency.
>
> RTT is measurable. If Latency is RTT minus processing delay on the
> remote end, I’m not sure it’s really measurable, per se, without the
> remote end being able to accurately clock itself, or an independent
> vantage point adjacent to the remote end. This is the old
[rjmcmahon@fedora iperf2-code]$ src/iperf -c 192.168.1.35 --bounceback
--trip-times --bounceback-period 0 -i 1 -t 4
------------------------------------------------------------
Client connecting to 192.168.1.35, TCP port 5001 with pid 46358 (1/0
flows/load)
Bounceback test (req/reply size = 100 Byte/ 100 Byte) (server hold req=0
usecs & tcp_quickack)
TCP congestion control using cubic
TOS set to 0x0 and nodelay (Nagle off)
TCP window size: 85.0 KByte (default)
Event based writes (pending queue watermark at 16384 bytes)
------------------------------------------------------------
[ 1] local 192.168.1.103%enp4s0 port 60788 connected with 192.168.1.35
port 5001 (prefetch=16384) (bb w/quickack req/reply/hold=100/100/0)
(trip-times) (sock=3) (icwnd/mss/irtt=14/1448/168) (ct=0.23 ms) on
2023-12-07 21:21:31.417 (PST)
[ ID] Interval Transfer Bandwidth BB
cnt=avg/min/max/stdev Rtry Cwnd/RTT RPS(avg)
[ 1] 0.00-1.00 sec 745 KBytes 6.10 Mbits/sec
7631=0.129/0.096/0.637/0.007 ms 0 14K/114 us 7733 rps
[ 1] 0.00-1.00 sec OWD (ms) Cnt=7631 TX=0.068/0.034/0.191/0.003
RX=0.061/0.049/0.118/0.004 Asymmetry=0.009/0.000/0.130/0.004
** reset
[ 1] 1.00-2.00 sec 751 KBytes 6.15 Mbits/sec
7689=0.129/0.092/0.350/0.005 ms 0 14K/115 us 7782 rps
[ 1] 1.00-2.00 sec OWD (ms) Cnt=7689 TX=0.069/0.030/0.288/0.004
RX=0.060/0.052/0.116/0.003 Asymmetry=0.009/0.000/0.227/0.004
** reset
[ 1] 2.00-3.00 sec 748 KBytes 6.13 Mbits/sec
7664=0.129/0.085/0.378/0.004 ms 0 14K/115 us 7751 rps
[ 1] 2.00-3.00 sec OWD (ms) Cnt=7664 TX=0.069/0.025/0.313/0.003
RX=0.060/0.053/0.098/0.002 Asymmetry=0.008/0.000/0.248/0.004
** reset
[ 1] 3.00-4.00 sec 752 KBytes 6.16 Mbits/sec
7698=0.128/0.087/0.322/0.004 ms 0 14K/114 us 7787 rps
[ 1] 3.00-4.00 sec OWD (ms) Cnt=7698 TX=0.068/0.023/0.257/0.003
RX=0.060/0.052/0.091/0.002 Asymmetry=0.009/0.000/0.192/0.004
** reset
** reset
[ 1] 0.00-4.00 sec 2.93 MBytes 6.13 Mbits/sec
30683=0.129/0.085/0.637/0.005 ms 0 14K/408 us 7763 rps
[ 1] 0.00-4.00 sec OWD (ms) Cnt=30683 TX=0.068/0.023/0.313/0.003
RX=0.060/0.049/0.118/0.003 Asymmetry=0.009/0.000/0.248/0.004
[ 1] 0.00-4.00 sec OWD-TX(f)-PDF:
bin(w=100us):cnt(30683)=1:30663,2:17,3:2,4:1
(5.00/95.00/99.7%=1/1/1,Outliers=0,obl/obu=0/0)
[ 1] 0.00-4.00 sec OWD-RX(f)-PDF: bin(w=100us):cnt(30683)=1:30669,2:14
(5.00/95.00/99.7%=1/1/1,Outliers=0,obl/obu=0/0)
[ 1] 0.00-4.00 sec BB8(f)-PDF:
bin(w=100us):cnt(30683)=1:7,2:30663,3:9,4:3,7:1
(5.00/95.00/99.7%=2/2/2,Outliers=13,obl/obu=0/0)
[rjmcmahon@fedora iperf2-code]$ emacs src/Reporter.c
[rjmcmahon@fedora iperf2-code]$ make -j
make all-recursive
make[1]: Entering directory '/home/rjmcmahon/Code/csv/iperf2-code'
Making all in compat
make[2]: Entering directory
'/home/rjmcmahon/Code/csv/iperf2-code/compat'
make[2]: Nothing to be done for 'all'.
make[2]: Leaving directory '/home/rjmcmahon/Code/csv/iperf2-code/compat'
Making all in doc
make[2]: Entering directory '/home/rjmcmahon/Code/csv/iperf2-code/doc'
make[2]: Nothing to be done for 'all'.
make[2]: Leaving directory '/home/rjmcmahon/Code/csv/iperf2-code/doc'
Making all in include
make[2]: Entering directory
'/home/rjmcmahon/Code/csv/iperf2-code/include'
make[2]: Nothing to be done for 'all'.
make[2]: Leaving directory
'/home/rjmcmahon/Code/csv/iperf2-code/include'
Making all in src
make[2]: Entering directory '/home/rjmcmahon/Code/csv/iperf2-code/src'
gcc -DHAVE_CONFIG_H -I. -I.. -I../include -I../include -Wall -O2 -O2
-MT Reporter.o -MD -MP -MF .deps/Reporter.Tpo -c -o Reporter.o
Reporter.c
mv -f .deps/Reporter.Tpo .deps/Reporter.Po
g++ -Wall -O2 -O2 -O2 -pthread -DHAVE_CONFIG_H -o iperf Client.o
Extractor.o isochronous.o Launch.o active_hosts.o Listener.o Locale.o
PerfSocket.o Reporter.o Reports.o ReportOutputs.o Server.o Settings.o
SocketAddr.o gnu_getopt.o gnu_getopt_long.o histogram.o main.o service.o
socket_io.o stdio.o packet_ring.o tcp_window_size.o pdfs.o dscp.o
iperf_formattime.o iperf_multicast_api.o checksums.o
../compat/libcompat.a -lrt
make[2]: Leaving directory '/home/rjmcmahon/Code/csv/iperf2-code/src'
Making all in man
make[2]: Entering directory '/home/rjmcmahon/Code/csv/iperf2-code/man'
make[2]: Nothing to be done for 'all'.
make[2]: Leaving directory '/home/rjmcmahon/Code/csv/iperf2-code/man'
Making all in flows
make[2]: Entering directory '/home/rjmcmahon/Code/csv/iperf2-code/flows'
make[2]: Nothing to be done for 'all'.
make[2]: Leaving directory '/home/rjmcmahon/Code/csv/iperf2-code/flows'
make[2]: Entering directory '/home/rjmcmahon/Code/csv/iperf2-code'
make[2]: Leaving directory '/home/rjmcmahon/Code/csv/iperf2-code'
make[1]: Leaving directory '/home/rjmcmahon/Code/csv/iperf2-code'
[rjmcmahon@fedora iperf2-code]$ src/iperf -c 192.168.1.35 --bounceback
--trip-times --bounceback-period 0 -i 1 -t 4
------------------------------------------------------------
Client connecting to 192.168.1.35, TCP port 5001 with pid 46427 (1/0
flows/load)
Bounceback test (req/reply size = 100 Byte/ 100 Byte) (server hold req=0
usecs & tcp_quickack)
TCP congestion control using cubic
TOS set to 0x0 and nodelay (Nagle off)
TCP window size: 85.0 KByte (default)
Event based writes (pending queue watermark at 16384 bytes)
------------------------------------------------------------
[ 1] local 192.168.1.103%enp4s0 port 37748 connected with 192.168.1.35
port 5001 (prefetch=16384) (bb w/quickack req/reply/hold=100/100/0)
(trip-times) (sock=3) (icwnd/mss/irtt=14/1448/177) (ct=0.23 ms) on
2023-12-07 21:21:46.282 (PST)
[ ID] Interval Transfer Bandwidth BB
cnt=avg/min/max/stdev Rtry Cwnd/RTT RPS(avg)
[ 1] 0.00-1.00 sec 738 KBytes 6.04 Mbits/sec
7552=0.131/0.115/0.642/0.007 ms 0 14K/115 us 7652 rps
[ 1] 0.00-1.00 sec OWD (ms) Cnt=7552 TX=0.070/0.053/0.184/0.002
RX=0.061/0.048/0.113/0.004 Asymmetry=0.009/0.000/0.116/0.004
[ 1] 1.00-2.00 sec 739 KBytes 6.05 Mbits/sec
7568=0.131/0.078/0.362/0.004 ms 0 14K/114 us 7657 rps
[ 1] 1.00-2.00 sec OWD (ms) Cnt=7568 TX=0.070/0.020/0.298/0.003
RX=0.061/0.051/0.097/0.003 Asymmetry=0.009/0.000/0.235/0.004
[ 1] 2.00-3.00 sec 739 KBytes 6.06 Mbits/sec
7571=0.131/0.089/0.279/0.005 ms 0 14K/115 us 7660 rps
[ 1] 2.00-3.00 sec OWD (ms) Cnt=7571 TX=0.070/0.025/0.215/0.003
RX=0.060/0.051/0.181/0.004 Asymmetry=0.010/0.000/0.151/0.004
[ 1] 3.00-4.00 sec 751 KBytes 6.15 Mbits/sec
7693=0.129/0.090/0.284/0.004 ms 0 14K/115 us 7780 rps
[ 1] 3.00-4.00 sec OWD (ms) Cnt=7693 TX=0.070/0.032/0.223/0.004
RX=0.058/0.050/0.091/0.002 Asymmetry=0.013/0.000/0.162/0.004
[ 1] 0.00-4.00 sec 2.90 MBytes 6.07 Mbits/sec
30385=0.130/0.078/0.642/0.005 ms 0 14K/404 us 7687 rps
[ 1] 0.00-4.00 sec OWD (ms) Cnt=30385 TX=0.070/0.020/0.298/0.003
RX=0.060/0.048/0.181/0.004 Asymmetry=0.010/0.000/0.235/0.004
[ 1] 0.00-4.00 sec OWD-TX(f)-PDF:
bin(w=100us):cnt(30385)=1:30366,2:16,3:3
(5.00/95.00/99.7%=1/1/1,Outliers=0,obl/obu=0/0)
[ 1] 0.00-4.00 sec OWD-RX(f)-PDF: bin(w=100us):cnt(30385)=1:30360,2:25
(5.00/95.00/99.7%=1/1/1,Outliers=0,obl/obu=0/0)
[ 1] 0.00-4.00 sec BB8(f)-PDF:
bin(w=100us):cnt(30385)=1:4,2:30366,3:13,4:1,7:1
(5.00/95.00/99.7%=2/2/2,Outliers=15,obl/obu=0/0)
[rjmcmahon@fedora iperf2-code]$ git diff
> one-way-delay measurement problem in different guise, I think.
> Anyway, I think RTT is easy and necessary, and I think latency is
> difficult and probably an anchor not worth attaching to anything we
> want to see done in the near term. Latency jitter likewise.
>
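For the jitter quantities discussed here, one widely used definition is the
smoothed interarrival jitter estimator from RFC 3550 (section 6.4.1),
applied to a series of RTT or delay samples; a minimal sketch:

```python
def rfc3550_jitter(samples_ms):
    """Smoothed jitter per RFC 3550 sec. 6.4.1: for each successive
    pair of delay samples, J += (|D| - J) / 16, where D is the
    difference between consecutive samples."""
    j = 0.0
    for prev, cur in zip(samples_ms, samples_ms[1:]):
        j += (abs(cur - prev) - j) / 16.0
    return j

# A constant delay series yields zero jitter; a single 16 ms swing
# moves the smoothed estimate by 1 ms.
```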
>> Bit Error Rate: The corrupted bits as a percentage of the total
>> transmitted data.
>
> This seems like it can be derived from a PCAP, but doesn’t really
> constitute an independent measurement.
>
>> Packet Loss: The percentage of packets lost that needed to be
>> recovered.
>
> Yep.
>
>> Energy Efficiency: The amount of energy consumed to achieve the test
>> result.
>
> Not measurable.
>
>> Did I overlook something?
>
> Out-of-order delivery is the fourth classical quality criterion.
> There are folks who argue that it doesn’t matter anymore, and others
> who (more compellingly, to my mind) argue that it’s at least as
> relevant as ever.
>
> Thus, for an actual measurement suite:
>
> - A TCP transaction
>
> …from which we can observe:
>
> - Loss
> - RTT (which I’ll just call “Latency” because that’s what people have
> called it in the past)
> - out-of-order delivery
> - Jitter in the above three, if the transaction continues long enough
>
> …and we can calculate:
>
> - Goodput
>
> In addition to these, I think it’s necessary to also associate a
> traceroute (and, if available and reliable, a reverse-path traceroute)
> in order that it be clear what was measured, and a timestamp, and a
> digital signature over the whole thing, so we can know who’s attesting
> to the measurement.
>
> -Bill
>
>
> _______________________________________________
> Nnagain mailing list
> Nnagain@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/nnagain
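The suite quoted above — loss, RTT, out-of-order delivery, jitter, and
derived goodput — amounts to post-processing a packet trace. A toy sketch
over per-packet records (the sequence-number scheme and field layout are
assumptions for illustration, not any particular capture format):

```python
def summarize_flow(packets):
    """packets: non-empty list of (seq, payload_len, t_recv) tuples for
    one direction of a flow, with seq numbered from 0.  Returns loss
    ratio, out-of-order count, and goodput in bits/s."""
    seen = set()
    reordered = 0
    highest = -1
    received_bytes = 0
    for seq, plen, _t in packets:
        if seq in seen:
            continue                 # count duplicate payload only once
        seen.add(seq)
        received_bytes += plen
        if seq < highest:
            reordered += 1           # arrived after a higher-numbered packet
        highest = max(highest, seq)
    sent = highest + 1               # assumes the highest-numbered packet arrived
    loss = 1.0 - len(seen) / sent
    span = packets[-1][2] - packets[0][2]
    goodput_bps = received_bytes * 8 / span if span > 0 else 0.0
    return loss, reordered, goodput_bps
```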
* Re: [Starlink] CFP march 1 - network measurement conference
2023-12-06 20:00 [Starlink] CFP march 1 - network measurement conference Dave Taht
2023-12-06 21:46 ` [Starlink] [NNagain] " Sauli Kiviranta
@ 2024-01-14 16:54 ` Dave Taht
1 sibling, 0 replies; 10+ messages in thread
From: Dave Taht @ 2024-01-14 16:54 UTC (permalink / raw)
To: Dave Taht via Starlink, bloat
I am hoping people are all over this. I have a few thoughts towards a
paper from one of my projects, or perhaps I could just update and
summarize multiple rants from my blog, and get in somehow?
Anyway, if anyone would like an insanely cruel pre-reviewer, an
impossible-to-satisfy test designer, and a wild experimentalist in on
*their* paper... and someone who is occasionally good with a ringing
phrase or two... please contact me privately. I am just terrible with
TeX.
On Wed, Dec 6, 2023 at 3:00 PM Dave Taht <dave.taht@gmail.com> wrote:
>
> This CFP looks pretty good to me: https://tma.ifip.org/2024/call-for-papers/
>
> Because:
>
> ¨To further encourage the results’ faithfulness and avoid publication
> bias, the conference will particularly encourage negative results
> revealed by novel measurement methods or vantage points. All regular
> papers are hence encouraged to discuss the limitations of the
> presented approaches and also mention which experiments did not work.
> Additionally, TMA will also be open to accepting papers that
> exclusively deal with negative results, especially when new
> measurement methods or perspectives offer insight into the limitations
> and challenges of network measurement in practice. Negative results
> will be evaluated based on their impact (e.g. revealed in realistic
> production networks) as well as the novelty of the vantage points
> (e.g. scarce data source) or measurement techniques that revealed
> them."
>
>
> --
> :( My old R&D campus is up for sale: https://tinyurl.com/yurtlab
> Dave Täht CSO, LibreQos
--
40 years of net history, a couple songs:
https://www.youtube.com/watch?v=D9RGX6QFm5E
Dave Täht CSO, LibreQos
* Re: [Starlink] [NNagain] CFP march 1 - network measurement conference
2023-12-07 16:14 [Starlink] [NNagain] " David Fernández
@ 2023-12-07 18:23 ` Ricky Mok
0 siblings, 0 replies; 10+ messages in thread
From: Ricky Mok @ 2023-12-07 18:23 UTC (permalink / raw)
To: starlink
I think these are different but correlated problems.
DASH and MoQ are more about the application layer (segment size, video
buffer, keyframes...) and WebTransport (QUIC/HTTP3...), which is
independent of how Starlink users connect to CDNs and the cloud. Both are
good problems, just depending on your measurement objectives.
It will be interesting to see how noisy wireless links eventually translate
into packet loss or jitter, and how that would differ from 5G or Wi-Fi.
Ricky
On 12/7/23 08:14, David Fernández via Starlink wrote:
> How about media over QUIC vs. DASH latency performance?
> https://github.com/Dash-Industry-Forum/Dash-Industry-Forum.github.io/files/13530147/Media-over-QUIC-Will-Law.pdf
> https://github.com/Dash-Industry-Forum/Dash-Industry-Forum.github.io/files/13531331/MOQ_2023_12.pdf
>
> Then there is the question of video codecs: sending random
> bytes is one way to measure performance, but in the end we want to
> measure the capability of networks to convey meaningful information,
> e.g. a video.
>
> When measuring packet losses, I think it is important to count
> separately the packet losses in buffers from packet errors, at least
> for wireless links that could be experiencing high bit error rates in
> the link.
>
> Then, if ECN is in use, the number of packets marked as CE (Congestion
> Experienced) can be also an interesting metric to collect.
>
> Besides energy efficiency, in terms of Joules consumed per bit
> correctly received (which can perhaps only be estimated at the
> receiver, not accurately measured end-to-end), I think it is also
> important to consider the spectral efficiency (bit/s/Hz) of the
> communication system or network.
>
> Regards,
>
> David
>
> Date: Thu, 7 Dec 2023 12:43:53 +0100
> From: Nitinder Mohan <mohan@in.tum.de>
> To: Ricky Mok <cskpmok@caida.org>, starlink@lists.bufferbloat.net
> Subject: Re: [Starlink] [NNagain] CFP march 1 - network measurement
> conference
> Message-ID: <etPan.6571b008.e9946df.1586b@in.tum.de>
> Content-Type: text/plain; charset="utf-8"
>
> Hi Dave (Ricky, all),
>
> Thanks for sharing the conference call. I am one of the TPC chair of
> the conference and we are really looking forward to cool submissions.
> So please get the idea mill going :)
>
> [Putting my TPC chair hat aside] Application/CDN measurements would be
> cool! Most CDNs would probably map Starlink users to a CDN server based
> on anycast, but I am not sure if the server selection would be close to
> the PoP or to the actual user location. Especially for EU users who
> are mapped to PoPs in other countries, would you get content of the
> PoP location country or of your own? What about offshore places that
> are connecting to GSes via long ISL chains (e.g. islands in south of
> Africa)?
>
> Like many folks on this mailing list can already attest, performing
> real application workload experiments will be key here — performance
> under load is very different from ping/traceroutes based results that
> majority of publications in this space have relied on.
>
> Thanks and Regards
>
> Nitinder Mohan
> Technical University Munich (TUM)
> https://www.nitindermohan.com/
>
> From: Ricky Mok via Starlink <starlink@lists.bufferbloat.net>
> Reply: Ricky Mok <cskpmok@caida.org>
> Date: 7. December 2023 at 03:49:54
> To: starlink@lists.bufferbloat.net <starlink@lists.bufferbloat.net>
> Subject: Re: [Starlink] [NNagain] CFP march 1 - network measurement conference
>
> How about applications? youtube and netflix?
>
> (TPC of this conference this year)
>
> Ricky
>
> On 12/6/23 18:22, Bill Woodcock via Starlink wrote:
>>> On Dec 6, 2023, at 22:46, Sauli Kiviranta via Nnagain <nnagain@lists.bufferbloat.net> wrote:
>>> What would be a comprehensive measurement? Should cover all/most relevant areas?
>> It’s easy to specify a suite of measurements which is too heavy to be easily implemented or supported on the network. Also, as you point out, many things can be derived from raw data, so don’t necessarily require additional specific measurements.
>>
>>> Payload Size: The size of data being transmitted.
>>> Event Rate: The frequency at which payloads are transmitted.
>>> Bitrate: The combination of rate and size transferred in a given test.
>>> Throughput: The data transfer capability achieved on the test path.
>> All of that can probably be derived from sufficiently finely-grained TCP data. i.e. if you had a PCAP of a TCP flow that constituted the measurement, you’d be able to derive all of the above.
>>
>>> Bandwidth: The data transfer capacity available on the test path.
>> Presumably the goal of a TCP transaction measurement would be to enable this calculation.
>>
>>> Transfer Efficiency: The ratio of useful payload data to the overhead data.
>> This is a how-it's-used rather than a property-of-the-network. If there are network-inherent overheads, they’re likely to be not directly visible to endpoints, only inferable, and might require external knowledge of the network. So, I’d put this out-of-scope.
>>
>>> Round-Trip Time (RTT): The ping delay time to the target server and back.
>>> RTT Jitter: The variation in the delay of round-trip time.
>>> Latency: The transmission delay time to the target server and back.
>>> Latency Jitter: The variation in delay of latency.
>> RTT is measurable. If Latency is RTT minus processing delay on the remote end, I’m not sure it’s really measurable, per se, without the remote end being able to accurately clock itself, or an independent vantage point adjacent to the remote end. This is the old one-way-delay measurement problem in different guise, I think. Anyway, I think RTT is easy and necessary, and I think latency is difficult and probably an anchor not worth attaching to anything we want to see done in the near term. Latency jitter likewise.
>>
>>> Bit Error Rate: The corrupted bits as a percentage of the total
>>> transmitted data.
>> This seems like it can be derived from a PCAP, but doesn’t really constitute an independent measurement.
>>
>>> Packet Loss: The percentage of packets lost that needed to be recovered.
>> Yep.
>>
>>> Energy Efficiency: The amount of energy consumed to achieve the test result.
>> Not measurable.
>>
>>> Did I overlook something?
>> Out-of-order delivery is the fourth classical quality criterion. There are folks who argue that it doesn’t matter anymore, and others who (more compellingly, to my mind) argue that it’s at least as relevant as ever.
>>
>> Thus, for an actual measurement suite:
>>
>> - A TCP transaction
>>
>> …from which we can observe:
>>
>> - Loss
>> - RTT (which I’ll just call “Latency” because that’s what people have called it in the past)
>> - out-of-order delivery
>> - Jitter in the above three, if the transaction continues long enough
>>
>> …and we can calculate:
>>
>> - Goodput
>>
>> In addition to these, I think it’s necessary to also associate a traceroute (and, if available and reliable, a reverse-path traceroute) in order that it be clear what was measured, and a timestamp, and a digital signature over the whole thing, so we can know who’s attesting to the measurement.
>>
>> -Bill
>>
> _______________________________________________
> Starlink mailing list
> Starlink@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/starlink
* Re: [Starlink] [NNagain] CFP march 1 - network measurement conference
@ 2023-12-07 16:14 David Fernández
2023-12-07 18:23 ` Ricky Mok
0 siblings, 1 reply; 10+ messages in thread
From: David Fernández @ 2023-12-07 16:14 UTC (permalink / raw)
To: starlink
How about media over QUIC vs. DASH latency performance?
https://github.com/Dash-Industry-Forum/Dash-Industry-Forum.github.io/files/13530147/Media-over-QUIC-Will-Law.pdf
https://github.com/Dash-Industry-Forum/Dash-Industry-Forum.github.io/files/13531331/MOQ_2023_12.pdf
Then there is the question of video codecs: sending random
bytes is one way to measure performance, but in the end we want to
measure the capability of networks to convey meaningful information,
e.g. a video.
When measuring packet losses, I think it is important to count
separately the packet losses in buffers from packet errors, at least
for wireless links that could be experiencing high bit error rates in
the link.
Then, if ECN is in use, the number of packets marked as CE (Congestion
Experienced) can be also an interesting metric to collect.
Besides energy efficiency, in terms of Joules consumed per bit
correctly received (which can perhaps only be estimated at the
receiver, not accurately measured end-to-end), I think it is also
important to consider the spectral efficiency (bit/s/Hz) of the
communication system or network.
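Both figures of merit are straightforward ratios once the underlying
quantities are measured (or estimated); a sketch with purely illustrative
numbers:

```python
def link_efficiencies(bits_delivered, joules, bandwidth_hz, duration_s):
    """Energy efficiency in J per bit correctly received, and spectral
    efficiency in bit/s/Hz.  All inputs would come from external
    instrumentation; the values in the example are illustrative."""
    energy_per_bit = joules / bits_delivered
    spectral_eff = (bits_delivered / duration_s) / bandwidth_hz
    return energy_per_bit, spectral_eff

# e.g. 1 Gbit delivered in 10 s over a 240 MHz channel, drawing 50 J:
e, s = link_efficiencies(1e9, 50.0, 240e6, 10.0)
# 5e-8 J/bit and ~0.42 bit/s/Hz
```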
Regards,
David
Date: Thu, 7 Dec 2023 12:43:53 +0100
From: Nitinder Mohan <mohan@in.tum.de>
To: Ricky Mok <cskpmok@caida.org>, starlink@lists.bufferbloat.net
Subject: Re: [Starlink] [NNagain] CFP march 1 - network measurement
conference
Message-ID: <etPan.6571b008.e9946df.1586b@in.tum.de>
Content-Type: text/plain; charset="utf-8"
Hi Dave (Ricky, all),
Thanks for sharing the conference call. I am one of the TPC chair of
the conference and we are really looking forward to cool submissions.
So please get the idea mill going :)
[Putting my TPC chair hat aside] Application/CDN measurements would be
cool! Most CDNs would probably map Starlink users to a CDN server based
on anycast, but I am not sure if the server selection would be close to
the PoP or to the actual user location. Especially for EU users who
are mapped to PoPs in other countries, would you get content of the
PoP location country or of your own? What about offshore places that
are connecting to GSes via long ISL chains (e.g. islands in south of
Africa)?
Like many folks on this mailing list can already attest, performing
real application workload experiments will be key here — performance
under load is very different from ping/traceroute-based results that
the majority of publications in this space have relied on.
Thanks and Regards
Nitinder Mohan
Technical University Munich (TUM)
https://www.nitindermohan.com/
From: Ricky Mok via Starlink <starlink@lists.bufferbloat.net>
Reply: Ricky Mok <cskpmok@caida.org>
Date: 7. December 2023 at 03:49:54
To: starlink@lists.bufferbloat.net <starlink@lists.bufferbloat.net>
Subject: Re: [Starlink] [NNagain] CFP march 1 - network measurement conference
How about applications? youtube and netflix?
(TPC of this conference this year)
Ricky
On 12/6/23 18:22, Bill Woodcock via Starlink wrote:
>
>> On Dec 6, 2023, at 22:46, Sauli Kiviranta via Nnagain <nnagain@lists.bufferbloat.net> wrote:
>> What would be a comprehensive measurement? Should cover all/most relevant areas?
> It’s easy to specify a suite of measurements which is too heavy to be easily implemented or supported on the network. Also, as you point out, many things can be derived from raw data, so don’t necessarily require additional specific measurements.
>
>> Payload Size: The size of data being transmitted.
>> Event Rate: The frequency at which payloads are transmitted.
>> Bitrate: The combination of rate and size transferred in a given test.
>> Throughput: The data transfer capability achieved on the test path.
> All of that can probably be derived from sufficiently finely-grained TCP data. i.e. if you had a PCAP of a TCP flow that constituted the measurement, you’d be able to derive all of the above.
>
>> Bandwidth: The data transfer capacity available on the test path.
> Presumably the goal of a TCP transaction measurement would be to enable this calculation.
>
>> Transfer Efficiency: The ratio of useful payload data to the overhead data.
> This is a how-it's-used rather than a property-of-the-network. If there are network-inherent overheads, they’re likely to be not directly visible to endpoints, only inferable, and might require external knowledge of the network. So, I’d put this out-of-scope.
>
>> Round-Trip Time (RTT): The ping delay time to the target server and back.
>> RTT Jitter: The variation in the delay of round-trip time.
>> Latency: The transmission delay time to the target server and back.
>> Latency Jitter: The variation in delay of latency.
> RTT is measurable. If Latency is RTT minus processing delay on the remote end, I’m not sure it’s really measurable, per se, without the remote end being able to accurately clock itself, or an independent vantage point adjacent to the remote end. This is the old one-way-delay measurement problem in different guise, I think. Anyway, I think RTT is easy and necessary, and I think latency is difficult and probably an anchor not worth attaching to anything we want to see done in the near term. Latency jitter likewise.
>
>> Bit Error Rate: The corrupted bits as a percentage of the total
>> transmitted data.
> This seems like it can be derived from a PCAP, but doesn’t really constitute an independent measurement.
>
>> Packet Loss: The percentage of packets lost that needed to be recovered.
> Yep.
>
>> Energy Efficiency: The amount of energy consumed to achieve the test result.
> Not measurable.
>
>> Did I overlook something?
> Out-of-order delivery is the fourth classical quality criterion. There are folks who argue that it doesn’t matter anymore, and others who (more compellingly, to my mind) argue that it’s at least as relevant as ever.
>
> Thus, for an actual measurement suite:
>
> - A TCP transaction
>
> …from which we can observe:
>
> - Loss
> - RTT (which I’ll just call “Latency” because that’s what people have called it in the past)
> - out-of-order delivery
> - Jitter in the above three, if the transaction continues long enough
>
> …and we can calculate:
>
> - Goodput
>
> In addition to these, I think it’s necessary to also associate a traceroute (and, if available and reliable, a reverse-path traceroute) in order that it be clear what was measured, and a timestamp, and a digital signature over the whole thing, so we can know who’s attesting to the measurement.
>
> -Bill
>
end of thread, other threads:[~2024-01-14 16:55 UTC | newest]
Thread overview: 10+ messages
2023-12-06 20:00 [Starlink] CFP march 1 - network measurement conference Dave Taht
2023-12-06 21:46 ` [Starlink] [NNagain] " Sauli Kiviranta
2023-12-07 2:22 ` Bill Woodcock
2023-12-07 2:49 ` Ricky Mok
2023-12-07 11:43 ` Nitinder Mohan
2023-12-07 20:05 ` Sauli Kiviranta
2023-12-08 6:03 ` rjmcmahon
2024-01-14 16:54 ` [Starlink] " Dave Taht
2023-12-07 16:14 [Starlink] [NNagain] " David Fernández
2023-12-07 18:23 ` Ricky Mok
This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox