[Starlink] [NNagain] CFP march 1 - network measurement conference
Ricky Mok
cskpmok at caida.org
Thu Dec 7 13:23:09 EST 2023
I think these are different but correlated problems.
DASH and MoQ are more about the application layer (segment size, video
buffer, keyframes...) and WebTransport (QUIC/HTTP3...), which is
independent of how Starlink users connect to CDNs and the cloud. Both
are good problems; it just depends on your measurement objectives.
It will be interesting to see how noisy wireless links eventually
translate into packet loss or jitter, and how that would differ from
5G or Wi-Fi.
Ricky
On 12/7/23 08:14, David Fernández via Starlink wrote:
> How about media over QUIC vs. DASH latency performance?
> https://github.com/Dash-Industry-Forum/Dash-Industry-Forum.github.io/files/13530147/Media-over-QUIC-Will-Law.pdf
> https://github.com/Dash-Industry-Forum/Dash-Industry-Forum.github.io/files/13531331/MOQ_2023_12.pdf
>
> Then there is the question of video codecs: sending random bytes is
> one way to measure performance, but in the end we want to measure the
> capability of networks to convey meaningful information, e.g. a video.
>
> When measuring packet losses, I think it is important to count
> losses in buffers separately from packet errors, at least for
> wireless links that could be experiencing high bit error rates on
> the link.
>
> Then, if ECN is in use, the number of packets marked CE (Congestion
> Experienced) can also be an interesting metric to collect.
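Counting CE marks is straightforward once you have the per-packet IP TOS/Traffic Class bytes; a minimal sketch (the sample values below are illustrative, and a real tool would pull the bytes from a capture):

```python
# Sketch: tally ECN Congestion Experienced (CE) marks from per-packet
# IP TOS/Traffic Class bytes. The ECN field is the low 2 bits of the
# byte; 0b11 means CE (RFC 3168).

def count_ce(tos_bytes):
    """Return (ce_count, ect_count) over an iterable of TOS byte values."""
    ce = ect = 0
    for tos in tos_bytes:
        ecn = tos & 0b11
        if ecn == 0b11:            # CE: congestion experienced
            ce += 1
        elif ecn in (0b01, 0b10):  # ECT(1)/ECT(0): ECN-capable transport
            ect += 1
    return ce, ect

# Illustrative TOS values, not real capture data:
sample = [0x00, 0x02, 0x03, 0x01, 0x03]
ce, ect = count_ce(sample)
print(f"CE-marked: {ce}, ECN-capable (not marked): {ect}")
```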
>
> Besides energy efficiency, in terms of Joules consumed per bit
> correctly received (which can perhaps only be estimated at the
> receiver rather than accurately measured end-to-end), I think it is
> also important to consider the spectral efficiency (bit/s/Hz) of the
> communication system or network.
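Both efficiency figures are simple ratios; a sketch with purely hypothetical link numbers:

```python
# Sketch of the two efficiency metrics David mentions, with purely
# illustrative numbers (not real Starlink figures).

def spectral_efficiency(throughput_bps, channel_bw_hz):
    """Bits per second per hertz of occupied spectrum."""
    return throughput_bps / channel_bw_hz

def energy_per_bit(power_watts, goodput_bps):
    """Joules consumed per correctly received bit (receiver-side estimate)."""
    return power_watts / goodput_bps

# Hypothetical link: 100 Mbit/s delivered over a 240 MHz channel,
# with a terminal drawing 75 W.
print(spectral_efficiency(100e6, 240e6))  # ~0.42 bit/s/Hz
print(energy_per_bit(75, 100e6))          # 7.5e-07 J/bit
```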
>
> Regards,
>
> David
>
> Date: Thu, 7 Dec 2023 12:43:53 +0100
> From: Nitinder Mohan <mohan at in.tum.de>
> To: Ricky Mok <cskpmok at caida.org>, starlink at lists.bufferbloat.net
> Subject: Re: [Starlink] [NNagain] CFP march 1 - network measurement
> conference
> Message-ID: <etPan.6571b008.e9946df.1586b at in.tum.de>
> Content-Type: text/plain; charset="utf-8"
>
> Hi Dave (Ricky, all),
>
> Thanks for sharing the conference call. I am one of the TPC chairs of
> the conference, and we are really looking forward to cool submissions.
> So please get the idea mill going :)
>
> [Putting my TPC chair hat aside] Application/CDN measurements would be
> cool! Most CDNs would probably map a Starlink user to a CDN server
> based on anycast, but I am not sure whether the server selection would
> be close to the PoP or to the actual user location. Especially for EU
> users who are mapped to PoPs in other countries, would you get content
> for the PoP's country or for your own? What about offshore places that
> connect to GSes via long ISL chains (e.g. islands south of Africa)?
>
> As many folks on this mailing list can already attest, performing
> real application workload experiments will be key here; performance
> under load is very different from the ping/traceroute-based results
> that the majority of publications in this space have relied on.
>
> Thanks and Regards
>
> Nitinder Mohan
> Technical University Munich (TUM)
> https://www.nitindermohan.com/
>
> From: Ricky Mok via Starlink <starlink at lists.bufferbloat.net>
> Reply: Ricky Mok <cskpmok at caida.org>
> Date: 7. December 2023 at 03:49:54
> To: starlink at lists.bufferbloat.net <starlink at lists.bufferbloat.net>
> Subject: Re: [Starlink] [NNagain] CFP march 1 - network measurement conference
>
> How about applications? YouTube and Netflix?
>
> (TPC of this conference this year)
>
> Ricky
>
> On 12/6/23 18:22, Bill Woodcock via Starlink wrote:
>>> On Dec 6, 2023, at 22:46, Sauli Kiviranta via Nnagain <nnagain at lists.bufferbloat.net> wrote:
>>> What would be a comprehensive measurement? Should cover all/most relevant areas?
>> It’s easy to specify a suite of measurements which is too heavy to be easily implemented or supported on the network. Also, as you point out, many things can be derived from raw data, so don’t necessarily require additional specific measurements.
>>
>>> Payload Size: The size of data being transmitted.
>>> Event Rate: The frequency at which payloads are transmitted.
>>> Bitrate: The combination of rate and size transferred in a given test.
>>> Throughput: The data transfer capability achieved on the test path.
>> All of that can probably be derived from sufficiently finely-grained TCP data. i.e. if you had a PCAP of a TCP flow that constituted the measurement, you’d be able to derive all of the above.
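As a sketch of that derivation, assuming we already have (timestamp, payload length) records for a single flow (the trace below is made up, and a real tool would extract the records from a PCAP):

```python
# Sketch: derive payload size, event rate, and throughput from
# (timestamp_s, payload_bytes) records, as one might pull from a
# PCAP of a single TCP flow.

def flow_metrics(records):
    if len(records) < 2:
        raise ValueError("need at least two packets")
    duration = records[-1][0] - records[0][0]
    total_bytes = sum(size for _, size in records)
    return {
        "mean_payload_bytes": total_bytes / len(records),
        "event_rate_pps": len(records) / duration,
        "throughput_bps": total_bytes * 8 / duration,
    }

# Hypothetical 4-packet flow spanning 3 seconds:
trace = [(0.0, 1500), (1.0, 1500), (2.0, 1500), (3.0, 500)]
m = flow_metrics(trace)
print(m["throughput_bps"])  # 5000 bytes * 8 / 3 s, roughly 13333 bit/s
```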
>>
>>> Bandwidth: The data transfer capacity available on the test path.
>> Presumably the goal of a TCP transaction measurement would be to enable this calculation.
>>
>>> Transfer Efficiency: The ratio of useful payload data to the overhead data.
>> This is a how-it's-used rather than a property-of-the-network. If there are network-inherent overheads, they're likely not directly visible to endpoints, only inferable, and might require external knowledge of the network. So, I'd put this out-of-scope.
>>
>>> Round-Trip Time (RTT): The ping delay time to the target server and back.
>>> RTT Jitter: The variation in the delay of round-trip time.
>>> Latency: The transmission delay time to the target server and back.
>>> Latency Jitter: The variation in delay of latency.
>> RTT is measurable. If Latency is RTT minus processing delay on the remote end, I’m not sure it’s really measurable, per se, without the remote end being able to accurately clock itself, or an independent vantage point adjacent to the remote end. This is the old one-way-delay measurement problem in different guise, I think. Anyway, I think RTT is easy and necessary, and I think latency is difficult and probably an anchor not worth attaching to anything we want to see done in the near term. Latency jitter likewise.
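For the RTT jitter that is measurable, two common definitions can be sketched as follows (the sample RTTs are illustrative; the second function follows the RFC 3550 smoothed interarrival estimator, applied here to RTT samples rather than one-way timestamps):

```python
# Sketch: jitter over a series of RTT samples, in two common forms:
# the mean absolute difference of consecutive samples, and the
# RFC 3550 smoothed estimator (J += (|D| - J) / 16).

def mean_abs_jitter(rtts):
    diffs = [abs(b - a) for a, b in zip(rtts, rtts[1:])]
    return sum(diffs) / len(diffs)

def rfc3550_jitter(rtts):
    j = 0.0
    for a, b in zip(rtts, rtts[1:]):
        j += (abs(b - a) - j) / 16.0
    return j

# Hypothetical RTT samples in milliseconds:
samples = [40.0, 42.0, 39.0, 45.0, 41.0]
print(mean_abs_jitter(samples))  # (2 + 3 + 6 + 4) / 4 = 3.75 ms
```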
>>
>>> Bit Error Rate: The corrupted bits as a percentage of the total
>>> transmitted data.
>> This seems like it can be derived from a PCAP, but doesn’t really constitute an independent measurement.
>>
>>> Packet Loss: The percentage of packets lost that needed to be recovered.
>> Yep.
>>
>>> Energy Efficiency: The amount of energy consumed to achieve the test result.
>> Not measurable.
>>
>>> Did I overlook something?
>> Out-of-order delivery is the fourth classical quality criterion. There are folks who argue that it doesn’t matter anymore, and others who (more compellingly, to my mind) argue that it’s at least as relevant as ever.
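A minimal way to count out-of-order delivery from observed sequence numbers (the arrival order below is hypothetical; the counters could be TCP sequence numbers or a probe's own):

```python
# Sketch: count out-of-order deliveries from per-packet sequence
# numbers. A packet counts as reordered if its sequence number is
# lower than the highest number seen so far.

def count_reordered(seqs):
    reordered = 0
    highest = None
    for s in seqs:
        if highest is not None and s < highest:
            reordered += 1
        else:
            highest = s
    return reordered

# Hypothetical arrival order:
print(count_reordered([1, 2, 4, 3, 5, 7, 6]))  # 2
```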
>>
>> Thus, for an actual measurement suite:
>>
>> - A TCP transaction
>>
>> …from which we can observe:
>>
>> - Loss
>> - RTT (which I’ll just call “Latency” because that’s what people have called it in the past)
>> - out-of-order delivery
>> - Jitter in the above three, if the transaction continues long enough
>>
>> …and we can calculate:
>>
>> - Goodput
>>
>> In addition to these, I think it’s necessary to also associate a traceroute (and, if available and reliable, a reverse-path traceroute) in order that it be clear what was measured, and a timestamp, and a digital signature over the whole thing, so we can know who’s attesting to the measurement.
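A sketch of such an attested record: HMAC-SHA256 stands in for the digital signature here (an assumption — a real deployment would want a public-key scheme such as Ed25519 so third parties can verify who attested to the measurement), and all measurement values are made up:

```python
# Sketch: attach a timestamp and an integrity tag to a measurement
# record. HMAC-SHA256 is used as a stand-in for a real digital
# signature; with a public-key scheme, anyone holding the public key
# could verify the attestation.
import hashlib
import hmac
import json
import time

def sign_measurement(record, key):
    # Fill in a timestamp if the record lacks one, then tag the
    # canonical (sorted-key) JSON serialization.
    record = dict(record, timestamp=record.get("timestamp", int(time.time())))
    payload = json.dumps(record, sort_keys=True).encode()
    record["sig"] = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return record

# All values hypothetical:
measurement = {
    "loss_pct": 0.4,
    "rtt_ms": 43.1,
    "reordered": 2,
    "traceroute": ["192.0.2.1", "198.51.100.7"],
    "timestamp": 1701960189,
}
signed = sign_measurement(measurement, key=b"shared-secret")
print(len(signed["sig"]))  # 64 hex characters
```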
>>
>> -Bill
>>
> _______________________________________________
> Starlink mailing list
> Starlink at lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/starlink