* [NNagain] CFP march 1 - network measurement conference
@ 2023-12-06 20:00 Dave Taht
2023-12-06 21:46 ` Sauli Kiviranta
0 siblings, 1 reply; 6+ messages in thread
From: Dave Taht @ 2023-12-06 20:00 UTC (permalink / raw)
To: Dave Taht via Starlink, bloat,
Network Neutrality is back! Let's make the technical
aspects heard this time!
This CFP looks pretty good to me: https://tma.ifip.org/2024/call-for-papers/
Because:
"To further encourage the results’ faithfulness and avoid publication
bias, the conference will particularly encourage negative results
revealed by novel measurement methods or vantage points. All regular
papers are hence encouraged to discuss the limitations of the
presented approaches and also mention which experiments did not work.
Additionally, TMA will also be open to accepting papers that
exclusively deal with negative results, especially when new
measurement methods or perspectives offer insight into the limitations
and challenges of network measurement in practice. Negative results
will be evaluated based on their impact (e.g. revealed in realistic
production networks) as well as the novelty of the vantage points
(e.g. scarce data source) or measurement techniques that revealed
them."
--
:( My old R&D campus is up for sale: https://tinyurl.com/yurtlab
Dave Täht CSO, LibreQos
* Re: [NNagain] CFP march 1 - network measurement conference
2023-12-06 20:00 [NNagain] CFP march 1 - network measurement conference Dave Taht
@ 2023-12-06 21:46 ` Sauli Kiviranta
2023-12-07 1:54 ` Jack Haverty
2023-12-07 2:22 ` Bill Woodcock
0 siblings, 2 replies; 6+ messages in thread
From: Sauli Kiviranta @ 2023-12-06 21:46 UTC (permalink / raw)
To: Network Neutrality is back! Let's make the technical
aspects heard this time!
Cc: Dave Taht via Starlink, bloat, Dave Taht
Thank you for sharing! This looks very promising!
What would be a comprehensive measurement? Should cover all/most relevant areas?
Payload Size: The size of data being transmitted.
Event Rate: The frequency at which payloads are transmitted.
Bitrate: The combination of rate and size transferred in a given test.
Bandwidth: The data transfer capacity available on the test path.
Throughput: The data transfer capability achieved on the test path.
Transfer Efficiency: The ratio of useful payload data to the overhead data.
Round-Trip Time (RTT): The ping delay time to the target server and back.
RTT Jitter: The variation in the delay of round-trip time.
Latency: The transmission delay time to the target server and back.
Latency Jitter: The variation in delay of latency.
Bit Error Rate: The corrupted bits as a percentage of the total
transmitted data.
Packet Loss: The percentage of packets lost that needed to be recovered.
Energy Efficiency: The amount of energy consumed to achieve the test result.
Did I overlook something? Too many dimensions to cover? Obviously some
of those are derived, so not part of the whole set as such.
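As a rough sketch of that point (field names here are hypothetical and
assume raw per-event records of send time, receive time, payload bytes
and on-the-wire bytes have already been captured), several of the
metrics above fall out of the same samples:

# Illustrative only: deriving some of the metrics above from raw samples.
from statistics import mean, stdev

def derived_metrics(samples, duration_s, sent_packets, lost_packets):
    payload = sum(s["payload_bytes"] for s in samples)
    total = sum(s["wire_bytes"] for s in samples)
    rtts = [s["recv_time"] - s["send_time"] for s in samples]   # seconds
    return {
        "event_rate_hz": len(samples) / duration_s,
        "bitrate_bps": 8 * total / duration_s,       # rate x size over the run
        "goodput_bps": 8 * payload / duration_s,     # useful payload only
        "transfer_efficiency": payload / total,      # payload vs. overhead
        "rtt_ms": 1000 * mean(rtts),
        "rtt_jitter_ms": 1000 * stdev(rtts) if len(rtts) > 1 else 0.0,
        "packet_loss_pct": 100 * lost_packets / sent_packets,
    }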
Maybe then next would be to have different profiles where any of those
parameters may vary over the test run. e.g. profiles that model
congested base stations for mobile data. Different use case specific
payload profiles e.g. gop for video transfer?
Best regards,
Sauli
On 12/6/23, Dave Taht via Nnagain <nnagain@lists.bufferbloat.net> wrote:
> This CFP looks pretty good to me: https://tma.ifip.org/2024/call-for-papers/
>
> Because:
>
> "To further encourage the results’ faithfulness and avoid publication
> bias, the conference will particularly encourage negative results
> revealed by novel measurement methods or vantage points. All regular
> papers are hence encouraged to discuss the limitations of the
> presented approaches and also mention which experiments did not work.
> Additionally, TMA will also be open to accepting papers that
> exclusively deal with negative results, especially when new
> measurement methods or perspectives offer insight into the limitations
> and challenges of network measurement in practice. Negative results
> will be evaluated based on their impact (e.g. revealed in realistic
> production networks) as well as the novelty of the vantage points
> (e.g. scarce data source) or measurement techniques that revealed
> them."
>
>
> --
> :( My old R&D campus is up for sale: https://tinyurl.com/yurtlab
> Dave Täht CSO, LibreQos
> _______________________________________________
> Nnagain mailing list
> Nnagain@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/nnagain
>
* Re: [NNagain] CFP march 1 - network measurement conference
2023-12-06 21:46 ` Sauli Kiviranta
@ 2023-12-07 1:54 ` Jack Haverty
2023-12-07 2:22 ` Bill Woodcock
1 sibling, 0 replies; 6+ messages in thread
From: Jack Haverty @ 2023-12-07 1:54 UTC (permalink / raw)
To: nnagain
IMHO, characterizing performance warrants more measurements, e.g.:
- amount (bytes, datagrams) presumed lost and re-transmitted by the sender
- amount (bytes, datagrams) discarded at the receiver because they were
already received
- amount (bytes, datagrams) discarded at the receiver because they
arrived too late to be useful
With such data, it would be possible to measure things like "useful
throughput", i.e., the data successfully delivered from source to
destination which was actually useful for the associated user's application.
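A minimal sketch of that arithmetic (counter names are hypothetical;
nothing here comes from an existing tool):

# "Useful throughput": bytes the application could actually use, per unit time.
def useful_throughput_bps(bytes_received, bytes_duplicate, bytes_too_late,
                          interval_s):
    useful = bytes_received - bytes_duplicate - bytes_too_late
    return 8 * useful / interval_s

# e.g. 10 MB received, 0.4 MB duplicates, 0.1 MB arrived too late, over 10 s:
# useful_throughput_bps(10_000_000, 400_000, 100_000, 10)  ->  7.6 Mbit/s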
Some useful measurements were included in RFC 1213, e.g., in the "TCP"
and "UDP" sections, such as number of retransmissions. These
measurements likely require instrumentation in the sending and receiving
computers, where the TCP and similar algorithms operate.
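On Linux, counters of this kind are exposed today via /proc/net/snmp
(the same style of TCP statistics, including RetransSegs); a small
sketch, Linux-only and illustrative rather than an RFC 1213
implementation:

# Read the kernel's TCP counters (e.g. InSegs, OutSegs, RetransSegs).
def tcp_counters(path="/proc/net/snmp"):
    with open(path) as f:
        rows = [line.split() for line in f if line.startswith("Tcp:")]
    names, values = rows[0][1:], rows[1][1:]    # header row, then value row
    return dict(zip(names, (int(v) for v in values)))

# c = tcp_counters()
# retransmit_ratio = c["RetransSegs"] / max(c["OutSegs"], 1)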
Measurements may also depend strongly on the particular computer type
and operating system. Measurements obtained from various "probe"
sources and destinations, such as a "test host", provide only part of a
"comprehensive measurement". A complete characterization of the
performance achieved would include measurement data from the users'
host computers attached to the Internet, while they are doing whatever
the user does.
I don't recall if the SNMP MIB definitions were ever extended to include
metrics appropriate for uses such as VOIP, video conferencing, remote
operation, et al. Those are the kinds of uses which are most sensitive
to latency and variance in latency and throughput, and probably most
interesting to measure.
Jack Haverty
On 12/6/23 13:46, Sauli Kiviranta via Nnagain wrote:
> Thank you for sharing! This looks very promising!
>
> What would be a comprehensive measurement? Should cover all/most relevant areas?
>
> Payload Size: The size of data being transmitted.
> Event Rate: The frequency at which payloads are transmitted.
> Bitrate: The combination of rate and size transferred in a given test.
> Bandwidth: The data transfer capacity available on the test path.
> Throughput: The data transfer capability achieved on the test path.
> Transfer Efficiency: The ratio of useful payload data to the overhead data.
> Round-Trip Time (RTT): The ping delay time to the target server and back.
> RTT Jitter: The variation in the delay of round-trip time.
> Latency: The transmission delay time to the target server and back.
> Latency Jitter: The variation in delay of latency.
> Bit Error Rate: The corrupted bits as a percentage of the total
> transmitted data.
> Packet Loss: The percentage of packets lost that needed to be recovered.
> Energy Efficiency: The amount of energy consumed to achieve the test result.
>
> Did I overlook something? Too many dimensions to cover? Obviously some
> of those are derived, so not part of the whole set as such.
>
> Maybe then next would be to have different profiles where any of those
> parameters may vary over the test run. e.g. profiles that model
> congested base stations for mobile data. Different use case specific
> payload profiles e.g. gop for video transfer?
>
> Best regards,
> Sauli
>
>
> On 12/6/23, Dave Taht via Nnagain <nnagain@lists.bufferbloat.net> wrote:
>> This CFP looks pretty good to me: https://tma.ifip.org/2024/call-for-papers/
>>
>> Because:
>>
>> "To further encourage the results’ faithfulness and avoid publication
>> bias, the conference will particularly encourage negative results
>> revealed by novel measurement methods or vantage points. All regular
>> papers are hence encouraged to discuss the limitations of the
>> presented approaches and also mention which experiments did not work.
>> Additionally, TMA will also be open to accepting papers that
>> exclusively deal with negative results, especially when new
>> measurement methods or perspectives offer insight into the limitations
>> and challenges of network measurement in practice. Negative results
>> will be evaluated based on their impact (e.g. revealed in realistic
>> production networks) as well as the novelty of the vantage points
>> (e.g. scarce data source) or measurement techniques that revealed
>> them."
>>
>>
>> --
>> :( My old R&D campus is up for sale: https://tinyurl.com/yurtlab
>> Dave Täht CSO, LibreQos
>> _______________________________________________
>> Nnagain mailing list
>> Nnagain@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/nnagain
>>
> _______________________________________________
> Nnagain mailing list
> Nnagain@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/nnagain
* Re: [NNagain] CFP march 1 - network measurement conference
2023-12-06 21:46 ` Sauli Kiviranta
2023-12-07 1:54 ` Jack Haverty
@ 2023-12-07 2:22 ` Bill Woodcock
2023-12-07 14:16 ` [NNagain] [Bloat] " Kathleen Nichols
2023-12-08 6:03 ` [NNagain] " rjmcmahon
1 sibling, 2 replies; 6+ messages in thread
From: Bill Woodcock @ 2023-12-07 2:22 UTC (permalink / raw)
To: Network Neutrality is back! Let's make the technical
aspects heard this time!
Cc: Sauli Kiviranta, Dave Taht via Starlink, bloat
> On Dec 6, 2023, at 22:46, Sauli Kiviranta via Nnagain <nnagain@lists.bufferbloat.net> wrote:
> What would be a comprehensive measurement? Should cover all/most relevant areas?
It’s easy to specify a suite of measurements which is too heavy to be easily implemented or supported on the network. Also, as you point out, many things can be derived from raw data, so don’t necessarily require additional specific measurements.
> Payload Size: The size of data being transmitted.
> Event Rate: The frequency at which payloads are transmitted.
> Bitrate: The combination of rate and size transferred in a given test.
> Throughput: The data transfer capability achieved on the test path.
All of that can probably be derived from sufficiently finely-grained TCP data. i.e. if you had a PCAP of a TCP flow that constituted the measurement, you’d be able to derive all of the above.
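For instance, a sketch along those lines (assuming a capture file of
the measurement flow is available; scapy is used here only as one
convenient reader, and the filename is a placeholder):

# Derive payload size, event rate, bitrate and goodput from a PCAP of one flow.
from scapy.all import rdpcap, TCP

pkts = [p for p in rdpcap("flow.pcap") if p.haslayer(TCP)]
t0, t1 = float(pkts[0].time), float(pkts[-1].time)
duration = (t1 - t0) or 1e-9                    # guard a single-packet capture

payload_bytes = sum(len(p[TCP].payload) for p in pkts)   # payload size
event_rate = len(pkts) / duration                        # events per second
bitrate = 8 * sum(len(p) for p in pkts) / duration       # bits/s on the wire
goodput = 8 * payload_bytes / duration                   # bits/s of payload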
> Bandwidth: The data transfer capacity available on the test path.
Presumably the goal of a TCP transaction measurement would be to enable this calculation.
> Transfer Efficiency: The ratio of useful payload data to the overhead data.
This is a how-it’s-used rather than a property-of-the-network. If there are network-inherent overheads, they’re likely to be not directly visible to endpoints, only inferable, and might require external knowledge of the network. So, I’d put this out-of-scope.
> Round-Trip Time (RTT): The ping delay time to the target server and back.
> RTT Jitter: The variation in the delay of round-trip time.
> Latency: The transmission delay time to the target server and back.
> Latency Jitter: The variation in delay of latency.
RTT is measurable. If Latency is RTT minus processing delay on the remote end, I’m not sure it’s really measurable, per se, without the remote end being able to accurately clock itself, or an independent vantage point adjacent to the remote end. This is the old one-way-delay measurement problem in different guise, I think. Anyway, I think RTT is easy and necessary, and I think latency is difficult and probably an anchor not worth attaching to anything we want to see done in the near term. Latency jitter likewise.
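A tiny sketch of why: the one-way subtraction itself is trivial, but
any clock offset between the two ends lands directly in the result, so
it is only meaningful with synchronized clocks (PTP/GPS) or a stated
error bound:

def one_way_delay_ms(tx_timestamp_s, rx_timestamp_s, clock_sync_error_s):
    # Only meaningful if sender and receiver clocks are synchronized;
    # report the sync uncertainty alongside the value rather than hiding it.
    owd_ms = 1000 * (rx_timestamp_s - tx_timestamp_s)
    return owd_ms, 1000 * clock_sync_error_s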
> Bit Error Rate: The corrupted bits as a percentage of the total
> transmitted data.
This seems like it can be derived from a PCAP, but doesn’t really constitute an independent measurement.
> Packet Loss: The percentage of packets lost that needed to be recovered.
Yep.
> Energy Efficiency: The amount of energy consumed to achieve the test result.
Not measurable.
> Did I overlook something?
Out-of-order delivery is the fourth classical quality criterion. There are folks who argue that it doesn’t matter anymore, and others who (more compellingly, to my mind) argue that it’s at least as relevant as ever.
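One common way to count it (a sketch, using the simple convention that
a packet is "reordered" if a higher sequence number has already
arrived):

def count_reordered(seq_numbers):
    reordered, highest = 0, -1
    for seq in seq_numbers:
        if seq < highest:
            reordered += 1       # arrived after a later-numbered packet
        else:
            highest = seq
    return reordered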
Thus, for an actual measurement suite:
- A TCP transaction
…from which we can observe:
- Loss
- RTT (which I’ll just call “Latency” because that’s what people have called it in the past)
- out-of-order delivery
- Jitter in the above three, if the transaction continues long enough
…and we can calculate:
- Goodput
In addition to these, I think it’s necessary to also associate a traceroute (and, if available and reliable, a reverse-path traceroute) in order that it be clear what was measured, and a timestamp, and a digital signature over the whole thing, so we can know who’s attesting to the measurement.
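One possible shape for such an attested record, as a sketch (the field
names and the Ed25519 choice are assumptions, not a proposal from this
thread):

import json, time
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def signed_measurement(metrics, traceroute_hops, key):
    record = {
        "metrics": metrics,              # loss, RTT, reordering, jitter, goodput
        "forward_traceroute": traceroute_hops,
        "timestamp_utc": time.time(),
    }
    blob = json.dumps(record, sort_keys=True).encode()
    return {"record": record, "signature": key.sign(blob).hex()}

# key = Ed25519PrivateKey.generate()
# signed_measurement({"rtt_ms": 23.4, "loss_pct": 0.1}, ["192.0.2.1"], key)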
-Bill
* Re: [NNagain] [Bloat] CFP march 1 - network measurement conference
2023-12-07 2:22 ` Bill Woodcock
@ 2023-12-07 14:16 ` Kathleen Nichols
2023-12-08 6:03 ` [NNagain] " rjmcmahon
1 sibling, 0 replies; 6+ messages in thread
From: Kathleen Nichols @ 2023-12-07 14:16 UTC (permalink / raw)
To: Bill Woodcock; +Cc: bloat, nnagain
Can I give an email "thumbs up" to Bill Woodcock's email?
That information is certainly there in TCP. I don't know how much of
QUIC is in the clear.
On 12/6/23 6:22 PM, Bill Woodcock via Bloat wrote:
>
>
>> On Dec 6, 2023, at 22:46, Sauli Kiviranta via Nnagain
>> <nnagain@lists.bufferbloat.net> wrote: What would be a
>> comprehensive measurement? Should cover all/most relevant areas?
>
> It’s easy to specify a suite of measurements which is too heavy to be
> easily implemented or supported on the network. Also, as you point
> out, many things can be derived from raw data, so don’t necessarily
> require additional specific measurements.
>
>> Payload Size: The size of data being transmitted. Event Rate: The
>> frequency at which payloads are transmitted. Bitrate: The
>> combination of rate and size transferred in a given test.
>> Throughput: The data transfer capability achieved on the test
>> path.
>
> All of that can probably be derived from sufficiently finely-grained
> TCP data. i.e. if you had a PCAP of a TCP flow that constituted the
> measurement, you’d be able to derive all of the above.
>
>> Bandwidth: The data transfer capacity available on the test path.
>
> Presumably the goal of a TCP transaction measurement would be to
> enable this calculation.
>
>> Transfer Efficiency: The ratio of useful payload data to the
>> overhead data.
>
> This is a how-it’s-used rather than a property-of-the-network. If
> there are network-inherent overheads, they’re likely to be not
> directly visible to endpoints, only inferable, and might require
> external knowledge of the network. So, I’d put this out-of-scope.
>
>> Round-Trip Time (RTT): The ping delay time to the target server and
>> back. RTT Jitter: The variation in the delay of round-trip time.
>> Latency: The transmission delay time to the target server and
>> back. Latency Jitter: The variation in delay of latency.
>
> RTT is measurable. If Latency is RTT minus processing delay on the
> remote end, I’m not sure it’s really measurable, per se, without the
> remote end being able to accurately clock itself, or an independent
> vantage point adjacent to the remote end. This is the old
> one-way-delay measurement problem in different guise, I think.
> Anyway, I think RTT is easy and necessary, and I think latency is
> difficult and probably an anchor not worth attaching to anything we
> want to see done in the near term. Latency jitter likewise.
>
>> Bit Error Rate: The corrupted bits as a percentage of the total
>> transmitted data.
>
> This seems like it can be derived from a PCAP, but doesn’t really
> constitute an independent measurement.
>
>> Packet Loss: The percentage of packets lost that needed to be
>> recovered.
>
> Yep.
>
>> Energy Efficiency: The amount of energy consumed to achieve the
>> test result.
>
> Not measurable.
>
>> Did I overlook something?
>
> Out-of-order delivery is the fourth classical quality criterion.
> There are folks who argue that it doesn’t matter anymore, and others
> who (more compellingly, to my mind) argue that it’s at least as
> relevant as ever.
>
> Thus, for an actual measurement suite:
>
> - A TCP transaction
>
> …from which we can observe:
>
> - Loss - RTT (which I’ll just call “Latency” because that’s what
> people have called it in the past) - out-of-order delivery - Jitter
> in the above three, if the transaction continues long enough
>
> …and we can calculate:
>
> - Goodput
>
> In addition to these, I think it’s necessary to also associate a
> traceroute (and, if available and reliable, a reverse-path
> traceroute) in order that it be clear what was measured, and a
> timestamp, and a digital signature over the whole thing, so we can
> know who’s attesting to the measurement.
>
> -Bill
>
>
> _______________________________________________ Bloat mailing list
> Bloat@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/bloat
* Re: [NNagain] CFP march 1 - network measurement conference
2023-12-07 2:22 ` Bill Woodcock
2023-12-07 14:16 ` [NNagain] [Bloat] " Kathleen Nichols
@ 2023-12-08 6:03 ` rjmcmahon
1 sibling, 0 replies; 6+ messages in thread
From: rjmcmahon @ 2023-12-08 6:03 UTC (permalink / raw)
To: Network Neutrality is back! Let's make the technical
aspects heard this time!
Cc: Bill Woodcock, Dave Taht via Starlink, bloat
iperf 2 supports OWD (one-way delay) measurements in multiple forms.
A Raspberry Pi 5 has a real-time clock, hardware PTP, and GPIO PPS. The
retail cost of a Pi 5 with a GPS atomic clock and an active fan is less
than $150.
[rjmcmahon@fedora iperf2-code]$ src/iperf -c 192.168.1.35 --bounceback
--trip-times --bounceback-period 0 -i 1 -t 4
------------------------------------------------------------
Client connecting to 192.168.1.35, TCP port 5001 with pid 48142 (1/0
flows/load)
Bounceback test (req/reply size = 100 Byte/ 100 Byte) (server hold req=0
usecs & tcp_quickack)
TCP congestion control using cubic
TOS set to 0x0 and nodelay (Nagle off)
TCP window size: 85.0 KByte (default)
Event based writes (pending queue watermark at 16384 bytes)
------------------------------------------------------------
[ 1] local 192.168.1.103%enp4s0 port 50558 connected with 192.168.1.35
port 5001 (prefetch=16384) (bb w/quickack req/reply/hold=100/100/0)
(trip-times) (sock=3) (icwnd/mss/irtt=14/1448/541) (ct=0.59 ms) on
2023-12-07 22:01:39.240 (PST)
[ ID] Interval Transfer Bandwidth BB
cnt=avg/min/max/stdev Rtry Cwnd/RTT RPS(avg)
[ 1] 0.00-1.00 sec 739 KBytes 6.05 Mbits/sec
7566=0.130/0.099/0.627/0.007 ms 0 14K/115 us 7666 rps
[ 1] 0.00-1.00 sec OWD (ms) Cnt=7566 TX=0.072/0.038/0.163/0.002
RX=0.058/0.047/0.156/0.004 Asymmetry=0.015/0.001/0.103/0.004
[ 1] 1.00-2.00 sec 745 KBytes 6.10 Mbits/sec
7630=0.130/0.082/0.422/0.005 ms 0 14K/114 us 7722 rps
[ 1] 1.00-2.00 sec OWD (ms) Cnt=7630 TX=0.073/0.027/0.364/0.004
RX=0.057/0.048/0.097/0.003 Asymmetry=0.016/0.000/0.306/0.005
[ 1] 2.00-3.00 sec 749 KBytes 6.14 Mbits/sec
7671=0.129/0.085/0.252/0.004 ms 0 14K/113 us 7756 rps
[ 1] 2.00-3.00 sec OWD (ms) Cnt=7671 TX=0.073/0.031/0.193/0.003
RX=0.056/0.047/0.102/0.003 Asymmetry=0.017/0.000/0.134/0.004
[ 1] 3.00-4.00 sec 737 KBytes 6.04 Mbits/sec
7546=0.131/0.085/0.290/0.004 ms 0 14K/115 us 7629 rps
[ 1] 3.00-4.00 sec OWD (ms) Cnt=7546 TX=0.073/0.030/0.231/0.003
RX=0.058/0.047/0.105/0.003 Asymmetry=0.015/0.000/0.172/0.004
[ 1] 0.00-4.00 sec 2.90 MBytes 6.08 Mbits/sec
30414=0.130/0.082/0.627/0.005 ms 0 14K/376 us 7693 rps
[ 1] 0.00-4.00 sec OWD (ms) Cnt=30414 TX=0.073/0.027/0.364/0.003
RX=0.057/0.047/0.156/0.004 Asymmetry=0.016/0.000/0.306/0.004
[ 1] 0.00-4.00 sec OWD-TX(f)-PDF:
bin(w=100us):cnt(30414)=1:30393,2:19,3:1,4:1
(5.00/95.00/99.7%=1/1/1,Outliers=0,obl/obu=0/0)
[ 1] 0.00-4.00 sec OWD-RX(f)-PDF: bin(w=100us):cnt(30414)=1:30400,2:14
(5.00/95.00/99.7%=1/1/1,Outliers=0,obl/obu=0/0)
[ 1] 0.00-4.00 sec BB8(f)-PDF:
bin(w=100us):cnt(30414)=1:6,2:30392,3:14,5:1,7:1
(5.00/95.00/99.7%=2/2/2,Outliers=16,obl/obu=0/0)
Bob
>> On Dec 6, 2023, at 22:46, Sauli Kiviranta via Nnagain
>> <nnagain@lists.bufferbloat.net> wrote:
>> What would be a comprehensive measurement? Should cover all/most
>> relevant areas?
>
> It’s easy to specify a suite of measurements which is too heavy to be
> easily implemented or supported on the network. Also, as you point
> out, many things can be derived from raw data, so don’t necessarily
> require additional specific measurements.
>
>> Payload Size: The size of data being transmitted.
>> Event Rate: The frequency at which payloads are transmitted.
>> Bitrate: The combination of rate and size transferred in a given test.
>> Throughput: The data transfer capability achieved on the test path.
>
> All of that can probably be derived from sufficiently finely-grained
> TCP data. i.e. if you had a PCAP of a TCP flow that constituted the
> measurement, you’d be able to derive all of the above.
>
>> Bandwidth: The data transfer capacity available on the test path.
>
> Presumably the goal of a TCP transaction measurement would be to
> enable this calculation.
>
>> Transfer Efficiency: The ratio of useful payload data to the overhead
>> data.
>
> This is a how-it’s-used rather than a property-of-the-network. If
> there are network-inherent overheads, they’re likely to be not
> directly visible to endpoints, only inferable, and might require
> external knowledge of the network. So, I’d put this out-of-scope.
>
>> Round-Trip Time (RTT): The ping delay time to the target server and
>> back.
>> RTT Jitter: The variation in the delay of round-trip time.
>> Latency: The transmission delay time to the target server and back.
>> Latency Jitter: The variation in delay of latency.
>
> RTT is measurable. If Latency is RTT minus processing delay on the
> remote end, I’m not sure it’s really measurable, per se, without the
> remote end being able to accurately clock itself, or an independent
> vantage point adjacent to the remote end. This is the old
> one-way-delay measurement problem in different guise, I think.
> Anyway, I think RTT is easy and necessary, and I think latency is
> difficult and probably an anchor not worth attaching to anything we
> want to see done in the near term. Latency jitter likewise.
>
>> Bit Error Rate: The corrupted bits as a percentage of the total
>> transmitted data.
>
> This seems like it can be derived from a PCAP, but doesn’t really
> constitute an independent measurement.
>
>> Packet Loss: The percentage of packets lost that needed to be
>> recovered.
>
> Yep.
>
>> Energy Efficiency: The amount of energy consumed to achieve the test
>> result.
>
> Not measurable.
>
>> Did I overlook something?
>
> Out-of-order delivery is the fourth classical quality criterion.
> There are folks who argue that it doesn’t matter anymore, and others
> who (more compellingly, to my mind) argue that it’s at least as
> relevant as ever.
>
> Thus, for an actual measurement suite:
>
> - A TCP transaction
>
> …from which we can observe:
>
> - Loss
> - RTT (which I’ll just call “Latency” because that’s what people have
> called it in the past)
> - out-of-order delivery
> - Jitter in the above three, if the transaction continues long enough
>
> …and we can calculate:
>
> - Goodput
>
> In addition to these, I think it’s necessary to also associate a
> traceroute (and, if available and reliable, a reverse-path traceroute)
> in order that it be clear what was measured, and a timestamp, and a
> digital signature over the whole thing, so we can know who’s attesting
> to the measurement.
>
> -Bill
>
>
> _______________________________________________
> Nnagain mailing list
> Nnagain@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/nnagain
end of thread, other threads:[~2023-12-08 6:03 UTC | newest]
Thread overview: 6+ messages
2023-12-06 20:00 [NNagain] CFP march 1 - network measurement conference Dave Taht
2023-12-06 21:46 ` Sauli Kiviranta
2023-12-07 1:54 ` Jack Haverty
2023-12-07 2:22 ` Bill Woodcock
2023-12-07 14:16 ` [NNagain] [Bloat] " Kathleen Nichols
2023-12-08 6:03 ` [NNagain] " rjmcmahon