* [Bloat] $106 achieved and flent-farm status
@ 2019-02-05 3:37 Dave Taht
2019-02-05 9:37 ` Pete Heist
From: Dave Taht @ 2019-02-05 3:37 UTC (permalink / raw)
To: bloat
Thank you mikael and jake, matt and matthew and richard! (and jon, and
dev for trying)
The Linode bill is now paid. For the record, peak "earnings" on the
Patreon contribution page were $212 in May 2016; contributions fell
below $100 in May 2018. I forget when, in between those times, I
scaled back the number of servers.

In cerowrt's glory days we were burning ~$2k a month on the OpenWrt
build farm, which I had to quit doing 6 months after the Google grant
ended. Prior to that we had donated servers from ISC, which had to get
out of the free ISP business. I kind of miss running on bare metal,
with tons of disk space. Cloud compute is pretty cheap; cloud storage
isn't.
Oy! I remember the headaches and hassles when the number of OpenWrt
buildslaves dropped to 1 for 6 weeks during a critical phase...
Thankfully that farm looks healthy at the moment:
http://phase1.builds.openwrt.org/buildslaves
Costs on the "flent-farm" continue to drop. Our earliest linode
servers cost $20/month and our two latest ones (nanoservers) cost
$5/month. For "science!" I've been generally unwilling to
update/change these much, the most critical keeping the same kernel
versions they had for the last couple years. I note that linode at
least a year+ ago, started defaulting to a kernel with fq_codel
enabled *by default*, bql just works, irtt, flent, netperf all "just
install" from apt, etc, etc.
It's currently a matter of a few minutes to get a basic flent server
running in the cloud, and sometimes I wish we had more worldwide
coverage: Australia, AWS, France, China... It turns out ARM-based
servers are quite cheap nowadays. Yep, here I am, 6 bucks to the good
and trying to figure out another server on which to spend it. :)
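
Once those packages are installed, the server side of a flent target
boils down to just two daemons (a sketch, using the default ports,
which would need opening in any firewall):

$ netserver          # netperf's control daemon, TCP port 12865 by default
$ irtt server &      # irtt responder, UDP port 2112 by default
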
I have not been maintaining the flent network all that well, but your
support inspired me to go fix two of them:

flent-singapore.taht.net: the newest of these servers
de.taht.net: the second newest

Both now support BBR, cubic, and reno, and have irtt running with
support for 1ms intervals. (Thanks, Pete, for showing me how on
flent-london.bufferbloat.net.)
Linux flent-singapore.taht.net 4.18.16-x86_64-linode118 #1 SMP PREEMPT
Mon Oct 29 15:38:25 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux
Linux de.taht.net 4.15.0-45-generic #48-Ubuntu SMP Tue Jan 29 16:28:13
UTC 2019 x86_64 x86_64 x86_64 GNU/Linux
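
On a typical distro kernel, making the extra congestion controls
available and lowering irtt's minimum interval looks roughly like this
(a sketch, not the exact commands used here; cubic and reno are
usually built in already, and the -i flag is irtt's server-side
minimum-interval option as I recall it):

$ sudo modprobe tcp_bbr                                 # make BBR available now
$ echo tcp_bbr | sudo tee /etc/modules-load.d/bbr.conf  # and again after reboots
$ sysctl net.ipv4.tcp_available_congestion_control      # should now list reno cubic bbr
$ irtt server -i 1ms   # accept client intervals down to 1ms (default minimum: 10ms)

flent can then select any of those congestion controls per test
without changing the system-wide default.
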
In the TekLibre sub-basement are a few dozen tiny OpenWrt/Armbian
boxes, and two honkin' but ancient 12-core Xeon servers ("spaceheater"
and "ceres") with 10TB of storage. I keep a variety of VMs on those in
complicated network emulation setups, an indexed copy of all the
source code in the world, my OpenWrt build system, etc.
and here's a puzzler for you! Both boxes are running ntp yet one box
was still *30* seconds off.
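
A quick way to see how far off a given box actually is and what it is
syncing against, depending on which time daemon it runs (a sketch;
pick whichever line applies):

$ timedatectl        # shows "System clock synchronized: yes/no" and the active NTP service
$ ntpq -pn           # classic ntpd: peer list with offsets (in ms) and jitter
$ chronyc tracking   # chrony: current offset from the selected reference
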
irtt 1ms between california and germany has a mindboggling amount of
loss... in america... ahh... tools...
                                                  Loss%   Snt  Last   Avg  Best  Wrst StDev
 5. be-3651-cr02.sunnyvale.ca.ibone.comcast.net   71.9%    33  11.4  11.6   9.4  16.0   2.3
 6. be-11083-pe02.529bryant.ca.ibone.comcast.net  56.2%    33  13.1  11.3   9.5  14.8   1.6
Min Mean Median Max Stddev
--- ---- ------ --- ------
RTT 166.2ms 175.4ms 174.3ms 232.7ms 4.85ms
send delay 27.9s 27.9s 27.9s 28s 4.07ms
receive delay -27.7s -27.7s -27.7s -27.7s 1.98ms
IPDV (jitter) 434ns 1.46ms 979µs 37.64ms 1.78ms
send IPDV 408ns 1.36ms 954µs 34.92ms 1.51ms
receive IPDV 2ns 378µs 72.2µs 32.98ms 1.08ms
send call time 3.55µs 14.9µs 1.61ms 16.7µs
timer error 0s 19.6µs 5.52ms 68.3µs
server proc. time 710ns 7.67µs 10.33ms 91.6µs
duration: 1m1s (wait 698.1ms)
packets sent/received: 58852/44855 (23.78% loss)
server packets received: 45579/58852 (22.55%/1.59% loss up/down)
bytes sent/received: 3531120/2691300
send/receive rate: 470.8 Kbps / 358.8 Kbps
packet length: 60 bytes
timer stats: 1148/60000 (1.91%) missed, 1.96% error
Restarted ntp (the ~28 second send delay and the matching negative
receive delay above are just clock offset between the two endpoints
leaking into the one-way numbers), and:
Min Mean Median Max Stddev
--- ---- ------ --- ------
RTT 166.3ms 175.9ms 175.1ms 233ms 4.78ms
send delay 83.97ms 92.57ms 91.9ms 137.4ms 4.1ms
receive delay 80.86ms 83.32ms 82.86ms 137.4ms 1.89ms
IPDV (jitter) 316ns 1.44ms 977µs 63.68ms 1.75ms
send IPDV 41ns 1.35ms 933µs 35.49ms 1.5ms
receive IPDV 2ns 345µs 83.8µs 55.06ms 971µs
send call time 3.55µs 16.4µs 1.07ms 11.8µs
timer error 1ns 23.4µs 5.02ms 76.8µs
server proc. time 716ns 8.22µs 13.13ms 117µs
duration: 1m1s (wait 699.1ms)
packets sent/received: 58779/45154 (23.18% loss)
server packets received: 45912/58779 (21.89%/1.65% loss up/down)
bytes sent/received: 3526740/2709240
send/receive rate: 470.2 Kbps / 361.2 Kbps
packet length: 60 bytes
timer stats: 1221/60000 (2.04%) missed, 2.34% error
Versus Singapore:
Min Mean Median Max Stddev
--- ---- ------ --- ------
RTT 173.6ms 179.2ms 178.7ms 194.7ms 2.5ms
send delay 113ms 118.5ms 118ms 131.9ms 2.32ms
receive delay 59.7ms 60.68ms 60.45ms 74.75ms 980µs
IPDV (jitter) 203ns 1.24ms 978µs 18.8ms 1.01ms
send IPDV 20ns 1.19ms 983µs 14ms 881µs
receive IPDV 0s 195µs 20.3µs 14.11ms 539µs
send call time 3.67µs 14.7µs 1.75ms 13.2µs
timer error 1ns 21.4µs 7.82ms 93.9µs
server proc. time 680ns 3.27µs 435µs 9.35µs
duration: 1m1s (wait 584ms)
packets sent/received: 58709/55848 (4.87% loss)
server packets received: 55848/58709 (4.87%/0.00% loss up/down)
bytes sent/received: 3522540/3350880
send/receive rate: 469.7 Kbps / 446.8 Kbps
packet length: 60 bytes
timer stats: 1290/59999 (2.15%) missed, 2.14% error
--
Dave Täht
CTO, TekLibre, LLC
http://www.teklibre.com
Tel: 1-831-205-9740
* Re: [Bloat] $106 achieved and flent-farm status
2019-02-05 3:37 [Bloat] $106 achieved and flent-farm status Dave Taht
@ 2019-02-05 9:37 ` Pete Heist
From: Pete Heist @ 2019-02-05 9:37 UTC (permalink / raw)
To: Dave Taht; +Cc: bloat
> On Feb 5, 2019, at 4:37 AM, Dave Taht <dave.taht@gmail.com> wrote:
>
> Thank you mikael and jake, matt and matthew and richard! (and jon, and
> dev for trying)
+1
> and here's a puzzler for you! Both boxes are running ntp yet one box
> was still *30* seconds off.
I haven’t explored this fully, but I find it works best when NTP is
configured the same way across servers: either all running
systemd-timesyncd, or ntpd, or chronyd, and all pointed at the same
server pool. When that’s the case, I often see the clocks agree to
within a few milliseconds.
For me, it looks like clocks are currently about 10ms off to London
and 75ms off to Singapore. I’m using systemd-timesyncd with the
default Debian servers: "0.debian.pool.ntp.org 1.debian.pool.ntp.org
2.debian.pool.ntp.org 3.debian.pool.ntp.org".
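
Roughly what that looks like with systemd-timesyncd (a sketch; the
pool entries are just the Debian defaults quoted above, and
overwriting timesyncd.conf this way assumes there is nothing else in
it):

$ sudo tee /etc/systemd/timesyncd.conf <<'EOF'
[Time]
NTP=0.debian.pool.ntp.org 1.debian.pool.ntp.org 2.debian.pool.ntp.org 3.debian.pool.ntp.org
EOF
$ sudo systemctl restart systemd-timesyncd
$ timedatectl | grep -i synchronized   # sanity check: "System clock synchronized: yes"
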
> irtt 1ms between california and germany has a mindboggling amount of
> loss... in america... ahh... tools…
Curiously, I see less loss at 1ms to Singapore than to London;
London’s loss comes from the downstream, and I wasn’t seeing it four
days ago. The upstream loss is from my NLOS uplink, since I see it
even going straight to my next hop, which also has an irtt server.
Next hop router:
packets sent/received: 4999/4864 (2.70% loss)
server packets received: 4864/4999 (2.70%/0.00% loss up/down)
London:
packets sent/received: 5000/4124 (17.52% loss)
server packets received: 4817/5000 (3.66%/14.39% loss up/down)
Singapore:
packets sent/received: 5000/4830 (3.40% loss)
server packets received: 4830/5000 (3.40%/0.00% loss up/down)
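
Those are from short runs along the lines of the following; the
interval and duration here are guessed from the packet counts rather
than quoted from the actual runs:

$ irtt client -q -i 1ms -d 5s flent-london.bufferbloat.net
$ irtt client -q -i 1ms -d 5s flent-singapore.taht.net
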
Also curiously, RTT to london has increased for some reason, mainly with increased receive delay, but this could easily be from our peering provider, which we hope to switch soon for other reasons.
Feb. 1, 2019
------------
$ irtt client -q -i 10ms -d 1s flent-london.bufferbloat.net
[Connecting] connecting to flent-london.bufferbloat.net
[176.58.107.8:2112] [Connected] connection established
[176.58.107.8:2112] [WaitForPackets] waiting 135ms for final packets
Min Mean Median Max Stddev
--- ---- ------ --- ------
RTT 31.08ms 37.09ms 37.04ms 44.98ms 2.75ms
send delay 16.02ms 21.55ms 21.39ms 29.51ms 2.67ms
receive delay 14.3ms 15.54ms 15.61ms 17.2ms 470µs
IPDV (jitter) 49.5µs 3.27ms 2.45ms 10.85ms 2.88ms
send IPDV 63.4µs 3.16ms 2.22ms 10.79ms 2.84ms
receive IPDV 903ns 400µs 203µs 2.82ms 506µs
send call time 21.3µs 79.1µs 146µs 28.9µs
timer error 1.84µs 721µs 2.22ms 501µs
server proc. time 1.29µs 31.4µs 2.23ms 222µs
duration: 1.13s (wait 135ms)
packets sent/received: 100/100 (0.00% loss)
server packets received: 100/100 (0.00%/0.00% loss up/down)
bytes sent/received: 6000/6000
send/receive rate: 48.5 Kbps / 48.8 Kbps
packet length: 60 bytes
timer stats: 0/100 (0.00%) missed, 7.21% error
Feb. 5, 2019
------------
$ irtt client -q -i 10ms -d 1s flent-london.bufferbloat.net
[Connecting] connecting to flent-london.bufferbloat.net
[176.58.107.8:2112] [Connected] connection established
[176.58.107.8:2112] [WaitForPackets] waiting 176.8ms for final packets
Min Mean Median Max Stddev
--- ---- ------ --- ------
RTT 53.38ms 54.59ms 54.38ms 58.93ms 1.05ms
send delay 16.81ms 18.07ms 17.87ms 22.17ms 998µs
receive delay 35.72ms 36.52ms 36.5ms 38.07ms 333µs
IPDV (jitter) 47.1µs 960µs 370µs 4.69ms 1.03ms
send IPDV 13.4µs 974µs 646µs 4.69ms 1.04ms
receive IPDV 5.05µs 379µs 287µs 1.51ms 322µs
send call time 32.5µs 35.5µs 61µs 3.65µs
timer error 90ns 15.1µs 190µs 25.5µs
server proc. time 7.47µs 13.4µs 191µs 20.6µs
duration: 1.17s (wait 176.8ms)
packets sent/received: 99/77 (22.22% loss)
server packets received: 99/99 (0.00%/22.22% loss up/down)
bytes sent/received: 5940/4620
send/receive rate: 48.0 Kbps / 37.3 Kbps
packet length: 60 bytes
timer stats: 1/100 (1.00%) missed, 0.15% error