Starlink has bufferbloat. Bad.
* Re: [Starlink] RFC: bufferbloat observability project (Dave Taht)
       [not found] <mailman.7.1678636801.9000.starlink@lists.bufferbloat.net>
@ 2023-03-12 18:51 ` David P. Reed
  2023-03-12 18:56   ` David P. Reed
       [not found]   ` <CAA93jw40sExcWj5t1HUHZ7pGCSP7b18quY3aBx4MCfyggvamgw@mail.gmail.com>
  0 siblings, 2 replies; 4+ messages in thread
From: David P. Reed @ 2023-03-12 18:51 UTC (permalink / raw)
  To: starlink; +Cc: starlink



Regarding unbounded queues
On Sunday, March 12, 2023 12:00pm, Dave Taht <dave.taht@gmail.com> said:

> Also it increasingly bothers me to see unbounded queues in so many new
> language libraries.


I disagree somewhat. Unbounded queueing is perfectly fine in a programming language like Haskell, where there are no inherent semantics about timing - a queue is an ordered list with append, and it's a GREAT way to formulate many algorithms that process items in order.
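To make the timing-free case concrete, here is a minimal sketch (Python rather than Haskell, purely for brevity; the names are made up) of an unbounded FIFO used exactly as an ordered list with append, where nothing in the semantics depends on when an item is consumed:

```python
from collections import deque

def process_in_order(items):
    """Unbounded FIFO used as an ordered list with append: fine when
    the input is finite and nothing depends on *when* items are consumed."""
    q = deque()
    for item in items:                 # producer: append in arrival order
        q.append(item)                 # queue may grow without bound; harmless here
    out = []
    while q:                           # consumer: drain in the same order
        out.append(q.popleft() * 2)    # any order-preserving transform
    return out
```

The queue here is just a data structure for expressing "process these in order"; no real-time obligation is ever attached to it.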
 
Where the problem with queues arises is in finite (bounded) real-time programming systems, which include network protocol execution machines.
 
It's weird to me that people seem to think that languages intended for data-transformation algorithms, parsers, and the like are appropriate for programming network switches, TCP/IP stacks, etc. It has always seemed weird beyond belief. I mean, yeah, Go has queues and goroutines, but those aren't real-time-appropriate components.
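For contrast, the real-time-appropriate shape is a bounded queue with an explicit overflow policy. A minimal sketch (in Python for brevity; `offer` is a hypothetical helper name, not a standard API):

```python
import queue

def offer(q, item):
    """Bounded enqueue with a drop-tail policy: in a real-time system,
    refusing (or discarding) excess work beats queueing it forever."""
    try:
        q.put_nowait(item)   # raises queue.Full instead of blocking
        return True
    except queue.Full:
        return False         # caller can count drops, back off, etc.

bounded = queue.Queue(maxsize=2)                 # illustrative capacity
accepted = [offer(bounded, i) for i in range(4)]  # last two offers are refused
```

The point is that the overflow behavior is a stated part of the design, not an accident discovered when memory runs out.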
 
What may be the better thing to say is that it increasingly bothers you that NO ONE seems to be willing to create a high-level programming abstraction for highly concurrent interacting distributed machines.
 
There actually are three commercial programming languages for describing highly concurrent machines (roughly at the level of C++ in abstraction, with the last maybe at the level of Haskell):
1. Verilog
2. VHDL
3. BlueSpec
 
For each one, there is a large community of programmers proficient in them. You might also consider Erlang as a candidate, but I think its "queues" are not what you want to see.
 
Why doesn't the IETF bother to delegate a team to create such an expressive programming language? I'd suggest that starting with Verilog might be a good idea.
 
A caveat about my point: I write Verilog moderately well, and find it quite expressive for modeling networking systems in my mind. I also write Haskell quite well, and since BlueSpec draws on Haskell's model of computation I find it easy to read, but I've not written much BlueSpec.
 
To me, those who write networking code in C or C++ are stuck in the past when protocols were documented by bit-layouts of packets and hand-waving English "standards" without any way to verify correctness. We need to stop worshipping those archaic RFCs as golden tablets handed down from gods.
 
Who am I to criticize the academic networking gods, though?



* Re: [Starlink] RFC: bufferbloat observability project (Dave Taht)
  2023-03-12 18:51 ` [Starlink] RFC: bufferbloat observability project (Dave Taht) David P. Reed
@ 2023-03-12 18:56   ` David P. Reed
  2023-03-12 21:10     ` Sauli Kiviranta
       [not found]   ` <CAA93jw40sExcWj5t1HUHZ7pGCSP7b18quY3aBx4MCfyggvamgw@mail.gmail.com>
  1 sibling, 1 reply; 4+ messages in thread
From: David P. Reed @ 2023-03-12 18:56 UTC (permalink / raw)
  To: starlink



I should have added this: I am aware of a full TCP stack implemented in Verilog. (In fact, my son built it, and it is in production use on Wall St.)
 
On Sunday, March 12, 2023 2:51pm, "David P. Reed" <dpreed@deepplum.com> said:



[quoted message trimmed; identical to the message above]



* [Starlink] Fwd: async circuits 30+ years later
       [not found]     ` <CAA93jw7fKqGUFUDE0z7dGrmgnDknpZgYo-bDtiOg8mh8Lkk2_A@mail.gmail.com>
@ 2023-03-12 20:17       ` Dave Taht
  0 siblings, 0 replies; 4+ messages in thread
From: Dave Taht @ 2023-03-12 20:17 UTC (permalink / raw)
  To: Dave Taht via Starlink

I often think, I should never have played with this, I should never
have learned how to think like this about hardware, I am nearly alone
in the world, and completely unable to cope with modern hardware
design tools because of this:

https://authors.library.caltech.edu/43698/1/25YearsAgo.pdf

I am happy to see there is still ongoing research, like what I pointed to
in my previous message. I thought (2002?) that the Amulet asynchronous
ARM processor was going to sweep the world for wireless-related circuits
because of its low self-noise...

Anyway, the work-shedding problem and the unbounded-queue problem are
distinctly different problems; thank you for helping me get the two
clear in my head with respect to language design!


-- 
Come Heckle Mar 6-9 at: https://www.understandinglatency.com/
Dave Täht CEO, TekLibre, LLC


* Re: [Starlink] RFC: bufferbloat observability project (Dave Taht)
  2023-03-12 18:56   ` David P. Reed
@ 2023-03-12 21:10     ` Sauli Kiviranta
  0 siblings, 0 replies; 4+ messages in thread
From: Sauli Kiviranta @ 2023-03-12 21:10 UTC (permalink / raw)
  To: starlink

David, you make a valid point.

I have a corner case in mind that you maybe did not intend; forgive me
if this is beyond the scope of your response to Dave.

Unbounded queueing is perfectly fine when two conditions are met: an
object in the queue is assumed to be valid forever (it never expires),
and consumption is always greater than or equal to production of items
in the queue. Then delays cannot accumulate, so we never need
infinitely large buffers or infinite time to ultimately empty the
queue. Order, to me, is just a vertical dependency between objects
(continuity), and timing is a horizontal dependency (referential
integrity); both are valid. So even in the first case you outlined, I
think we are still dealing with a finite and bounded scenario, just of
the vertical type.

Obviously, neither infinite space nor infinite time exists in real-world systems.

I am nitpicking a bit, but there is a real-world example of this
assumption being built in, and it is the serious design flaw of TCP.
We do not have infinity; at best we have the relative infinity that I
think you had in mind with the Haskell example. In my opinion this
premise should never appear in a design: "we assume our objects have
infinite lifetime validity, and that consumption is always greater
than production, or else we just increase our buffers so that things
average out over infinite time". The moment someone pronounced
"guaranteed delivery", it was game over for TCP in dealing with
real-time-like systems: a step toward infinity.

Once you draw a boundary, either in time (a constraint after which an
object spoils) or in space (selectively discarding objects,
effectively reducing the rate), you avoid infinite growth of the
queue. I think this is what bothers Dave deep inside: this reality is
not being considered enough.
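A sketch of the time-bounded variant, under the assumption that each queued object carries its enqueue timestamp (names and numbers are purely illustrative):

```python
from collections import deque

def drain_fresh(q, ttl, now):
    """Time-bounded queue: each entry is (enqueue_time, item); entries
    older than ttl have spoiled and are dropped, so delay cannot accumulate."""
    fresh = []
    while q:
        ts, item = q.popleft()
        if now - ts <= ttl:
            fresh.append(item)
        # else: stale item discarded rather than delaying those behind it
    return fresh

q = deque([(0.0, "a"), (5.0, "b"), (9.0, "c")])  # hypothetical timestamps
survivors = drain_fresh(q, ttl=3.0, now=10.0)    # only the youngest item is fresh
```

Spoiling old items trades completeness for bounded latency, which is exactly the trade a real-time consumer wants.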

We need some form of queue management if we cannot have a feedback
loop from the consumption rate back to the production source, or if
production cannot attach TTL information for the processing stage.
There is no escape, as far as I am concerned. If it is ever possible
for consumption to fall below the production rate while objects have
effectively infinite lifetimes, we will be in trouble sooner or later.
Bufferbloat.

Even queue management is slightly wrong, as it only treats symptoms
instead of tooling out the root cause: the ability to signal the
consumption rate back to production, so the use case can decide what
to do on its side (e.g. a video transmission use case can reduce its
bitrate), or alternatively giving the use case a way to declare the
time validity of its data. If neither of those is tooled in, we have
no option but to start throwing objects in the trash at our
convenience when buffers overflow.
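A toy sketch of the feedback-loop alternative (all numbers are hypothetical): the consumer signals back whenever its queue crosses a high-water mark, and the producer halves its rate, so the standing queue stays bounded instead of growing:

```python
def simulate(ticks, consume_rate, cap, high):
    """Toy rate-feedback loop: each tick the producer enqueues `rate`
    items (tail-dropped at capacity `cap`) and the consumer dequeues up
    to `consume_rate`; when occupancy crosses the `high` water mark the
    consumer signals back and the producer halves its rate."""
    rate, occupancy, history = 8, 0, []               # initial rate is illustrative
    for _ in range(ticks):
        occupancy = min(cap, occupancy + rate)        # produce
        occupancy = max(0, occupancy - consume_rate)  # consume
        if occupancy > high:                          # feedback: "slow down"
            rate = max(1, rate // 2)
        history.append(occupancy)
    return history

trend = simulate(6, consume_rate=4, cap=20, high=6)  # occupancy rises, then drains
```

This is the same shape as a video sender cutting bitrate on congestion signals: the queue is still there, but the loop keeps it from becoming a standing one.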

This is a fundamental issue, and should be treated as such.

If you think about any kind of control system with some form of
parallelism, e.g. multiple sensors as in a self-driving car, you need
queues wherever those signals converge down the chain for decision
making (continuity in time for a given signal, plus parallel
referential integrity in time). Otherwise you end up with the same
mess we have with streaming today: artisan-like rules of thumb instead
of rigorous engineering and literal science, properly analyzing the
beast and taking control of it. Jitter buffers and queues are a
fundamental necessity, and the management of those queues must be a
properly understood and accepted premise, not an afterthought, at
least whenever there is any dependency between the objects being
queued (temporal continuity or spatial referential integrity).

TCP has this flaw: it promises "guaranteed delivery" without
mentioning the asterisk "... given infinite scope". In TCP, reality is
then handled by buffer overflows and timeouts, instead of being
accepted and dealt with the way we deal with everything else. (Can I
have redundancy too? Maybe that is too much to ask...)

Summary:
1. When we have concurrency (with vertical or horizontal
dependencies), a queue is a must.
2. When we have a queue, rate control is a must.
3. Rate control can be proactive, with TTL (payment in space), or
reactive, with a feedback loop (payment in time).
4. Only then can we set safeguards to prevent bufferbloat, since our
world is not infinite. We had also better communicate clearly how we
do the housekeeping, so that it remains the exception case when a
use case does something stupid despite our best efforts to inform it
of the conditions.

Conforming to reality and being rigorous would be a nice start. I love
TCP; it is absolutely beautiful in its nature, and the founding
fathers did care, to a large extent. They just did not have enough
time on their hands back then to take it to the finish. We too should
care and conform to reality, and hope that we will be given enough
time.

Just my 2 cents.

Best regards,
Sauli


On 12/03/2023, David P. Reed via Starlink
<starlink@lists.bufferbloat.net> wrote:
> [quoted message trimmed; identical to the messages above]

