The argument is absolutely correct for Reno, CUBIC, and all other self-clocked protocols. One of the core assumptions in Jacobson88 was that the clock for the entire system comes from packets draining through the bottleneck queue. In this world, the clock is intrinsically brittle if the buffers are too small: the drain time needs to be a substantial fraction of the RTT.
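To make that sizing rule concrete, here is a minimal sketch of the classic bandwidth-delay-product arithmetic behind "drain time comparable to the RTT" (the link rate and RTT are assumed, illustrative numbers, not taken from the message above):

    # Buffer needed so the queue's drain time equals one RTT
    # (the classic bandwidth-delay product rule). Numbers are
    # illustrative only.
    link_bw_bps = 100e9   # 100 Gb/s bottleneck link (assumed)
    rtt_s = 0.1           # 100 ms round-trip time (assumed)

    buffer_bytes = link_bw_bps / 8 * rtt_s
    print(f"buffer needed: {buffer_bytes / 1e9:.2f} GB")  # -> 1.25 GB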
However, we have reached the point where we need to discard that requirement. One of the side points of BBR is that in many environments it is cheaper to burn serving CPU to pace packets into short-queue networks than it is to "right size" the network queues.
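As a rough illustration of that trade-off, here is a minimal sender-side pacer sketch. The transmit() stub and all names are hypothetical; BBR's actual pacing lives in the kernel and is considerably more involved:

    import time

    def transmit(pkt):
        # Stand-in for the real NIC send path (hypothetical).
        pass

    def paced_send(packets, pacing_rate_bps, mtu=1500):
        # Space transmissions at pacing_rate_bps instead of letting
        # ack clocking release them in bursts, so the bottleneck
        # queue stays short.
        interval = mtu * 8 / pacing_rate_bps  # seconds per packet
        next_send = time.monotonic()
        for pkt in packets:
            # Spinning here is the "burn serving CPU" trade-off; a
            # production pacer would use timer wheels or NIC offload.
            while time.monotonic() < next_send:
                pass
            transmit(pkt)
            next_send += interval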
The fundamental problem with the old way is that in some contexts the buffer memory has to beat Moore's law: to maintain constant drain time, the memory size and the memory BW both have to scale with the link (laser) BW, so their product grows with the square of the link rate.
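A back-of-the-envelope sketch (again with assumed, illustrative numbers) shows how quickly this compounds across link generations:

    # Buffer requirements at a fixed 100 ms drain time. Size and
    # memory bandwidth both track the link rate, so their product
    # grows quadratically. Numbers are illustrative.
    rtt_s = 0.1
    for link_gbps in (10, 100, 400):
        size_gb = link_gbps / 8 * rtt_s  # GB of buffer needed
        mem_bw_gbps = 2 * link_gbps      # each byte written, then read
        print(f"{link_gbps:>4} Gb/s link: {size_gb:6.3f} GB buffer, "
              f"{mem_bw_gbps} Gb/s memory bandwidth")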
Note that we are talking about the DC and Internet core here. At the edge, BW is low enough that memory is relatively cheap; in some sense, bufferbloat came about because memory is too cheap in these environments.
Thanks,
--MM--
The best way to predict the future is to create it. - Alan Kay
We must not tolerate intolerance;
however our response must be carefully measured:
too strong would be hypocritical and risks spiraling out of control;
too weak risks being mistaken for tacit approval.