[Cerowrt-devel] Fastpass: A Centralized "Zero-Queue" Datacenter Network

Dave Taht dave.taht at gmail.com
Fri Jul 18 20:36:52 EDT 2014


oops, meant to post this on this thread not the other:

http://www.eecs.berkeley.edu/~sylvia/cs268-2014/papers//FQ1989.pdf

Reading it now, it is a model of clarity that states things we've had
to discover anew.

Also, at the time VJ's congestion control stuff was new. I love
history, I love trying to understand stuff in context...

Now that I've finally found this one, it would be great to find the
papers it cites too.

On Fri, Jul 18, 2014 at 5:23 PM, Dave Taht <dave.taht at gmail.com> wrote:
> On Fri, Jul 18, 2014 at 4:27 PM, David Lang <david at lang.hm> wrote:
>> On Fri, 18 Jul 2014, David Lang wrote:
>>
>>> On Fri, 18 Jul 2014, Jim Reisert AD1C wrote:
>>>
>>>> "Fastpass is a datacenter network framework that aims for high
>>>> utilization with zero queueing. It provides low median and tail
>>>> latencies for packets, high data rates between machines, and flexible
>>>> network resource allocation policies. The key idea in Fastpass is
>>>> fine-grained control over packet transmission times and network
>>>> paths."
>>>>
>>>> Read more at....
>>>>
>>>> http://fastpass.mit.edu/
>>>
>>>
>>> <sarcasam>
>>> and all it takes is making one central point aware of all the
>>> communications that are going to take place so that it can coordinate
>>> everything.
>>>
>>> That is sure to scale to an entire datacenter, and beyond that to the
>>> Internet.
>>> </sarcasam>
>
> Your tag is incomplete; therefore, the rest of your argument fits under
> it too. :)
>
> What I find really puzzling is that this paper makes no reference to
> the fair queuing literature at all.
>
> I was even more puzzled by one of the cited papers when it came out;
> what they are implementing is basically just a version of "shortest
> queue first":
>
> http://web.stanford.edu/~skatti/pubs/sigcomm13-pfabric.pdf
>
> vs
>
> http://www.internetsociety.org/sites/default/files/pdf/accepted/4_sqf_isoc.pdf
>
> (I can't find the SQF paper I wanted to cite; I think it's in the cites above)
>
> Which also didn't cite any of the fair queuing literature that goes
> back to 1989. They use different terms ("priorities"), language, etc.,
> which suggests it was never read... and yet, once you translate the
> terminology, all the ideas and papers cited in both the MIT and
> Stanford papers seem to have clear roots in FQ.
>
> Maybe it takes fundamentally changing the architecture of the Internet
> to get an idea published nowadays? You have to make it unimplementable
> and non-threatening to the powers-that-be? Studiously avoid work from
> the previous decade? Maybe the authors have to speak in code? Or maybe
> an idea looks better when multiple people discover it and describe it
> in different ways, and ultimately get together to resolve their
> differences? Don't know...
>
> If you swap out pfabric's suggested complete replacement for IP and
> TCP in favor of a conventional IP architecture, drop their version of
> shortest queue first, and substitute nearly any form of fair queuing
> (SQF is quite good, though fq_codel seems better), you get similar,
> possibly even better, results, and you don't need to change anything
> important.
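>
> To make the terminology translation concrete, here is a toy sketch of
> my own (not from any of these papers), contrasting a shortest-queue-first
> dequeue with a plain round-robin fair-queuing dequeue over the same set
> of per-flow FIFOs; the flow names and packets are made up purely for
> illustration:
>
>     import copy
>     from collections import deque
>
>     # Per-flow FIFO queues, keyed by flow id (illustrative data only).
>     queues = {
>         "flow-a": deque(["a1", "a2", "a3", "a4"]),
>         "flow-b": deque(["b1"]),
>         "flow-c": deque(["c1", "c2"]),
>     }
>
>     def sqf_order(queues):
>         """Shortest queue first: always serve the backlogged flow with
>         the fewest packets left, so short flows finish first."""
>         out = []
>         while any(queues.values()):
>             f = min((f for f in queues if queues[f]),
>                     key=lambda f: len(queues[f]))
>             out.append(queues[f].popleft())
>         return out
>
>     def fq_order(queues):
>         """Plain round-robin fair queuing: one packet per backlogged
>         flow per round, i.e. the per-flow isolation the 1989 paper
>         describes."""
>         out, rotation = [], deque(queues)
>         while any(queues.values()):
>             f = rotation[0]
>             rotation.rotate(-1)
>             if queues[f]:
>                 out.append(queues[f].popleft())
>         return out
>
>     print("SQF:", sqf_order(copy.deepcopy(queues)))  # b1 c1 c2 a1 a2 a3 a4
>     print("FQ :", fq_order(copy.deepcopy(queues)))   # a1 b1 c1 a2 c2 a3 a4
>
> Either way the flows stay isolated from one another; what pfabric calls
> "priorities" shows up here as nothing more than the rule for picking
> which backlogged flow to serve next.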
>
> Going back to the MIT paper, I liked that they measured "normal"
> queuing delay in the data center (not looking at the paper now...
> 3.4 ms?) under those conditions, and showed what could happen if
> buffering was reduced by a huge amount (no matter the underlying
> algorithm doing so). I also liked that they worked with a very
> advanced, clever SDK that rips networking out of the Linux kernel core
> (and moves most packet processing into Intel's huge cache), and a few
> other things in there were very interesting. Yes, the central clock
> idea is a bit crazy, but with switching times measured in ns, it might
> actually work over short (single-rack) distances. And the "incast"
> problem is really, really hard and benefits from this sort of approach.
> There's a whole new IETF WG dedicated to it (dclc, I think it's
> called).
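>
> A rough back-of-the-envelope on the single-rack case (my own numbers,
> not the paper's): at roughly 5 ns of signal propagation per meter, the
> wire round trip to a top-of-rack arbiter is a few tens of nanoseconds,
> small even next to one packet's serialization time, which is part of
> why a centralized clock looks less crazy at that scale:
>
>     # Rough, assumed numbers for a per-rack arbiter round trip.
>     PROP_NS_PER_M = 5.0      # ~signal propagation in fiber/copper
>     cable_m = 2.0            # host to a top-of-rack arbiter, one way
>     arbiter_rtt_ns = 2 * cable_m * PROP_NS_PER_M
>     # Serialization of a 1500-byte packet at 10 Gbit/s, for comparison.
>     serialize_ns = 1500 * 8 / 10e9 * 1e9
>     print(f"arbiter RTT ~{arbiter_rtt_ns:.0f} ns, "
>           f"1500B @ 10G ~{serialize_ns:.0f} ns")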
>
> There is often hidden value in many a paper, even if the central idea
> is problematic.
>
> In particular, I'd *really love* to rip most of the network stack out
> of the kernel and into userspace. And I really like the idea of
> writable hardware that can talk to virtual memory from userspace (the
> Zynq can).
>
> --
> Dave Täht
>
> NSFW: https://w2.eff.org/Censorship/Internet_censorship_bills/russell_0296_indecent.article



-- 
Dave Täht

NSFW: https://w2.eff.org/Censorship/Internet_censorship_bills/russell_0296_indecent.article


