[Bloat] one benefit of turning off shaping + fq_codel
Dave Taht
dave at taht.net
Fri Nov 23 11:26:43 EST 2018
Pete Heist <pete at heistp.net> writes:
> On Nov 13, 2018, at 5:54 PM, Dave Taht <dave.taht at gmail.com>
> wrote:
>
> It turns out we are contributing to global warming.
>
> https://community.ubnt.com/t5/UniFi-Routing-Switching/USG-temperature/m-p/2547046/highlight/true#M115060
>
> Would it be right to say that the biggest opportunity for reducing
> consumption is to avoid shaping, i.e. by adding BQL-like functionality
> to all classes of device drivers
Shaping outbound with BQL's support for a dynamic interrupt would be
*free*. A few Ethernet chips already have that: basically, you set a
register saying "you are really a 200 Mbit interface; return a completion
interrupt after the equivalent of that amount of time has passed".
I can remember neither which chips can do this already nor the name of
the BQL feature that does it, this morning. It's just a register you
twiddle and a simple divider circuit.
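
Something like the following driver-level sketch, where the register
name, offset, and line-rate constant are all invented for illustration
(no real NIC's programming model is being described):

    #include <stdint.h>

    #define LINE_RATE_MBPS   1000    /* physical link speed */
    #define TX_PACE_DIV_REG  0x01c0  /* hypothetical pacing-divider register */

    static inline void nic_write32(volatile void *base, uint32_t reg,
                                   uint32_t val)
    {
        *(volatile uint32_t *)((volatile char *)base + reg) = val;
    }

    /* Ask the MAC to delay TX completion interrupts so the host sees an
     * effective rate of rate_mbps: just a divider off the line rate,
     * which BQL then tracks as if it were the real link speed. */
    static void nic_set_effective_rate(volatile void *base,
                                       unsigned int rate_mbps)
    {
        nic_write32(base, TX_PACE_DIV_REG, LINE_RATE_MBPS / rate_mbps);
    }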
But outbound is not the problem for us from a heat generation standpoint...
> and/or by deploying congestion control globally that avoids the need for it?
I think it would be interesting to compare energy per byte successfully
delivered across various technologies. Driving fiber lines is pretty
high energy, though, and I think (without a back of envelope handy)
that it would be far more expensive than shaping currently is.

Still, adding 6 °C to everybody's home router to shape inbound under
heavy load is pretty costly, both in energy and in reduced service life.
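
To put rough units on that back of the envelope (numbers picked purely
for illustration, not measured): a router burning an extra 5 W to shape
a 100 Mbit/s (12.5 Mbyte/s) downlink is spending 5 J/s / 12.5 Mbyte/s =
0.4 microjoules per byte shaped, which is the figure you'd want to line
up against the per-byte cost of driving the fiber itself.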
> Other ideas: move queue management into hardware
I have increasingly high hopes for P4 and other forms of hardware to
finally do shaping and queue management right.
https://github.com/ralfkundel/p4-codel/issues/2
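
For a sense of how little logic is actually needed, here is a minimal C
sketch of CoDel's control law, the piece a project like p4-codel has to
express in the data plane (the target and interval constants are the
RFC 8289 defaults; everything else is simplified away):

    #include <math.h>
    #include <stdint.h>

    #define TARGET_US   5000    /* 5 ms acceptable standing-queue delay */
    #define INTERVAL_US 100000  /* 100 ms initial estimation interval */

    /* Once packet sojourn time has stayed above TARGET_US for a full
     * interval, CoDel enters the dropping state and spaces drops at
     * interval/sqrt(count), tightening as congestion persists. */
    static uint64_t codel_next_drop_time(uint64_t now_us, uint32_t count)
    {
        if (count == 0)          /* count is >= 1 in the dropping state */
            count = 1;
        return now_us + (uint64_t)(INTERVAL_US / sqrt((double)count));
    }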
Back in the day, I was a huge fan of async logic, which I first
encountered via Caltech's CPU and later the Amulet.
https://en.wikipedia.org/wiki/Asynchronous_circuit#Asynchronous_CPU
This reduces power consumption enormously. The Caltech logic design
system is now open source, and I'd looked it over a few years ago hoping
I could use it to resurrect my ancient skills in this department. I
can't find it this morning, either. There's coffee around here
somewhere... My *big* interest in this tech was that it essentially
eliminates clock noise, so you can build a much more sensitive wireless
receiver with it. I got bit by DRAMs being "too loud" on several
occasions.
Fulcrum (before they got bought by Intel) used async logic in their
switch chips.
I think (but am not sure) that the technique is undergoing a
renaissance in the AI chips. The big IBM chip uses it, and it just
totally makes sense, if you have zillions of small CPUs doing neural
networks, to only power them up when needed. No crazy P1, P2, P3, etc.
clock states are needed; the chip just speeds up or slows down as a
function of heat.
I've never really understood why it didn't take off. I think, in part,
it doesn't scale well to wide buses, and centrally clocked designs are
how most engineers, FPGAs, and code have been designed since. Anything
with delay built into it seems hard for EEs to grasp... but I wish I
knew why, or had the time to go play with circuits again at a reasonable
scale.
> power network
> equipment with renewables, or just use the Internet less. :)
I am glad to see more of the former happening. A recent data center
design in Singapore basically needed its own nuclear power plant.
In my case, I've always wanted the computing to take place under the
users' fingers; I do not like the centralization trend we are in today
at all. I like that Apple seems to be leading the way in putting all
these cool new AI tools in your own hands.
As for the latter... I'm using browsers less now (emacs rocks), and
seem to be getting more done.
>
> Pete
>
> (I noticed an audience member brought this up in Toke’s thesis
> defense)
I sadly slept through that. I hope it was recorded.