<div dir="ltr"><div dir="ltr">actually, as I recall, the IAB proposed the use of the CLNP <b>format </b>but not the protocol - since it already had a 128 bit address field and catered to connectionless functionality. It was assumed that the IAB was proposing to adopt the protocol and there was an immediate and negative reaction to that. POISED was the result. </div><div dir="ltr"><br></div><div dir="ltr">v<br><div><br></div><div><br></div></div><br><div class="gmail_quote"><div dir="ltr" class="gmail_attr">On Tue, Mar 22, 2022 at 2:14 PM Dave Taht <<a href="mailto:dave.taht@gmail.com">dave.taht@gmail.com</a>> wrote:<br></div><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">some ipv6 history I didn't know.<br>

---------- Forwarded message ---------
From: William Allen Simpson <william.allen.simpson@gmail.com>
Date: Tue, Mar 22, 2022 at 12:09 PM
Subject: Re: IPv6 "bloat" history
To: <nanog@nanog.org>


Admitting to not having read every message in these threads,
but I would like to highlight a bit of the history.

IMnsHO, the otherwise useful history is missing a few steps.

1) The IAB selected ISO CLNP as the next version of IP.

2) The IETF got angry; the IAB was disbanded, replaced, and renamed.

3) On the Big-Internet list, my Practical Internet Protocol Extensions
(PIPE) was an early proposal, and I'd registered V6 with IANA.

I was self-funding. PIPE was cognizant of the needs of ISPs and
deployment.

4) Lixia Zhang wrote me that Steve Deering was proposing something
similar, and urged us to pool our efforts. That became Simple
Internet Protocol (SIP). We used 64-bit addresses. We had a clear
path for migration, using the upper 32 bits for the ASN and the old
IPv4 address in the lower 32 bits (a rough sketch of that mapping
follows this list). We had running code.

5) The IP Address Extension (IPAE) proposal had some overlapping features,
and we asked them to merge with us. That added some complexity.

6) Paul Francis (the originator of NAT) had a Polymorphic Internet
Protocol (PIP) with some overlapping features, so we also asked them
to merge with us (July 1993). That added more complexity in the
protocol header chaining.

7) The result was SIPP. We had two interoperable implementations: Naval
Research Labs, and KA9Q NOS (Phil Karn and me). There were others
well underway.

8) As noted by John Curran, there was a committee of "powers that be".
After the IETF had strong consensus for SIPP, and we had running code,
the "powers that be" decided to throw all that away.

9) The old junk was added back into IPv6 by committee.

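As a rough illustration of the migration mapping described in item 4
(upper 32 bits carrying the ASN, lower 32 bits carrying the existing
IPv4 address) - this is only a sketch of the idea, not the actual SIP
address format, and the helper names and example values are mine:

    # Illustrative sketch only; the SIP draft's actual encoding may differ.
    import ipaddress

    def sip64_from_asn_and_ipv4(asn: int, ipv4: str) -> int:
        """Pack a 64-bit address as <32-bit ASN><32-bit IPv4>."""
        return (asn << 32) | int(ipaddress.IPv4Address(ipv4))

    def split_sip64(addr: int) -> tuple:
        """Recover (ASN, IPv4) from a 64-bit address built as above."""
        return addr >> 32, str(ipaddress.IPv4Address(addr & 0xFFFFFFFF))

    addr = sip64_from_asn_and_ipv4(64512, "192.0.2.1")  # example ASN and address
    print(hex(addr))           # 0xfc00c0000201
    print(split_sip64(addr))   # (64512, '192.0.2.1')
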
There was also a mention that the Linux IP stack is fairly compact and
that the IPv6 code is somewhat smaller than the IPv4 code. That's because
the Linux stack was ported by Alan Cox from KA9Q NOS. We gave Alan
permission to change from our personal copyright to GPL.

It has a lot of the features we'd developed, such as packet buffers and
pushdown functions for adding headers, complementary to the BSD pullup.
They made SIPP/IPv6 fairly easy to implement.
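
To give a feel for the pushdown idea: reserve headroom in front of the
payload, then prepend each header by moving the data offset backwards,
so the payload is never copied. (BSD's pullup goes the other direction
on receive, making the next header contiguous before it is parsed.)
This is a toy model in Python with names of my own choosing, not the
Linux sk_buff API:

    # Toy model of a pushdown packet buffer; names and layout are illustrative.
    class PacketBuffer:
        def __init__(self, payload: bytes, headroom: int = 64):
            self.buf = bytearray(headroom) + bytearray(payload)
            self.start = headroom          # offset where the packet currently begins

        def push(self, header: bytes) -> None:
            """Prepend a header without copying the existing packet data."""
            if len(header) > self.start:
                raise ValueError("not enough headroom")
            self.start -= len(header)
            self.buf[self.start:self.start + len(header)] = header

        def packet(self) -> bytes:
            return bytes(self.buf[self.start:])

    pkt = PacketBuffer(b"payload")
    pkt.push(b"[UDP hdr]")                 # transport header first
    pkt.push(b"[IPv6 hdr]")                # then the network header in front of it
    print(pkt.packet())                    # b'[IPv6 hdr][UDP hdr]payload'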


On 3/22/22 10:04 AM, Masataka Ohta wrote:
> Owen DeLong wrote:
>
>>> The IPv6 optional header chain, even after it was widely recognized that IPv4 options are useless/harmful and were deprecated, is an example of IPv6 bloat.
>>>
>>> Extensive use of link multicast for nothing is another example of
>>> IPv6 bloat. Note that IPv4 works without any multicast.
>>
>> Yes, but IPv6 works without any broadcast. At the time IPv6 was being
>> developed, broadcasts were rather inconvenient and it was believed
>> that ethernet switches (which were just beginning to be a thing then)
>> would facilitate more efficient capabilities by making extensive use
>> of link multicast instead of broadcast.
>
> No, the history around it is that there was some presentation
> in the IPng WG by ATM people stating that ATM, or NBMA (Non-Broadcast
> Multiple Access) in general, is multicast capable though not
> broadcast capable, which was blindly believed by most, if not
> all (excluding *me*), people there.
>

Both Owen and Masataka are correct, in their own way.

IPv4 options were recognized as harmful. SIPP used header chains instead.
But the whole idea was to speed processing, eliminating hop-by-hop
option processing.

Then the committees added back the hop-by-hop processing (the
Hop-by-Hop Options header, Next Header type 0). Terrible!
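
For anyone who hasn't looked at the mechanism being argued over: each
extension header begins with the type of the next one, so an endpoint
simply walks the chain, and only the Hop-by-Hop Options header (Next
Header value 0) must be examined by every router on the path. A
simplified sketch in Python, illustrative only; real parsing needs the
per-type special cases:

    # Simplified walk of an IPv6-style extension header chain.
    # Hop-by-Hop Options, Routing, and Destination Options share the generic
    # layout: first octet = Next Header, second octet = length in 8-octet
    # units not counting the first 8 octets. (Other headers, such as
    # Fragment, have their own length rules and are omitted here.)
    EXT_HEADERS = {0: "hop-by-hop options", 43: "routing", 60: "destination options"}

    def walk_chain(first_next_header: int, rest: bytes) -> int:
        """Return the upper-layer protocol number at the end of the chain."""
        nh, off = first_next_header, 0
        while nh in EXT_HEADERS:
            print(f"extension header: {EXT_HEADERS[nh]} ({nh}) at offset {off}")
            nh = rest[off]                   # Next Header field of this header
            off += (rest[off + 1] + 1) * 8   # skip past this header
        return nh

    # Example: one 8-octet Hop-by-Hop Options header (padded with PadN),
    # then UDP (protocol 17).
    hbh = bytes([17, 0, 1, 4, 0, 0, 0, 0])
    print("upper-layer protocol:", walk_chain(0, hbh))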

Admittedly, I was also skeptical of packet shredding (what we called
ATM). Sadly, the Chicago NAP required ATM support, and that's where
my connections were located.


> It should be noted that IPv6 was less bloated because
> ND abandoned its initial goal to support IP over NBMA.
>

Neighbor Discovery is/was agnostic to NBMA. Putting all the old
ARP and DHCP and other cruft into the IP layer was my goal, so
that it would be forever link-agnostic.
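
A concrete aside on the multicast-versus-broadcast point above: a
Neighbor Solicitation is not broadcast; it is sent to the solicited-node
multicast group derived from the target address (ff02::1:ff00:0/104 plus
the low 24 bits of the address, per RFC 4291), so only the handful of
hosts listening on that group ever process it. A quick illustration:

    # Compute the solicited-node multicast group for a unicast IPv6 address.
    import ipaddress

    def solicited_node_group(addr: str) -> ipaddress.IPv6Address:
        low24 = int(ipaddress.IPv6Address(addr)) & 0xFFFFFF
        base = int(ipaddress.IPv6Address("ff02::1:ff00:0"))
        return ipaddress.IPv6Address(base | low24)

    print(solicited_node_group("2001:db8::4:8f1a:9b2c"))   # ff02::1:ff1a:9b2c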


> > There is still a valid argument to be made that in a switched
> > ethernet world, multicast could offer efficiencies if networks were
> > better tuned to accommodate it vs. broadcast.
>
> That is against the CATENET model, in which each datalink contains
> only a small number of hosts, so broadcast is not a problem at all.
> Though CERN operated a single Ethernet with thousands of hosts, of
> course poorly, it was abandoned as inoperable long before IPv6,
> which is partly why IPv6 is inoperational.
>

Yes, we were also getting a push from Fermilab and CERN for very
large numbers of nodes per link, rather than the old Ethernet maximum.

That's the underlying design for Neighbor Discovery. Less chatty.

Also, my alma mater was Michigan State University, which operated the
largest bridged Ethernet in the world in the '80s. Agreed, it was
"inoperational". My epiphany was splitting it with KA9Q routers.

Suddenly the engineering building and the computing center each had
great throughput. It turns out it was the administration's IBM that
had been clogging the campus. Simple KA9Q routers didn't pass the
bad packets. That's how I'd become a routing-over-bridging convert.

Still, there are data centers with thousand-port switches.

Also, TRILL.


--
I tried to build a better future, a few times:
https://wayforward.archive.org/?site=https%3A%2F%2Fwww.icei.org

Dave Täht CEO, TekLibre, LLC

--
Please send any postal/overnight deliveries to:
Vint Cerf
1435 Woodhurst Blvd
McLean, VA 22102
703-448-0965

until further notice