Network Neutrality is back! Let´s make the technical aspects heard this time!
 help / color / mirror / Atom feed
* [NNagain] Introduction: Dr. David Bray
@ 2023-10-02 18:15 Dave Taht
  2023-10-02 19:38 ` David Bray, PhD
  0 siblings, 1 reply; 15+ messages in thread
From: Dave Taht @ 2023-10-02 18:15 UTC (permalink / raw)
  To: Network Neutrality is back! Let´s make the technical
	aspects heard this time!

All:

I have spent the last several days reaching out to as many people I
know with a deep understanding of the policy and technical issues
surrounding the internet, to participate on this list. I encourage you
all to reach out on your own, especially to those that you can
constructively and civilly disagree with, and hopefully work with, to
establish technical steps forward. Quite a few have joined silently!
So far, 168 people have joined!

Please welcome Dr David Bray[1], a self-described "human flack jacket"
who, in the last NN debate, stood up for the non -partisan FCC IT team
that successfully kept the system up 99.4% of the time despite the
comment floods and network abuses from all sides. He has shared with
me privately many sad (and some hilarious!) stories of that era, and I
do kind of hope now, that some of that history surfaces, and we can
learn from it.

Thank you very much, David, for putting down your painful memories[2],
and agreeing to join here. There is a lot to tackle here, going
forward.

[1] https://www.stimson.org/ppl/david-bray/
[2] "Pain shared is reduced. Joy shared, increased." - Spider Robinson


-- 
Oct 30: https://netdevconf.info/0x17/news/the-maestro-and-the-music-bof.html
Dave Täht CSO, LibreQos

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [NNagain] Introduction: Dr. David Bray
  2023-10-02 18:15 [NNagain] Introduction: Dr. David Bray Dave Taht
@ 2023-10-02 19:38 ` David Bray, PhD
  2023-10-05 20:43   ` Jack Haverty
  0 siblings, 1 reply; 15+ messages in thread
From: David Bray, PhD @ 2023-10-02 19:38 UTC (permalink / raw)
  To: Network Neutrality is back! Let´s make the technical
	aspects heard this time!

[-- Attachment #1: Type: text/plain, Size: 4731 bytes --]

Greetings all and thank you Dave Taht for that very kind intro...

First, I'll open with I'm a gosh-darn non-partisan, which means I swore an
oath to uphold the Constitution first and serve the United States - not a
specific party, tribe, or ideology. This often means, especially in today's
era of 24/7 news and social media, non-partisans have to "top cover".

Second, I'll share that in what happened in 2017 (which itself was 10x what
we saw in 2014) my biggest concern was and remains that a few actors
attempted to flood the system with less-than-authentic comments.

In some respects this is not new. The whole "notice and comment" process is
a legacy process that goes back decades. And the FCC (and others) have had
postcard floods of comments, mimeographed letters of comments, faxed floods
of comments, and now this - which, when combined with generative AI, will
be yet another flood.

Which gets me to my biggest concern as a non-partisan in 2023-2024, namely
how LLMs might misuse and abuse the commenting process further.

Both in 2014 and 2017, I asked FCC General Counsel if I could use CAPTChA
to try to reduce the volume of web scrapers or bots both filing and pulling
info from the Electronic Comment Filing System.

Both times I was told *no* out of concerns that they might prevent someone
from filing. I asked if I could block obvious spam, defined as someone
filing a comment >100 times a minute, and was similarly told no because one
of those possible comments might be genuine and/or it could be an ex party
filing en masse for others.

For 2017 we had to spin up 30x the number of AWS cloud instances to handle
the load - and this was a flood of comments at 4am, 5am, and 6am ET at
night which normally shouldn’t see such volumes. When I said there was a
combination of actual humans wanting to leave comments and others who were
effectively denying service to others (especially because if anyone wanted
to do a batch upload of 100,000 comments or more they could submit a CSV
file or a comment with 100,000 signatories) - both parties said no, that
couldn’t be happening.

Until 2021 when the NY Attorney General proved that was exactly what was
happening with 18m of the 23m apparently from non-authentic origin with ~9m
from one side of the political aisle (and six companies) and ~9m from the
other side of the political aisle (and one or more teenagers).

So with Net Neutrality back on the agenda - here’s a simple prediction,
even if the volume of comments is somehow controlled, 10,000+ pages of
comments produced by ChatGPT or a different LLM is both possible and
probably will be done. The question is if someone includes a legitimate
legal argument on page 6,517 - will FCC’s lawyers spot it and respond to it
as part of the NPRM?

Hope this helps and with highest regards,

-d.
-- 

Principal, LeadDoAdapt Ventures, Inc. <https://www.leaddoadapt.com/> &
Distinguished Fellow

Henry S. Stimson Center <https://www.stimson.org/ppl/david-bray/>, Business
Executives for National Security <https://bens.org/people/dr-david-bray/>



On Mon, Oct 2, 2023 at 2:15 PM Dave Taht via Nnagain <
nnagain@lists.bufferbloat.net> wrote:

> All:
>
> I have spent the last several days reaching out to as many people I
> know with a deep understanding of the policy and technical issues
> surrounding the internet, to participate on this list. I encourage you
> all to reach out on your own, especially to those that you can
> constructively and civilly disagree with, and hopefully work with, to
> establish technical steps forward. Quite a few have joined silently!
> So far, 168 people have joined!
>
> Please welcome Dr David Bray[1], a self-described "human flack jacket"
> who, in the last NN debate, stood up for the non -partisan FCC IT team
> that successfully kept the system up 99.4% of the time despite the
> comment floods and network abuses from all sides. He has shared with
> me privately many sad (and some hilarious!) stories of that era, and I
> do kind of hope now, that some of that history surfaces, and we can
> learn from it.
>
> Thank you very much, David, for putting down your painful memories[2],
> and agreeing to join here. There is a lot to tackle here, going
> forward.
>
> [1] https://www.stimson.org/ppl/david-bray/
> [2] "Pain shared is reduced. Joy shared, increased." - Spider Robinson
>
>
> --
> Oct 30:
> https://netdevconf.info/0x17/news/the-maestro-and-the-music-bof.html
> Dave Täht CSO, LibreQos
> _______________________________________________
> Nnagain mailing list
> Nnagain@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/nnagain
>

[-- Attachment #2: Type: text/html, Size: 6759 bytes --]

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [NNagain] Introduction: Dr. David Bray
  2023-10-02 19:38 ` David Bray, PhD
@ 2023-10-05 20:43   ` Jack Haverty
  2023-10-05 21:21     ` David Bray, PhD
  0 siblings, 1 reply; 15+ messages in thread
From: Jack Haverty @ 2023-10-05 20:43 UTC (permalink / raw)
  To: nnagain

[-- Attachment #1: Type: text/plain, Size: 7756 bytes --]

Thanks for all your efforts to keep the "feedback loop" to the 
rulemakers functioning!

I'd like to offer a suggestion for a hopefully politically acceptable 
way to handle the deluge, derived from my own battles with "email" over 
the years (decades).

Back in the 1970s, I implemented one of the first email systems on the 
Arpanet, under the mentorship of JCR Licklider, who had been pursuing 
his vision of a "Galactic Network" at ARPA and MIT.   One of the things 
we discovered was the significance of anonymity.   At the time, 
anonymity was forbidden on the Arpanet; you needed an account on some 
computer, protected by passwords, in order to legitimately use the 
network.   The mechanisms were crude and easily broken, but the 
principle applied.

Over the years, that principle has been forgotten, and the right to be 
anonymous has become entrenched.   But many uses of the network, and 
needs of its users, demand accountability, so all sorts of mechanisms 
have been pasted on top of the network to provide ways to judge user 
identity.  Banks, medical services, governments, and businesses all 
demand some way of proving your identity, with passwords, various 
schemes of 2FA, VPNs, or other such technology, with varying degrees of 
protection.   It is still possible to be anonymous on the net, but many 
things you do require you to prove, to some extent, who you are.

So, my suggestion for handling the deluge of "comments" is:

1/ create some mechanism for "registering" your intent to submit a 
comment.   Make it hard for bots to register.  Perhaps you can leverage 
the work of various partners, e.g., ISPs, retailers, government 
agencies, financial institutions, of others who already have some way of 
identifying their users.

2/ Also make registration optional - anyone can still submit comments 
anonymously if they choose.

3/ for "registered commenters", provide a way to "edit" your previous 
comment - i.e., advise that your comment is always the last one you 
submitted.   I.E., whoever you are, you can only submit one comment, 
which will be the last one you submit.

4/ In the thousands of pages of comments, somehow flag the ones that are 
from registered commenters, visible to the people who read the 
comments.   Even better, provide those "information consumers" with ways 
to sort, filter, and search through the body of comments.

This may not reduce the deluge of comments, but I'd expect it to help 
the lawyers and politicians keep their heads above the water.

Anonymity is an important issue for Net Neutrality too, but I'll opine 
about that separately.....

Jack Haverty


On 10/2/23 12:38, David Bray, PhD via Nnagain wrote:
> Greetings all and thank you Dave Taht for that very kind intro...
>
> First, I'll open with I'm a gosh-darn non-partisan, which means I 
> swore an oath to uphold the Constitution first and serve the United 
> States - not a specific party, tribe, or ideology. This often means, 
> especially in today's era of 24/7 news and social media, non-partisans 
> have to "top cover".
>
> Second, I'll share that in what happened in 2017 (which itself was 10x 
> what we saw in 2014) my biggest concern was and remains that a few 
> actors attempted to flood the system with less-than-authentic comments.
>
> In some respects this is not new. The whole "notice and comment" 
> process is a legacy process that goes back decades. And the FCC (and 
> others) have had postcard floods of comments, mimeographed letters of 
> comments, faxed floods of comments, and now this - which, when 
> combined with generative AI, will be yet another flood.
>
> Which gets me to my biggest concern as a non-partisan in 2023-2024, 
> namely how LLMs might misuse and abuse the commenting process further.
>
> Both in 2014 and 2017, I asked FCC General Counsel if I could use 
> CAPTChA to try to reduce the volume of web scrapers or bots both 
> filing and pulling info from the Electronic Comment Filing System.
>
> Both times I was told *no* out of concerns that they might prevent 
> someone from filing. I asked if I could block obvious spam, defined as 
> someone filing a comment >100 times a minute, and was similarly told 
> no because one of those possible comments might be genuine and/or it 
> could be an ex party filing en masse for others.
>
> For 2017 we had to spin up 30x the number of AWS cloud instances to 
> handle the load - and this was a flood of comments at 4am, 5am, and 
> 6am ET at night which normally shouldn’t see such volumes. When I said 
> there was a combination of actual humans wanting to leave comments and 
> others who were effectively denying service to others (especially 
> because if anyone wanted to do a batch upload of 100,000 comments or 
> more they could submit a CSV file or a comment with 100,000 
> signatories) - both parties said no, that couldn’t be happening.
>
> Until 2021 when the NY Attorney General proved that was exactly what 
> was happening with 18m of the 23m apparently from non-authentic origin 
> with ~9m from one side of the political aisle (and six companies) and 
> ~9m from the other side of the political aisle (and one or more 
> teenagers).
>
> So with Net Neutrality back on the agenda - here’s a simple 
> prediction, even if the volume of comments is somehow controlled, 
> 10,000+ pages of comments produced by ChatGPT or a different LLM is 
> both possible and probably will be done. The question is if someone 
> includes a legitimate legal argument on page 6,517 - will FCC’s 
> lawyers spot it and respond to it as part of the NPRM?
>
> Hope this helps and with highest regards,
>
> -d.
> -- 
>
> Principal, LeadDoAdapt Ventures, Inc. <https://www.leaddoadapt.com/> & 
> Distinguished Fellow
>
> Henry S. Stimson Center <https://www.stimson.org/ppl/david-bray/>, 
> Business Executives for National Security 
> <https://bens.org/people/dr-david-bray/>
>
>
>
> On Mon, Oct 2, 2023 at 2:15 PM Dave Taht via Nnagain 
> <nnagain@lists.bufferbloat.net> wrote:
>
>     All:
>
>     I have spent the last several days reaching out to as many people I
>     know with a deep understanding of the policy and technical issues
>     surrounding the internet, to participate on this list. I encourage you
>     all to reach out on your own, especially to those that you can
>     constructively and civilly disagree with, and hopefully work with, to
>     establish technical steps forward. Quite a few have joined silently!
>     So far, 168 people have joined!
>
>     Please welcome Dr David Bray[1], a self-described "human flack jacket"
>     who, in the last NN debate, stood up for the non -partisan FCC IT team
>     that successfully kept the system up 99.4% of the time despite the
>     comment floods and network abuses from all sides. He has shared with
>     me privately many sad (and some hilarious!) stories of that era, and I
>     do kind of hope now, that some of that history surfaces, and we can
>     learn from it.
>
>     Thank you very much, David, for putting down your painful memories[2],
>     and agreeing to join here. There is a lot to tackle here, going
>     forward.
>
>     [1] https://www.stimson.org/ppl/david-bray/
>     [2] "Pain shared is reduced. Joy shared, increased." - Spider Robinson
>
>
>     -- 
>     Oct 30:
>     https://netdevconf.info/0x17/news/the-maestro-and-the-music-bof.html
>     Dave Täht CSO, LibreQos
>     _______________________________________________
>     Nnagain mailing list
>     Nnagain@lists.bufferbloat.net
>     https://lists.bufferbloat.net/listinfo/nnagain
>
>
> _______________________________________________
> Nnagain mailing list
> Nnagain@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/nnagain

[-- Attachment #2: Type: text/html, Size: 13203 bytes --]

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [NNagain] Introduction: Dr. David Bray
  2023-10-05 20:43   ` Jack Haverty
@ 2023-10-05 21:21     ` David Bray, PhD
  2023-10-07 20:10       ` Jack Haverty
  0 siblings, 1 reply; 15+ messages in thread
From: David Bray, PhD @ 2023-10-05 21:21 UTC (permalink / raw)
  To: Network Neutrality is back! Let´s make the technical
	aspects heard this time!

[-- Attachment #1: Type: text/plain, Size: 15404 bytes --]

Indeed Jack - a few things to balance - the Administrative Procedure Act of
1946 (on which the idea of rulemaking is based) us about raising legal
concerns that must be answered by the agency at the time the rulemaking is
done. It's not a vote nor is it the case that if the agency gets tons of
comments in one direction that they have to go in that direction. Instead
it's only about making sure legal concerns are considered and responded to
before the agency before the agency acts. (Which is partly why sending "I'm
for XYZ" or "I'm against ABC" really doesn't mean anything to an agency -
not only is that not a legal argument or concern, it's also not something
where they're obligated to follow these comments - it's not a vote or
poll).

That said, political folks have spun things to the public as if it is a
poll/vote/chance to act. The raise a valid legal concern part of the APA of
1946 is omitted. Moreover the fact that third party law firms and others
like to submit comments on behalf of clients - there will always be a third
party submitting multiple comments for their clients (or "clients") because
that's their business.

In the lead up to 2017, the Consumer and Government Affairs Bureau of the
FCC got an inquiry from a firm asking how they could submit 1 million
comments a day on an "upcoming privacy proceeding" (their words, astute
observers will note there was no privacy proceeding before the FCC in
2017). When the Bureau asked me, I told them either mail us a CD to upload
it or submit one comment with 1 million signatures. To attempt to flood us
with 1 million comments a day (aside from the fact who can "predict" having
that many daily) would deny resources to others. In the mess that followed,
what was released to the public was so redacted you couldn't see the
legitimate concerns and better paths that were offered to this entity.

And the FCC isn't alone. EPA, FTC, and other regulatory agencies have had
these hijinks for years - and before the Internet it was faxes, mass
mimeographs (remember blue ink?), and postcards.The Administrative
Conference of the United States (ACUS) - is the body that is supposed to
provide consistent guidance for things like this across the U.S.
government. I've briefed them and tried to raise awareness of these issues
- as I think fundamentally this is a **process** question that once
answered, tech can support. However they're not technologies and updating
the interpretation of the process isn't something lawyers are apt to do
until the evidence that things are in trouble is overwhelming.

52 folks wrote a letter to them - and to GSA - back in 2020. GSA had a
rulemaking of its own on how to improve things, yet oddly never published
any of the comments it received (including ours) and closed the rulemaking
quietly. Here's the letter: https://tinyurl.com/letter-signed-52-people

And here's an article published in OODAloop about this - and why Generative
AI is probably going to make things even more challenging:
https://www.oodaloop.com/archive/2023/04/18/why-a-pause-on-ai-development-is-not-the-answer-an-insiders-perspective/

[snippet of the article] *Now in 2023 and Beyond: Proactive Approaches to
AI and Society*

Looking to the future, to effectively address the challenges arising from
AI, we must foster a proactive, results-oriented, and cooperative approach
with the public
<https://davidbray.medium.com/challenges-and-needed-new-solutions-for-open-societies-to-maintain-civil-discourse-part-1-b5ea95f8c679>.
Think tanks and universities can engage the public in conversations about
how to work, live, govern, and co-exist with modern technologies that
impact society. By involving diverse voices in the decision-making process,
we can better address and resolve the complex challenges AI presents on
local and national levels.

In addition, we must encourage industry and political leaders to
participate in finding non-partisan, multi-sector solutions if civil
societies are to remain stable. By working together, we can bridge the gap
between technological advancements and their societal implications.

Finally, launching AI pilots across various sectors, such as work,
education, health, law, and civil society, is essential. We must learn by
doing on how we can create responsible civil environments where AIs can be
developed and deployed responsibly. These initiatives can help us better
understand and integrate AI into our lives, ensuring its potential is
harnessed for the greater good while mitigating risks.

In 2019 and 2020, a group of fifty-two people asked the Administrative
Conference of the United States
<https://tinyurl.com/letter-signed-52-people>(which helps guide rulemaking
procedures for federal agencies), General Accounting Office, and the
General Services Administration to call attention to the need to address
the challenges of chatbots flooding public commenting procedures and
potentially crowding out or denying services to actual humans wanting to
leave a comment. We asked
<https://davidbray.medium.com/challenges-and-needed-new-solutions-for-open-societies-to-maintain-civil-discourse-part-1-b5ea95f8c679>
:

*1. Does identity matter regarding who files a comment or not — and must
one be a U.S. person in order to file?*

*2. Should agencies publish real-time counts of the number of comments
received — or is it better to wait until the end of a commenting round to
make all comments available, including counts?*

*3. Should third-party groups be able to file on behalf of someone else or
not — and do agencies have the right to remove spam-like comments?*

*4. Should the public commenting process permit multiple comments per
individual for a proceeding — and if so, how many comments from a single
individual are too many? 100? 1000? More?*

*5. Finally, should the U.S. government itself consider, given public
perceptions about potential conflicts of interest for any agency performing
a public commenting process, whether it would be better to have third-party
groups take responsibility for assembling comments and then filing those
comments via a validated process with the government?*

These same questions need pragmatic pilots that involve the public to
co-explore
and co-develop how we operate effectively amid these technological shifts
<https://davidbray.medium.com/challenges-and-needed-new-solutions-for-open-societies-to-maintain-civil-discourse-part-2-2f637c472112>.
As the capabilities of LLMs continue to grow, we need positive change
agents willing to tackle the messy issues at the intersection of technology
and society. The challenges are immense, but so too are the opportunities
for positive change. Let’s seize this moment to create a better tomorrow
for all. Working together, we can co-create a future that embraces AI’s
potential while mitigating its risks
<https://medium.com/peoplecentered/the-need-for-people-centered-sources-of-hope-for-our-digital-future-ahead-ef491dd2703d>,
informed by the hard lessons we have already learned.

Full article:
https://www.oodaloop.com/archive/2023/04/18/why-a-pause-on-ai-development-is-not-the-answer-an-insiders-perspective/

Hope this helps.

On Thu, Oct 5, 2023 at 4:44 PM Jack Haverty via Nnagain <
nnagain@lists.bufferbloat.net> wrote:

> Thanks for all your efforts to keep the "feedback loop" to the rulemakers
> functioning!
>
> I'd like to offer a suggestion for a hopefully politically acceptable way
> to handle the deluge, derived from my own battles with "email" over the
> years (decades).
>
> Back in the 1970s, I implemented one of the first email systems on the
> Arpanet, under the mentorship of JCR Licklider, who had been pursuing his
> vision of a "Galactic Network" at ARPA and MIT.   One of the things we
> discovered was the significance of anonymity.   At the time, anonymity was
> forbidden on the Arpanet; you needed an account on some computer, protected
> by passwords, in order to legitimately use the network.   The mechanisms
> were crude and easily broken, but the principle applied.
>
> Over the years, that principle has been forgotten, and the right to be
> anonymous has become entrenched.   But many uses of the network, and needs
> of its users, demand accountability, so all sorts of mechanisms have been
> pasted on top of the network to provide ways to judge user identity.
> Banks, medical services, governments, and businesses all demand some way of
> proving your identity, with passwords, various schemes of 2FA, VPNs, or
> other such technology, with varying degrees of protection.   It is still
> possible to be anonymous on the net, but many things you do require you to
> prove, to some extent, who you are.
>
> So, my suggestion for handling the deluge of "comments" is:
>
> 1/ create some mechanism for "registering" your intent to submit a
> comment.   Make it hard for bots to register.  Perhaps you can leverage the
> work of various partners, e.g., ISPs, retailers, government agencies,
> financial institutions, of others who already have some way of identifying
> their users.
>
> 2/ Also make registration optional - anyone can still submit comments
> anonymously if they choose.
>
> 3/ for "registered commenters", provide a way to "edit" your previous
> comment - i.e., advise that your comment is always the last one you
> submitted.   I.E., whoever you are, you can only submit one comment, which
> will be the last one you submit.
>
> 4/ In the thousands of pages of comments, somehow flag the ones that are
> from registered commenters, visible to the people who read the comments.
> Even better, provide those "information consumers" with ways to sort,
> filter, and search through the body of comments.
>
> This may not reduce the deluge of comments, but I'd expect it to help the
> lawyers and politicians keep their heads above the water.
>
> Anonymity is an important issue for Net Neutrality too, but I'll opine
> about that separately.....
>
> Jack Haverty
>
>
> On 10/2/23 12:38, David Bray, PhD via Nnagain wrote:
>
> Greetings all and thank you Dave Taht for that very kind intro...
>
> First, I'll open with I'm a gosh-darn non-partisan, which means I swore an
> oath to uphold the Constitution first and serve the United States - not a
> specific party, tribe, or ideology. This often means, especially in today's
> era of 24/7 news and social media, non-partisans have to "top cover".
>
> Second, I'll share that in what happened in 2017 (which itself was 10x
> what we saw in 2014) my biggest concern was and remains that a few actors
> attempted to flood the system with less-than-authentic comments.
>
> In some respects this is not new. The whole "notice and comment" process
> is a legacy process that goes back decades. And the FCC (and others) have
> had postcard floods of comments, mimeographed letters of comments, faxed
> floods of comments, and now this - which, when combined with generative AI,
> will be yet another flood.
>
> Which gets me to my biggest concern as a non-partisan in 2023-2024, namely
> how LLMs might misuse and abuse the commenting process further.
>
> Both in 2014 and 2017, I asked FCC General Counsel if I could use CAPTChA
> to try to reduce the volume of web scrapers or bots both filing and pulling
> info from the Electronic Comment Filing System.
>
> Both times I was told *no* out of concerns that they might prevent someone
> from filing. I asked if I could block obvious spam, defined as someone
> filing a comment >100 times a minute, and was similarly told no because one
> of those possible comments might be genuine and/or it could be an ex party
> filing en masse for others.
>
> For 2017 we had to spin up 30x the number of AWS cloud instances to handle
> the load - and this was a flood of comments at 4am, 5am, and 6am ET at
> night which normally shouldn’t see such volumes. When I said there was a
> combination of actual humans wanting to leave comments and others who were
> effectively denying service to others (especially because if anyone wanted
> to do a batch upload of 100,000 comments or more they could submit a CSV
> file or a comment with 100,000 signatories) - both parties said no, that
> couldn’t be happening.
>
> Until 2021 when the NY Attorney General proved that was exactly what was
> happening with 18m of the 23m apparently from non-authentic origin with ~9m
> from one side of the political aisle (and six companies) and ~9m from the
> other side of the political aisle (and one or more teenagers).
>
> So with Net Neutrality back on the agenda - here’s a simple prediction,
> even if the volume of comments is somehow controlled, 10,000+ pages of
> comments produced by ChatGPT or a different LLM is both possible and
> probably will be done. The question is if someone includes a legitimate
> legal argument on page 6,517 - will FCC’s lawyers spot it and respond to it
> as part of the NPRM?
>
> Hope this helps and with highest regards,
>
> -d.
> --
>
> Principal, LeadDoAdapt Ventures, Inc. <https://www.leaddoadapt.com/> &
> Distinguished Fellow
>
> Henry S. Stimson Center <https://www.stimson.org/ppl/david-bray/>, Business
> Executives for National Security <https://bens.org/people/dr-david-bray/>
>
>
>
> On Mon, Oct 2, 2023 at 2:15 PM Dave Taht via Nnagain <
> nnagain@lists.bufferbloat.net> wrote:
>
>> All:
>>
>> I have spent the last several days reaching out to as many people I
>> know with a deep understanding of the policy and technical issues
>> surrounding the internet, to participate on this list. I encourage you
>> all to reach out on your own, especially to those that you can
>> constructively and civilly disagree with, and hopefully work with, to
>> establish technical steps forward. Quite a few have joined silently!
>> So far, 168 people have joined!
>>
>> Please welcome Dr David Bray[1], a self-described "human flack jacket"
>> who, in the last NN debate, stood up for the non -partisan FCC IT team
>> that successfully kept the system up 99.4% of the time despite the
>> comment floods and network abuses from all sides. He has shared with
>> me privately many sad (and some hilarious!) stories of that era, and I
>> do kind of hope now, that some of that history surfaces, and we can
>> learn from it.
>>
>> Thank you very much, David, for putting down your painful memories[2],
>> and agreeing to join here. There is a lot to tackle here, going
>> forward.
>>
>> [1] https://www.stimson.org/ppl/david-bray/
>> [2] "Pain shared is reduced. Joy shared, increased." - Spider Robinson
>>
>>
>> --
>> Oct 30:
>> https://netdevconf.info/0x17/news/the-maestro-and-the-music-bof.html
>> Dave Täht CSO, LibreQos
>> _______________________________________________
>> Nnagain mailing list
>> Nnagain@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/nnagain
>>
>
> _______________________________________________
> Nnagain mailing listNnagain@lists.bufferbloat.nethttps://lists.bufferbloat.net/listinfo/nnagain
>
>
> _______________________________________________
> Nnagain mailing list
> Nnagain@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/nnagain
>

[-- Attachment #2: Type: text/html, Size: 21607 bytes --]

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [NNagain] Introduction: Dr. David Bray
  2023-10-05 21:21     ` David Bray, PhD
@ 2023-10-07 20:10       ` Jack Haverty
  2023-10-09 23:21         ` David Bray, PhD
  0 siblings, 1 reply; 15+ messages in thread
From: Jack Haverty @ 2023-10-07 20:10 UTC (permalink / raw)
  To: David Bray, PhD,
	Network Neutrality is back! Let´s make the technical
	aspects heard this time!


[-- Attachment #1.1.1: Type: text/plain, Size: 18843 bytes --]

Hi again David et al,

Interesting frenzy...lots of questions that need answers and associated 
policies.   I served 6 years as an elected official (in a small special 
district in California), so I have some small understanding of the 
government side of things and the constraints involved.   Being in 
charge doesn't mean you can do what you want.

I'm thinking here more near-term and incremental steps.  You said "These 
same questions need pragmatic pilots that involve the public ..."

So, how about using the current NN situation for a pilot?  Keep all the 
current ways and emerging AI techniques to continue to flood the system 
with comments.   But also offer an *optional* way for humans to 
"register" as a commenter and then submit their (latest only) comment 
into the melee.  Will people use it?  Will "consumers" (the lawyers, 
commissioners, etc.) find it useful?

I've found it curious, for decades now, that there are (too many) 
mechanisms for "secure email", that may help with the flood of 
disinformation from anonymous senders, but very very few people use 
them.   Maybe they don't know how; maybe the available schemes are too 
flawed; maybe ...?

About 30 years ago, I was a speaker in a public meeting orchestrated by 
USPS, and recommended that they take a lead role, e.g., by acting as a 
national CA - certificate authority.  Never happened though. FCC issues 
lots of licenses...perhaps they could issue online credentials too?

Perhaps a "pilot" where you will also accept comments by email, some 
possibly sent by "verified" humans if they understand how to do so, 
would be worth trying?   Perhaps comments on "technical aspects" coming 
from people who demonstrably know how to use technology would be 
valuable to the policy makers?

The Internet, and technology such as TCP, began as an experimental pilot 
about 50 years ago.  Sometimes pilots become infrastructures.

FYI, I'm signing this message.  Using OpenPGP.  I could encrypt it also, 
but my email program can't find your public key.

Jack Haverty


On 10/5/23 14:21, David Bray, PhD wrote:
> Indeed Jack - a few things to balance - the Administrative Procedure 
> Act of 1946 (on which the idea of rulemaking is based) us about 
> raising legal concerns that must be answered by the agency at the time 
> the rulemaking is done. It's not a vote nor is it the case that if the 
> agency gets tons of comments in one direction that they have to go in 
> that direction. Instead it's only about making sure legal concerns are 
> considered and responded to before the agency before the agency acts. 
> (Which is partly why sending "I'm for XYZ" or "I'm against ABC" really 
> doesn't mean anything to an agency - not only is that not a legal 
> argument or concern, it's also not something where they're obligated 
> to follow these comments - it's not a vote or poll).
>
> That said, political folks have spun things to the public as if it is 
> a poll/vote/chance to act. The raise a valid legal concern part of the 
> APA of 1946 is omitted. Moreover the fact that third party law firms 
> and others like to submit comments on behalf of clients - there will 
> always be a third party submitting multiple comments for their clients 
> (or "clients") because that's their business.
>
> In the lead up to 2017, the Consumer and Government Affairs Bureau of 
> the FCC got an inquiry from a firm asking how they could submit 1 
> million comments a day on an "upcoming privacy proceeding" (their 
> words, astute observers will note there was no privacy proceeding 
> before the FCC in 2017). When the Bureau asked me, I told them either 
> mail us a CD to upload it or submit one comment with 1 million 
> signatures. To attempt to flood us with 1 million comments a day 
> (aside from the fact who can "predict" having that many daily) would 
> deny resources to others. In the mess that followed, what was released 
> to the public was so redacted you couldn't see the legitimate concerns 
> and better paths that were offered to this entity.
>
> And the FCC isn't alone. EPA, FTC, and other regulatory agencies have 
> had these hijinks for years - and before the Internet it was faxes, 
> mass mimeographs (remember blue ink?), and postcards.The 
> Administrative Conference of the United States (ACUS) - is the body 
> that is supposed to provide consistent guidance for things like this 
> across the U.S. government. I've briefed them and tried to raise 
> awareness of these issues - as I think fundamentally this is a 
> **process** question that once answered, tech can support. However 
> they're not technologies and updating the interpretation of the 
> process isn't something lawyers are apt to do until the evidence that 
> things are in trouble is overwhelming.
>
> 52 folks wrote a letter to them - and to GSA - back in 2020. GSA had a 
> rulemaking of its own on how to improve things, yet oddly never 
> published any of the comments it received (including ours) and closed 
> the rulemaking quietly. Here's the letter: 
> https://tinyurl.com/letter-signed-52-people
>
> And here's an article published in OODAloop about this - and why 
> Generative AI is probably going to make things even more challenging: 
> https://www.oodaloop.com/archive/2023/04/18/why-a-pause-on-ai-development-is-not-the-answer-an-insiders-perspective/
>
> [snippet of the article] *Now in 2023 and Beyond: Proactive Approaches 
> to AI and Society*
>
> Looking to the future, to effectively address the challenges arising 
> from AI, we must foster a proactive, results-oriented, and cooperative 
> approach with the public 
> <https://davidbray.medium.com/challenges-and-needed-new-solutions-for-open-societies-to-maintain-civil-discourse-part-1-b5ea95f8c679>. 
> Think tanks and universities can engage the public in conversations 
> about how to work, live, govern, and co-exist with modern technologies 
> that impact society. By involving diverse voices in the 
> decision-making process, we can better address and resolve the complex 
> challenges AI presents on local and national levels.
>
> In addition, we must encourage industry and political leaders to 
> participate in finding non-partisan, multi-sector solutions if civil 
> societies are to remain stable. By working together, we can bridge the 
> gap between technological advancements and their societal implications.
>
> Finally, launching AI pilots across various sectors, such as work, 
> education, health, law, and civil society, is essential. We must learn 
> by doing on how we can create responsible civil environments where AIs 
> can be developed and deployed responsibly. These initiatives can help 
> us better understand and integrate AI into our lives, ensuring its 
> potential is harnessed for the greater good while mitigating risks.
>
> In 2019 and 2020, a group of fifty-two people asked the Administrative 
> Conference of the United States 
> <https://tinyurl.com/letter-signed-52-people>(which helps guide 
> rulemaking procedures for federal agencies), General Accounting 
> Office, and the General Services Administration to call attention to 
> the need to address the challenges of chatbots flooding public 
> commenting procedures and potentially crowding out or denying services 
> to actual humans wanting to leave a comment. We asked 
> <https://davidbray.medium.com/challenges-and-needed-new-solutions-for-open-societies-to-maintain-civil-discourse-part-1-b5ea95f8c679>: 
>
>
> *1. Does identity matter regarding who files a comment or not — and 
> must one be a U.S. person in order to file?*
>
> *2. Should agencies publish real-time counts of the number of comments 
> received — or is it better to wait until the end of a commenting round 
> to make all comments available, including counts?*
>
> *3. Should third-party groups be able to file on behalf of someone 
> else or not — and do agencies have the right to remove spam-like 
> comments?*
>
> *4. Should the public commenting process permit multiple comments per 
> individual for a proceeding — and if so, how many comments from a 
> single individual are too many? 100? 1000? More?*
>
> *5. Finally, should the U.S. government itself consider, given public 
> perceptions about potential conflicts of interest for any agency 
> performing a public commenting process, whether it would be better to 
> have third-party groups take responsibility for assembling comments 
> and then filing those comments via a validated process with the 
> government?*
>
> These same questions need pragmatic pilots that involve the public to 
> co-explore and co-develop how we operate effectively amid these 
> technological shifts 
> <https://davidbray.medium.com/challenges-and-needed-new-solutions-for-open-societies-to-maintain-civil-discourse-part-2-2f637c472112>. 
> As the capabilities of LLMs continue to grow, we need positive change 
> agents willing to tackle the messy issues at the intersection of 
> technology and society. The challenges are immense, but so too are the 
> opportunities for positive change. Let’s seize this moment to create a 
> better tomorrow for all. Working together, we can co-create a future 
> that embraces AI’s potential while mitigating its risks 
> <https://medium.com/peoplecentered/the-need-for-people-centered-sources-of-hope-for-our-digital-future-ahead-ef491dd2703d>, 
> informed by the hard lessons we have already learned.
>
> Full article: 
> https://www.oodaloop.com/archive/2023/04/18/why-a-pause-on-ai-development-is-not-the-answer-an-insiders-perspective/
>
> Hope this helps.
>
>
> On Thu, Oct 5, 2023 at 4:44 PM Jack Haverty via Nnagain 
> <nnagain@lists.bufferbloat.net> wrote:
>
>     Thanks for all your efforts to keep the "feedback loop" to the
>     rulemakers functioning!
>
>     I'd like to offer a suggestion for a hopefully politically
>     acceptable way to handle the deluge, derived from my own battles
>     with "email" over the years (decades).
>
>     Back in the 1970s, I implemented one of the first email systems on
>     the Arpanet, under the mentorship of JCR Licklider, who had been
>     pursuing his vision of a "Galactic Network" at ARPA and MIT.   One
>     of the things we discovered was the significance of anonymity.  
>     At the time, anonymity was forbidden on the Arpanet; you needed an
>     account on some computer, protected by passwords, in order to
>     legitimately use the network.   The mechanisms were crude and
>     easily broken, but the principle applied.
>
>     Over the years, that principle has been forgotten, and the right
>     to be anonymous has become entrenched.   But many uses of the
>     network, and needs of its users, demand accountability, so all
>     sorts of mechanisms have been pasted on top of the network to
>     provide ways to judge user identity.  Banks, medical services,
>     governments, and businesses all demand some way of proving your
>     identity, with passwords, various schemes of 2FA, VPNs, or other
>     such technology, with varying degrees of protection.   It is still
>     possible to be anonymous on the net, but many things you do
>     require you to prove, to some extent, who you are.
>
>     So, my suggestion for handling the deluge of "comments" is:
>
>     1/ create some mechanism for "registering" your intent to submit a
>     comment.   Make it hard for bots to register. Perhaps you can
>     leverage the work of various partners, e.g., ISPs, retailers,
>     government agencies, financial institutions, of others who already
>     have some way of identifying their users.
>
>     2/ Also make registration optional - anyone can still submit
>     comments anonymously if they choose.
>
>     3/ for "registered commenters", provide a way to "edit" your
>     previous comment - i.e., advise that your comment is always the
>     last one you submitted.   I.E., whoever you are, you can only
>     submit one comment, which will be the last one you submit.
>
>     4/ In the thousands of pages of comments, somehow flag the ones
>     that are from registered commenters, visible to the people who
>     read the comments.   Even better, provide those "information
>     consumers" with ways to sort, filter, and search through the body
>     of comments.
>
>     This may not reduce the deluge of comments, but I'd expect it to
>     help the lawyers and politicians keep their heads above the water.
>
>     Anonymity is an important issue for Net Neutrality too, but I'll
>     opine about that separately.....
>
>     Jack Haverty
>
>
>     On 10/2/23 12:38, David Bray, PhD via Nnagain wrote:
>>     Greetings all and thank you Dave Taht for that very kind intro...
>>
>>     First, I'll open with I'm a gosh-darn non-partisan, which means I
>>     swore an oath to uphold the Constitution first and serve the
>>     United States - not a specific party, tribe, or ideology. This
>>     often means, especially in today's era of 24/7 news and social
>>     media, non-partisans have to "top cover".
>>
>>     Second, I'll share that in what happened in 2017 (which itself
>>     was 10x what we saw in 2014) my biggest concern was and remains
>>     that a few actors attempted to flood the system with
>>     less-than-authentic comments.
>>
>>     In some respects this is not new. The whole "notice and comment"
>>     process is a legacy process that goes back decades. And the FCC
>>     (and others) have had postcard floods of comments, mimeographed
>>     letters of comments, faxed floods of comments, and now this -
>>     which, when combined with generative AI, will be yet another flood.
>>
>>     Which gets me to my biggest concern as a non-partisan in
>>     2023-2024, namely how LLMs might misuse and abuse the commenting
>>     process further.
>>
>>     Both in 2014 and 2017, I asked FCC General Counsel if I could use
>>     CAPTChA to try to reduce the volume of web scrapers or bots both
>>     filing and pulling info from the Electronic Comment Filing System.
>>
>>     Both times I was told *no* out of concerns that they might
>>     prevent someone from filing. I asked if I could block obvious
>>     spam, defined as someone filing a comment >100 times a minute,
>>     and was similarly told no because one of those possible comments
>>     might be genuine and/or it could be an ex party filing en masse
>>     for others.
>>
>>     For 2017 we had to spin up 30x the number of AWS cloud instances
>>     to handle the load - and this was a flood of comments at 4am,
>>     5am, and 6am ET at night which normally shouldn’t see such
>>     volumes. When I said there was a combination of actual humans
>>     wanting to leave comments and others who were effectively denying
>>     service to others (especially because if anyone wanted to do a
>>     batch upload of 100,000 comments or more they could submit a CSV
>>     file or a comment with 100,000 signatories) - both parties said
>>     no, that couldn’t be happening.
>>
>>     Until 2021 when the NY Attorney General proved that was exactly
>>     what was happening with 18m of the 23m apparently from
>>     non-authentic origin with ~9m from one side of the political
>>     aisle (and six companies) and ~9m from the other side of the
>>     political aisle (and one or more teenagers).
>>
>>     So with Net Neutrality back on the agenda - here’s a simple
>>     prediction, even if the volume of comments is somehow controlled,
>>     10,000+ pages of comments produced by ChatGPT or a different LLM
>>     is both possible and probably will be done. The question is if
>>     someone includes a legitimate legal argument on page 6,517 - will
>>     FCC’s lawyers spot it and respond to it as part of the NPRM?
>>
>>     Hope this helps and with highest regards,
>>
>>     -d.
>>     -- 
>>
>>     Principal, LeadDoAdapt Ventures, Inc.
>>     <https://www.leaddoadapt.com/> & Distinguished Fellow
>>
>>     Henry S. Stimson Center
>>     <https://www.stimson.org/ppl/david-bray/>, Business Executives
>>     for National Security <https://bens.org/people/dr-david-bray/>
>>
>>
>>
>>     On Mon, Oct 2, 2023 at 2:15 PM Dave Taht via Nnagain
>>     <nnagain@lists.bufferbloat.net> wrote:
>>
>>         All:
>>
>>         I have spent the last several days reaching out to as many
>>         people I
>>         know with a deep understanding of the policy and technical issues
>>         surrounding the internet, to participate on this list. I
>>         encourage you
>>         all to reach out on your own, especially to those that you can
>>         constructively and civilly disagree with, and hopefully work
>>         with, to
>>         establish technical steps forward. Quite a few have joined
>>         silently!
>>         So far, 168 people have joined!
>>
>>         Please welcome Dr David Bray[1], a self-described "human
>>         flack jacket"
>>         who, in the last NN debate, stood up for the non -partisan
>>         FCC IT team
>>         that successfully kept the system up 99.4% of the time
>>         despite the
>>         comment floods and network abuses from all sides. He has
>>         shared with
>>         me privately many sad (and some hilarious!) stories of that
>>         era, and I
>>         do kind of hope now, that some of that history surfaces, and
>>         we can
>>         learn from it.
>>
>>         Thank you very much, David, for putting down your painful
>>         memories[2],
>>         and agreeing to join here. There is a lot to tackle here, going
>>         forward.
>>
>>         [1] https://www.stimson.org/ppl/david-bray/
>>         [2] "Pain shared is reduced. Joy shared, increased." - Spider
>>         Robinson
>>
>>
>>         -- 
>>         Oct 30:
>>         https://netdevconf.info/0x17/news/the-maestro-and-the-music-bof.html
>>         Dave Täht CSO, LibreQos
>>         _______________________________________________
>>         Nnagain mailing list
>>         Nnagain@lists.bufferbloat.net
>>         https://lists.bufferbloat.net/listinfo/nnagain
>>
>>
>>     _______________________________________________
>>     Nnagain mailing list
>>     Nnagain@lists.bufferbloat.net
>>     https://lists.bufferbloat.net/listinfo/nnagain
>
>     _______________________________________________
>     Nnagain mailing list
>     Nnagain@lists.bufferbloat.net
>     https://lists.bufferbloat.net/listinfo/nnagain
>


[-- Attachment #1.1.2: Type: text/html, Size: 29468 bytes --]

[-- Attachment #2: OpenPGP digital signature --]
[-- Type: application/pgp-signature, Size: 665 bytes --]

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [NNagain] Introduction: Dr. David Bray
  2023-10-07 20:10       ` Jack Haverty
@ 2023-10-09 23:21         ` David Bray, PhD
  2023-10-09 23:46           ` Vint Cerf
  0 siblings, 1 reply; 15+ messages in thread
From: David Bray, PhD @ 2023-10-09 23:21 UTC (permalink / raw)
  To: Jack Haverty
  Cc: Network Neutrality is back! Let´s make the technical
	aspects heard this time!

[-- Attachment #1: Type: text/plain, Size: 20050 bytes --]

I'm all for doing new things to make things better.

At the same time, I used to do bioterrorism preparedness and response from
2000-2005 (and aside from asking myself what kind of crazy world needed
counter-bioterrorism efforts... I also realized you don't want to interject
something completely new in the middle of an unfolding crisis event). If
something were to be injected now, it would have to have consensus from
both sides, otherwise at least one side (potentially detractors from both)
will claim that whatever form the new approaches take are somehow
advantaging "the other side" and disadvantaging them.

Probably would take a ruling by the Administrative Conference of the United
States, at a minimum to answer these five questions - and even then,
introducing something completely different in the midst of a political
melee might just invite mudslinging unless moderate voices on both sides
can reach some consensus.

*1. Does identity matter regarding who files a comment or not — and must
one be a U.S. person in order to file?*

*2. Should agencies publish real-time counts of the number of comments
received — or is it better to wait until the end of a commenting round to
make all comments available, including counts?*

*3. Should third-party groups be able to file on behalf of someone else or
not — and do agencies have the right to remove spam-like comments?*

*4. Should the public commenting process permit multiple comments per
individual for a proceeding — and if so, how many comments from a single
individual are too many? 100? 1000? More?*

*5. Finally, should the U.S. government itself consider, given public
perceptions about potential conflicts of interest for any agency performing
a public commenting process, whether it would be better to have third-party
groups take responsibility for assembling comments and then filing those
comments via a validated process with the government?*


On Sat, Oct 7, 2023 at 4:10 PM Jack Haverty <jack@3kitty.org> wrote:

> Hi again David et al,
>
> Interesting frenzy...lots of questions that need answers and associated
> policies.   I served 6 years as an elected official (in a small special
> district in California), so I have some small understanding of the
> government side of things and the constraints involved.   Being in charge
> doesn't mean you can do what you want.
>
> I'm thinking here more near-term and incremental steps.  You said "These
> same questions need pragmatic pilots that involve the public ..."
>
> So, how about using the current NN situation for a pilot?  Keep all the
> current ways and emerging AI techniques to continue to flood the system
> with comments.   But also offer an *optional* way for humans to "register"
> as a commenter and then submit their (latest only) comment into the melee.
> Will people use it?  Will "consumers" (the lawyers, commissioners, etc.)
> find it useful?
>
> I've found it curious, for decades now, that there are (too many)
> mechanisms for "secure email", that may help with the flood of
> disinformation from anonymous senders, but very very few people use them.
> Maybe they don't know how; maybe the available schemes are too flawed;
> maybe ...?
>
> About 30 years ago, I was a speaker in a public meeting orchestrated by
> USPS, and recommended that they take a lead role, e.g., by acting as a
> national CA - certificate authority.  Never happened though.   FCC issues
> lots of licenses...perhaps they could issue online credentials too?
>
> Perhaps a "pilot" where you will also accept comments by email, some
> possibly sent by "verified" humans if they understand how to do so, would
> be worth trying?   Perhaps comments on "technical aspects" coming from
> people who demonstrably know how to use technology would be valuable to the
> policy makers?
>
> The Internet, and technology such as TCP, began as an experimental pilot
> about 50 years ago.  Sometimes pilots become infrastructures.
>
> FYI, I'm signing this message.  Using OpenPGP.  I could encrypt it also,
> but my email program can't find your public key.
>
> Jack Haverty
>
>
> On 10/5/23 14:21, David Bray, PhD wrote:
>
> Indeed Jack - a few things to balance - the Administrative Procedure Act
> of 1946 (on which the idea of rulemaking is based) us about raising legal
> concerns that must be answered by the agency at the time the rulemaking is
> done. It's not a vote nor is it the case that if the agency gets tons of
> comments in one direction that they have to go in that direction. Instead
> it's only about making sure legal concerns are considered and responded to
> before the agency before the agency acts. (Which is partly why sending "I'm
> for XYZ" or "I'm against ABC" really doesn't mean anything to an agency -
> not only is that not a legal argument or concern, it's also not something
> where they're obligated to follow these comments - it's not a vote or
> poll).
>
> That said, political folks have spun things to the public as if it is a
> poll/vote/chance to act. The raise a valid legal concern part of the APA of
> 1946 is omitted. Moreover the fact that third party law firms and others
> like to submit comments on behalf of clients - there will always be a third
> party submitting multiple comments for their clients (or "clients") because
> that's their business.
>
> In the lead up to 2017, the Consumer and Government Affairs Bureau of the
> FCC got an inquiry from a firm asking how they could submit 1 million
> comments a day on an "upcoming privacy proceeding" (their words, astute
> observers will note there was no privacy proceeding before the FCC in
> 2017). When the Bureau asked me, I told them either mail us a CD to upload
> it or submit one comment with 1 million signatures. To attempt to flood us
> with 1 million comments a day (aside from the fact who can "predict" having
> that many daily) would deny resources to others. In the mess that followed,
> what was released to the public was so redacted you couldn't see the
> legitimate concerns and better paths that were offered to this entity.
>
> And the FCC isn't alone. EPA, FTC, and other regulatory agencies have had
> these hijinks for years - and before the Internet it was faxes, mass
> mimeographs (remember blue ink?), and postcards.The Administrative
> Conference of the United States (ACUS) - is the body that is supposed to
> provide consistent guidance for things like this across the U.S.
> government. I've briefed them and tried to raise awareness of these issues
> - as I think fundamentally this is a **process** question that once
> answered, tech can support. However they're not technologies and updating
> the interpretation of the process isn't something lawyers are apt to do
> until the evidence that things are in trouble is overwhelming.
>
> 52 folks wrote a letter to them - and to GSA - back in 2020. GSA had a
> rulemaking of its own on how to improve things, yet oddly never published
> any of the comments it received (including ours) and closed the rulemaking
> quietly. Here's the letter: https://tinyurl.com/letter-signed-52-people
>
> And here's an article published in OODAloop about this - and why
> Generative AI is probably going to make things even more challenging:
> https://www.oodaloop.com/archive/2023/04/18/why-a-pause-on-ai-development-is-not-the-answer-an-insiders-perspective/
>
> [snippet of the article] *Now in 2023 and Beyond: Proactive Approaches to
> AI and Society*
>
> Looking to the future, to effectively address the challenges arising from
> AI, we must foster a proactive, results-oriented, and cooperative
> approach with the public
> <https://davidbray.medium.com/challenges-and-needed-new-solutions-for-open-societies-to-maintain-civil-discourse-part-1-b5ea95f8c679>.
> Think tanks and universities can engage the public in conversations about
> how to work, live, govern, and co-exist with modern technologies that
> impact society. By involving diverse voices in the decision-making process,
> we can better address and resolve the complex challenges AI presents on
> local and national levels.
>
> In addition, we must encourage industry and political leaders to
> participate in finding non-partisan, multi-sector solutions if civil
> societies are to remain stable. By working together, we can bridge the gap
> between technological advancements and their societal implications.
>
> Finally, launching AI pilots across various sectors, such as work,
> education, health, law, and civil society, is essential. We must learn by
> doing on how we can create responsible civil environments where AIs can be
> developed and deployed responsibly. These initiatives can help us better
> understand and integrate AI into our lives, ensuring its potential is
> harnessed for the greater good while mitigating risks.
>
> In 2019 and 2020, a group of fifty-two people asked the Administrative
> Conference of the United States
> <https://tinyurl.com/letter-signed-52-people>(which helps guide
> rulemaking procedures for federal agencies), General Accounting Office, and
> the General Services Administration to call attention to the need to
> address the challenges of chatbots flooding public commenting procedures
> and potentially crowding out or denying services to actual humans wanting
> to leave a comment. We asked
> <https://davidbray.medium.com/challenges-and-needed-new-solutions-for-open-societies-to-maintain-civil-discourse-part-1-b5ea95f8c679>
> :
>
> *1. Does identity matter regarding who files a comment or not — and must
> one be a U.S. person in order to file?*
>
> *2. Should agencies publish real-time counts of the number of comments
> received — or is it better to wait until the end of a commenting round to
> make all comments available, including counts?*
>
> *3. Should third-party groups be able to file on behalf of someone else or
> not — and do agencies have the right to remove spam-like comments?*
>
> *4. Should the public commenting process permit multiple comments per
> individual for a proceeding — and if so, how many comments from a single
> individual are too many? 100? 1000? More?*
>
> *5. Finally, should the U.S. government itself consider, given public
> perceptions about potential conflicts of interest for any agency performing
> a public commenting process, whether it would be better to have third-party
> groups take responsibility for assembling comments and then filing those
> comments via a validated process with the government?*
>
> These same questions need pragmatic pilots that involve the public to co-explore
> and co-develop how we operate effectively amid these technological shifts
> <https://davidbray.medium.com/challenges-and-needed-new-solutions-for-open-societies-to-maintain-civil-discourse-part-2-2f637c472112>.
> As the capabilities of LLMs continue to grow, we need positive change
> agents willing to tackle the messy issues at the intersection of technology
> and society. The challenges are immense, but so too are the opportunities
> for positive change. Let’s seize this moment to create a better tomorrow
> for all. Working together, we can co-create a future that embraces AI’s
> potential while mitigating its risks
> <https://medium.com/peoplecentered/the-need-for-people-centered-sources-of-hope-for-our-digital-future-ahead-ef491dd2703d>,
> informed by the hard lessons we have already learned.
>
> Full article:
> https://www.oodaloop.com/archive/2023/04/18/why-a-pause-on-ai-development-is-not-the-answer-an-insiders-perspective/
>
> Hope this helps.
>
> On Thu, Oct 5, 2023 at 4:44 PM Jack Haverty via Nnagain <
> nnagain@lists.bufferbloat.net> wrote:
>
>> Thanks for all your efforts to keep the "feedback loop" to the rulemakers
>> functioning!
>>
>> I'd like to offer a suggestion for a hopefully politically acceptable way
>> to handle the deluge, derived from my own battles with "email" over the
>> years (decades).
>>
>> Back in the 1970s, I implemented one of the first email systems on the
>> Arpanet, under the mentorship of JCR Licklider, who had been pursuing his
>> vision of a "Galactic Network" at ARPA and MIT.   One of the things we
>> discovered was the significance of anonymity.   At the time, anonymity was
>> forbidden on the Arpanet; you needed an account on some computer, protected
>> by passwords, in order to legitimately use the network.   The mechanisms
>> were crude and easily broken, but the principle applied.
>>
>> Over the years, that principle has been forgotten, and the right to be
>> anonymous has become entrenched.   But many uses of the network, and needs
>> of its users, demand accountability, so all sorts of mechanisms have been
>> pasted on top of the network to provide ways to judge user identity.
>> Banks, medical services, governments, and businesses all demand some way of
>> proving your identity, with passwords, various schemes of 2FA, VPNs, or
>> other such technology, with varying degrees of protection.   It is still
>> possible to be anonymous on the net, but many things you do require you to
>> prove, to some extent, who you are.
>>
>> So, my suggestion for handling the deluge of "comments" is:
>>
>> 1/ create some mechanism for "registering" your intent to submit a
>> comment.   Make it hard for bots to register.  Perhaps you can leverage the
>> work of various partners, e.g., ISPs, retailers, government agencies,
>> financial institutions, of others who already have some way of identifying
>> their users.
>>
>> 2/ Also make registration optional - anyone can still submit comments
>> anonymously if they choose.
>>
>> 3/ for "registered commenters", provide a way to "edit" your previous
>> comment - i.e., advise that your comment is always the last one you
>> submitted.   I.E., whoever you are, you can only submit one comment, which
>> will be the last one you submit.
>>
>> 4/ In the thousands of pages of comments, somehow flag the ones that are
>> from registered commenters, visible to the people who read the comments.
>> Even better, provide those "information consumers" with ways to sort,
>> filter, and search through the body of comments.
>>
>> This may not reduce the deluge of comments, but I'd expect it to help the
>> lawyers and politicians keep their heads above the water.
>>
>> Anonymity is an important issue for Net Neutrality too, but I'll opine
>> about that separately.....
>>
>> Jack Haverty
>>
>>
>> On 10/2/23 12:38, David Bray, PhD via Nnagain wrote:
>>
>> Greetings all and thank you Dave Taht for that very kind intro...
>>
>> First, I'll open with I'm a gosh-darn non-partisan, which means I swore
>> an oath to uphold the Constitution first and serve the United States - not
>> a specific party, tribe, or ideology. This often means, especially in
>> today's era of 24/7 news and social media, non-partisans have to "top
>> cover".
>>
>> Second, I'll share that in what happened in 2017 (which itself was 10x
>> what we saw in 2014) my biggest concern was and remains that a few actors
>> attempted to flood the system with less-than-authentic comments.
>>
>> In some respects this is not new. The whole "notice and comment" process
>> is a legacy process that goes back decades. And the FCC (and others) have
>> had postcard floods of comments, mimeographed letters of comments, faxed
>> floods of comments, and now this - which, when combined with generative AI,
>> will be yet another flood.
>>
>> Which gets me to my biggest concern as a non-partisan in 2023-2024,
>> namely how LLMs might misuse and abuse the commenting process further.
>>
>> Both in 2014 and 2017, I asked FCC General Counsel if I could use CAPTChA
>> to try to reduce the volume of web scrapers or bots both filing and pulling
>> info from the Electronic Comment Filing System.
>>
>> Both times I was told *no* out of concerns that they might prevent
>> someone from filing. I asked if I could block obvious spam, defined as
>> someone filing a comment >100 times a minute, and was similarly told no
>> because one of those possible comments might be genuine and/or it could be
>> an ex party filing en masse for others.
>>
>> For 2017 we had to spin up 30x the number of AWS cloud instances to
>> handle the load - and this was a flood of comments at 4am, 5am, and 6am ET
>> at night which normally shouldn’t see such volumes. When I said there was a
>> combination of actual humans wanting to leave comments and others who were
>> effectively denying service to others (especially because if anyone wanted
>> to do a batch upload of 100,000 comments or more they could submit a CSV
>> file or a comment with 100,000 signatories) - both parties said no, that
>> couldn’t be happening.
>>
>> Until 2021 when the NY Attorney General proved that was exactly what was
>> happening with 18m of the 23m apparently from non-authentic origin with ~9m
>> from one side of the political aisle (and six companies) and ~9m from the
>> other side of the political aisle (and one or more teenagers).
>>
>> So with Net Neutrality back on the agenda - here’s a simple prediction,
>> even if the volume of comments is somehow controlled, 10,000+ pages of
>> comments produced by ChatGPT or a different LLM is both possible and
>> probably will be done. The question is if someone includes a legitimate
>> legal argument on page 6,517 - will FCC’s lawyers spot it and respond to it
>> as part of the NPRM?
>>
>> Hope this helps and with highest regards,
>>
>> -d.
>> --
>>
>> Principal, LeadDoAdapt Ventures, Inc. <https://www.leaddoadapt.com/> &
>> Distinguished Fellow
>>
>> Henry S. Stimson Center <https://www.stimson.org/ppl/david-bray/>, Business
>> Executives for National Security <https://bens.org/people/dr-david-bray/>
>>
>>
>>
>> On Mon, Oct 2, 2023 at 2:15 PM Dave Taht via Nnagain <
>> nnagain@lists.bufferbloat.net> wrote:
>>
>>> All:
>>>
>>> I have spent the last several days reaching out to as many people I
>>> know with a deep understanding of the policy and technical issues
>>> surrounding the internet, to participate on this list. I encourage you
>>> all to reach out on your own, especially to those that you can
>>> constructively and civilly disagree with, and hopefully work with, to
>>> establish technical steps forward. Quite a few have joined silently!
>>> So far, 168 people have joined!
>>>
>>> Please welcome Dr David Bray[1], a self-described "human flack jacket"
>>> who, in the last NN debate, stood up for the non -partisan FCC IT team
>>> that successfully kept the system up 99.4% of the time despite the
>>> comment floods and network abuses from all sides. He has shared with
>>> me privately many sad (and some hilarious!) stories of that era, and I
>>> do kind of hope now, that some of that history surfaces, and we can
>>> learn from it.
>>>
>>> Thank you very much, David, for putting down your painful memories[2],
>>> and agreeing to join here. There is a lot to tackle here, going
>>> forward.
>>>
>>> [1] https://www.stimson.org/ppl/david-bray/
>>> [2] "Pain shared is reduced. Joy shared, increased." - Spider Robinson
>>>
>>>
>>> --
>>> Oct 30:
>>> https://netdevconf.info/0x17/news/the-maestro-and-the-music-bof.html
>>> Dave Täht CSO, LibreQos
>>> _______________________________________________
>>> Nnagain mailing list
>>> Nnagain@lists.bufferbloat.net
>>> https://lists.bufferbloat.net/listinfo/nnagain
>>>
>>
>> _______________________________________________
>> Nnagain mailing listNnagain@lists.bufferbloat.nethttps://lists.bufferbloat.net/listinfo/nnagain
>>
>>
>> _______________________________________________
>> Nnagain mailing list
>> Nnagain@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/nnagain
>>
>
>

[-- Attachment #2: Type: text/html, Size: 30794 bytes --]

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [NNagain] Introduction: Dr. David Bray
  2023-10-09 23:21         ` David Bray, PhD
@ 2023-10-09 23:46           ` Vint Cerf
  2023-10-09 23:55             ` David Bray, PhD
  0 siblings, 1 reply; 15+ messages in thread
From: Vint Cerf @ 2023-10-09 23:46 UTC (permalink / raw)
  To: Network Neutrality is back! Let´s make the technical
	aspects heard this time!


[-- Attachment #1.1: Type: text/plain, Size: 21350 bytes --]

David, this is a good list.
FACA has rules for public participation, for example.

I think it should be taken into account for any public commenting process
that online (and offline such as USPS or fax and phone calls) that spam and
artificial inflation of comments are possible. Is there any specific
standard for US agency public comment handling? If now, what committees of
the US Congress might have jurisdiction?

v


On Tue, Oct 10, 2023 at 8:22 AM David Bray, PhD via Nnagain <
nnagain@lists.bufferbloat.net> wrote:

> I'm all for doing new things to make things better.
>
> At the same time, I used to do bioterrorism preparedness and response from
> 2000-2005 (and aside from asking myself what kind of crazy world needed
> counter-bioterrorism efforts... I also realized you don't want to interject
> something completely new in the middle of an unfolding crisis event). If
> something were to be injected now, it would have to have consensus from
> both sides, otherwise at least one side (potentially detractors from both)
> will claim that whatever form the new approaches take are somehow
> advantaging "the other side" and disadvantaging them.
>
> Probably would take a ruling by the Administrative Conference of the
> United States, at a minimum to answer these five questions - and even then,
> introducing something completely different in the midst of a political
> melee might just invite mudslinging unless moderate voices on both sides
> can reach some consensus.
>
> *1. Does identity matter regarding who files a comment or not — and must
> one be a U.S. person in order to file?*
>
> *2. Should agencies publish real-time counts of the number of comments
> received — or is it better to wait until the end of a commenting round to
> make all comments available, including counts?*
>
> *3. Should third-party groups be able to file on behalf of someone else or
> not — and do agencies have the right to remove spam-like comments?*
>
> *4. Should the public commenting process permit multiple comments per
> individual for a proceeding — and if so, how many comments from a single
> individual are too many? 100? 1000? More?*
>
> *5. Finally, should the U.S. government itself consider, given public
> perceptions about potential conflicts of interest for any agency performing
> a public commenting process, whether it would be better to have third-party
> groups take responsibility for assembling comments and then filing those
> comments via a validated process with the government?*
>
>
> On Sat, Oct 7, 2023 at 4:10 PM Jack Haverty <jack@3kitty.org> wrote:
>
>> Hi again David et al,
>>
>> Interesting frenzy...lots of questions that need answers and associated
>> policies.   I served 6 years as an elected official (in a small special
>> district in California), so I have some small understanding of the
>> government side of things and the constraints involved.   Being in charge
>> doesn't mean you can do what you want.
>>
>> I'm thinking here more near-term and incremental steps.  You said "These
>> same questions need pragmatic pilots that involve the public ..."
>>
>> So, how about using the current NN situation for a pilot?  Keep all the
>> current ways and emerging AI techniques to continue to flood the system
>> with comments.   But also offer an *optional* way for humans to "register"
>> as a commenter and then submit their (latest only) comment into the melee.
>> Will people use it?  Will "consumers" (the lawyers, commissioners, etc.)
>> find it useful?
>>
>> I've found it curious, for decades now, that there are (too many)
>> mechanisms for "secure email", that may help with the flood of
>> disinformation from anonymous senders, but very very few people use them.
>> Maybe they don't know how; maybe the available schemes are too flawed;
>> maybe ...?
>>
>> About 30 years ago, I was a speaker in a public meeting orchestrated by
>> USPS, and recommended that they take a lead role, e.g., by acting as a
>> national CA - certificate authority.  Never happened though.   FCC issues
>> lots of licenses...perhaps they could issue online credentials too?
>>
>> Perhaps a "pilot" where you will also accept comments by email, some
>> possibly sent by "verified" humans if they understand how to do so, would
>> be worth trying?   Perhaps comments on "technical aspects" coming from
>> people who demonstrably know how to use technology would be valuable to the
>> policy makers?
>>
>> The Internet, and technology such as TCP, began as an experimental pilot
>> about 50 years ago.  Sometimes pilots become infrastructures.
>>
>> FYI, I'm signing this message.  Using OpenPGP.  I could encrypt it also,
>> but my email program can't find your public key.
>>
>> Jack Haverty
>>
>>
>> On 10/5/23 14:21, David Bray, PhD wrote:
>>
>> Indeed Jack - a few things to balance - the Administrative Procedure Act
>> of 1946 (on which the idea of rulemaking is based) us about raising legal
>> concerns that must be answered by the agency at the time the rulemaking is
>> done. It's not a vote nor is it the case that if the agency gets tons of
>> comments in one direction that they have to go in that direction. Instead
>> it's only about making sure legal concerns are considered and responded to
>> before the agency before the agency acts. (Which is partly why sending "I'm
>> for XYZ" or "I'm against ABC" really doesn't mean anything to an agency -
>> not only is that not a legal argument or concern, it's also not something
>> where they're obligated to follow these comments - it's not a vote or
>> poll).
>>
>> That said, political folks have spun things to the public as if it is a
>> poll/vote/chance to act. The raise a valid legal concern part of the APA of
>> 1946 is omitted. Moreover the fact that third party law firms and others
>> like to submit comments on behalf of clients - there will always be a third
>> party submitting multiple comments for their clients (or "clients") because
>> that's their business.
>>
>> In the lead up to 2017, the Consumer and Government Affairs Bureau of the
>> FCC got an inquiry from a firm asking how they could submit 1 million
>> comments a day on an "upcoming privacy proceeding" (their words, astute
>> observers will note there was no privacy proceeding before the FCC in
>> 2017). When the Bureau asked me, I told them either mail us a CD to upload
>> it or submit one comment with 1 million signatures. To attempt to flood us
>> with 1 million comments a day (aside from the fact who can "predict" having
>> that many daily) would deny resources to others. In the mess that followed,
>> what was released to the public was so redacted you couldn't see the
>> legitimate concerns and better paths that were offered to this entity.
>>
>> And the FCC isn't alone. EPA, FTC, and other regulatory agencies have had
>> these hijinks for years - and before the Internet it was faxes, mass
>> mimeographs (remember blue ink?), and postcards.The Administrative
>> Conference of the United States (ACUS) - is the body that is supposed to
>> provide consistent guidance for things like this across the U.S.
>> government. I've briefed them and tried to raise awareness of these issues
>> - as I think fundamentally this is a **process** question that once
>> answered, tech can support. However they're not technologies and updating
>> the interpretation of the process isn't something lawyers are apt to do
>> until the evidence that things are in trouble is overwhelming.
>>
>> 52 folks wrote a letter to them - and to GSA - back in 2020. GSA had a
>> rulemaking of its own on how to improve things, yet oddly never published
>> any of the comments it received (including ours) and closed the rulemaking
>> quietly. Here's the letter: https://tinyurl.com/letter-signed-52-people
>>
>> And here's an article published in OODAloop about this - and why
>> Generative AI is probably going to make things even more challenging:
>> https://www.oodaloop.com/archive/2023/04/18/why-a-pause-on-ai-development-is-not-the-answer-an-insiders-perspective/
>>
>> [snippet of the article] *Now in 2023 and Beyond: Proactive Approaches
>> to AI and Society*
>>
>> Looking to the future, to effectively address the challenges arising from
>> AI, we must foster a proactive, results-oriented, and cooperative
>> approach with the public
>> <https://davidbray.medium.com/challenges-and-needed-new-solutions-for-open-societies-to-maintain-civil-discourse-part-1-b5ea95f8c679>.
>> Think tanks and universities can engage the public in conversations about
>> how to work, live, govern, and co-exist with modern technologies that
>> impact society. By involving diverse voices in the decision-making process,
>> we can better address and resolve the complex challenges AI presents on
>> local and national levels.
>>
>> In addition, we must encourage industry and political leaders to
>> participate in finding non-partisan, multi-sector solutions if civil
>> societies are to remain stable. By working together, we can bridge the gap
>> between technological advancements and their societal implications.
>>
>> Finally, launching AI pilots across various sectors, such as work,
>> education, health, law, and civil society, is essential. We must learn by
>> doing on how we can create responsible civil environments where AIs can be
>> developed and deployed responsibly. These initiatives can help us better
>> understand and integrate AI into our lives, ensuring its potential is
>> harnessed for the greater good while mitigating risks.
>>
>> In 2019 and 2020, a group of fifty-two people asked the Administrative
>> Conference of the United States
>> <https://tinyurl.com/letter-signed-52-people>(which helps guide
>> rulemaking procedures for federal agencies), General Accounting Office, and
>> the General Services Administration to call attention to the need to
>> address the challenges of chatbots flooding public commenting procedures
>> and potentially crowding out or denying services to actual humans wanting
>> to leave a comment. We asked
>> <https://davidbray.medium.com/challenges-and-needed-new-solutions-for-open-societies-to-maintain-civil-discourse-part-1-b5ea95f8c679>
>> :
>>
>> *1. Does identity matter regarding who files a comment or not — and must
>> one be a U.S. person in order to file?*
>>
>> *2. Should agencies publish real-time counts of the number of comments
>> received — or is it better to wait until the end of a commenting round to
>> make all comments available, including counts?*
>>
>> *3. Should third-party groups be able to file on behalf of someone else
>> or not — and do agencies have the right to remove spam-like comments?*
>>
>> *4. Should the public commenting process permit multiple comments per
>> individual for a proceeding — and if so, how many comments from a single
>> individual are too many? 100? 1000? More?*
>>
>> *5. Finally, should the U.S. government itself consider, given public
>> perceptions about potential conflicts of interest for any agency performing
>> a public commenting process, whether it would be better to have third-party
>> groups take responsibility for assembling comments and then filing those
>> comments via a validated process with the government?*
>>
>> These same questions need pragmatic pilots that involve the public to co-explore
>> and co-develop how we operate effectively amid these technological shifts
>> <https://davidbray.medium.com/challenges-and-needed-new-solutions-for-open-societies-to-maintain-civil-discourse-part-2-2f637c472112>.
>> As the capabilities of LLMs continue to grow, we need positive change
>> agents willing to tackle the messy issues at the intersection of technology
>> and society. The challenges are immense, but so too are the opportunities
>> for positive change. Let’s seize this moment to create a better tomorrow
>> for all. Working together, we can co-create a future that embraces AI’s
>> potential while mitigating its risks
>> <https://medium.com/peoplecentered/the-need-for-people-centered-sources-of-hope-for-our-digital-future-ahead-ef491dd2703d>,
>> informed by the hard lessons we have already learned.
>>
>> Full article:
>> https://www.oodaloop.com/archive/2023/04/18/why-a-pause-on-ai-development-is-not-the-answer-an-insiders-perspective/
>>
>> Hope this helps.
>>
>> On Thu, Oct 5, 2023 at 4:44 PM Jack Haverty via Nnagain <
>> nnagain@lists.bufferbloat.net> wrote:
>>
>>> Thanks for all your efforts to keep the "feedback loop" to the
>>> rulemakers functioning!
>>>
>>> I'd like to offer a suggestion for a hopefully politically acceptable
>>> way to handle the deluge, derived from my own battles with "email" over the
>>> years (decades).
>>>
>>> Back in the 1970s, I implemented one of the first email systems on the
>>> Arpanet, under the mentorship of JCR Licklider, who had been pursuing his
>>> vision of a "Galactic Network" at ARPA and MIT.   One of the things we
>>> discovered was the significance of anonymity.   At the time, anonymity was
>>> forbidden on the Arpanet; you needed an account on some computer, protected
>>> by passwords, in order to legitimately use the network.   The mechanisms
>>> were crude and easily broken, but the principle applied.
>>>
>>> Over the years, that principle has been forgotten, and the right to be
>>> anonymous has become entrenched.   But many uses of the network, and needs
>>> of its users, demand accountability, so all sorts of mechanisms have been
>>> pasted on top of the network to provide ways to judge user identity.
>>> Banks, medical services, governments, and businesses all demand some way of
>>> proving your identity, with passwords, various schemes of 2FA, VPNs, or
>>> other such technology, with varying degrees of protection.   It is still
>>> possible to be anonymous on the net, but many things you do require you to
>>> prove, to some extent, who you are.
>>>
>>> So, my suggestion for handling the deluge of "comments" is:
>>>
>>> 1/ create some mechanism for "registering" your intent to submit a
>>> comment.   Make it hard for bots to register.  Perhaps you can leverage the
>>> work of various partners, e.g., ISPs, retailers, government agencies,
>>> financial institutions, of others who already have some way of identifying
>>> their users.
>>>
>>> 2/ Also make registration optional - anyone can still submit comments
>>> anonymously if they choose.
>>>
>>> 3/ for "registered commenters", provide a way to "edit" your previous
>>> comment - i.e., advise that your comment is always the last one you
>>> submitted.   I.E., whoever you are, you can only submit one comment, which
>>> will be the last one you submit.
>>>
>>> 4/ In the thousands of pages of comments, somehow flag the ones that are
>>> from registered commenters, visible to the people who read the comments.
>>> Even better, provide those "information consumers" with ways to sort,
>>> filter, and search through the body of comments.
>>>
>>> This may not reduce the deluge of comments, but I'd expect it to help
>>> the lawyers and politicians keep their heads above the water.
>>>
>>> Anonymity is an important issue for Net Neutrality too, but I'll opine
>>> about that separately.....
>>>
>>> Jack Haverty
>>>
>>>
>>> On 10/2/23 12:38, David Bray, PhD via Nnagain wrote:
>>>
>>> Greetings all and thank you Dave Taht for that very kind intro...
>>>
>>> First, I'll open with I'm a gosh-darn non-partisan, which means I swore
>>> an oath to uphold the Constitution first and serve the United States - not
>>> a specific party, tribe, or ideology. This often means, especially in
>>> today's era of 24/7 news and social media, non-partisans have to "top
>>> cover".
>>>
>>> Second, I'll share that in what happened in 2017 (which itself was 10x
>>> what we saw in 2014) my biggest concern was and remains that a few actors
>>> attempted to flood the system with less-than-authentic comments.
>>>
>>> In some respects this is not new. The whole "notice and comment" process
>>> is a legacy process that goes back decades. And the FCC (and others) have
>>> had postcard floods of comments, mimeographed letters of comments, faxed
>>> floods of comments, and now this - which, when combined with generative AI,
>>> will be yet another flood.
>>>
>>> Which gets me to my biggest concern as a non-partisan in 2023-2024,
>>> namely how LLMs might misuse and abuse the commenting process further.
>>>
>>> Both in 2014 and 2017, I asked FCC General Counsel if I could use
>>> CAPTChA to try to reduce the volume of web scrapers or bots both filing and
>>> pulling info from the Electronic Comment Filing System.
>>>
>>> Both times I was told *no* out of concerns that they might prevent
>>> someone from filing. I asked if I could block obvious spam, defined as
>>> someone filing a comment >100 times a minute, and was similarly told no
>>> because one of those possible comments might be genuine and/or it could be
>>> an ex party filing en masse for others.
>>>
>>> For 2017 we had to spin up 30x the number of AWS cloud instances to
>>> handle the load - and this was a flood of comments at 4am, 5am, and 6am ET
>>> at night which normally shouldn’t see such volumes. When I said there was a
>>> combination of actual humans wanting to leave comments and others who were
>>> effectively denying service to others (especially because if anyone wanted
>>> to do a batch upload of 100,000 comments or more they could submit a CSV
>>> file or a comment with 100,000 signatories) - both parties said no, that
>>> couldn’t be happening.
>>>
>>> Until 2021 when the NY Attorney General proved that was exactly what was
>>> happening with 18m of the 23m apparently from non-authentic origin with ~9m
>>> from one side of the political aisle (and six companies) and ~9m from the
>>> other side of the political aisle (and one or more teenagers).
>>>
>>> So with Net Neutrality back on the agenda - here’s a simple prediction,
>>> even if the volume of comments is somehow controlled, 10,000+ pages of
>>> comments produced by ChatGPT or a different LLM is both possible and
>>> probably will be done. The question is if someone includes a legitimate
>>> legal argument on page 6,517 - will FCC’s lawyers spot it and respond to it
>>> as part of the NPRM?
>>>
>>> Hope this helps and with highest regards,
>>>
>>> -d.
>>> --
>>>
>>> Principal, LeadDoAdapt Ventures, Inc. <https://www.leaddoadapt.com/> &
>>> Distinguished Fellow
>>>
>>> Henry S. Stimson Center <https://www.stimson.org/ppl/david-bray/>, Business
>>> Executives for National Security
>>> <https://bens.org/people/dr-david-bray/>
>>>
>>>
>>>
>>> On Mon, Oct 2, 2023 at 2:15 PM Dave Taht via Nnagain <
>>> nnagain@lists.bufferbloat.net> wrote:
>>>
>>>> All:
>>>>
>>>> I have spent the last several days reaching out to as many people I
>>>> know with a deep understanding of the policy and technical issues
>>>> surrounding the internet, to participate on this list. I encourage you
>>>> all to reach out on your own, especially to those that you can
>>>> constructively and civilly disagree with, and hopefully work with, to
>>>> establish technical steps forward. Quite a few have joined silently!
>>>> So far, 168 people have joined!
>>>>
>>>> Please welcome Dr David Bray[1], a self-described "human flack jacket"
>>>> who, in the last NN debate, stood up for the non -partisan FCC IT team
>>>> that successfully kept the system up 99.4% of the time despite the
>>>> comment floods and network abuses from all sides. He has shared with
>>>> me privately many sad (and some hilarious!) stories of that era, and I
>>>> do kind of hope now, that some of that history surfaces, and we can
>>>> learn from it.
>>>>
>>>> Thank you very much, David, for putting down your painful memories[2],
>>>> and agreeing to join here. There is a lot to tackle here, going
>>>> forward.
>>>>
>>>> [1] https://www.stimson.org/ppl/david-bray/
>>>> [2] "Pain shared is reduced. Joy shared, increased." - Spider Robinson
>>>>
>>>>
>>>> --
>>>> Oct 30:
>>>> https://netdevconf.info/0x17/news/the-maestro-and-the-music-bof.html
>>>> Dave Täht CSO, LibreQos
>>>> _______________________________________________
>>>> Nnagain mailing list
>>>> Nnagain@lists.bufferbloat.net
>>>> https://lists.bufferbloat.net/listinfo/nnagain
>>>>
>>>
>>> _______________________________________________
>>> Nnagain mailing listNnagain@lists.bufferbloat.nethttps://lists.bufferbloat.net/listinfo/nnagain
>>>
>>>
>>> _______________________________________________
>>> Nnagain mailing list
>>> Nnagain@lists.bufferbloat.net
>>> https://lists.bufferbloat.net/listinfo/nnagain
>>>
>>
>> _______________________________________________
> Nnagain mailing list
> Nnagain@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/nnagain
>


-- 
Please send any postal/overnight deliveries to:
Vint Cerf
Google, LLC
1900 Reston Metro Plaza, 16th Floor
Reston, VA 20190
+1 (571) 213 1346


until further notice

[-- Attachment #1.2: Type: text/html, Size: 32521 bytes --]

[-- Attachment #2: S/MIME Cryptographic Signature --]
[-- Type: application/pkcs7-signature, Size: 3995 bytes --]

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [NNagain] Introduction: Dr. David Bray
  2023-10-09 23:46           ` Vint Cerf
@ 2023-10-09 23:55             ` David Bray, PhD
  2023-10-10  2:56               ` Jack Haverty
  0 siblings, 1 reply; 15+ messages in thread
From: David Bray, PhD @ 2023-10-09 23:55 UTC (permalink / raw)
  To: Vint Cerf
  Cc: Network Neutrality is back! Let´s make the technical
	aspects heard this time!,
	Jack Haverty

[-- Attachment #1: Type: text/plain, Size: 23139 bytes --]

Great points Vint as you're absolutely right - there are multiple
modalities here (and in the past it was spam from thousands of postcards,
then mimeographs, then faxes, etc.)

The standard historically has been set by the Administrative Conference of
the United States: https://www.acus.gov/about-acus

In 2020 there seemed to be an effort to gave the General Services
Administration weigh-in, however they closed that rulemaking attempt
without publishing any of the comments they got and no announcement why it
was closed.

As for what part of Congress - I believe ACUS was championed by both the
Senate and House Judiciary Committees as it has oversight and
responsibility for the interpretations of the Administrative Procedure Act
of 1946 (which sets out the whole rulemaking procedure).

Sadly there isn't a standard across agencies - which also means there isn't
a standard across Administrations. Back in 2018 and 2020, both with this
group of 52 people here https://tinyurl.com/letter-signed-52-people - as
well as individually - I did my darnest to encourage them to do a standard.

There's also the National Academy of Public Administration which is
probably the latest remaining non-partisan forum for discussions like this
too.


On Mon, Oct 9, 2023 at 7:46 PM Vint Cerf <vint@google.com> wrote:

> David, this is a good list.
> FACA has rules for public participation, for example.
>
> I think it should be taken into account for any public commenting process
> that online (and offline such as USPS or fax and phone calls) that spam and
> artificial inflation of comments are possible. Is there any specific
> standard for US agency public comment handling? If now, what committees of
> the US Congress might have jurisdiction?
>
> v
>
>
> On Tue, Oct 10, 2023 at 8:22 AM David Bray, PhD via Nnagain <
> nnagain@lists.bufferbloat.net> wrote:
>
>> I'm all for doing new things to make things better.
>>
>> At the same time, I used to do bioterrorism preparedness and response
>> from 2000-2005 (and aside from asking myself what kind of crazy world
>> needed counter-bioterrorism efforts... I also realized you don't want to
>> interject something completely new in the middle of an unfolding crisis
>> event). If something were to be injected now, it would have to have
>> consensus from both sides, otherwise at least one side (potentially
>> detractors from both) will claim that whatever form the new approaches take
>> are somehow advantaging "the other side" and disadvantaging them.
>>
>> Probably would take a ruling by the Administrative Conference of the
>> United States, at a minimum to answer these five questions - and even then,
>> introducing something completely different in the midst of a political
>> melee might just invite mudslinging unless moderate voices on both sides
>> can reach some consensus.
>>
>> *1. Does identity matter regarding who files a comment or not — and must
>> one be a U.S. person in order to file?*
>>
>> *2. Should agencies publish real-time counts of the number of comments
>> received — or is it better to wait until the end of a commenting round to
>> make all comments available, including counts?*
>>
>> *3. Should third-party groups be able to file on behalf of someone else
>> or not — and do agencies have the right to remove spam-like comments?*
>>
>> *4. Should the public commenting process permit multiple comments per
>> individual for a proceeding — and if so, how many comments from a single
>> individual are too many? 100? 1000? More?*
>>
>> *5. Finally, should the U.S. government itself consider, given public
>> perceptions about potential conflicts of interest for any agency performing
>> a public commenting process, whether it would be better to have third-party
>> groups take responsibility for assembling comments and then filing those
>> comments via a validated process with the government?*
>>
>>
>> On Sat, Oct 7, 2023 at 4:10 PM Jack Haverty <jack@3kitty.org> wrote:
>>
>>> Hi again David et al,
>>>
>>> Interesting frenzy...lots of questions that need answers and associated
>>> policies.   I served 6 years as an elected official (in a small special
>>> district in California), so I have some small understanding of the
>>> government side of things and the constraints involved.   Being in charge
>>> doesn't mean you can do what you want.
>>>
>>> I'm thinking here more near-term and incremental steps.  You said "These
>>> same questions need pragmatic pilots that involve the public ..."
>>>
>>> So, how about using the current NN situation for a pilot?  Keep all the
>>> current ways and emerging AI techniques to continue to flood the system
>>> with comments.   But also offer an *optional* way for humans to "register"
>>> as a commenter and then submit their (latest only) comment into the melee.
>>> Will people use it?  Will "consumers" (the lawyers, commissioners, etc.)
>>> find it useful?
>>>
>>> I've found it curious, for decades now, that there are (too many)
>>> mechanisms for "secure email", that may help with the flood of
>>> disinformation from anonymous senders, but very very few people use them.
>>> Maybe they don't know how; maybe the available schemes are too flawed;
>>> maybe ...?
>>>
>>> About 30 years ago, I was a speaker in a public meeting orchestrated by
>>> USPS, and recommended that they take a lead role, e.g., by acting as a
>>> national CA - certificate authority.  Never happened though.   FCC issues
>>> lots of licenses...perhaps they could issue online credentials too?
>>>
>>> Perhaps a "pilot" where you will also accept comments by email, some
>>> possibly sent by "verified" humans if they understand how to do so, would
>>> be worth trying?   Perhaps comments on "technical aspects" coming from
>>> people who demonstrably know how to use technology would be valuable to the
>>> policy makers?
>>>
>>> The Internet, and technology such as TCP, began as an experimental pilot
>>> about 50 years ago.  Sometimes pilots become infrastructures.
>>>
>>> FYI, I'm signing this message.  Using OpenPGP.  I could encrypt it also,
>>> but my email program can't find your public key.
>>>
>>> Jack Haverty
>>>
>>>
>>> On 10/5/23 14:21, David Bray, PhD wrote:
>>>
>>> Indeed Jack - a few things to balance - the Administrative Procedure Act
>>> of 1946 (on which the idea of rulemaking is based) us about raising legal
>>> concerns that must be answered by the agency at the time the rulemaking is
>>> done. It's not a vote nor is it the case that if the agency gets tons of
>>> comments in one direction that they have to go in that direction. Instead
>>> it's only about making sure legal concerns are considered and responded to
>>> before the agency before the agency acts. (Which is partly why sending "I'm
>>> for XYZ" or "I'm against ABC" really doesn't mean anything to an agency -
>>> not only is that not a legal argument or concern, it's also not something
>>> where they're obligated to follow these comments - it's not a vote or
>>> poll).
>>>
>>> That said, political folks have spun things to the public as if it is a
>>> poll/vote/chance to act. The raise a valid legal concern part of the APA of
>>> 1946 is omitted. Moreover the fact that third party law firms and others
>>> like to submit comments on behalf of clients - there will always be a third
>>> party submitting multiple comments for their clients (or "clients") because
>>> that's their business.
>>>
>>> In the lead up to 2017, the Consumer and Government Affairs Bureau of
>>> the FCC got an inquiry from a firm asking how they could submit 1 million
>>> comments a day on an "upcoming privacy proceeding" (their words, astute
>>> observers will note there was no privacy proceeding before the FCC in
>>> 2017). When the Bureau asked me, I told them either mail us a CD to upload
>>> it or submit one comment with 1 million signatures. To attempt to flood us
>>> with 1 million comments a day (aside from the fact who can "predict" having
>>> that many daily) would deny resources to others. In the mess that followed,
>>> what was released to the public was so redacted you couldn't see the
>>> legitimate concerns and better paths that were offered to this entity.
>>>
>>> And the FCC isn't alone. EPA, FTC, and other regulatory agencies have
>>> had these hijinks for years - and before the Internet it was faxes, mass
>>> mimeographs (remember blue ink?), and postcards.The Administrative
>>> Conference of the United States (ACUS) - is the body that is supposed to
>>> provide consistent guidance for things like this across the U.S.
>>> government. I've briefed them and tried to raise awareness of these issues
>>> - as I think fundamentally this is a **process** question that once
>>> answered, tech can support. However they're not technologies and updating
>>> the interpretation of the process isn't something lawyers are apt to do
>>> until the evidence that things are in trouble is overwhelming.
>>>
>>> 52 folks wrote a letter to them - and to GSA - back in 2020. GSA had a
>>> rulemaking of its own on how to improve things, yet oddly never published
>>> any of the comments it received (including ours) and closed the rulemaking
>>> quietly. Here's the letter: https://tinyurl.com/letter-signed-52-people
>>>
>>> And here's an article published in OODAloop about this - and why
>>> Generative AI is probably going to make things even more challenging:
>>> https://www.oodaloop.com/archive/2023/04/18/why-a-pause-on-ai-development-is-not-the-answer-an-insiders-perspective/
>>>
>>> [snippet of the article] *Now in 2023 and Beyond: Proactive Approaches
>>> to AI and Society*
>>>
>>> Looking to the future, to effectively address the challenges arising
>>> from AI, we must foster a proactive, results-oriented, and cooperative
>>> approach with the public
>>> <https://davidbray.medium.com/challenges-and-needed-new-solutions-for-open-societies-to-maintain-civil-discourse-part-1-b5ea95f8c679>.
>>> Think tanks and universities can engage the public in conversations about
>>> how to work, live, govern, and co-exist with modern technologies that
>>> impact society. By involving diverse voices in the decision-making process,
>>> we can better address and resolve the complex challenges AI presents on
>>> local and national levels.
>>>
>>> In addition, we must encourage industry and political leaders to
>>> participate in finding non-partisan, multi-sector solutions if civil
>>> societies are to remain stable. By working together, we can bridge the gap
>>> between technological advancements and their societal implications.
>>>
>>> Finally, launching AI pilots across various sectors, such as work,
>>> education, health, law, and civil society, is essential. We must learn by
>>> doing on how we can create responsible civil environments where AIs can be
>>> developed and deployed responsibly. These initiatives can help us better
>>> understand and integrate AI into our lives, ensuring its potential is
>>> harnessed for the greater good while mitigating risks.
>>>
>>> In 2019 and 2020, a group of fifty-two people asked the Administrative
>>> Conference of the United States
>>> <https://tinyurl.com/letter-signed-52-people>(which helps guide
>>> rulemaking procedures for federal agencies), General Accounting Office, and
>>> the General Services Administration to call attention to the need to
>>> address the challenges of chatbots flooding public commenting procedures
>>> and potentially crowding out or denying services to actual humans wanting
>>> to leave a comment. We asked
>>> <https://davidbray.medium.com/challenges-and-needed-new-solutions-for-open-societies-to-maintain-civil-discourse-part-1-b5ea95f8c679>
>>> :
>>>
>>> *1. Does identity matter regarding who files a comment or not — and must
>>> one be a U.S. person in order to file?*
>>>
>>> *2. Should agencies publish real-time counts of the number of comments
>>> received — or is it better to wait until the end of a commenting round to
>>> make all comments available, including counts?*
>>>
>>> *3. Should third-party groups be able to file on behalf of someone else
>>> or not — and do agencies have the right to remove spam-like comments?*
>>>
>>> *4. Should the public commenting process permit multiple comments per
>>> individual for a proceeding — and if so, how many comments from a single
>>> individual are too many? 100? 1000? More?*
>>>
>>> *5. Finally, should the U.S. government itself consider, given public
>>> perceptions about potential conflicts of interest for any agency performing
>>> a public commenting process, whether it would be better to have third-party
>>> groups take responsibility for assembling comments and then filing those
>>> comments via a validated process with the government?*
>>>
>>> These same questions need pragmatic pilots that involve the public to co-explore
>>> and co-develop how we operate effectively amid these technological shifts
>>> <https://davidbray.medium.com/challenges-and-needed-new-solutions-for-open-societies-to-maintain-civil-discourse-part-2-2f637c472112>.
>>> As the capabilities of LLMs continue to grow, we need positive change
>>> agents willing to tackle the messy issues at the intersection of technology
>>> and society. The challenges are immense, but so too are the opportunities
>>> for positive change. Let’s seize this moment to create a better tomorrow
>>> for all. Working together, we can co-create a future that embraces AI’s
>>> potential while mitigating its risks
>>> <https://medium.com/peoplecentered/the-need-for-people-centered-sources-of-hope-for-our-digital-future-ahead-ef491dd2703d>,
>>> informed by the hard lessons we have already learned.
>>>
>>> Full article:
>>> https://www.oodaloop.com/archive/2023/04/18/why-a-pause-on-ai-development-is-not-the-answer-an-insiders-perspective/
>>>
>>> Hope this helps.
>>>
>>> On Thu, Oct 5, 2023 at 4:44 PM Jack Haverty via Nnagain <
>>> nnagain@lists.bufferbloat.net> wrote:
>>>
>>>> Thanks for all your efforts to keep the "feedback loop" to the
>>>> rulemakers functioning!
>>>>
>>>> I'd like to offer a suggestion for a hopefully politically acceptable
>>>> way to handle the deluge, derived from my own battles with "email" over the
>>>> years (decades).
>>>>
>>>> Back in the 1970s, I implemented one of the first email systems on the
>>>> Arpanet, under the mentorship of JCR Licklider, who had been pursuing his
>>>> vision of a "Galactic Network" at ARPA and MIT.   One of the things we
>>>> discovered was the significance of anonymity.   At the time, anonymity was
>>>> forbidden on the Arpanet; you needed an account on some computer, protected
>>>> by passwords, in order to legitimately use the network.   The mechanisms
>>>> were crude and easily broken, but the principle applied.
>>>>
>>>> Over the years, that principle has been forgotten, and the right to be
>>>> anonymous has become entrenched.   But many uses of the network, and needs
>>>> of its users, demand accountability, so all sorts of mechanisms have been
>>>> pasted on top of the network to provide ways to judge user identity.
>>>> Banks, medical services, governments, and businesses all demand some way of
>>>> proving your identity, with passwords, various schemes of 2FA, VPNs, or
>>>> other such technology, with varying degrees of protection.   It is still
>>>> possible to be anonymous on the net, but many things you do require you to
>>>> prove, to some extent, who you are.
>>>>
>>>> So, my suggestion for handling the deluge of "comments" is:
>>>>
>>>> 1/ create some mechanism for "registering" your intent to submit a
>>>> comment.   Make it hard for bots to register.  Perhaps you can leverage the
>>>> work of various partners, e.g., ISPs, retailers, government agencies,
>>>> financial institutions, of others who already have some way of identifying
>>>> their users.
>>>>
>>>> 2/ Also make registration optional - anyone can still submit comments
>>>> anonymously if they choose.
>>>>
>>>> 3/ for "registered commenters", provide a way to "edit" your previous
>>>> comment - i.e., advise that your comment is always the last one you
>>>> submitted.   I.E., whoever you are, you can only submit one comment, which
>>>> will be the last one you submit.
>>>>
>>>> 4/ In the thousands of pages of comments, somehow flag the ones that
>>>> are from registered commenters, visible to the people who read the
>>>> comments.   Even better, provide those "information consumers" with ways to
>>>> sort, filter, and search through the body of comments.
>>>>
>>>> This may not reduce the deluge of comments, but I'd expect it to help
>>>> the lawyers and politicians keep their heads above the water.
>>>>
>>>> Anonymity is an important issue for Net Neutrality too, but I'll opine
>>>> about that separately.....
>>>>
>>>> Jack Haverty
>>>>
>>>>
>>>> On 10/2/23 12:38, David Bray, PhD via Nnagain wrote:
>>>>
>>>> Greetings all and thank you Dave Taht for that very kind intro...
>>>>
>>>> First, I'll open with I'm a gosh-darn non-partisan, which means I swore
>>>> an oath to uphold the Constitution first and serve the United States - not
>>>> a specific party, tribe, or ideology. This often means, especially in
>>>> today's era of 24/7 news and social media, non-partisans have to "top
>>>> cover".
>>>>
>>>> Second, I'll share that in what happened in 2017 (which itself was 10x
>>>> what we saw in 2014) my biggest concern was and remains that a few actors
>>>> attempted to flood the system with less-than-authentic comments.
>>>>
>>>> In some respects this is not new. The whole "notice and comment"
>>>> process is a legacy process that goes back decades. And the FCC (and
>>>> others) have had postcard floods of comments, mimeographed letters of
>>>> comments, faxed floods of comments, and now this - which, when combined
>>>> with generative AI, will be yet another flood.
>>>>
>>>> Which gets me to my biggest concern as a non-partisan in 2023-2024,
>>>> namely how LLMs might misuse and abuse the commenting process further.
>>>>
>>>> Both in 2014 and 2017, I asked FCC General Counsel if I could use
>>>> CAPTChA to try to reduce the volume of web scrapers or bots both filing and
>>>> pulling info from the Electronic Comment Filing System.
>>>>
>>>> Both times I was told *no* out of concerns that they might prevent
>>>> someone from filing. I asked if I could block obvious spam, defined as
>>>> someone filing a comment >100 times a minute, and was similarly told no
>>>> because one of those possible comments might be genuine and/or it could be
>>>> an ex party filing en masse for others.
>>>>
>>>> For 2017 we had to spin up 30x the number of AWS cloud instances to
>>>> handle the load - and this was a flood of comments at 4am, 5am, and 6am ET
>>>> at night which normally shouldn’t see such volumes. When I said there was a
>>>> combination of actual humans wanting to leave comments and others who were
>>>> effectively denying service to others (especially because if anyone wanted
>>>> to do a batch upload of 100,000 comments or more they could submit a CSV
>>>> file or a comment with 100,000 signatories) - both parties said no, that
>>>> couldn’t be happening.
>>>>
>>>> Until 2021 when the NY Attorney General proved that was exactly what
>>>> was happening with 18m of the 23m apparently from non-authentic origin with
>>>> ~9m from one side of the political aisle (and six companies) and ~9m from
>>>> the other side of the political aisle (and one or more teenagers).
>>>>
>>>> So with Net Neutrality back on the agenda - here’s a simple prediction,
>>>> even if the volume of comments is somehow controlled, 10,000+ pages of
>>>> comments produced by ChatGPT or a different LLM is both possible and
>>>> probably will be done. The question is if someone includes a legitimate
>>>> legal argument on page 6,517 - will FCC’s lawyers spot it and respond to it
>>>> as part of the NPRM?
>>>>
>>>> Hope this helps and with highest regards,
>>>>
>>>> -d.
>>>> --
>>>>
>>>> Principal, LeadDoAdapt Ventures, Inc. <https://www.leaddoadapt.com/> &
>>>> Distinguished Fellow
>>>>
>>>> Henry S. Stimson Center <https://www.stimson.org/ppl/david-bray/>, Business
>>>> Executives for National Security
>>>> <https://bens.org/people/dr-david-bray/>
>>>>
>>>>
>>>>
>>>> On Mon, Oct 2, 2023 at 2:15 PM Dave Taht via Nnagain <
>>>> nnagain@lists.bufferbloat.net> wrote:
>>>>
>>>>> All:
>>>>>
>>>>> I have spent the last several days reaching out to as many people I
>>>>> know with a deep understanding of the policy and technical issues
>>>>> surrounding the internet, to participate on this list. I encourage you
>>>>> all to reach out on your own, especially to those that you can
>>>>> constructively and civilly disagree with, and hopefully work with, to
>>>>> establish technical steps forward. Quite a few have joined silently!
>>>>> So far, 168 people have joined!
>>>>>
>>>>> Please welcome Dr David Bray[1], a self-described "human flack jacket"
>>>>> who, in the last NN debate, stood up for the non -partisan FCC IT team
>>>>> that successfully kept the system up 99.4% of the time despite the
>>>>> comment floods and network abuses from all sides. He has shared with
>>>>> me privately many sad (and some hilarious!) stories of that era, and I
>>>>> do kind of hope now, that some of that history surfaces, and we can
>>>>> learn from it.
>>>>>
>>>>> Thank you very much, David, for putting down your painful memories[2],
>>>>> and agreeing to join here. There is a lot to tackle here, going
>>>>> forward.
>>>>>
>>>>> [1] https://www.stimson.org/ppl/david-bray/
>>>>> [2] "Pain shared is reduced. Joy shared, increased." - Spider Robinson
>>>>>
>>>>>
>>>>> --
>>>>> Oct 30:
>>>>> https://netdevconf.info/0x17/news/the-maestro-and-the-music-bof.html
>>>>> Dave Täht CSO, LibreQos
>>>>> _______________________________________________
>>>>> Nnagain mailing list
>>>>> Nnagain@lists.bufferbloat.net
>>>>> https://lists.bufferbloat.net/listinfo/nnagain
>>>>>
>>>>
>>>> _______________________________________________
>>>> Nnagain mailing listNnagain@lists.bufferbloat.nethttps://lists.bufferbloat.net/listinfo/nnagain
>>>>
>>>>
>>>> _______________________________________________
>>>> Nnagain mailing list
>>>> Nnagain@lists.bufferbloat.net
>>>> https://lists.bufferbloat.net/listinfo/nnagain
>>>>
>>>
>>> _______________________________________________
>> Nnagain mailing list
>> Nnagain@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/nnagain
>>
>
>
> --
> Please send any postal/overnight deliveries to:
> Vint Cerf
> Google, LLC
> 1900 Reston Metro Plaza, 16th Floor
> Reston, VA 20190
> +1 (571) 213 1346
>
>
> until further notice
>
>
>
>

[-- Attachment #2: Type: text/html, Size: 34416 bytes --]

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [NNagain] Introduction: Dr. David Bray
  2023-10-09 23:55             ` David Bray, PhD
@ 2023-10-10  2:56               ` Jack Haverty
  2023-10-10 15:29                 ` [NNagain] somewhat OT: Licklidder Dave Taht
  0 siblings, 1 reply; 15+ messages in thread
From: Jack Haverty @ 2023-10-10  2:56 UTC (permalink / raw)
  To: David Bray, PhD, Vint Cerf
  Cc: Network Neutrality is back! Let´s make the technical
	aspects heard this time!

[-- Attachment #1: Type: text/plain, Size: 38361 bytes --]

IMHO, the problem may be that the Internet, and computing technology in 
general, is so new that non-technical organizations, such as government 
entities, don't understand it and therefore can't figure out whether or 
how to regulate anything involved.

In other, older, "technologies", rules, procedures, and traditions have 
developed over the years to provide for feedback and control between 
governees and governors.  Roberts Rules of Order was created 150 years 
ago, and is still widely used to manage public meetings. I've been in 
local meetings where everyone gets a chance to speak, but are limited to 
a few minutes to say whatever's on their mind. You have to appear in 
person, wait your turn, and make your comment.  Doing so is free, but 
still has the cost of time and hassle to get to the meeting.

Organizations have figured out over the years how to manage meetings.  
[Vint - remember the "Rathole!" mechanism that we used to keep Internet 
meetings on track...?]

 From what David describes, it sounds like the current "public comment" 
mechanisms in the electronic arena are only at the stage where the 
loudest voices can drown out all others, and public debates are 
essentially useless cacophonies of the loudest proponents of the various 
viewpoints.   There are no rules.   Why should anyone submit their own 
sensible comments, knowing they'll be lost in the noise?

In non-electronic public forums, such behavior is ruled out, and if it 
persists, the governing body can have offenders ejected, adjourn a 
meeting until cooler heads prevail, or otherwise make the discourse 
useful for informing decisions.   Courts can issue restraining orders, 
but has any court ever issued such an order applying to an electronic forum?

So, why haven't organizations yet developed rules and mechanisms for 
managing electronic discussions....?

I'd offer two observations and suggestions.

-----

First, a major reason for a lack of such rules and mechanisms may be an 
educational gap.  Administrators, politicians, and staffers may simply 
not understand all this newfangled technology, or how it works, and are 
drowning in a sea of terminology, acronyms, and concepts that make no 
sense (to them).   In the FCC case, even the technical gurus may have 
deep knowledge of their traditional realm of telephony, radio, and 
related issues and policy tradeoffs.   But they may be largely ignorant 
of computing and networking equivalents.   Probably even worse, they may 
unconsciously consider the new world as a simple evolution of the old, 
not recognizing the impact of incredibly fast computers and 
communications, and the advances that they enable, such as "AI" - 
whatever that is...

About 10 years ago, I accidentally got involved in a patent dispute to 
be an "expert witness", for a patent involving downloading new programs 
over a communications path into a remote computer (yes, what all our 
devices do almost every day).   I was astounded when I learned how 
little the "judicial system" (lawyers, judges, legislators, etc.) knew 
about computer and network technology. That didn't stop them from 
debating the meaning of technical terms. What is RAM?  How does 
"programming" differ from "reprogramming"? What is "memory"?  What is a 
"processor"?   What is an "operating system"?   The arguments continue 
until eventually a judge declares what the answer is, with little 
technical knowledge or expertise to help.   So you can easily get 
legally binding definitions such as "operating system" means "Windows", 
and that all computers contain an operating system.

I spent hours on the phone over about 18 months, explaining to the 
lawyers how computers and networks actually worked.   In turn, they 
taught me quite a lot about the vagaries of the laws and patents. It was 
fascinating but also disturbing to see how ill-prepared the legal system 
was for new technologies.

So, my suggestion is that a focus be placed on helping the non-technical 
decision makers understand the nuances of computing and the Internet.  I 
don't think that will be successful by burying them in the sea of 
technical jargon and acronyms.

Before I retired, I spent a lot of time with C-suite denizens from 
companies outside of the technology industry - banks, manufacturers, 
transportation, etc. - helping them understand what "The Internet" was, 
and help them see it as both a huge opportunity and a huge threat to 
their businesses.  One technique I used was simply stolen from the early 
days of The Internet.

When we were involved in designing the internal mechanisms of the 
Internet, in particular TCPV4, we didn't know much about networks 
either.  So we used analogies.  In particular we used the existing 
transportation infrastructure as a model.   Moving bits around the world 
isn't all that different from moving goods and people.   But everyone, 
even with no technical expertise, knows about transportation.

It turns out that there are a lot of useful analogies.  For example, we 
recognized that there were different kinds of "traffic" with different 
needs.  Coal for power plants was important, but not urgent.  If a coal 
train waits on a siding while a passenger train passes, it's OK, even 
preferred.   There could be different "types of service" available from 
the transportation infrastructure.   At the time (late 1970s) we didn't 
know exactly how to do that, but decided to put a field in the IP header 
as a placeholder - the "TOS" field.  Figuring out what different TOSes 
there should be, and how they would be handled differently, was still on 
the to-do list. There are even analogies to the Internet - goods might 
travel over a "marine network" to a "port", where they are moved onto a 
"rail network", to a distributor, and moved on the highway network to 
their final destination.  Routers, gateways, ...

Other transportation analogies reinforced the notion of TOS.  E.g., if 
you're sending a document somewhere, you can choose how to send it - 
normal postal mail, or Priority Mail, or even use a different "network" 
such as an overnight delivery service.  Different TOS would engage 
different behaviors of the underlying communications system, and might 
also have different costs to use them.  Sending a ton of coal to get 
delivered in a week or two would cost a lot less than sending a ton of 
documents for overnight delivery.

There were other transportation analogies heard during the TCPV4 design 
discussions - e.g., "Expressway Routing" (do you take a direct route 
over local streets, or go to the freeway even though it's longer) and 
"Multi-Homing" (your manufacturing plant has access to both a highway 
and a rail line).

Suggestion -- I suspect that using a familiar infrastructure such as 
transport to discuss issues with non-technical decision makers would be 
helpful.  E.g., imagine what would happen if some particular "net 
neutrality" set of rules was placed on the transportation 
infrastructure?   Would it have a desirable effect?

-----

Second, in addition to anonymity as an important issue in the electronic 
world, my experience as a mentee of Licklider surfaced another important 
issue in the "galactic network" vision -- "Back Pressure".     The 
notion is based in existing knowledge. Economics has notions of Supply 
and Demand and Cost Curves. Engineering has the notion of "Negative 
Feedback" to stabilize mechanical, electrical, or other systems.

We discussed Back Pressure, in the mid 70s, in the context of electronic 
mail, and tried to get the notion of "stamps" accepted as part of the 
email mechanisms.  The basic idea was that there had to be some form of 
"back pressure" to prevent overload by discouraging sending of huge 
quantities of mail.

At the time, mail traffic was light, since every message was typed by 
hand by some user.  In Lick's group we had experimented with using email 
as a way for computer programs to interact.  In Lick's vision, humans 
would interact by using their computers as their agents.   Even then, 
computers could send email a lot faster and continuously than any human 
at a keyboard, and could easily flood the network.  [This epiphany 
occurred shortly after a mistake in configuring distribution lists 
caused so many messages and replies that our machine crashed as its disk 
space ran out.]

"Stamps" didn't necessarily represent monetary cost.  Back pressure 
could be simple constraints, e.g., no user can send more than 500 (or 
whatever) messages per day.   This notion never got enough support to 
become part of the email standards; I still think it would help with the 
deluge of spam we all experience today.

Back Pressure in the Internet today is largely non-existent.  I (or my 
AI and computers) can send as much email as I like. Communications 
carriers promote "unlimited data" but won't guarantee anything.   Memory 
has become cheap, and as a result behaviors such as "buffer bloat" have 
appeared.

Suggestion - educate the decision-makers about Back Pressure, using 
highway analogies (metering lights, etc.)

-----

Education about the new technology, but by using some familiar analogs, 
and introduction of Back Pressure, in some appropriate form, as part of 
a "network neutrality" policy, would be the two foci I'd recommend.

My prior suggestion of "registration" and accepting only the last 
comment was based on the observations above.  Back pressure doesn't have 
to be monetary, and registered users don't have to be personally 
identified.   Simply making it sufficiently "hard" to register (using 
CAPTCHAs, 2FA, whatever) would be a "cost" discouraging "loud voices".   
Even the law firms submitting millions of comments on behalf of their 
clients might balk at the cost (in labor not money) to register their 
million clients, even anonymously, so each could get his/her comment 
submitted.   Of course, they could always pass the costs on to their 
(million? really?) clients.  But it would still be Back Pressure.

One possibility -- make the "cost" of submitting a million electronic 
comments equal to the cost of submitting a million postcards...?

Jack Haverty


On 10/9/23 16:55, David Bray, PhD wrote:
> Great points Vint as you're absolutely right - there are multiple 
> modalities here (and in the past it was spam from thousands of 
> postcards, then mimeographs, then faxes, etc.)
>
> The standard historically has been set by the Administrative 
> Conference of the United States: https://www.acus.gov/about-acus
>
> In 2020 there seemed to be an effort to gave the General Services 
> Administration weigh-in, however they closed that rulemaking attempt 
> without publishing any of the comments they got and no announcement 
> why it was closed.
>
> As for what part of Congress - I believe ACUS was championed by both 
> the Senate and House Judiciary Committees as it has oversight and 
> responsibility for the interpretations of the Administrative Procedure 
> Act of 1946 (which sets out the whole rulemaking procedure).
>
> Sadly there isn't a standard across agencies - which also means there 
> isn't a standard across Administrations. Back in 2018 and 2020, both 
> with this group of 52 people here 
> https://tinyurl.com/letter-signed-52-people - as well as individually 
> - I did my darnest to encourage them to do a standard.
>
> There's also the National Academy of Public Administration which is 
> probably the latest remaining non-partisan forum for discussions like 
> this too.
>
>
> On Mon, Oct 9, 2023 at 7:46 PM Vint Cerf <vint@google.com> wrote:
>
>     David, this is a good list.
>     FACA has rules for public participation, for example.
>
>     I think it should be taken into account for any public commenting
>     process that online (and offline such as USPS or fax and phone
>     calls) that spam and artificial inflation of comments are
>     possible. Is there any specific standard for US agency public
>     comment handling? If now, what committees of the US Congress might
>     have jurisdiction?
>
>     v
>
>
>     On Tue, Oct 10, 2023 at 8:22 AM David Bray, PhD via Nnagain
>     <nnagain@lists.bufferbloat.net> wrote:
>
>         I'm all for doing new things to make things better.
>
>         At the same time, I used to do bioterrorism preparedness and
>         response from 2000-2005 (and aside from asking myself what
>         kind of crazy world needed counter-bioterrorism efforts... I
>         also realized you don't want to interject something completely
>         new in the middle of an unfolding crisis event). If something
>         were to be injected now, it would have to have consensus from
>         both sides, otherwise at least one side (potentially
>         detractors from both) will claim that whatever form the new
>         approaches take are somehow advantaging "the other side" and
>         disadvantaging them.
>
>         Probably would take a ruling by the Administrative Conference
>         of the United States, at a minimum to answer these five
>         questions - and even then, introducing something completely
>         different in the midst of a political melee might just invite
>         mudslinging unless moderate voices on both sides can reach
>         some consensus.
>
>         *1. Does identity matter regarding who files a comment or not
>         — and must one be a U.S. person in order to file?*
>
>         *2. Should agencies publish real-time counts of the number of
>         comments received — or is it better to wait until the end of a
>         commenting round to make all comments available, including
>         counts?*
>
>         *3. Should third-party groups be able to file on behalf of
>         someone else or not — and do agencies have the right to remove
>         spam-like comments?*
>
>         *4. Should the public commenting process permit multiple
>         comments per individual for a proceeding — and if so, how many
>         comments from a single individual are too many? 100? 1000? More?*
>
>         *5. Finally, should the U.S. government itself consider, given
>         public perceptions about potential conflicts of interest for
>         any agency performing a public commenting process, whether it
>         would be better to have third-party groups take responsibility
>         for assembling comments and then filing those comments via a
>         validated process with the government?*
>
>
>
>         On Sat, Oct 7, 2023 at 4:10 PM Jack Haverty <jack@3kitty.org>
>         wrote:
>
>             Hi again David et al,
>
>             Interesting frenzy...lots of questions that need answers
>             and associated policies.   I served 6 years as an elected
>             official (in a small special district in California), so I
>             have some small understanding of the government side of
>             things and the constraints involved.   Being in charge
>             doesn't mean you can do what you want.
>
>             I'm thinking here more near-term and incremental steps. 
>             You said "These same questions need pragmatic pilots that
>             involve the public ..."
>
>             So, how about using the current NN situation for a pilot? 
>             Keep all the current ways and emerging AI techniques to
>             continue to flood the system with comments.   But also
>             offer an *optional* way for humans to "register" as a
>             commenter and then submit their (latest only) comment into
>             the melee.  Will people use it?  Will "consumers" (the
>             lawyers, commissioners, etc.) find it useful?
>
>             I've found it curious, for decades now, that there are
>             (too many) mechanisms for "secure email", that may help
>             with the flood of disinformation from anonymous senders,
>             but very very few people use them.   Maybe they don't know
>             how; maybe the available schemes are too flawed; maybe ...?
>
>             About 30 years ago, I was a speaker in a public meeting
>             orchestrated by USPS, and recommended that they take a
>             lead role, e.g., by acting as a national CA - certificate
>             authority.  Never happened though.   FCC issues lots of
>             licenses...perhaps they could issue online credentials too?
>
>             Perhaps a "pilot" where you will also accept comments by
>             email, some possibly sent by "verified" humans if they
>             understand how to do so, would be worth trying?   Perhaps
>             comments on "technical aspects" coming from people who
>             demonstrably know how to use technology would be valuable
>             to the policy makers?
>
>             The Internet, and technology such as TCP, began as an
>             experimental pilot about 50 years ago.  Sometimes pilots
>             become infrastructures.
>
>             FYI, I'm signing this message.  Using OpenPGP.  I could
>             encrypt it also, but my email program can't find your
>             public key.
>
>             Jack Haverty
>
>
>             On 10/5/23 14:21, David Bray, PhD wrote:
>>             Indeed Jack - a few things to balance - the
>>             Administrative Procedure Act of 1946 (on which the idea
>>             of rulemaking is based) us about raising legal concerns
>>             that must be answered by the agency at the time the
>>             rulemaking is done. It's not a vote nor is it the case
>>             that if the agency gets tons of comments in one direction
>>             that they have to go in that direction. Instead it's only
>>             about making sure legal concerns are considered and
>>             responded to before the agency before the agency acts.
>>             (Which is partly why sending "I'm for XYZ" or "I'm
>>             against ABC" really doesn't mean anything to an agency -
>>             not only is that not a legal argument or concern, it's
>>             also not something where they're obligated to follow
>>             these comments - it's not a vote or poll).
>>
>>             That said, political folks have spun things to the public
>>             as if it is a poll/vote/chance to act. The raise a valid
>>             legal concern part of the APA of 1946 is omitted.
>>             Moreover the fact that third party law firms and others
>>             like to submit comments on behalf of clients - there will
>>             always be a third party submitting multiple comments for
>>             their clients (or "clients") because that's their business.
>>
>>             In the lead up to 2017, the Consumer and Government
>>             Affairs Bureau of the FCC got an inquiry from a firm
>>             asking how they could submit 1 million comments a day on
>>             an "upcoming privacy proceeding" (their words, astute
>>             observers will note there was no privacy proceeding
>>             before the FCC in 2017). When the Bureau asked me, I told
>>             them either mail us a CD to upload it or submit one
>>             comment with 1 million signatures. To attempt to flood us
>>             with 1 million comments a day (aside from the fact who
>>             can "predict" having that many daily) would deny
>>             resources to others. In the mess that followed, what was
>>             released to the public was so redacted you couldn't see
>>             the legitimate concerns and better paths that were
>>             offered to this entity.
>>
>>             And the FCC isn't alone. EPA, FTC, and other regulatory
>>             agencies have had these hijinks for years - and before
>>             the Internet it was faxes, mass mimeographs (remember
>>             blue ink?), and postcards.The Administrative Conference
>>             of the United States (ACUS) - is the body that is
>>             supposed to provide consistent guidance for things like
>>             this across the U.S. government. I've briefed them and
>>             tried to raise awareness of these issues - as I think
>>             fundamentally this is a **process** question that once
>>             answered, tech can support. However they're not
>>             technologies and updating the interpretation of the
>>             process isn't something lawyers are apt to do until the
>>             evidence that things are in trouble is overwhelming.
>>
>>             52 folks wrote a letter to them - and to GSA - back in
>>             2020. GSA had a rulemaking of its own on how to improve
>>             things, yet oddly never published any of the comments it
>>             received (including ours) and closed the rulemaking
>>             quietly. Here's the letter:
>>             https://tinyurl.com/letter-signed-52-people
>>
>>             And here's an article published in OODAloop about this -
>>             and why Generative AI is probably going to make things
>>             even more challenging:
>>             https://www.oodaloop.com/archive/2023/04/18/why-a-pause-on-ai-development-is-not-the-answer-an-insiders-perspective/
>>
>>             [snippet of the article] *Now in 2023 and Beyond:
>>             Proactive Approaches to AI and Society*
>>
>>             Looking to the future, to effectively address the
>>             challenges arising from AI, we must foster a proactive,
>>             results-oriented, and cooperative approach with the
>>             public
>>             <https://davidbray.medium.com/challenges-and-needed-new-solutions-for-open-societies-to-maintain-civil-discourse-part-1-b5ea95f8c679>.
>>             Think tanks and universities can engage the public in
>>             conversations about how to work, live, govern, and
>>             co-exist with modern technologies that impact society. By
>>             involving diverse voices in the decision-making process,
>>             we can better address and resolve the complex challenges
>>             AI presents on local and national levels.
>>
>>             In addition, we must encourage industry and political
>>             leaders to participate in finding non-partisan,
>>             multi-sector solutions if civil societies are to remain
>>             stable. By working together, we can bridge the gap
>>             between technological advancements and their societal
>>             implications.
>>
>>             Finally, launching AI pilots across various sectors, such
>>             as work, education, health, law, and civil society, is
>>             essential. We must learn by doing on how we can create
>>             responsible civil environments where AIs can be developed
>>             and deployed responsibly. These initiatives can help us
>>             better understand and integrate AI into our lives,
>>             ensuring its potential is harnessed for the greater good
>>             while mitigating risks.
>>
>>             In 2019 and 2020, a group of fifty-two people asked the
>>             Administrative Conference of the United States
>>             <https://tinyurl.com/letter-signed-52-people>(which helps
>>             guide rulemaking procedures for federal agencies),
>>             General Accounting Office, and the General Services
>>             Administration to call attention to the need to address
>>             the challenges of chatbots flooding public commenting
>>             procedures and potentially crowding out or denying
>>             services to actual humans wanting to leave a comment. We
>>             asked
>>             <https://davidbray.medium.com/challenges-and-needed-new-solutions-for-open-societies-to-maintain-civil-discourse-part-1-b5ea95f8c679>:
>>
>>
>>             *1. Does identity matter regarding who files a comment or
>>             not — and must one be a U.S. person in order to file?*
>>
>>             *2. Should agencies publish real-time counts of the
>>             number of comments received — or is it better to wait
>>             until the end of a commenting round to make all comments
>>             available, including counts?*
>>
>>             *3. Should third-party groups be able to file on behalf
>>             of someone else or not — and do agencies have the right
>>             to remove spam-like comments?*
>>
>>             *4. Should the public commenting process permit multiple
>>             comments per individual for a proceeding — and if so, how
>>             many comments from a single individual are too many? 100?
>>             1000? More?*
>>
>>             *5. Finally, should the U.S. government itself consider,
>>             given public perceptions about potential conflicts of
>>             interest for any agency performing a public commenting
>>             process, whether it would be better to have third-party
>>             groups take responsibility for assembling comments and
>>             then filing those comments via a validated process with
>>             the government?*
>>
>>             These same questions need pragmatic pilots that involve
>>             the public to co-explore and co-develop how we operate
>>             effectively amid these technological shifts
>>             <https://davidbray.medium.com/challenges-and-needed-new-solutions-for-open-societies-to-maintain-civil-discourse-part-2-2f637c472112>.
>>             As the capabilities of LLMs continue to grow, we need
>>             positive change agents willing to tackle the messy issues
>>             at the intersection of technology and society. The
>>             challenges are immense, but so too are the opportunities
>>             for positive change. Let’s seize this moment to create a
>>             better tomorrow for all. Working together, we can
>>             co-create a future that embraces AI’s potential while
>>             mitigating its risks
>>             <https://medium.com/peoplecentered/the-need-for-people-centered-sources-of-hope-for-our-digital-future-ahead-ef491dd2703d>,
>>             informed by the hard lessons we have already learned.
>>
>>             Full article:
>>             https://www.oodaloop.com/archive/2023/04/18/why-a-pause-on-ai-development-is-not-the-answer-an-insiders-perspective/
>>
>>             Hope this helps.
>>
>>
>>             On Thu, Oct 5, 2023 at 4:44 PM Jack Haverty via Nnagain
>>             <nnagain@lists.bufferbloat.net> wrote:
>>
>>                 Thanks for all your efforts to keep the "feedback
>>                 loop" to the rulemakers functioning!
>>
>>                 I'd like to offer a suggestion for a hopefully
>>                 politically acceptable way to handle the deluge,
>>                 derived from my own battles with "email" over the
>>                 years (decades).
>>
>>                 Back in the 1970s, I implemented one of the first
>>                 email systems on the Arpanet, under the mentorship of
>>                 JCR Licklider, who had been pursuing his vision of a
>>                 "Galactic Network" at ARPA and MIT.   One of the
>>                 things we discovered was the significance of
>>                 anonymity.   At the time, anonymity was forbidden on
>>                 the Arpanet; you needed an account on some computer,
>>                 protected by passwords, in order to legitimately use
>>                 the network.   The mechanisms were crude and easily
>>                 broken, but the principle applied.
>>
>>                 Over the years, that principle has been forgotten,
>>                 and the right to be anonymous has become
>>                 entrenched.   But many uses of the network, and needs
>>                 of its users, demand accountability, so all sorts of
>>                 mechanisms have been pasted on top of the network to
>>                 provide ways to judge user identity.  Banks, medical
>>                 services, governments, and businesses all demand some
>>                 way of proving your identity, with passwords, various
>>                 schemes of 2FA, VPNs, or other such technology, with
>>                 varying degrees of protection.   It is still possible
>>                 to be anonymous on the net, but many things you do
>>                 require you to prove, to some extent, who you are.
>>
>>                 So, my suggestion for handling the deluge of
>>                 "comments" is:
>>
>>                 1/ create some mechanism for "registering" your
>>                 intent to submit a comment.   Make it hard for bots
>>                 to register.  Perhaps you can leverage the work of
>>                 various partners, e.g., ISPs, retailers, government
>>                 agencies, financial institutions, of others who
>>                 already have some way of identifying their users.
>>
>>                 2/ Also make registration optional - anyone can still
>>                 submit comments anonymously if they choose.
>>
>>                 3/ for "registered commenters", provide a way to
>>                 "edit" your previous comment - i.e., advise that your
>>                 comment is always the last one you submitted.   I.E.,
>>                 whoever you are, you can only submit one comment,
>>                 which will be the last one you submit.
>>
>>                 4/ In the thousands of pages of comments, somehow
>>                 flag the ones that are from registered commenters,
>>                 visible to the people who read the comments.   Even
>>                 better, provide those "information consumers" with
>>                 ways to sort, filter, and search through the body of
>>                 comments.
>>
>>                 This may not reduce the deluge of comments, but I'd
>>                 expect it to help the lawyers and politicians keep
>>                 their heads above the water.
>>
>>                 Anonymity is an important issue for Net Neutrality
>>                 too, but I'll opine about that separately.....
>>
>>                 Jack Haverty
>>
>>
>>                 On 10/2/23 12:38, David Bray, PhD via Nnagain wrote:
>>>                 Greetings all and thank you Dave Taht for that very
>>>                 kind intro...
>>>
>>>                 First, I'll open with I'm a gosh-darn non-partisan,
>>>                 which means I swore an oath to uphold the
>>>                 Constitution first and serve the United States - not
>>>                 a specific party, tribe, or ideology. This often
>>>                 means, especially in today's era of 24/7 news and
>>>                 social media, non-partisans have to "top cover".
>>>
>>>                 Second, I'll share that in what happened in 2017
>>>                 (which itself was 10x what we saw in 2014) my
>>>                 biggest concern was and remains that a few actors
>>>                 attempted to flood the system with
>>>                 less-than-authentic comments.
>>>
>>>                 In some respects this is not new. The whole "notice
>>>                 and comment" process is a legacy process that goes
>>>                 back decades. And the FCC (and others) have had
>>>                 postcard floods of comments, mimeographed letters of
>>>                 comments, faxed floods of comments, and now this -
>>>                 which, when combined with generative AI, will be yet
>>>                 another flood.
>>>
>>>                 Which gets me to my biggest concern as a
>>>                 non-partisan in 2023-2024, namely how LLMs might
>>>                 misuse and abuse the commenting process further.
>>>
>>>                 Both in 2014 and 2017, I asked FCC General Counsel
>>>                 if I could use CAPTChA to try to reduce the volume
>>>                 of web scrapers or bots both filing and pulling info
>>>                 from the Electronic Comment Filing System.
>>>
>>>                 Both times I was told *no* out of concerns that they
>>>                 might prevent someone from filing. I asked if I
>>>                 could block obvious spam, defined as someone filing
>>>                 a comment >100 times a minute, and was similarly
>>>                 told no because one of those possible comments might
>>>                 be genuine and/or it could be an ex party filing en
>>>                 masse for others.
>>>
>>>                 For 2017 we had to spin up 30x the number of AWS
>>>                 cloud instances to handle the load - and this was a
>>>                 flood of comments at 4am, 5am, and 6am ET at night
>>>                 which normally shouldn’t see such volumes. When I
>>>                 said there was a combination of actual humans
>>>                 wanting to leave comments and others who were
>>>                 effectively denying service to others (especially
>>>                 because if anyone wanted to do a batch upload of
>>>                 100,000 comments or more they could submit a CSV
>>>                 file or a comment with 100,000 signatories) - both
>>>                 parties said no, that couldn’t be happening.
>>>
>>>                 Until 2021 when the NY Attorney General proved that
>>>                 was exactly what was happening with 18m of the 23m
>>>                 apparently from non-authentic origin with ~9m from
>>>                 one side of the political aisle (and six companies)
>>>                 and ~9m from the other side of the political aisle
>>>                 (and one or more teenagers).
>>>
>>>                 So with Net Neutrality back on the agenda - here’s a
>>>                 simple prediction, even if the volume of comments is
>>>                 somehow controlled, 10,000+ pages of comments
>>>                 produced by ChatGPT or a different LLM is both
>>>                 possible and probably will be done. The question is
>>>                 if someone includes a legitimate legal argument on
>>>                 page 6,517 - will FCC’s lawyers spot it and respond
>>>                 to it as part of the NPRM?
>>>
>>>                 Hope this helps and with highest regards,
>>>
>>>                 -d.
>>>                 -- 
>>>
>>>                 Principal, LeadDoAdapt Ventures, Inc.
>>>                 <https://www.leaddoadapt.com/> & Distinguished Fellow
>>>
>>>                 Henry S. Stimson Center
>>>                 <https://www.stimson.org/ppl/david-bray/>, Business
>>>                 Executives for National Security
>>>                 <https://bens.org/people/dr-david-bray/>
>>>
>>>
>>>
>>>                 On Mon, Oct 2, 2023 at 2:15 PM Dave Taht via Nnagain
>>>                 <nnagain@lists.bufferbloat.net> wrote:
>>>
>>>                     All:
>>>
>>>                     I have spent the last several days reaching out
>>>                     to as many people I
>>>                     know with a deep understanding of the policy and
>>>                     technical issues
>>>                     surrounding the internet, to participate on this
>>>                     list. I encourage you
>>>                     all to reach out on your own, especially to
>>>                     those that you can
>>>                     constructively and civilly disagree with, and
>>>                     hopefully work with, to
>>>                     establish technical steps forward. Quite a few
>>>                     have joined silently!
>>>                     So far, 168 people have joined!
>>>
>>>                     Please welcome Dr David Bray[1], a
>>>                     self-described "human flack jacket"
>>>                     who, in the last NN debate, stood up for the non
>>>                     -partisan FCC IT team
>>>                     that successfully kept the system up 99.4% of
>>>                     the time despite the
>>>                     comment floods and network abuses from all
>>>                     sides. He has shared with
>>>                     me privately many sad (and some hilarious!)
>>>                     stories of that era, and I
>>>                     do kind of hope now, that some of that history
>>>                     surfaces, and we can
>>>                     learn from it.
>>>
>>>                     Thank you very much, David, for putting down
>>>                     your painful memories[2],
>>>                     and agreeing to join here. There is a lot to
>>>                     tackle here, going
>>>                     forward.
>>>
>>>                     [1] https://www.stimson.org/ppl/david-bray/
>>>                     [2] "Pain shared is reduced. Joy shared,
>>>                     increased." - Spider Robinson
>>>
>>>
>>>                     -- 
>>>                     Oct 30:
>>>                     https://netdevconf.info/0x17/news/the-maestro-and-the-music-bof.html
>>>                     Dave Täht CSO, LibreQos
>>>                     _______________________________________________
>>>                     Nnagain mailing list
>>>                     Nnagain@lists.bufferbloat.net
>>>                     https://lists.bufferbloat.net/listinfo/nnagain
>>>
>>>
>>>                 _______________________________________________
>>>                 Nnagain mailing list
>>>                 Nnagain@lists.bufferbloat.net
>>>                 https://lists.bufferbloat.net/listinfo/nnagain
>>
>>                 _______________________________________________
>>                 Nnagain mailing list
>>                 Nnagain@lists.bufferbloat.net
>>                 https://lists.bufferbloat.net/listinfo/nnagain
>>
>
>         _______________________________________________
>         Nnagain mailing list
>         Nnagain@lists.bufferbloat.net
>         https://lists.bufferbloat.net/listinfo/nnagain
>
>
>
>     -- 
>     Please send any postal/overnight deliveries to:
>     Vint Cerf
>     Google, LLC
>     1900 Reston Metro Plaza, 16th Floor
>     Reston, VA 20190
>     +1 (571) 213 1346
>
>
>     until further notice
>
>
>

[-- Attachment #2: Type: text/html, Size: 60397 bytes --]

^ permalink raw reply	[flat|nested] 15+ messages in thread

* [NNagain] somewhat OT: Licklidder
  2023-10-10  2:56               ` Jack Haverty
@ 2023-10-10 15:29                 ` Dave Taht
  2023-10-10 15:53                   ` Steve Crocker
  2023-10-10 16:59                   ` Jack Haverty
  0 siblings, 2 replies; 15+ messages in thread
From: Dave Taht @ 2023-10-10 15:29 UTC (permalink / raw)
  To: Network Neutrality is back! Let´s make the technical
	aspects heard this time!

On Mon, Oct 9, 2023 at 7:56 PM Jack Haverty via Nnagain
<nnagain@lists.bufferbloat.net> wrote:

For starters it is an honor to be conversing with folk that knew Bob
Taylor, and "Lick", and y'all made me go back and re-read

http://memex.org/licklider.pdf

For inspiration. I think everyone in our field should re-read that,
periodically. For example he makes an overgeneralization about the
thinking processes of men, as compared to the computers of the time,
and not to women...

But I have always had an odd question - what songs did Lick play on
guitar? Do any recordings exist?

Music defines who I am, at least. I love the angularness and surprises
in jazz, and the deep storytelling buried deep in Shostakovich's
Fifth. Moving forward to modern music: the steady backbeat of Burning
Man - and endless repetition of short phrases - seems to lead to
groupthink - I can hardly stand EDM for an hour.

 I am "maked" by Angela' Lansbury's Sweeny Todd, and my religion,
forever reformed by Monty Python's Life of Brian, One Flew over the
Cookoos nest, 12 Angry Men, and the 12 Monkees, Pink Floyd and punk
music were the things that shaped me. No doubt it differs
significantly for everyone here, please share?

Powerful tales and their technologies predate the internet, and
because they were wildly shared, influenced how generations thought
without being the one true answer. Broadcast media, also, was joint,
and in school we

We are in a new era of uncommonality of experience, in part from
bringing in all the information in the world, while still separated by
differences in language, exposure, education, and culture, although
nowadays it has become so easy and natural to be able to use computer
assisted language translation tools, I do not know how well they truly
make the jump between cultures.

In that paper he talked about 75% of his time being spent setting up
to do analytics, where today so much information exists as to be
impossible to analyze.

I only have a few more small comments below, but I wanted to pick out
the concepts of TOS and backpressure as needing thought on another
day, in another email (what was licks song list??? :)). The internet
has very little Tos or backpressure, and Flow Queuing based algorithms
actually function thusly:

If the arrival rate of a flow is less than the departure rate of all
other flows, it goes out first.

To some extent this matches some of Nagles' "every application has a
right to one packet in the network", and puts a reward into the system
for applications that use slightly less than their fair share of the
bandwidth.

> IMHO, the problem may be that the Internet, and computing technology in general, is so new that non-technical organizations, such as government entities, don't understand it and therefore can't figure out whether or how to regulate anything involved.
>
> In other, older, "technologies", rules, procedures, and traditions have developed over the years to provide for feedback and control between governees and governors.  Roberts Rules of Order was created 150 years ago, and is still widely used to manage public meetings.  I've been in local meetings where everyone gets a chance to speak, but are limited to a few minutes to say whatever's on their mind.  You have to appear in person, wait your turn, and make your comment.  Doing so is free, but still has the cost of time and hassle to get to the meeting.
>
> Organizations have figured out over the years how to manage meetings.  [Vint - remember the "Rathole!" mechanism that we used to keep Internet meetings on track...?]

PARC had "Dealer".

> From what David describes, it sounds like the current "public comment" mechanisms in the electronic arena are only at the stage where the loudest voices can drown out all others, and public debates are essentially useless cacophonies of the loudest proponents of the various viewpoints.   There are no rules.   Why should anyone submit their own sensible comments, knowing they'll be lost in the noise?
>
> In non-electronic public forums, such behavior is ruled out, and if it persists, the governing body can have offenders ejected, adjourn a meeting until cooler heads prevail, or otherwise make the discourse useful for informing decisions.   Courts can issue restraining orders, but has any court ever issued such an order applying to an electronic forum?
>
> So, why haven't organizations yet developed rules and mechanisms for managing electronic discussions....?
>
> I'd offer two observations and suggestions.
>
> -----
>
> First, a major reason for a lack of such rules and mechanisms may be an educational gap.  Administrators, politicians, and staffers may simply not understand all this newfangled technology, or how it works, and are drowning in a sea of terminology, acronyms, and concepts that make no sense (to them).   In the FCC case, even the technical gurus may have deep knowledge of their traditional realm of telephony, radio, and related issues and policy tradeoffs.   But they may be largely ignorant of computing and networking equivalents.   Probably even worse, they may unconsciously consider the new world as a simple evolution of the old, not recognizing the impact of incredibly fast computers and communications, and the advances that they enable, such as "AI" - whatever that is...
>
> About 10 years ago, I accidentally got involved in a patent dispute to be an "expert witness", for a patent involving downloading new programs over a communications path into a remote computer (yes, what all our devices do almost every day).   I was astounded when I learned how little the "judicial system" (lawyers, judges, legislators, etc.) knew about computer and network technology.   That didn't stop them from debating the meaning of technical terms.  What is RAM?  How does "programming" differ from "reprogramming"?  What is "memory"?  What is a "processor"?   What is an "operating system"?   The arguments continue until eventually a judge declares what the answer is, with little technical knowledge or expertise to help.   So you can easily get legally binding definitions such as "operating system" means "Windows", and that all computers contain an operating system.
>
> I spent hours on the phone over about 18 months, explaining to the lawyers how computers and networks actually worked.   In turn, they taught me quite a lot about the vagaries of the laws and patents.  It was fascinating but also disturbing to see how ill-prepared the legal system was for new technologies.
>
> So, my suggestion is that a focus be placed on helping the non-technical decision makers understand the nuances of computing and the Internet.  I don't think that will be successful by burying them in the sea of technical jargon and acronyms.
>
> Before I retired, I spent a lot of time with C-suite denizens from companies outside of the technology industry - banks, manufacturers, transportation, etc. - helping them understand what "The Internet" was, and help them see it as both a huge opportunity and a huge threat to their businesses.  One technique I used was simply stolen from the early days of The Internet.
>
> When we were involved in designing the internal mechanisms of the Internet, in particular TCPV4, we didn't know much about networks either.  So we used analogies.  In particular we used the existing transportation infrastructure as a model.   Moving bits around the world isn't all that different from moving goods and people.   But everyone, even with no technical expertise, knows about transportation.
>
> It turns out that there are a lot of useful analogies.  For example, we recognized that there were different kinds of "traffic" with different needs.  Coal for power plants was important, but not urgent.  If a coal train waits on a siding while a passenger train passes, it's OK, even preferred.   There could be different "types of service" available from the transportation infrastructure.   At the time (late 1970s) we didn't know exactly how to do that, but decided to put a field in the IP header as a placeholder - the "TOS" field.  Figuring out what different TOSes there should be, and how they would be handled differently, was still on the to-do list.   There are even analogies to the Internet - goods might travel over a "marine network" to a "port", where they are moved onto a "rail network", to a distributor, and moved on the highway network to their final destination.  Routers, gateways, ...
>
> Other transportation analogies reinforced the notion of TOS.  E.g., if you're sending a document somewhere, you can choose how to send it - normal postal mail, or Priority Mail, or even use a different "network" such as an overnight delivery service.  Different TOS would engage different behaviors of the underlying communications system, and might also have different costs to use them.  Sending a ton of coal to get delivered in a week or two would cost a lot less than sending a ton of documents for overnight delivery.
>
> There were other transportation analogies heard during the TCPV4 design discussions - e.g., "Expressway Routing" (do you take a direct route over local streets, or go to the freeway even though it's longer) and "Multi-Homing" (your manufacturing plant has access to both a highway and a rail line).
>
> Suggestion -- I suspect that using a familiar infrastructure such as transport to discuss issues with non-technical decision makers would be helpful.  E.g., imagine what would happen if some particular "net neutrality" set of rules was placed on the transportation infrastructure?   Would it have a desirable effect?
>
> -----
>
> Second, in addition to anonymity as an important issue in the electronic world, my experience as a mentee of Licklider surfaced another important issue in the "galactic network" vision -- "Back Pressure".     The notion is based in existing knowledge.   Economics has notions of Supply and Demand and Cost Curves.   Engineering has the notion of "Negative Feedback" to stabilize mechanical, electrical, or other systems.
>
> We discussed Back Pressure, in the mid 70s, in the context of electronic mail, and tried to get the notion of "stamps" accepted as part of the email mechanisms.  The basic idea was that there had to be some form of "back pressure" to prevent overload by discouraging sending of huge quantities of mail.
>
> At the time, mail traffic was light, since every message was typed by hand by some user.  In Lick's group we had experimented with using email as a way for computer programs to interact.  In Lick's vision, humans would interact by using their computers as their agents.   Even then, computers could send email a lot faster and continuously than any human at a keyboard, and could easily flood the network.  [This epiphany occurred shortly after a mistake in configuring distribution lists caused so many messages and replies that our machine crashed as its disk space ran out.]
>
> "Stamps" didn't necessarily represent monetary cost.  Back pressure could be simple constraints, e.g., no user can send more than 500 (or whatever) messages per day.   This notion never got enough support to become part of the email standards; I still think it would help with the deluge of spam we all experience today.
>
> Back Pressure in the Internet today is largely non-existent.  I (or my AI and computers) can send as much email as I like.   Communications carriers promote "unlimited data" but won't guarantee anything.   Memory has become cheap, and as a result behaviors such as "buffer bloat" have appeared.
>
> Suggestion - educate the decision-makers about Back Pressure, using highway analogies (metering lights, etc.)
>
> -----
>
> Education about the new technology, but by using some familiar analogs, and introduction of Back Pressure, in some appropriate form, as part of a "network neutrality" policy, would be the two foci I'd recommend.
>
> My prior suggestion of "registration" and accepting only the last comment was based on the observations above.  Back pressure doesn't have to be monetary, and registered users don't have to be personally identified.   Simply making it sufficiently "hard" to register (using CAPTCHAs, 2FA, whatever) would be a "cost" discouraging "loud voices".   Even the law firms submitting millions of comments on behalf of their clients might balk at the cost (in labor not money) to register their million clients, even anonymously, so each could get his/her comment submitted.   Of course, they could always pass the costs on to their (million? really?) clients.  But it would still be Back Pressure.
>
> One possibility -- make the "cost" of submitting a million electronic comments equal to the cost of submitting a million postcards...?
>
> Jack Haverty
>
>
> On 10/9/23 16:55, David Bray, PhD wrote:
>
> Great points Vint as you're absolutely right - there are multiple modalities here (and in the past it was spam from thousands of postcards, then mimeographs, then faxes, etc.)
>
> The standard historically has been set by the Administrative Conference of the United States: https://www.acus.gov/about-acus
>
> In 2020 there seemed to be an effort to gave the General Services Administration weigh-in, however they closed that rulemaking attempt without publishing any of the comments they got and no announcement why it was closed.
>
> As for what part of Congress - I believe ACUS was championed by both the Senate and House Judiciary Committees as it has oversight and responsibility for the interpretations of the Administrative Procedure Act of 1946 (which sets out the whole rulemaking procedure).
>
> Sadly there isn't a standard across agencies - which also means there isn't a standard across Administrations. Back in 2018 and 2020, both with this group of 52 people here https://tinyurl.com/letter-signed-52-people - as well as individually - I did my darnest to encourage them to do a standard.
>
> There's also the National Academy of Public Administration which is probably the latest remaining non-partisan forum for discussions like this too.
>
>
> On Mon, Oct 9, 2023 at 7:46 PM Vint Cerf <vint@google.com> wrote:
>>
>> David, this is a good list.
>> FACA has rules for public participation, for example.
>>
>> I think it should be taken into account for any public commenting process that online (and offline such as USPS or fax and phone calls) that spam and artificial inflation of comments are possible. Is there any specific standard for US agency public comment handling? If now, what committees of the US Congress might have jurisdiction?
>>
>> v
>>
>>
>> On Tue, Oct 10, 2023 at 8:22 AM David Bray, PhD via Nnagain <nnagain@lists.bufferbloat.net> wrote:
>>>
>>> I'm all for doing new things to make things better.
>>>
>>> At the same time, I used to do bioterrorism preparedness and response from 2000-2005 (and aside from asking myself what kind of crazy world needed counter-bioterrorism efforts... I also realized you don't want to interject something completely new in the middle of an unfolding crisis event). If something were to be injected now, it would have to have consensus from both sides, otherwise at least one side (potentially detractors from both) will claim that whatever form the new approaches take are somehow advantaging "the other side" and disadvantaging them.
>>>
>>> Probably would take a ruling by the Administrative Conference of the United States, at a minimum to answer these five questions - and even then, introducing something completely different in the midst of a political melee might just invite mudslinging unless moderate voices on both sides can reach some consensus.
>>>
>>> 1. Does identity matter regarding who files a comment or not — and must one be a U.S. person in order to file?
>>>
>>> 2. Should agencies publish real-time counts of the number of comments received — or is it better to wait until the end of a commenting round to make all comments available, including counts?
>>>
>>> 3. Should third-party groups be able to file on behalf of someone else or not — and do agencies have the right to remove spam-like comments?
>>>
>>> 4. Should the public commenting process permit multiple comments per individual for a proceeding — and if so, how many comments from a single individual are too many? 100? 1000? More?
>>>
>>> 5. Finally, should the U.S. government itself consider, given public perceptions about potential conflicts of interest for any agency performing a public commenting process, whether it would be better to have third-party groups take responsibility for assembling comments and then filing those comments via a validated process with the government?
>>>
>>>
>>>
>>> On Sat, Oct 7, 2023 at 4:10 PM Jack Haverty <jack@3kitty.org> wrote:
>>>>
>>>> Hi again David et al,
>>>>
>>>> Interesting frenzy...lots of questions that need answers and associated policies.   I served 6 years as an elected official (in a small special district in California), so I have some small understanding of the government side of things and the constraints involved.   Being in charge doesn't mean you can do what you want.
>>>>
>>>> I'm thinking here more near-term and incremental steps.  You said "These same questions need pragmatic pilots that involve the public ..."
>>>>
>>>> So, how about using the current NN situation for a pilot?  Keep all the current ways and emerging AI techniques to continue to flood the system with comments.   But also offer an *optional* way for humans to "register" as a commenter and then submit their (latest only) comment into the melee.  Will people use it?  Will "consumers" (the lawyers, commissioners, etc.) find it useful?
>>>>
>>>> I've found it curious, for decades now, that there are (too many) mechanisms for "secure email", that may help with the flood of disinformation from anonymous senders, but very very few people use them.   Maybe they don't know how; maybe the available schemes are too flawed; maybe ...?
>>>>
>>>> About 30 years ago, I was a speaker in a public meeting orchestrated by USPS, and recommended that they take a lead role, e.g., by acting as a national CA - certificate authority.  Never happened though.   FCC issues lots of licenses...perhaps they could issue online credentials too?
>>>>
>>>> Perhaps a "pilot" where you will also accept comments by email, some possibly sent by "verified" humans if they understand how to do so, would be worth trying?   Perhaps comments on "technical aspects" coming from people who demonstrably know how to use technology would be valuable to the policy makers?
>>>>
>>>> The Internet, and technology such as TCP, began as an experimental pilot about 50 years ago.  Sometimes pilots become infrastructures.
>>>>
>>>> FYI, I'm signing this message.  Using OpenPGP.  I could encrypt it also, but my email program can't find your public key.
>>>>
>>>> Jack Haverty
>>>>
>>>>
>>>> On 10/5/23 14:21, David Bray, PhD wrote:
>>>>
>>>> Indeed Jack - a few things to balance - the Administrative Procedure Act of 1946 (on which the idea of rulemaking is based) us about raising legal concerns that must be answered by the agency at the time the rulemaking is done. It's not a vote nor is it the case that if the agency gets tons of comments in one direction that they have to go in that direction. Instead it's only about making sure legal concerns are considered and responded to before the agency before the agency acts. (Which is partly why sending "I'm for XYZ" or "I'm against ABC" really doesn't mean anything to an agency - not only is that not a legal argument or concern, it's also not something where they're obligated to follow these comments - it's not a vote or poll).
>>>>
>>>> That said, political folks have spun things to the public as if it is a poll/vote/chance to act. The raise a valid legal concern part of the APA of 1946 is omitted. Moreover the fact that third party law firms and others like to submit comments on behalf of clients - there will always be a third party submitting multiple comments for their clients (or "clients") because that's their business.
>>>>
>>>> In the lead up to 2017, the Consumer and Government Affairs Bureau of the FCC got an inquiry from a firm asking how they could submit 1 million comments a day on an "upcoming privacy proceeding" (their words, astute observers will note there was no privacy proceeding before the FCC in 2017). When the Bureau asked me, I told them either mail us a CD to upload it or submit one comment with 1 million signatures. To attempt to flood us with 1 million comments a day (aside from the fact who can "predict" having that many daily) would deny resources to others. In the mess that followed, what was released to the public was so redacted you couldn't see the legitimate concerns and better paths that were offered to this entity.
>>>>
>>>> And the FCC isn't alone. EPA, FTC, and other regulatory agencies have had these hijinks for years - and before the Internet it was faxes, mass mimeographs (remember blue ink?), and postcards.The Administrative Conference of the United States (ACUS) - is the body that is supposed to provide consistent guidance for things like this across the U.S. government. I've briefed them and tried to raise awareness of these issues - as I think fundamentally this is a **process** question that once answered, tech can support. However they're not technologies and updating the interpretation of the process isn't something lawyers are apt to do until the evidence that things are in trouble is overwhelming.
>>>>
>>>> 52 folks wrote a letter to them - and to GSA - back in 2020. GSA had a rulemaking of its own on how to improve things, yet oddly never published any of the comments it received (including ours) and closed the rulemaking quietly. Here's the letter: https://tinyurl.com/letter-signed-52-people
>>>>
>>>> And here's an article published in OODAloop about this - and why Generative AI is probably going to make things even more challenging: https://www.oodaloop.com/archive/2023/04/18/why-a-pause-on-ai-development-is-not-the-answer-an-insiders-perspective/
>>>>
>>>> [snippet of the article] Now in 2023 and Beyond: Proactive Approaches to AI and Society
>>>>
>>>> Looking to the future, to effectively address the challenges arising from AI, we must foster a proactive, results-oriented, and cooperative approach with the public. Think tanks and universities can engage the public in conversations about how to work, live, govern, and co-exist with modern technologies that impact society. By involving diverse voices in the decision-making process, we can better address and resolve the complex challenges AI presents on local and national levels.
>>>>
>>>> In addition, we must encourage industry and political leaders to participate in finding non-partisan, multi-sector solutions if civil societies are to remain stable. By working together, we can bridge the gap between technological advancements and their societal implications.
>>>>
>>>> Finally, launching AI pilots across various sectors, such as work, education, health, law, and civil society, is essential. We must learn by doing on how we can create responsible civil environments where AIs can be developed and deployed responsibly. These initiatives can help us better understand and integrate AI into our lives, ensuring its potential is harnessed for the greater good while mitigating risks.
>>>>
>>>> In 2019 and 2020, a group of fifty-two people asked the Administrative Conference of the United States (which helps guide rulemaking procedures for federal agencies), General Accounting Office, and the General Services Administration to call attention to the need to address the challenges of chatbots flooding public commenting procedures and potentially crowding out or denying services to actual humans wanting to leave a comment. We asked:
>>>>
>>>> 1. Does identity matter regarding who files a comment or not — and must one be a U.S. person in order to file?
>>>>
>>>> 2. Should agencies publish real-time counts of the number of comments received — or is it better to wait until the end of a commenting round to make all comments available, including counts?
>>>>
>>>> 3. Should third-party groups be able to file on behalf of someone else or not — and do agencies have the right to remove spam-like comments?
>>>>
>>>> 4. Should the public commenting process permit multiple comments per individual for a proceeding — and if so, how many comments from a single individual are too many? 100? 1000? More?
>>>>
>>>> 5. Finally, should the U.S. government itself consider, given public perceptions about potential conflicts of interest for any agency performing a public commenting process, whether it would be better to have third-party groups take responsibility for assembling comments and then filing those comments via a validated process with the government?
>>>>
>>>> These same questions need pragmatic pilots that involve the public to co-explore and co-develop how we operate effectively amid these technological shifts. As the capabilities of LLMs continue to grow, we need positive change agents willing to tackle the messy issues at the intersection of technology and society. The challenges are immense, but so too are the opportunities for positive change. Let’s seize this moment to create a better tomorrow for all. Working together, we can co-create a future that embraces AI’s potential while mitigating its risks, informed by the hard lessons we have already learned.
>>>>
>>>> Full article: https://www.oodaloop.com/archive/2023/04/18/why-a-pause-on-ai-development-is-not-the-answer-an-insiders-perspective/
>>>>
>>>> Hope this helps.
>>>>
>>>>
>>>> On Thu, Oct 5, 2023 at 4:44 PM Jack Haverty via Nnagain <nnagain@lists.bufferbloat.net> wrote:
>>>>>
>>>>> Thanks for all your efforts to keep the "feedback loop" to the rulemakers functioning!
>>>>>
>>>>> I'd like to offer a suggestion for a hopefully politically acceptable way to handle the deluge, derived from my own battles with "email" over the years (decades).
>>>>>
>>>>> Back in the 1970s, I implemented one of the first email systems on the Arpanet, under the mentorship of JCR Licklider, who had been pursuing his vision of a "Galactic Network" at ARPA and MIT.   One of the things we discovered was the significance of anonymity.   At the time, anonymity was forbidden on the Arpanet; you needed an account on some computer, protected by passwords, in order to legitimately use the network.   The mechanisms were crude and easily broken, but the principle applied.
>>>>>
>>>>> Over the years, that principle has been forgotten, and the right to be anonymous has become entrenched.   But many uses of the network, and needs of its users, demand accountability, so all sorts of mechanisms have been pasted on top of the network to provide ways to judge user identity.  Banks, medical services, governments, and businesses all demand some way of proving your identity, with passwords, various schemes of 2FA, VPNs, or other such technology, with varying degrees of protection.   It is still possible to be anonymous on the net, but many things you do require you to prove, to some extent, who you are.
>>>>>
>>>>> So, my suggestion for handling the deluge of "comments" is:
>>>>>
>>>>> 1/ create some mechanism for "registering" your intent to submit a comment.   Make it hard for bots to register.  Perhaps you can leverage the work of various partners, e.g., ISPs, retailers, government agencies, financial institutions, of others who already have some way of identifying their users.
>>>>>
>>>>> 2/ Also make registration optional - anyone can still submit comments anonymously if they choose.
>>>>>
>>>>> 3/ for "registered commenters", provide a way to "edit" your previous comment - i.e., advise that your comment is always the last one you submitted.   I.E., whoever you are, you can only submit one comment, which will be the last one you submit.
>>>>>
>>>>> 4/ In the thousands of pages of comments, somehow flag the ones that are from registered commenters, visible to the people who read the comments.   Even better, provide those "information consumers" with ways to sort, filter, and search through the body of comments.
>>>>>
>>>>> This may not reduce the deluge of comments, but I'd expect it to help the lawyers and politicians keep their heads above the water.
>>>>>
>>>>> Anonymity is an important issue for Net Neutrality too, but I'll opine about that separately.....
>>>>>
>>>>> Jack Haverty
>>>>>
>>>>>
>>>>> On 10/2/23 12:38, David Bray, PhD via Nnagain wrote:
>>>>>
>>>>> Greetings all and thank you Dave Taht for that very kind intro...
>>>>>
>>>>> First, I'll open with I'm a gosh-darn non-partisan, which means I swore an oath to uphold the Constitution first and serve the United States - not a specific party, tribe, or ideology. This often means, especially in today's era of 24/7 news and social media, non-partisans have to "top cover".
>>>>>
>>>>> Second, I'll share that in what happened in 2017 (which itself was 10x what we saw in 2014) my biggest concern was and remains that a few actors attempted to flood the system with less-than-authentic comments.
>>>>>
>>>>> In some respects this is not new. The whole "notice and comment" process is a legacy process that goes back decades. And the FCC (and others) have had postcard floods of comments, mimeographed letters of comments, faxed floods of comments, and now this - which, when combined with generative AI, will be yet another flood.
>>>>>
>>>>> Which gets me to my biggest concern as a non-partisan in 2023-2024, namely how LLMs might misuse and abuse the commenting process further.
>>>>>
>>>>> Both in 2014 and 2017, I asked FCC General Counsel if I could use CAPTChA to try to reduce the volume of web scrapers or bots both filing and pulling info from the Electronic Comment Filing System.
>>>>>
>>>>> Both times I was told *no* out of concerns that they might prevent someone from filing. I asked if I could block obvious spam, defined as someone filing a comment >100 times a minute, and was similarly told no because one of those possible comments might be genuine and/or it could be an ex party filing en masse for others.
>>>>>
>>>>> For 2017 we had to spin up 30x the number of AWS cloud instances to handle the load - and this was a flood of comments at 4am, 5am, and 6am ET at night which normally shouldn’t see such volumes. When I said there was a combination of actual humans wanting to leave comments and others who were effectively denying service to others (especially because if anyone wanted to do a batch upload of 100,000 comments or more they could submit a CSV file or a comment with 100,000 signatories) - both parties said no, that couldn’t be happening.
>>>>>
>>>>> Until 2021 when the NY Attorney General proved that was exactly what was happening with 18m of the 23m apparently from non-authentic origin with ~9m from one side of the political aisle (and six companies) and ~9m from the other side of the political aisle (and one or more teenagers).
>>>>>
>>>>> So with Net Neutrality back on the agenda - here’s a simple prediction, even if the volume of comments is somehow controlled, 10,000+ pages of comments produced by ChatGPT or a different LLM is both possible and probably will be done. The question is if someone includes a legitimate legal argument on page 6,517 - will FCC’s lawyers spot it and respond to it as part of the NPRM?
>>>>>
>>>>> Hope this helps and with highest regards,
>>>>>
>>>>> -d.
>>>>> --
>>>>>
>>>>> Principal, LeadDoAdapt Ventures, Inc. & Distinguished Fellow
>>>>>
>>>>> Henry S. Stimson Center, Business Executives for National Security
>>>>>
>>>>>
>>>>>
>>>>> On Mon, Oct 2, 2023 at 2:15 PM Dave Taht via Nnagain <nnagain@lists.bufferbloat.net> wrote:
>>>>>>
>>>>>> All:
>>>>>>
>>>>>> I have spent the last several days reaching out to as many people I
>>>>>> know with a deep understanding of the policy and technical issues
>>>>>> surrounding the internet, to participate on this list. I encourage you
>>>>>> all to reach out on your own, especially to those that you can
>>>>>> constructively and civilly disagree with, and hopefully work with, to
>>>>>> establish technical steps forward. Quite a few have joined silently!
>>>>>> So far, 168 people have joined!
>>>>>>
>>>>>> Please welcome Dr David Bray[1], a self-described "human flack jacket"
>>>>>> who, in the last NN debate, stood up for the non -partisan FCC IT team
>>>>>> that successfully kept the system up 99.4% of the time despite the
>>>>>> comment floods and network abuses from all sides. He has shared with
>>>>>> me privately many sad (and some hilarious!) stories of that era, and I
>>>>>> do kind of hope now, that some of that history surfaces, and we can
>>>>>> learn from it.
>>>>>>
>>>>>> Thank you very much, David, for putting down your painful memories[2],
>>>>>> and agreeing to join here. There is a lot to tackle here, going
>>>>>> forward.
>>>>>>
>>>>>> [1] https://www.stimson.org/ppl/david-bray/
>>>>>> [2] "Pain shared is reduced. Joy shared, increased." - Spider Robinson
>>>>>>
>>>>>>
>>>>>> --
>>>>>> Oct 30: https://netdevconf.info/0x17/news/the-maestro-and-the-music-bof.html
>>>>>> Dave Täht CSO, LibreQos
>>>>>> _______________________________________________
>>>>>> Nnagain mailing list
>>>>>> Nnagain@lists.bufferbloat.net
>>>>>> https://lists.bufferbloat.net/listinfo/nnagain
>>>>>
>>>>>
>>>>> _______________________________________________
>>>>> Nnagain mailing list
>>>>> Nnagain@lists.bufferbloat.net
>>>>> https://lists.bufferbloat.net/listinfo/nnagain
>>>>>
>>>>>
>>>>> _______________________________________________
>>>>> Nnagain mailing list
>>>>> Nnagain@lists.bufferbloat.net
>>>>> https://lists.bufferbloat.net/listinfo/nnagain
>>>>
>>>>
>>> _______________________________________________
>>> Nnagain mailing list
>>> Nnagain@lists.bufferbloat.net
>>> https://lists.bufferbloat.net/listinfo/nnagain
>>
>>
>>
>> --
>> Please send any postal/overnight deliveries to:
>> Vint Cerf
>> Google, LLC
>> 1900 Reston Metro Plaza, 16th Floor
>> Reston, VA 20190
>> +1 (571) 213 1346
>>
>>
>> until further notice
>>
>>
>>
>
> _______________________________________________
> Nnagain mailing list
> Nnagain@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/nnagain



--
Oct 30: https://netdevconf.info/0x17/news/the-maestro-and-the-music-bof.html
Dave Täht CSO, LibreQos

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [NNagain] somewhat OT: Licklidder
  2023-10-10 15:29                 ` [NNagain] somewhat OT: Licklidder Dave Taht
@ 2023-10-10 15:53                   ` Steve Crocker
  2023-10-10 17:12                     ` Jack Haverty
  2023-10-10 16:59                   ` Jack Haverty
  1 sibling, 1 reply; 15+ messages in thread
From: Steve Crocker @ 2023-10-10 15:53 UTC (permalink / raw)
  To: Network Neutrality is back! Let´s make the technical
	aspects heard this time!

[-- Attachment #1: Type: text/plain, Size: 38716 bytes --]

Lots of good stuff here and I missed the earlier posts, but one small thing
caught my attention:

> About 10 years ago, I accidentally got involved in a patent dispute to be
an "expert witness", for a patent involving downloading new programs over a
communications path into a remote computer (yes, what all our devices do
almost every day).

In the seminal period of late 1968 and early 1969 when we were thinking
about Arpanet protocols, one idea that was very much part of our thinking
was downloading a small program at the beginning of an interactive
session.  The downloaded program would take care of local interactions to
avoid the need to send every character across the net only to have it
echoed remotely.  Why not always use local echo?  Because most of
the time-shared systems in the various ARPA-supported research environments
had distinct ways of interpreting each and every character.  Imposing a
network-wide rule of local echoing would have compromised the usability of
most of the systems on the Arpanet.  I think Multics was the only "modern"
line-at-a-time system at the time.

In March 1969 we decided it was time to write down the ideas from our
meetings in late 1968 and early 1969.  The first batch of RFCs included
Rulifson's RFC 5.  He proposed DEL, the Decode-Encode Language.  Elie's RFC
51 a year later proposed the Network Interchange Language.  In both cases
the basic concept was the creation of a simple language, easily
implementable on each platform, that would mediate the interaction with a
remote system.  The programs were expected to be short -- hence
downloadable quickly -- and either interpreted or quickly translated.
There was a tiny bit of experimental work along this line, but it was far
ahead of its time.  I think it was about 25 years before ActiveX came
along, followed by Java.

Steve


On Tue, Oct 10, 2023 at 11:30 AM Dave Taht via Nnagain <
nnagain@lists.bufferbloat.net> wrote:

> On Mon, Oct 9, 2023 at 7:56 PM Jack Haverty via Nnagain
> <nnagain@lists.bufferbloat.net> wrote:
>
> For starters it is an honor to be conversing with folk that knew Bob
> Taylor, and "Lick", and y'all made me go back and re-read
>
> http://memex.org/licklider.pdf
>
> For inspiration. I think everyone in our field should re-read that,
> periodically. For example he makes an overgeneralization about the
> thinking processes of men, as compared to the computers of the time,
> and not to women...
>
> But I have always had an odd question - what songs did Lick play on
> guitar? Do any recordings exist?
>
> Music defines who I am, at least. I love the angularness and surprises
> in jazz, and the deep storytelling buried deep in Shostakovich's
> Fifth. Moving forward to modern music: the steady backbeat of Burning
> Man - and endless repetition of short phrases - seems to lead to
> groupthink - I can hardly stand EDM for an hour.
>
>  I am "maked" by Angela' Lansbury's Sweeny Todd, and my religion,
> forever reformed by Monty Python's Life of Brian, One Flew over the
> Cookoos nest, 12 Angry Men, and the 12 Monkees, Pink Floyd and punk
> music were the things that shaped me. No doubt it differs
> significantly for everyone here, please share?
>
> Powerful tales and their technologies predate the internet, and
> because they were wildly shared, influenced how generations thought
> without being the one true answer. Broadcast media, also, was joint,
> and in school we
>
> We are in a new era of uncommonality of experience, in part from
> bringing in all the information in the world, while still separated by
> differences in language, exposure, education, and culture, although
> nowadays it has become so easy and natural to be able to use computer
> assisted language translation tools, I do not know how well they truly
> make the jump between cultures.
>
> In that paper he talked about 75% of his time being spent setting up
> to do analytics, where today so much information exists as to be
> impossible to analyze.
>
> I only have a few more small comments below, but I wanted to pick out
> the concepts of TOS and backpressure as needing thought on another
> day, in another email (what was licks song list??? :)). The internet
> has very little Tos or backpressure, and Flow Queuing based algorithms
> actually function thusly:
>
> If the arrival rate of a flow is less than the departure rate of all
> other flows, it goes out first.
>
> To some extent this matches some of Nagles' "every application has a
> right to one packet in the network", and puts a reward into the system
> for applications that use slightly less than their fair share of the
> bandwidth.
>
> > IMHO, the problem may be that the Internet, and computing technology in
> general, is so new that non-technical organizations, such as government
> entities, don't understand it and therefore can't figure out whether or how
> to regulate anything involved.
> >
> > In other, older, "technologies", rules, procedures, and traditions have
> developed over the years to provide for feedback and control between
> governees and governors.  Roberts Rules of Order was created 150 years ago,
> and is still widely used to manage public meetings.  I've been in local
> meetings where everyone gets a chance to speak, but are limited to a few
> minutes to say whatever's on their mind.  You have to appear in person,
> wait your turn, and make your comment.  Doing so is free, but still has the
> cost of time and hassle to get to the meeting.
> >
> > Organizations have figured out over the years how to manage meetings.
> [Vint - remember the "Rathole!" mechanism that we used to keep Internet
> meetings on track...?]
>
> PARC had "Dealer".
>
> > From what David describes, it sounds like the current "public comment"
> mechanisms in the electronic arena are only at the stage where the loudest
> voices can drown out all others, and public debates are essentially useless
> cacophonies of the loudest proponents of the various viewpoints.   There
> are no rules.   Why should anyone submit their own sensible comments,
> knowing they'll be lost in the noise?
> >
> > In non-electronic public forums, such behavior is ruled out, and if it
> persists, the governing body can have offenders ejected, adjourn a meeting
> until cooler heads prevail, or otherwise make the discourse useful for
> informing decisions.   Courts can issue restraining orders, but has any
> court ever issued such an order applying to an electronic forum?
> >
> > So, why haven't organizations yet developed rules and mechanisms for
> managing electronic discussions....?
> >
> > I'd offer two observations and suggestions.
> >
> > -----
> >
> > First, a major reason for a lack of such rules and mechanisms may be an
> educational gap.  Administrators, politicians, and staffers may simply not
> understand all this newfangled technology, or how it works, and are
> drowning in a sea of terminology, acronyms, and concepts that make no sense
> (to them).   In the FCC case, even the technical gurus may have deep
> knowledge of their traditional realm of telephony, radio, and related
> issues and policy tradeoffs.   But they may be largely ignorant of
> computing and networking equivalents.   Probably even worse, they may
> unconsciously consider the new world as a simple evolution of the old, not
> recognizing the impact of incredibly fast computers and communications, and
> the advances that they enable, such as "AI" - whatever that is...
> >
> > About 10 years ago, I accidentally got involved in a patent dispute to
> be an "expert witness", for a patent involving downloading new programs
> over a communications path into a remote computer (yes, what all our
> devices do almost every day).   I was astounded when I learned how little
> the "judicial system" (lawyers, judges, legislators, etc.) knew about
> computer and network technology.   That didn't stop them from debating the
> meaning of technical terms.  What is RAM?  How does "programming" differ
> from "reprogramming"?  What is "memory"?  What is a "processor"?   What is
> an "operating system"?   The arguments continue until eventually a judge
> declares what the answer is, with little technical knowledge or expertise
> to help.   So you can easily get legally binding definitions such as
> "operating system" means "Windows", and that all computers contain an
> operating system.
> >
> > I spent hours on the phone over about 18 months, explaining to the
> lawyers how computers and networks actually worked.   In turn, they taught
> me quite a lot about the vagaries of the laws and patents.  It was
> fascinating but also disturbing to see how ill-prepared the legal system
> was for new technologies.
> >
> > So, my suggestion is that a focus be placed on helping the non-technical
> decision makers understand the nuances of computing and the Internet.  I
> don't think that will be successful by burying them in the sea of technical
> jargon and acronyms.
> >
> > Before I retired, I spent a lot of time with C-suite denizens from
> companies outside of the technology industry - banks, manufacturers,
> transportation, etc. - helping them understand what "The Internet" was, and
> help them see it as both a huge opportunity and a huge threat to their
> businesses.  One technique I used was simply stolen from the early days of
> The Internet.
> >
> > When we were involved in designing the internal mechanisms of the
> Internet, in particular TCPV4, we didn't know much about networks either.
> So we used analogies.  In particular we used the existing transportation
> infrastructure as a model.   Moving bits around the world isn't all that
> different from moving goods and people.   But everyone, even with no
> technical expertise, knows about transportation.
> >
> > It turns out that there are a lot of useful analogies.  For example, we
> recognized that there were different kinds of "traffic" with different
> needs.  Coal for power plants was important, but not urgent.  If a coal
> train waits on a siding while a passenger train passes, it's OK, even
> preferred.   There could be different "types of service" available from the
> transportation infrastructure.   At the time (late 1970s) we didn't know
> exactly how to do that, but decided to put a field in the IP header as a
> placeholder - the "TOS" field.  Figuring out what different TOSes there
> should be, and how they would be handled differently, was still on the
> to-do list.   There are even analogies to the Internet - goods might travel
> over a "marine network" to a "port", where they are moved onto a "rail
> network", to a distributor, and moved on the highway network to their final
> destination.  Routers, gateways, ...
> >
> > Other transportation analogies reinforced the notion of TOS.  E.g., if
> you're sending a document somewhere, you can choose how to send it - normal
> postal mail, or Priority Mail, or even use a different "network" such as an
> overnight delivery service.  Different TOS would engage different behaviors
> of the underlying communications system, and might also have different
> costs to use them.  Sending a ton of coal to get delivered in a week or two
> would cost a lot less than sending a ton of documents for overnight
> delivery.
> >
> > There were other transportation analogies heard during the TCPV4 design
> discussions - e.g., "Expressway Routing" (do you take a direct route over
> local streets, or go to the freeway even though it's longer) and
> "Multi-Homing" (your manufacturing plant has access to both a highway and a
> rail line).
> >
> > Suggestion -- I suspect that using a familiar infrastructure such as
> transport to discuss issues with non-technical decision makers would be
> helpful.  E.g., imagine what would happen if some particular "net
> neutrality" set of rules was placed on the transportation infrastructure?
>  Would it have a desirable effect?
> >
> > -----
> >
> > Second, in addition to anonymity as an important issue in the electronic
> world, my experience as a mentee of Licklider surfaced another important
> issue in the "galactic network" vision -- "Back Pressure".     The notion
> is based in existing knowledge.   Economics has notions of Supply and
> Demand and Cost Curves.   Engineering has the notion of "Negative Feedback"
> to stabilize mechanical, electrical, or other systems.
> >
> > We discussed Back Pressure, in the mid 70s, in the context of electronic
> mail, and tried to get the notion of "stamps" accepted as part of the email
> mechanisms.  The basic idea was that there had to be some form of "back
> pressure" to prevent overload by discouraging sending of huge quantities of
> mail.
> >
> > At the time, mail traffic was light, since every message was typed by
> hand by some user.  In Lick's group we had experimented with using email as
> a way for computer programs to interact.  In Lick's vision, humans would
> interact by using their computers as their agents.   Even then, computers
> could send email a lot faster and continuously than any human at a
> keyboard, and could easily flood the network.  [This epiphany occurred
> shortly after a mistake in configuring distribution lists caused so many
> messages and replies that our machine crashed as its disk space ran out.]
> >
> > "Stamps" didn't necessarily represent monetary cost.  Back pressure
> could be simple constraints, e.g., no user can send more than 500 (or
> whatever) messages per day.   This notion never got enough support to
> become part of the email standards; I still think it would help with the
> deluge of spam we all experience today.
> >
> > Back Pressure in the Internet today is largely non-existent.  I (or my
> AI and computers) can send as much email as I like.   Communications
> carriers promote "unlimited data" but won't guarantee anything.   Memory
> has become cheap, and as a result behaviors such as "buffer bloat" have
> appeared.
> >
> > Suggestion - educate the decision-makers about Back Pressure, using
> highway analogies (metering lights, etc.)
> >
> > -----
> >
> > Education about the new technology, but by using some familiar analogs,
> and introduction of Back Pressure, in some appropriate form, as part of a
> "network neutrality" policy, would be the two foci I'd recommend.
> >
> > My prior suggestion of "registration" and accepting only the last
> comment was based on the observations above.  Back pressure doesn't have to
> be monetary, and registered users don't have to be personally identified.
>  Simply making it sufficiently "hard" to register (using CAPTCHAs, 2FA,
> whatever) would be a "cost" discouraging "loud voices".   Even the law
> firms submitting millions of comments on behalf of their clients might balk
> at the cost (in labor not money) to register their million clients, even
> anonymously, so each could get his/her comment submitted.   Of course, they
> could always pass the costs on to their (million? really?) clients.  But it
> would still be Back Pressure.
> >
> > One possibility -- make the "cost" of submitting a million electronic
> comments equal to the cost of submitting a million postcards...?
> >
> > Jack Haverty
> >
> >
> > On 10/9/23 16:55, David Bray, PhD wrote:
> >
> > Great points Vint as you're absolutely right - there are multiple
> modalities here (and in the past it was spam from thousands of postcards,
> then mimeographs, then faxes, etc.)
> >
> > The standard historically has been set by the Administrative Conference
> of the United States: https://www.acus.gov/about-acus
> >
> > In 2020 there seemed to be an effort to gave the General Services
> Administration weigh-in, however they closed that rulemaking attempt
> without publishing any of the comments they got and no announcement why it
> was closed.
> >
> > As for what part of Congress - I believe ACUS was championed by both the
> Senate and House Judiciary Committees as it has oversight and
> responsibility for the interpretations of the Administrative Procedure Act
> of 1946 (which sets out the whole rulemaking procedure).
> >
> > Sadly there isn't a standard across agencies - which also means there
> isn't a standard across Administrations. Back in 2018 and 2020, both with
> this group of 52 people here https://tinyurl.com/letter-signed-52-people
> - as well as individually - I did my darnest to encourage them to do a
> standard.
> >
> > There's also the National Academy of Public Administration which is
> probably the latest remaining non-partisan forum for discussions like this
> too.
> >
> >
> > On Mon, Oct 9, 2023 at 7:46 PM Vint Cerf <vint@google.com> wrote:
> >>
> >> David, this is a good list.
> >> FACA has rules for public participation, for example.
> >>
> >> I think it should be taken into account for any public commenting
> process that online (and offline such as USPS or fax and phone calls) that
> spam and artificial inflation of comments are possible. Is there any
> specific standard for US agency public comment handling? If now, what
> committees of the US Congress might have jurisdiction?
> >>
> >> v
> >>
> >>
> >> On Tue, Oct 10, 2023 at 8:22 AM David Bray, PhD via Nnagain <
> nnagain@lists.bufferbloat.net> wrote:
> >>>
> >>> I'm all for doing new things to make things better.
> >>>
> >>> At the same time, I used to do bioterrorism preparedness and response
> from 2000-2005 (and aside from asking myself what kind of crazy world
> needed counter-bioterrorism efforts... I also realized you don't want to
> interject something completely new in the middle of an unfolding crisis
> event). If something were to be injected now, it would have to have
> consensus from both sides, otherwise at least one side (potentially
> detractors from both) will claim that whatever form the new approaches take
> are somehow advantaging "the other side" and disadvantaging them.
> >>>
> >>> Probably would take a ruling by the Administrative Conference of the
> United States, at a minimum to answer these five questions - and even then,
> introducing something completely different in the midst of a political
> melee might just invite mudslinging unless moderate voices on both sides
> can reach some consensus.
> >>>
> >>> 1. Does identity matter regarding who files a comment or not — and
> must one be a U.S. person in order to file?
> >>>
> >>> 2. Should agencies publish real-time counts of the number of comments
> received — or is it better to wait until the end of a commenting round to
> make all comments available, including counts?
> >>>
> >>> 3. Should third-party groups be able to file on behalf of someone else
> or not — and do agencies have the right to remove spam-like comments?
> >>>
> >>> 4. Should the public commenting process permit multiple comments per
> individual for a proceeding — and if so, how many comments from a single
> individual are too many? 100? 1000? More?
> >>>
> >>> 5. Finally, should the U.S. government itself consider, given public
> perceptions about potential conflicts of interest for any agency performing
> a public commenting process, whether it would be better to have third-party
> groups take responsibility for assembling comments and then filing those
> comments via a validated process with the government?
> >>>
> >>>
> >>>
> >>> On Sat, Oct 7, 2023 at 4:10 PM Jack Haverty <jack@3kitty.org> wrote:
> >>>>
> >>>> Hi again David et al,
> >>>>
> >>>> Interesting frenzy...lots of questions that need answers and
> associated policies.   I served 6 years as an elected official (in a small
> special district in California), so I have some small understanding of the
> government side of things and the constraints involved.   Being in charge
> doesn't mean you can do what you want.
> >>>>
> >>>> I'm thinking here more near-term and incremental steps.  You said
> "These same questions need pragmatic pilots that involve the public ..."
> >>>>
> >>>> So, how about using the current NN situation for a pilot?  Keep all
> the current ways and emerging AI techniques to continue to flood the system
> with comments.   But also offer an *optional* way for humans to "register"
> as a commenter and then submit their (latest only) comment into the melee.
> Will people use it?  Will "consumers" (the lawyers, commissioners, etc.)
> find it useful?
> >>>>
> >>>> I've found it curious, for decades now, that there are (too many)
> mechanisms for "secure email", that may help with the flood of
> disinformation from anonymous senders, but very very few people use them.
>  Maybe they don't know how; maybe the available schemes are too flawed;
> maybe ...?
> >>>>
> >>>> About 30 years ago, I was a speaker in a public meeting orchestrated
> by USPS, and recommended that they take a lead role, e.g., by acting as a
> national CA - certificate authority.  Never happened though.   FCC issues
> lots of licenses...perhaps they could issue online credentials too?
> >>>>
> >>>> Perhaps a "pilot" where you will also accept comments by email, some
> possibly sent by "verified" humans if they understand how to do so, would
> be worth trying?   Perhaps comments on "technical aspects" coming from
> people who demonstrably know how to use technology would be valuable to the
> policy makers?
> >>>>
> >>>> The Internet, and technology such as TCP, began as an experimental
> pilot about 50 years ago.  Sometimes pilots become infrastructures.
> >>>>
> >>>> FYI, I'm signing this message.  Using OpenPGP.  I could encrypt it
> also, but my email program can't find your public key.
> >>>>
> >>>> Jack Haverty
> >>>>
> >>>>
> >>>> On 10/5/23 14:21, David Bray, PhD wrote:
> >>>>
> >>>> Indeed Jack - a few things to balance - the Administrative Procedure
> Act of 1946 (on which the idea of rulemaking is based) us about raising
> legal concerns that must be answered by the agency at the time the
> rulemaking is done. It's not a vote nor is it the case that if the agency
> gets tons of comments in one direction that they have to go in that
> direction. Instead it's only about making sure legal concerns are
> considered and responded to before the agency before the agency acts.
> (Which is partly why sending "I'm for XYZ" or "I'm against ABC" really
> doesn't mean anything to an agency - not only is that not a legal argument
> or concern, it's also not something where they're obligated to follow these
> comments - it's not a vote or poll).
> >>>>
> >>>> That said, political folks have spun things to the public as if it is
> a poll/vote/chance to act. The raise a valid legal concern part of the APA
> of 1946 is omitted. Moreover the fact that third party law firms and others
> like to submit comments on behalf of clients - there will always be a third
> party submitting multiple comments for their clients (or "clients") because
> that's their business.
> >>>>
> >>>> In the lead up to 2017, the Consumer and Government Affairs Bureau of
> the FCC got an inquiry from a firm asking how they could submit 1 million
> comments a day on an "upcoming privacy proceeding" (their words, astute
> observers will note there was no privacy proceeding before the FCC in
> 2017). When the Bureau asked me, I told them either mail us a CD to upload
> it or submit one comment with 1 million signatures. To attempt to flood us
> with 1 million comments a day (aside from the fact who can "predict" having
> that many daily) would deny resources to others. In the mess that followed,
> what was released to the public was so redacted you couldn't see the
> legitimate concerns and better paths that were offered to this entity.
> >>>>
> >>>> And the FCC isn't alone. EPA, FTC, and other regulatory agencies have
> had these hijinks for years - and before the Internet it was faxes, mass
> mimeographs (remember blue ink?), and postcards.The Administrative
> Conference of the United States (ACUS) - is the body that is supposed to
> provide consistent guidance for things like this across the U.S.
> government. I've briefed them and tried to raise awareness of these issues
> - as I think fundamentally this is a **process** question that once
> answered, tech can support. However they're not technologies and updating
> the interpretation of the process isn't something lawyers are apt to do
> until the evidence that things are in trouble is overwhelming.
> >>>>
> >>>> 52 folks wrote a letter to them - and to GSA - back in 2020. GSA had
> a rulemaking of its own on how to improve things, yet oddly never published
> any of the comments it received (including ours) and closed the rulemaking
> quietly. Here's the letter: https://tinyurl.com/letter-signed-52-people
> >>>>
> >>>> And here's an article published in OODAloop about this - and why
> Generative AI is probably going to make things even more challenging:
> https://www.oodaloop.com/archive/2023/04/18/why-a-pause-on-ai-development-is-not-the-answer-an-insiders-perspective/
> >>>>
> >>>> [snippet of the article] Now in 2023 and Beyond: Proactive Approaches
> to AI and Society
> >>>>
> >>>> Looking to the future, to effectively address the challenges arising
> from AI, we must foster a proactive, results-oriented, and cooperative
> approach with the public. Think tanks and universities can engage the
> public in conversations about how to work, live, govern, and co-exist with
> modern technologies that impact society. By involving diverse voices in the
> decision-making process, we can better address and resolve the complex
> challenges AI presents on local and national levels.
> >>>>
> >>>> In addition, we must encourage industry and political leaders to
> participate in finding non-partisan, multi-sector solutions if civil
> societies are to remain stable. By working together, we can bridge the gap
> between technological advancements and their societal implications.
> >>>>
> >>>> Finally, launching AI pilots across various sectors, such as work,
> education, health, law, and civil society, is essential. We must learn by
> doing on how we can create responsible civil environments where AIs can be
> developed and deployed responsibly. These initiatives can help us better
> understand and integrate AI into our lives, ensuring its potential is
> harnessed for the greater good while mitigating risks.
> >>>>
> >>>> In 2019 and 2020, a group of fifty-two people asked the
> Administrative Conference of the United States (which helps guide
> rulemaking procedures for federal agencies), General Accounting Office, and
> the General Services Administration to call attention to the need to
> address the challenges of chatbots flooding public commenting procedures
> and potentially crowding out or denying services to actual humans wanting
> to leave a comment. We asked:
> >>>>
> >>>> 1. Does identity matter regarding who files a comment or not — and
> must one be a U.S. person in order to file?
> >>>>
> >>>> 2. Should agencies publish real-time counts of the number of comments
> received — or is it better to wait until the end of a commenting round to
> make all comments available, including counts?
> >>>>
> >>>> 3. Should third-party groups be able to file on behalf of someone
> else or not — and do agencies have the right to remove spam-like comments?
> >>>>
> >>>> 4. Should the public commenting process permit multiple comments per
> individual for a proceeding — and if so, how many comments from a single
> individual are too many? 100? 1000? More?
> >>>>
> >>>> 5. Finally, should the U.S. government itself consider, given public
> perceptions about potential conflicts of interest for any agency performing
> a public commenting process, whether it would be better to have third-party
> groups take responsibility for assembling comments and then filing those
> comments via a validated process with the government?
> >>>>
> >>>> These same questions need pragmatic pilots that involve the public to
> co-explore and co-develop how we operate effectively amid these
> technological shifts. As the capabilities of LLMs continue to grow, we need
> positive change agents willing to tackle the messy issues at the
> intersection of technology and society. The challenges are immense, but so
> too are the opportunities for positive change. Let’s seize this moment to
> create a better tomorrow for all. Working together, we can co-create a
> future that embraces AI’s potential while mitigating its risks, informed by
> the hard lessons we have already learned.
> >>>>
> >>>> Full article:
> https://www.oodaloop.com/archive/2023/04/18/why-a-pause-on-ai-development-is-not-the-answer-an-insiders-perspective/
> >>>>
> >>>> Hope this helps.
> >>>>
> >>>>
> >>>> On Thu, Oct 5, 2023 at 4:44 PM Jack Haverty via Nnagain <
> nnagain@lists.bufferbloat.net> wrote:
> >>>>>
> >>>>> Thanks for all your efforts to keep the "feedback loop" to the
> rulemakers functioning!
> >>>>>
> >>>>> I'd like to offer a suggestion for a hopefully politically
> acceptable way to handle the deluge, derived from my own battles with
> "email" over the years (decades).
> >>>>>
> >>>>> Back in the 1970s, I implemented one of the first email systems on
> the Arpanet, under the mentorship of JCR Licklider, who had been pursuing
> his vision of a "Galactic Network" at ARPA and MIT.   One of the things we
> discovered was the significance of anonymity.   At the time, anonymity was
> forbidden on the Arpanet; you needed an account on some computer, protected
> by passwords, in order to legitimately use the network.   The mechanisms
> were crude and easily broken, but the principle applied.
> >>>>>
> >>>>> Over the years, that principle has been forgotten, and the right to
> be anonymous has become entrenched.   But many uses of the network, and
> needs of its users, demand accountability, so all sorts of mechanisms have
> been pasted on top of the network to provide ways to judge user identity.
> Banks, medical services, governments, and businesses all demand some way of
> proving your identity, with passwords, various schemes of 2FA, VPNs, or
> other such technology, with varying degrees of protection.   It is still
> possible to be anonymous on the net, but many things you do require you to
> prove, to some extent, who you are.
> >>>>>
> >>>>> So, my suggestion for handling the deluge of "comments" is:
> >>>>>
> >>>>> 1/ create some mechanism for "registering" your intent to submit a
> comment.   Make it hard for bots to register.  Perhaps you can leverage the
> work of various partners, e.g., ISPs, retailers, government agencies,
> financial institutions, of others who already have some way of identifying
> their users.
> >>>>>
> >>>>> 2/ Also make registration optional - anyone can still submit
> comments anonymously if they choose.
> >>>>>
> >>>>> 3/ for "registered commenters", provide a way to "edit" your
> previous comment - i.e., advise that your comment is always the last one
> you submitted.   I.E., whoever you are, you can only submit one comment,
> which will be the last one you submit.
> >>>>>
> >>>>> 4/ In the thousands of pages of comments, somehow flag the ones that
> are from registered commenters, visible to the people who read the
> comments.   Even better, provide those "information consumers" with ways to
> sort, filter, and search through the body of comments.
> >>>>>
> >>>>> This may not reduce the deluge of comments, but I'd expect it to
> help the lawyers and politicians keep their heads above the water.
> >>>>>
> >>>>> Anonymity is an important issue for Net Neutrality too, but I'll
> opine about that separately.....
> >>>>>
> >>>>> Jack Haverty
> >>>>>
> >>>>>
> >>>>> On 10/2/23 12:38, David Bray, PhD via Nnagain wrote:
> >>>>>
> >>>>> Greetings all and thank you Dave Taht for that very kind intro...
> >>>>>
> >>>>> First, I'll open with I'm a gosh-darn non-partisan, which means I
> swore an oath to uphold the Constitution first and serve the United States
> - not a specific party, tribe, or ideology. This often means, especially in
> today's era of 24/7 news and social media, non-partisans have to "top
> cover".
> >>>>>
> >>>>> Second, I'll share that in what happened in 2017 (which itself was
> 10x what we saw in 2014) my biggest concern was and remains that a few
> actors attempted to flood the system with less-than-authentic comments.
> >>>>>
> >>>>> In some respects this is not new. The whole "notice and comment"
> process is a legacy process that goes back decades. And the FCC (and
> others) have had postcard floods of comments, mimeographed letters of
> comments, faxed floods of comments, and now this - which, when combined
> with generative AI, will be yet another flood.
> >>>>>
> >>>>> Which gets me to my biggest concern as a non-partisan in 2023-2024,
> namely how LLMs might misuse and abuse the commenting process further.
> >>>>>
> >>>>> Both in 2014 and 2017, I asked FCC General Counsel if I could use
> CAPTChA to try to reduce the volume of web scrapers or bots both filing and
> pulling info from the Electronic Comment Filing System.
> >>>>>
> >>>>> Both times I was told *no* out of concerns that they might prevent
> someone from filing. I asked if I could block obvious spam, defined as
> someone filing a comment >100 times a minute, and was similarly told no
> because one of those possible comments might be genuine and/or it could be
> an ex party filing en masse for others.
> >>>>>
> >>>>> For 2017 we had to spin up 30x the number of AWS cloud instances to
> handle the load - and this was a flood of comments at 4am, 5am, and 6am ET
> at night which normally shouldn’t see such volumes. When I said there was a
> combination of actual humans wanting to leave comments and others who were
> effectively denying service to others (especially because if anyone wanted
> to do a batch upload of 100,000 comments or more they could submit a CSV
> file or a comment with 100,000 signatories) - both parties said no, that
> couldn’t be happening.
> >>>>>
> >>>>> Until 2021 when the NY Attorney General proved that was exactly what
> was happening with 18m of the 23m apparently from non-authentic origin with
> ~9m from one side of the political aisle (and six companies) and ~9m from
> the other side of the political aisle (and one or more teenagers).
> >>>>>
> >>>>> So with Net Neutrality back on the agenda - here’s a simple
> prediction, even if the volume of comments is somehow controlled, 10,000+
> pages of comments produced by ChatGPT or a different LLM is both possible
> and probably will be done. The question is if someone includes a legitimate
> legal argument on page 6,517 - will FCC’s lawyers spot it and respond to it
> as part of the NPRM?
> >>>>>
> >>>>> Hope this helps and with highest regards,
> >>>>>
> >>>>> -d.
> >>>>> --
> >>>>>
> >>>>> Principal, LeadDoAdapt Ventures, Inc. & Distinguished Fellow
> >>>>>
> >>>>> Henry S. Stimson Center, Business Executives for National Security
> >>>>>
> >>>>>
> >>>>>
> >>>>> On Mon, Oct 2, 2023 at 2:15 PM Dave Taht via Nnagain <
> nnagain@lists.bufferbloat.net> wrote:
> >>>>>>
> >>>>>> All:
> >>>>>>
> >>>>>> I have spent the last several days reaching out to as many people I
> >>>>>> know with a deep understanding of the policy and technical issues
> >>>>>> surrounding the internet, to participate on this list. I encourage
> you
> >>>>>> all to reach out on your own, especially to those that you can
> >>>>>> constructively and civilly disagree with, and hopefully work with,
> to
> >>>>>> establish technical steps forward. Quite a few have joined silently!
> >>>>>> So far, 168 people have joined!
> >>>>>>
> >>>>>> Please welcome Dr David Bray[1], a self-described "human flack
> jacket"
> >>>>>> who, in the last NN debate, stood up for the non -partisan FCC IT
> team
> >>>>>> that successfully kept the system up 99.4% of the time despite the
> >>>>>> comment floods and network abuses from all sides. He has shared with
> >>>>>> me privately many sad (and some hilarious!) stories of that era,
> and I
> >>>>>> do kind of hope now, that some of that history surfaces, and we can
> >>>>>> learn from it.
> >>>>>>
> >>>>>> Thank you very much, David, for putting down your painful
> memories[2],
> >>>>>> and agreeing to join here. There is a lot to tackle here, going
> >>>>>> forward.
> >>>>>>
> >>>>>> [1] https://www.stimson.org/ppl/david-bray/
> >>>>>> [2] "Pain shared is reduced. Joy shared, increased." - Spider
> Robinson
> >>>>>>
> >>>>>>
> >>>>>> --
> >>>>>> Oct 30:
> https://netdevconf.info/0x17/news/the-maestro-and-the-music-bof.html
> >>>>>> Dave Täht CSO, LibreQos
> >>>>>> _______________________________________________
> >>>>>> Nnagain mailing list
> >>>>>> Nnagain@lists.bufferbloat.net
> >>>>>> https://lists.bufferbloat.net/listinfo/nnagain
> >>>>>
> >>>>>
> >>>>> _______________________________________________
> >>>>> Nnagain mailing list
> >>>>> Nnagain@lists.bufferbloat.net
> >>>>> https://lists.bufferbloat.net/listinfo/nnagain
> >>>>>
> >>>>>
> >>>>> _______________________________________________
> >>>>> Nnagain mailing list
> >>>>> Nnagain@lists.bufferbloat.net
> >>>>> https://lists.bufferbloat.net/listinfo/nnagain
> >>>>
> >>>>
> >>> _______________________________________________
> >>> Nnagain mailing list
> >>> Nnagain@lists.bufferbloat.net
> >>> https://lists.bufferbloat.net/listinfo/nnagain
> >>
> >>
> >>
> >> --
> >> Please send any postal/overnight deliveries to:
> >> Vint Cerf
> >> Google, LLC
> >> 1900 Reston Metro Plaza, 16th Floor
> >> Reston, VA 20190
> >> +1 (571) 213 1346
> >>
> >>
> >> until further notice
> >>
> >>
> >>
> >
> > _______________________________________________
> > Nnagain mailing list
> > Nnagain@lists.bufferbloat.net
> > https://lists.bufferbloat.net/listinfo/nnagain
>
>
>
> --
> Oct 30:
> https://netdevconf.info/0x17/news/the-maestro-and-the-music-bof.html
> Dave Täht CSO, LibreQos
> _______________________________________________
> Nnagain mailing list
> Nnagain@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/nnagain
>

[-- Attachment #2: Type: text/html, Size: 46457 bytes --]

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [NNagain] somewhat OT: Licklidder
  2023-10-10 15:29                 ` [NNagain] somewhat OT: Licklidder Dave Taht
  2023-10-10 15:53                   ` Steve Crocker
@ 2023-10-10 16:59                   ` Jack Haverty
  1 sibling, 0 replies; 15+ messages in thread
From: Jack Haverty @ 2023-10-10 16:59 UTC (permalink / raw)
  To: Dave Taht,
	Network Neutrality is back! Let´s make the technical
	aspects heard this time!

For more on Lick, I recommend Waldrop's book "The Dream Machine" - 
basically a history of Lick's activities in the early days of his 
"galactic network" activities.  The stories about the 70s, when I was 
part of Lick's group as student and then staff, match my own 
recollections very well, so I tend to believe that the other material is 
also historically accurate.   For events I personally experienced, it 
also revealed a lot of the "why" things happened, which I hadn't known 
before.

Lick's vision was that interconnected computers would eventually become 
pervasive and help humans do everything humans do.  Getting above the 
level of datagrams, algorithms, protocols, and acronyms, it seems to me 
that is pretty much what The Internet has become.

https://www.amazon.com/Dream-Machine-Licklider-Revolution-Computing/dp/014200135X

I don't remember if it contains anything about the guitar though. I 
don't recall that he ever brought one in to work.   But he could 
sometimes be seen playing MazeWars on our Imlacs (early desktop 
workstations in the 70s).

Jack

On 10/10/23 08:29, Dave Taht wrote:
> On Mon, Oct 9, 2023 at 7:56 PM Jack Haverty via Nnagain
> <nnagain@lists.bufferbloat.net> wrote:
>
> For starters it is an honor to be conversing with folk that knew Bob
> Taylor, and "Lick", and y'all made me go back and re-read
>
> http://memex.org/licklider.pdf
>
> For inspiration. I think everyone in our field should re-read that,
> periodically. For example he makes an overgeneralization about the
> thinking processes of men, as compared to the computers of the time,
> and not to women...
>
> But I have always had an odd question - what songs did Lick play on
> guitar? Do any recordings exist?
>
> Music defines who I am, at least. I love the angularness and surprises
> in jazz, and the deep storytelling buried deep in Shostakovich's
> Fifth. Moving forward to modern music: the steady backbeat of Burning
> Man - and endless repetition of short phrases - seems to lead to
> groupthink - I can hardly stand EDM for an hour.
>
>   I am "maked" by Angela' Lansbury's Sweeny Todd, and my religion,
> forever reformed by Monty Python's Life of Brian, One Flew over the
> Cookoos nest, 12 Angry Men, and the 12 Monkees, Pink Floyd and punk
> music were the things that shaped me. No doubt it differs
> significantly for everyone here, please share?
>
> Powerful tales and their technologies predate the internet, and
> because they were wildly shared, influenced how generations thought
> without being the one true answer. Broadcast media, also, was joint,
> and in school we
>
> We are in a new era of uncommonality of experience, in part from
> bringing in all the information in the world, while still separated by
> differences in language, exposure, education, and culture, although
> nowadays it has become so easy and natural to be able to use computer
> assisted language translation tools, I do not know how well they truly
> make the jump between cultures.
>
> In that paper he talked about 75% of his time being spent setting up
> to do analytics, where today so much information exists as to be
> impossible to analyze.
>
> I only have a few more small comments below, but I wanted to pick out
> the concepts of TOS and backpressure as needing thought on another
> day, in another email (what was licks song list??? :)). The internet
> has very little Tos or backpressure, and Flow Queuing based algorithms
> actually function thusly:
>
> If the arrival rate of a flow is less than the departure rate of all
> other flows, it goes out first.
>
> To some extent this matches some of Nagles' "every application has a
> right to one packet in the network", and puts a reward into the system
> for applications that use slightly less than their fair share of the
> bandwidth.
>
>> IMHO, the problem may be that the Internet, and computing technology in general, is so new that non-technical organizations, such as government entities, don't understand it and therefore can't figure out whether or how to regulate anything involved.
>>
>> In other, older, "technologies", rules, procedures, and traditions have developed over the years to provide for feedback and control between governees and governors.  Roberts Rules of Order was created 150 years ago, and is still widely used to manage public meetings.  I've been in local meetings where everyone gets a chance to speak, but are limited to a few minutes to say whatever's on their mind.  You have to appear in person, wait your turn, and make your comment.  Doing so is free, but still has the cost of time and hassle to get to the meeting.
>>
>> Organizations have figured out over the years how to manage meetings.  [Vint - remember the "Rathole!" mechanism that we used to keep Internet meetings on track...?]
> PARC had "Dealer".
>
>>  From what David describes, it sounds like the current "public comment" mechanisms in the electronic arena are only at the stage where the loudest voices can drown out all others, and public debates are essentially useless cacophonies of the loudest proponents of the various viewpoints.   There are no rules.   Why should anyone submit their own sensible comments, knowing they'll be lost in the noise?
>>
>> In non-electronic public forums, such behavior is ruled out, and if it persists, the governing body can have offenders ejected, adjourn a meeting until cooler heads prevail, or otherwise make the discourse useful for informing decisions.   Courts can issue restraining orders, but has any court ever issued such an order applying to an electronic forum?
>>
>> So, why haven't organizations yet developed rules and mechanisms for managing electronic discussions....?
>>
>> I'd offer two observations and suggestions.
>>
>> -----
>>
>> First, a major reason for a lack of such rules and mechanisms may be an educational gap.  Administrators, politicians, and staffers may simply not understand all this newfangled technology, or how it works, and are drowning in a sea of terminology, acronyms, and concepts that make no sense (to them).   In the FCC case, even the technical gurus may have deep knowledge of their traditional realm of telephony, radio, and related issues and policy tradeoffs.   But they may be largely ignorant of computing and networking equivalents.   Probably even worse, they may unconsciously consider the new world as a simple evolution of the old, not recognizing the impact of incredibly fast computers and communications, and the advances that they enable, such as "AI" - whatever that is...
>>
>> About 10 years ago, I accidentally got involved in a patent dispute to be an "expert witness", for a patent involving downloading new programs over a communications path into a remote computer (yes, what all our devices do almost every day).   I was astounded when I learned how little the "judicial system" (lawyers, judges, legislators, etc.) knew about computer and network technology.   That didn't stop them from debating the meaning of technical terms.  What is RAM?  How does "programming" differ from "reprogramming"?  What is "memory"?  What is a "processor"?   What is an "operating system"?   The arguments continue until eventually a judge declares what the answer is, with little technical knowledge or expertise to help.   So you can easily get legally binding definitions such as "operating system" means "Windows", and that all computers contain an operating system.
>>
>> I spent hours on the phone over about 18 months, explaining to the lawyers how computers and networks actually worked.   In turn, they taught me quite a lot about the vagaries of the laws and patents.  It was fascinating but also disturbing to see how ill-prepared the legal system was for new technologies.
>>
>> So, my suggestion is that a focus be placed on helping the non-technical decision makers understand the nuances of computing and the Internet.  I don't think that will be successful by burying them in the sea of technical jargon and acronyms.
>>
>> Before I retired, I spent a lot of time with C-suite denizens from companies outside of the technology industry - banks, manufacturers, transportation, etc. - helping them understand what "The Internet" was, and help them see it as both a huge opportunity and a huge threat to their businesses.  One technique I used was simply stolen from the early days of The Internet.
>>
>> When we were involved in designing the internal mechanisms of the Internet, in particular TCPV4, we didn't know much about networks either.  So we used analogies.  In particular we used the existing transportation infrastructure as a model.   Moving bits around the world isn't all that different from moving goods and people.   But everyone, even with no technical expertise, knows about transportation.
>>
>> It turns out that there are a lot of useful analogies.  For example, we recognized that there were different kinds of "traffic" with different needs.  Coal for power plants was important, but not urgent.  If a coal train waits on a siding while a passenger train passes, it's OK, even preferred.   There could be different "types of service" available from the transportation infrastructure.   At the time (late 1970s) we didn't know exactly how to do that, but decided to put a field in the IP header as a placeholder - the "TOS" field.  Figuring out what different TOSes there should be, and how they would be handled differently, was still on the to-do list.   There are even analogies to the Internet - goods might travel over a "marine network" to a "port", where they are moved onto a "rail network", to a distributor, and moved on the highway network to their final destination.  Routers, gateways, ...
>>
>> Other transportation analogies reinforced the notion of TOS.  E.g., if you're sending a document somewhere, you can choose how to send it - normal postal mail, or Priority Mail, or even use a different "network" such as an overnight delivery service.  Different TOS would engage different behaviors of the underlying communications system, and might also have different costs to use them.  Sending a ton of coal to get delivered in a week or two would cost a lot less than sending a ton of documents for overnight delivery.
>>
>> There were other transportation analogies heard during the TCPV4 design discussions - e.g., "Expressway Routing" (do you take a direct route over local streets, or go to the freeway even though it's longer) and "Multi-Homing" (your manufacturing plant has access to both a highway and a rail line).
>>
>> Suggestion -- I suspect that using a familiar infrastructure such as transport to discuss issues with non-technical decision makers would be helpful.  E.g., imagine what would happen if some particular "net neutrality" set of rules was placed on the transportation infrastructure?   Would it have a desirable effect?
>>
>> -----
>>
>> Second, in addition to anonymity as an important issue in the electronic world, my experience as a mentee of Licklider surfaced another important issue in the "galactic network" vision -- "Back Pressure".     The notion is based in existing knowledge.   Economics has notions of Supply and Demand and Cost Curves.   Engineering has the notion of "Negative Feedback" to stabilize mechanical, electrical, or other systems.
>>
>> We discussed Back Pressure, in the mid 70s, in the context of electronic mail, and tried to get the notion of "stamps" accepted as part of the email mechanisms.  The basic idea was that there had to be some form of "back pressure" to prevent overload by discouraging sending of huge quantities of mail.
>>
>> At the time, mail traffic was light, since every message was typed by hand by some user.  In Lick's group we had experimented with using email as a way for computer programs to interact.  In Lick's vision, humans would interact by using their computers as their agents.   Even then, computers could send email a lot faster and continuously than any human at a keyboard, and could easily flood the network.  [This epiphany occurred shortly after a mistake in configuring distribution lists caused so many messages and replies that our machine crashed as its disk space ran out.]
>>
>> "Stamps" didn't necessarily represent monetary cost.  Back pressure could be simple constraints, e.g., no user can send more than 500 (or whatever) messages per day.   This notion never got enough support to become part of the email standards; I still think it would help with the deluge of spam we all experience today.
>>
>> Back Pressure in the Internet today is largely non-existent.  I (or my AI and computers) can send as much email as I like.   Communications carriers promote "unlimited data" but won't guarantee anything.   Memory has become cheap, and as a result behaviors such as "buffer bloat" have appeared.
>>
>> Suggestion - educate the decision-makers about Back Pressure, using highway analogies (metering lights, etc.)
>>
>> -----
>>
>> Education about the new technology, but by using some familiar analogs, and introduction of Back Pressure, in some appropriate form, as part of a "network neutrality" policy, would be the two foci I'd recommend.
>>
>> My prior suggestion of "registration" and accepting only the last comment was based on the observations above.  Back pressure doesn't have to be monetary, and registered users don't have to be personally identified.   Simply making it sufficiently "hard" to register (using CAPTCHAs, 2FA, whatever) would be a "cost" discouraging "loud voices".   Even the law firms submitting millions of comments on behalf of their clients might balk at the cost (in labor not money) to register their million clients, even anonymously, so each could get his/her comment submitted.   Of course, they could always pass the costs on to their (million? really?) clients.  But it would still be Back Pressure.
>>
>> One possibility -- make the "cost" of submitting a million electronic comments equal to the cost of submitting a million postcards...?
>>
>> Jack Haverty
>>
>>
>> On 10/9/23 16:55, David Bray, PhD wrote:
>>
>> Great points Vint as you're absolutely right - there are multiple modalities here (and in the past it was spam from thousands of postcards, then mimeographs, then faxes, etc.)
>>
>> The standard historically has been set by the Administrative Conference of the United States: https://www.acus.gov/about-acus
>>
>> In 2020 there seemed to be an effort to gave the General Services Administration weigh-in, however they closed that rulemaking attempt without publishing any of the comments they got and no announcement why it was closed.
>>
>> As for what part of Congress - I believe ACUS was championed by both the Senate and House Judiciary Committees as it has oversight and responsibility for the interpretations of the Administrative Procedure Act of 1946 (which sets out the whole rulemaking procedure).
>>
>> Sadly there isn't a standard across agencies - which also means there isn't a standard across Administrations. Back in 2018 and 2020, both with this group of 52 people here https://tinyurl.com/letter-signed-52-people - as well as individually - I did my darnest to encourage them to do a standard.
>>
>> There's also the National Academy of Public Administration which is probably the latest remaining non-partisan forum for discussions like this too.
>>
>>
>> On Mon, Oct 9, 2023 at 7:46 PM Vint Cerf <vint@google.com> wrote:
>>> David, this is a good list.
>>> FACA has rules for public participation, for example.
>>>
>>> I think it should be taken into account for any public commenting process that online (and offline such as USPS or fax and phone calls) that spam and artificial inflation of comments are possible. Is there any specific standard for US agency public comment handling? If now, what committees of the US Congress might have jurisdiction?
>>>
>>> v
>>>
>>>
>>> On Tue, Oct 10, 2023 at 8:22 AM David Bray, PhD via Nnagain <nnagain@lists.bufferbloat.net> wrote:
>>>> I'm all for doing new things to make things better.
>>>>
>>>> At the same time, I used to do bioterrorism preparedness and response from 2000-2005 (and aside from asking myself what kind of crazy world needed counter-bioterrorism efforts... I also realized you don't want to interject something completely new in the middle of an unfolding crisis event). If something were to be injected now, it would have to have consensus from both sides, otherwise at least one side (potentially detractors from both) will claim that whatever form the new approaches take are somehow advantaging "the other side" and disadvantaging them.
>>>>
>>>> Probably would take a ruling by the Administrative Conference of the United States, at a minimum to answer these five questions - and even then, introducing something completely different in the midst of a political melee might just invite mudslinging unless moderate voices on both sides can reach some consensus.
>>>>
>>>> 1. Does identity matter regarding who files a comment or not — and must one be a U.S. person in order to file?
>>>>
>>>> 2. Should agencies publish real-time counts of the number of comments received — or is it better to wait until the end of a commenting round to make all comments available, including counts?
>>>>
>>>> 3. Should third-party groups be able to file on behalf of someone else or not — and do agencies have the right to remove spam-like comments?
>>>>
>>>> 4. Should the public commenting process permit multiple comments per individual for a proceeding — and if so, how many comments from a single individual are too many? 100? 1000? More?
>>>>
>>>> 5. Finally, should the U.S. government itself consider, given public perceptions about potential conflicts of interest for any agency performing a public commenting process, whether it would be better to have third-party groups take responsibility for assembling comments and then filing those comments via a validated process with the government?
>>>>
>>>>
>>>>
>>>> On Sat, Oct 7, 2023 at 4:10 PM Jack Haverty <jack@3kitty.org> wrote:
>>>>> Hi again David et al,
>>>>>
>>>>> Interesting frenzy...lots of questions that need answers and associated policies.   I served 6 years as an elected official (in a small special district in California), so I have some small understanding of the government side of things and the constraints involved.   Being in charge doesn't mean you can do what you want.
>>>>>
>>>>> I'm thinking here more near-term and incremental steps.  You said "These same questions need pragmatic pilots that involve the public ..."
>>>>>
>>>>> So, how about using the current NN situation for a pilot?  Keep all the current ways and emerging AI techniques to continue to flood the system with comments.   But also offer an *optional* way for humans to "register" as a commenter and then submit their (latest only) comment into the melee.  Will people use it?  Will "consumers" (the lawyers, commissioners, etc.) find it useful?
>>>>>
>>>>> I've found it curious, for decades now, that there are (too many) mechanisms for "secure email", that may help with the flood of disinformation from anonymous senders, but very very few people use them.   Maybe they don't know how; maybe the available schemes are too flawed; maybe ...?
>>>>>
>>>>> About 30 years ago, I was a speaker in a public meeting orchestrated by USPS, and recommended that they take a lead role, e.g., by acting as a national CA - certificate authority.  Never happened though.   FCC issues lots of licenses...perhaps they could issue online credentials too?
>>>>>
>>>>> Perhaps a "pilot" where you will also accept comments by email, some possibly sent by "verified" humans if they understand how to do so, would be worth trying?   Perhaps comments on "technical aspects" coming from people who demonstrably know how to use technology would be valuable to the policy makers?
>>>>>
>>>>> The Internet, and technology such as TCP, began as an experimental pilot about 50 years ago.  Sometimes pilots become infrastructures.
>>>>>
>>>>> FYI, I'm signing this message.  Using OpenPGP.  I could encrypt it also, but my email program can't find your public key.
>>>>>
>>>>> Jack Haverty
>>>>>
>>>>>
>>>>> On 10/5/23 14:21, David Bray, PhD wrote:
>>>>>
>>>>> Indeed Jack - a few things to balance - the Administrative Procedure Act of 1946 (on which the idea of rulemaking is based) us about raising legal concerns that must be answered by the agency at the time the rulemaking is done. It's not a vote nor is it the case that if the agency gets tons of comments in one direction that they have to go in that direction. Instead it's only about making sure legal concerns are considered and responded to before the agency before the agency acts. (Which is partly why sending "I'm for XYZ" or "I'm against ABC" really doesn't mean anything to an agency - not only is that not a legal argument or concern, it's also not something where they're obligated to follow these comments - it's not a vote or poll).
>>>>>
>>>>> That said, political folks have spun things to the public as if it is a poll/vote/chance to act. The raise a valid legal concern part of the APA of 1946 is omitted. Moreover the fact that third party law firms and others like to submit comments on behalf of clients - there will always be a third party submitting multiple comments for their clients (or "clients") because that's their business.
>>>>>
>>>>> In the lead up to 2017, the Consumer and Government Affairs Bureau of the FCC got an inquiry from a firm asking how they could submit 1 million comments a day on an "upcoming privacy proceeding" (their words, astute observers will note there was no privacy proceeding before the FCC in 2017). When the Bureau asked me, I told them either mail us a CD to upload it or submit one comment with 1 million signatures. To attempt to flood us with 1 million comments a day (aside from the fact who can "predict" having that many daily) would deny resources to others. In the mess that followed, what was released to the public was so redacted you couldn't see the legitimate concerns and better paths that were offered to this entity.
>>>>>
>>>>> And the FCC isn't alone. EPA, FTC, and other regulatory agencies have had these hijinks for years - and before the Internet it was faxes, mass mimeographs (remember blue ink?), and postcards.The Administrative Conference of the United States (ACUS) - is the body that is supposed to provide consistent guidance for things like this across the U.S. government. I've briefed them and tried to raise awareness of these issues - as I think fundamentally this is a **process** question that once answered, tech can support. However they're not technologies and updating the interpretation of the process isn't something lawyers are apt to do until the evidence that things are in trouble is overwhelming.
>>>>>
>>>>> 52 folks wrote a letter to them - and to GSA - back in 2020. GSA had a rulemaking of its own on how to improve things, yet oddly never published any of the comments it received (including ours) and closed the rulemaking quietly. Here's the letter: https://tinyurl.com/letter-signed-52-people
>>>>>
>>>>> And here's an article published in OODAloop about this - and why Generative AI is probably going to make things even more challenging: https://www.oodaloop.com/archive/2023/04/18/why-a-pause-on-ai-development-is-not-the-answer-an-insiders-perspective/
>>>>>
>>>>> [snippet of the article] Now in 2023 and Beyond: Proactive Approaches to AI and Society
>>>>>
>>>>> Looking to the future, to effectively address the challenges arising from AI, we must foster a proactive, results-oriented, and cooperative approach with the public. Think tanks and universities can engage the public in conversations about how to work, live, govern, and co-exist with modern technologies that impact society. By involving diverse voices in the decision-making process, we can better address and resolve the complex challenges AI presents on local and national levels.
>>>>>
>>>>> In addition, we must encourage industry and political leaders to participate in finding non-partisan, multi-sector solutions if civil societies are to remain stable. By working together, we can bridge the gap between technological advancements and their societal implications.
>>>>>
>>>>> Finally, launching AI pilots across various sectors, such as work, education, health, law, and civil society, is essential. We must learn by doing on how we can create responsible civil environments where AIs can be developed and deployed responsibly. These initiatives can help us better understand and integrate AI into our lives, ensuring its potential is harnessed for the greater good while mitigating risks.
>>>>>
>>>>> In 2019 and 2020, a group of fifty-two people asked the Administrative Conference of the United States (which helps guide rulemaking procedures for federal agencies), General Accounting Office, and the General Services Administration to call attention to the need to address the challenges of chatbots flooding public commenting procedures and potentially crowding out or denying services to actual humans wanting to leave a comment. We asked:
>>>>>
>>>>> 1. Does identity matter regarding who files a comment or not — and must one be a U.S. person in order to file?
>>>>>
>>>>> 2. Should agencies publish real-time counts of the number of comments received — or is it better to wait until the end of a commenting round to make all comments available, including counts?
>>>>>
>>>>> 3. Should third-party groups be able to file on behalf of someone else or not — and do agencies have the right to remove spam-like comments?
>>>>>
>>>>> 4. Should the public commenting process permit multiple comments per individual for a proceeding — and if so, how many comments from a single individual are too many? 100? 1000? More?
>>>>>
>>>>> 5. Finally, should the U.S. government itself consider, given public perceptions about potential conflicts of interest for any agency performing a public commenting process, whether it would be better to have third-party groups take responsibility for assembling comments and then filing those comments via a validated process with the government?
>>>>>
>>>>> These same questions need pragmatic pilots that involve the public to co-explore and co-develop how we operate effectively amid these technological shifts. As the capabilities of LLMs continue to grow, we need positive change agents willing to tackle the messy issues at the intersection of technology and society. The challenges are immense, but so too are the opportunities for positive change. Let’s seize this moment to create a better tomorrow for all. Working together, we can co-create a future that embraces AI’s potential while mitigating its risks, informed by the hard lessons we have already learned.
>>>>>
>>>>> Full article: https://www.oodaloop.com/archive/2023/04/18/why-a-pause-on-ai-development-is-not-the-answer-an-insiders-perspective/
>>>>>
>>>>> Hope this helps.
>>>>>
>>>>>
>>>>> On Thu, Oct 5, 2023 at 4:44 PM Jack Haverty via Nnagain <nnagain@lists.bufferbloat.net> wrote:
>>>>>> Thanks for all your efforts to keep the "feedback loop" to the rulemakers functioning!
>>>>>>
>>>>>> I'd like to offer a suggestion for a hopefully politically acceptable way to handle the deluge, derived from my own battles with "email" over the years (decades).
>>>>>>
>>>>>> Back in the 1970s, I implemented one of the first email systems on the Arpanet, under the mentorship of JCR Licklider, who had been pursuing his vision of a "Galactic Network" at ARPA and MIT.   One of the things we discovered was the significance of anonymity.   At the time, anonymity was forbidden on the Arpanet; you needed an account on some computer, protected by passwords, in order to legitimately use the network.   The mechanisms were crude and easily broken, but the principle applied.
>>>>>>
>>>>>> Over the years, that principle has been forgotten, and the right to be anonymous has become entrenched.   But many uses of the network, and needs of its users, demand accountability, so all sorts of mechanisms have been pasted on top of the network to provide ways to judge user identity.  Banks, medical services, governments, and businesses all demand some way of proving your identity, with passwords, various schemes of 2FA, VPNs, or other such technology, with varying degrees of protection.   It is still possible to be anonymous on the net, but many things you do require you to prove, to some extent, who you are.
>>>>>>
>>>>>> So, my suggestion for handling the deluge of "comments" is:
>>>>>>
>>>>>> 1/ create some mechanism for "registering" your intent to submit a comment.   Make it hard for bots to register.  Perhaps you can leverage the work of various partners, e.g., ISPs, retailers, government agencies, financial institutions, of others who already have some way of identifying their users.
>>>>>>
>>>>>> 2/ Also make registration optional - anyone can still submit comments anonymously if they choose.
>>>>>>
>>>>>> 3/ for "registered commenters", provide a way to "edit" your previous comment - i.e., advise that your comment is always the last one you submitted.   I.E., whoever you are, you can only submit one comment, which will be the last one you submit.
>>>>>>
>>>>>> 4/ In the thousands of pages of comments, somehow flag the ones that are from registered commenters, visible to the people who read the comments.   Even better, provide those "information consumers" with ways to sort, filter, and search through the body of comments.
>>>>>>
>>>>>> This may not reduce the deluge of comments, but I'd expect it to help the lawyers and politicians keep their heads above the water.
>>>>>>
>>>>>> Anonymity is an important issue for Net Neutrality too, but I'll opine about that separately.....
>>>>>>
>>>>>> Jack Haverty
>>>>>>
>>>>>>
>>>>>> On 10/2/23 12:38, David Bray, PhD via Nnagain wrote:
>>>>>>
>>>>>> Greetings all and thank you Dave Taht for that very kind intro...
>>>>>>
>>>>>> First, I'll open with I'm a gosh-darn non-partisan, which means I swore an oath to uphold the Constitution first and serve the United States - not a specific party, tribe, or ideology. This often means, especially in today's era of 24/7 news and social media, non-partisans have to "top cover".
>>>>>>
>>>>>> Second, I'll share that in what happened in 2017 (which itself was 10x what we saw in 2014) my biggest concern was and remains that a few actors attempted to flood the system with less-than-authentic comments.
>>>>>>
>>>>>> In some respects this is not new. The whole "notice and comment" process is a legacy process that goes back decades. And the FCC (and others) have had postcard floods of comments, mimeographed letters of comments, faxed floods of comments, and now this - which, when combined with generative AI, will be yet another flood.
>>>>>>
>>>>>> Which gets me to my biggest concern as a non-partisan in 2023-2024, namely how LLMs might misuse and abuse the commenting process further.
>>>>>>
>>>>>> Both in 2014 and 2017, I asked FCC General Counsel if I could use CAPTChA to try to reduce the volume of web scrapers or bots both filing and pulling info from the Electronic Comment Filing System.
>>>>>>
>>>>>> Both times I was told *no* out of concerns that they might prevent someone from filing. I asked if I could block obvious spam, defined as someone filing a comment >100 times a minute, and was similarly told no because one of those possible comments might be genuine and/or it could be an ex party filing en masse for others.
>>>>>>
>>>>>> For 2017 we had to spin up 30x the number of AWS cloud instances to handle the load - and this was a flood of comments at 4am, 5am, and 6am ET at night which normally shouldn’t see such volumes. When I said there was a combination of actual humans wanting to leave comments and others who were effectively denying service to others (especially because if anyone wanted to do a batch upload of 100,000 comments or more they could submit a CSV file or a comment with 100,000 signatories) - both parties said no, that couldn’t be happening.
>>>>>>
>>>>>> Until 2021 when the NY Attorney General proved that was exactly what was happening with 18m of the 23m apparently from non-authentic origin with ~9m from one side of the political aisle (and six companies) and ~9m from the other side of the political aisle (and one or more teenagers).
>>>>>>
>>>>>> So with Net Neutrality back on the agenda - here’s a simple prediction, even if the volume of comments is somehow controlled, 10,000+ pages of comments produced by ChatGPT or a different LLM is both possible and probably will be done. The question is if someone includes a legitimate legal argument on page 6,517 - will FCC’s lawyers spot it and respond to it as part of the NPRM?
>>>>>>
>>>>>> Hope this helps and with highest regards,
>>>>>>
>>>>>> -d.
>>>>>> --
>>>>>>
>>>>>> Principal, LeadDoAdapt Ventures, Inc. & Distinguished Fellow
>>>>>>
>>>>>> Henry S. Stimson Center, Business Executives for National Security
>>>>>>
>>>>>>
>>>>>>
>>>>>> On Mon, Oct 2, 2023 at 2:15 PM Dave Taht via Nnagain <nnagain@lists.bufferbloat.net> wrote:
>>>>>>> All:
>>>>>>>
>>>>>>> I have spent the last several days reaching out to as many people I
>>>>>>> know with a deep understanding of the policy and technical issues
>>>>>>> surrounding the internet, to participate on this list. I encourage you
>>>>>>> all to reach out on your own, especially to those that you can
>>>>>>> constructively and civilly disagree with, and hopefully work with, to
>>>>>>> establish technical steps forward. Quite a few have joined silently!
>>>>>>> So far, 168 people have joined!
>>>>>>>
>>>>>>> Please welcome Dr David Bray[1], a self-described "human flack jacket"
>>>>>>> who, in the last NN debate, stood up for the non -partisan FCC IT team
>>>>>>> that successfully kept the system up 99.4% of the time despite the
>>>>>>> comment floods and network abuses from all sides. He has shared with
>>>>>>> me privately many sad (and some hilarious!) stories of that era, and I
>>>>>>> do kind of hope now, that some of that history surfaces, and we can
>>>>>>> learn from it.
>>>>>>>
>>>>>>> Thank you very much, David, for putting down your painful memories[2],
>>>>>>> and agreeing to join here. There is a lot to tackle here, going
>>>>>>> forward.
>>>>>>>
>>>>>>> [1] https://www.stimson.org/ppl/david-bray/
>>>>>>> [2] "Pain shared is reduced. Joy shared, increased." - Spider Robinson
>>>>>>>
>>>>>>>
>>>>>>> --
>>>>>>> Oct 30: https://netdevconf.info/0x17/news/the-maestro-and-the-music-bof.html
>>>>>>> Dave Täht CSO, LibreQos
>>>>>>> _______________________________________________
>>>>>>> Nnagain mailing list
>>>>>>> Nnagain@lists.bufferbloat.net
>>>>>>> https://lists.bufferbloat.net/listinfo/nnagain
>>>>>>
>>>>>> _______________________________________________
>>>>>> Nnagain mailing list
>>>>>> Nnagain@lists.bufferbloat.net
>>>>>> https://lists.bufferbloat.net/listinfo/nnagain
>>>>>>
>>>>>>
>>>>>> _______________________________________________
>>>>>> Nnagain mailing list
>>>>>> Nnagain@lists.bufferbloat.net
>>>>>> https://lists.bufferbloat.net/listinfo/nnagain
>>>>>
>>>> _______________________________________________
>>>> Nnagain mailing list
>>>> Nnagain@lists.bufferbloat.net
>>>> https://lists.bufferbloat.net/listinfo/nnagain
>>>
>>>
>>> --
>>> Please send any postal/overnight deliveries to:
>>> Vint Cerf
>>> Google, LLC
>>> 1900 Reston Metro Plaza, 16th Floor
>>> Reston, VA 20190
>>> +1 (571) 213 1346
>>>
>>>
>>> until further notice
>>>
>>>
>>>
>> _______________________________________________
>> Nnagain mailing list
>> Nnagain@lists.bufferbloat.net
>> https://lists.bufferbloat.net/listinfo/nnagain
>
>
> --
> Oct 30: https://netdevconf.info/0x17/news/the-maestro-and-the-music-bof.html
> Dave Täht CSO, LibreQos


^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [NNagain] somewhat OT: Licklidder
  2023-10-10 15:53                   ` Steve Crocker
@ 2023-10-10 17:12                     ` Jack Haverty
  2023-10-10 19:00                       ` Robert McMahon
  0 siblings, 1 reply; 15+ messages in thread
From: Jack Haverty @ 2023-10-10 17:12 UTC (permalink / raw)
  To: nnagain

[-- Attachment #1: Type: text/plain, Size: 42364 bytes --]

FYI, The Arpanet was a key player in that patent fight.  The Arpanet 
IMPs (the packet switches) downloaded software from each other, and that 
capability was used to distribute new releases of the IMP program.  I 
suggested that 1970s implementation to the lawyers as a good example of 
prior art, which led to a lot of work that eventually resurrected the 
1970s IMP code from a moldy listing in someone's basement, and got it 
running again on simulated ancient hardware.   At one point the 4-node 
Arpanet of 1970 was created and run, in anticipation of a demo of prior 
art at trial.  Sadly (for me at least) the combatants suddenly settled 
out of court, so the trial never happened and the patent issue was not 
adjudicated.   But the resurrected IMP code is on github now, so anyone 
interested can run their own Arpanet.

Jack


On 10/10/23 08:53, Steve Crocker via Nnagain wrote:
> Lots of good stuff here and I missed the earlier posts, but one small 
> thing caught my attention:
>
>     > About 10 years ago, I accidentally got involved in a patent
>     dispute to be an "expert witness", for a patent involving
>     downloading new programs over a communications path into a remote
>     computer (yes, what all our devices do almost every day).
>
> In the seminal period of late 1968 and early 1969 when we were 
> thinking about Arpanet protocols, one idea that was very much part of 
> our thinking was downloading a small program at the beginning of an 
> interactive session.  The downloaded program would take care of local 
> interactions to avoid the need to send every character across the net 
> only to have it echoed remotely.  Why not always use local echo?  
> Because most of the time-shared systems in the various ARPA-supported 
> research environments had distinct ways of interpreting each and 
> every character.  Imposing a network-wide rule of local echoing would 
> have compromised the usability of most of the systems on the Arpanet.  
> I think Multics was the only "modern" line-at-a-time system at the time.
>
> In March 1969 we decided it was time to write down the ideas from our 
> meetings in late 1968 and early 1969.  The first batch of RFCs 
> included Rulifson's RFC 5.  He proposed DEL, the Decode-Encode 
> Language.  Elie's RFC 51 a year later proposed the Network Interchange 
> Language.  In both cases the basic concept was the creation of a 
> simple language, easily implementable on each platform, that would 
> mediate the interaction with a remote system.  The programs were 
> expected to be short -- hence downloadable quickly -- and either 
> interpreted or quickly translated.  There was a tiny bit of 
> experimental work along this line, but it was far ahead of its time.  
> I think it was about 25 years before ActiveX came along, followed by Java.
>
> Steve
>
>
> On Tue, Oct 10, 2023 at 11:30 AM Dave Taht via Nnagain 
> <nnagain@lists.bufferbloat.net> wrote:
>
>     On Mon, Oct 9, 2023 at 7:56 PM Jack Haverty via Nnagain
>     <nnagain@lists.bufferbloat.net> wrote:
>
>     For starters it is an honor to be conversing with folk that knew Bob
>     Taylor, and "Lick", and y'all made me go back and re-read
>
>     http://memex.org/licklider.pdf
>
>     For inspiration. I think everyone in our field should re-read that,
>     periodically. For example he makes an overgeneralization about the
>     thinking processes of men, as compared to the computers of the time,
>     and not to women...
>
>     But I have always had an odd question - what songs did Lick play on
>     guitar? Do any recordings exist?
>
>     Music defines who I am, at least. I love the angularness and surprises
>     in jazz, and the deep storytelling buried deep in Shostakovich's
>     Fifth. Moving forward to modern music: the steady backbeat of Burning
>     Man - and endless repetition of short phrases - seems to lead to
>     groupthink - I can hardly stand EDM for an hour.
>
>      I am "maked" by Angela' Lansbury's Sweeny Todd, and my religion,
>     forever reformed by Monty Python's Life of Brian, One Flew over the
>     Cookoos nest, 12 Angry Men, and the 12 Monkees, Pink Floyd and punk
>     music were the things that shaped me. No doubt it differs
>     significantly for everyone here, please share?
>
>     Powerful tales and their technologies predate the internet, and
>     because they were wildly shared, influenced how generations thought
>     without being the one true answer. Broadcast media, also, was joint,
>     and in school we
>
>     We are in a new era of uncommonality of experience, in part from
>     bringing in all the information in the world, while still separated by
>     differences in language, exposure, education, and culture, although
>     nowadays it has become so easy and natural to be able to use computer
>     assisted language translation tools, I do not know how well they truly
>     make the jump between cultures.
>
>     In that paper he talked about 75% of his time being spent setting up
>     to do analytics, where today so much information exists as to be
>     impossible to analyze.
>
>     I only have a few more small comments below, but I wanted to pick out
>     the concepts of TOS and backpressure as needing thought on another
>     day, in another email (what was licks song list??? :)). The internet
>     has very little Tos or backpressure, and Flow Queuing based algorithms
>     actually function thusly:
>
>     If the arrival rate of a flow is less than the departure rate of all
>     other flows, it goes out first.
>
>     To some extent this matches some of Nagles' "every application has a
>     right to one packet in the network", and puts a reward into the system
>     for applications that use slightly less than their fair share of the
>     bandwidth.
>
>     > IMHO, the problem may be that the Internet, and computing
>     technology in general, is so new that non-technical organizations,
>     such as government entities, don't understand it and therefore
>     can't figure out whether or how to regulate anything involved.
>     >
>     > In other, older, "technologies", rules, procedures, and
>     traditions have developed over the years to provide for feedback
>     and control between governees and governors.  Roberts Rules of
>     Order was created 150 years ago, and is still widely used to
>     manage public meetings.  I've been in local meetings where
>     everyone gets a chance to speak, but are limited to a few minutes
>     to say whatever's on their mind.  You have to appear in person,
>     wait your turn, and make your comment. Doing so is free, but still
>     has the cost of time and hassle to get to the meeting.
>     >
>     > Organizations have figured out over the years how to manage
>     meetings.  [Vint - remember the "Rathole!" mechanism that we used
>     to keep Internet meetings on track...?]
>
>     PARC had "Dealer".
>
>     > From what David describes, it sounds like the current "public
>     comment" mechanisms in the electronic arena are only at the stage
>     where the loudest voices can drown out all others, and public
>     debates are essentially useless cacophonies of the loudest
>     proponents of the various viewpoints.   There are no rules.   Why
>     should anyone submit their own sensible comments, knowing they'll
>     be lost in the noise?
>     >
>     > In non-electronic public forums, such behavior is ruled out, and
>     if it persists, the governing body can have offenders ejected,
>     adjourn a meeting until cooler heads prevail, or otherwise make
>     the discourse useful for informing decisions.  Courts can issue
>     restraining orders, but has any court ever issued such an order
>     applying to an electronic forum?
>     >
>     > So, why haven't organizations yet developed rules and mechanisms
>     for managing electronic discussions....?
>     >
>     > I'd offer two observations and suggestions.
>     >
>     > -----
>     >
>     > First, a major reason for a lack of such rules and mechanisms
>     may be an educational gap.  Administrators, politicians, and
>     staffers may simply not understand all this newfangled technology,
>     or how it works, and are drowning in a sea of terminology,
>     acronyms, and concepts that make no sense (to them).   In the FCC
>     case, even the technical gurus may have deep knowledge of their
>     traditional realm of telephony, radio, and related issues and
>     policy tradeoffs.   But they may be largely ignorant of computing
>     and networking equivalents.  Probably even worse, they may
>     unconsciously consider the new world as a simple evolution of the
>     old, not recognizing the impact of incredibly fast computers and
>     communications, and the advances that they enable, such as "AI" -
>     whatever that is...
>     >
>     > About 10 years ago, I accidentally got involved in a patent
>     dispute to be an "expert witness", for a patent involving
>     downloading new programs over a communications path into a remote
>     computer (yes, what all our devices do almost every day).   I was
>     astounded when I learned how little the "judicial system"
>     (lawyers, judges, legislators, etc.) knew about computer and
>     network technology.   That didn't stop them from debating the
>     meaning of technical terms.  What is RAM? How does "programming"
>     differ from "reprogramming"?  What is "memory"?  What is a
>     "processor"?   What is an "operating system"?   The arguments
>     continue until eventually a judge declares what the answer is,
>     with little technical knowledge or expertise to help.   So you can
>     easily get legally binding definitions such as "operating system"
>     means "Windows", and that all computers contain an operating system.
>     >
>     > I spent hours on the phone over about 18 months, explaining to
>     the lawyers how computers and networks actually worked.   In turn,
>     they taught me quite a lot about the vagaries of the laws and
>     patents.  It was fascinating but also disturbing to see how
>     ill-prepared the legal system was for new technologies.
>     >
>     > So, my suggestion is that a focus be placed on helping the
>     non-technical decision makers understand the nuances of computing
>     and the Internet.  I don't think that will be successful by
>     burying them in the sea of technical jargon and acronyms.
>     >
>     > Before I retired, I spent a lot of time with C-suite denizens
>     from companies outside of the technology industry - banks,
>     manufacturers, transportation, etc. - helping them understand what
>     "The Internet" was, and help them see it as both a huge
>     opportunity and a huge threat to their businesses.  One technique
>     I used was simply stolen from the early days of The Internet.
>     >
>     > When we were involved in designing the internal mechanisms of
>     the Internet, in particular TCPV4, we didn't know much about
>     networks either.  So we used analogies.  In particular we used the
>     existing transportation infrastructure as a model.   Moving bits
>     around the world isn't all that different from moving goods and
>     people.   But everyone, even with no technical expertise, knows
>     about transportation.
>     >
>     > It turns out that there are a lot of useful analogies. For
>     example, we recognized that there were different kinds of
>     "traffic" with different needs.  Coal for power plants was
>     important, but not urgent.  If a coal train waits on a siding
>     while a passenger train passes, it's OK, even preferred.  There
>     could be different "types of service" available from the
>     transportation infrastructure.   At the time (late 1970s) we
>     didn't know exactly how to do that, but decided to put a field in
>     the IP header as a placeholder - the "TOS" field. Figuring out
>     what different TOSes there should be, and how they would be
>     handled differently, was still on the to-do list.   There are even
>     analogies to the Internet - goods might travel over a "marine
>     network" to a "port", where they are moved onto a "rail network",
>     to a distributor, and moved on the highway network to their final
>     destination.  Routers, gateways, ...
>     >
>     > Other transportation analogies reinforced the notion of TOS. 
>     E.g., if you're sending a document somewhere, you can choose how
>     to send it - normal postal mail, or Priority Mail, or even use a
>     different "network" such as an overnight delivery service. 
>     Different TOS would engage different behaviors of the underlying
>     communications system, and might also have different costs to use
>     them.  Sending a ton of coal to get delivered in a week or two
>     would cost a lot less than sending a ton of documents for
>     overnight delivery.
>     >
>     > There were other transportation analogies heard during the TCPV4
>     design discussions - e.g., "Expressway Routing" (do you take a
>     direct route over local streets, or go to the freeway even though
>     it's longer) and "Multi-Homing" (your manufacturing plant has
>     access to both a highway and a rail line).
>     >
>     > Suggestion -- I suspect that using a familiar infrastructure
>     such as transport to discuss issues with non-technical decision
>     makers would be helpful.  E.g., imagine what would happen if some
>     particular "net neutrality" set of rules was placed on the
>     transportation infrastructure?   Would it have a desirable effect?
>     >
>     > -----
>     >
>     > Second, in addition to anonymity as an important issue in the
>     electronic world, my experience as a mentee of Licklider surfaced
>     another important issue in the "galactic network" vision -- "Back
>     Pressure".     The notion is based in existing knowledge. 
>      Economics has notions of Supply and Demand and Cost Curves. 
>      Engineering has the notion of "Negative Feedback" to stabilize
>     mechanical, electrical, or other systems.
>     >
>     > We discussed Back Pressure, in the mid 70s, in the context of
>     electronic mail, and tried to get the notion of "stamps" accepted
>     as part of the email mechanisms.  The basic idea was that there
>     had to be some form of "back pressure" to prevent overload by
>     discouraging sending of huge quantities of mail.
>     >
>     > At the time, mail traffic was light, since every message was
>     typed by hand by some user.  In Lick's group we had experimented
>     with using email as a way for computer programs to interact.  In
>     Lick's vision, humans would interact by using their computers as
>     their agents.   Even then, computers could send email a lot faster
>     and continuously than any human at a keyboard, and could easily
>     flood the network.  [This epiphany occurred shortly after a
>     mistake in configuring distribution lists caused so many messages
>     and replies that our machine crashed as its disk space ran out.]
>     >
>     > "Stamps" didn't necessarily represent monetary cost. Back
>     pressure could be simple constraints, e.g., no user can send more
>     than 500 (or whatever) messages per day.   This notion never got
>     enough support to become part of the email standards; I still
>     think it would help with the deluge of spam we all experience today.
>     >
>     > Back Pressure in the Internet today is largely non-existent.  I
>     (or my AI and computers) can send as much email as I like. 
>      Communications carriers promote "unlimited data" but won't
>     guarantee anything.   Memory has become cheap, and as a result
>     behaviors such as "buffer bloat" have appeared.
>     >
>     > Suggestion - educate the decision-makers about Back Pressure,
>     using highway analogies (metering lights, etc.)
>     >
>     > -----
>     >
>     > Education about the new technology, but by using some familiar
>     analogs, and introduction of Back Pressure, in some appropriate
>     form, as part of a "network neutrality" policy, would be the two
>     foci I'd recommend.
>     >
>     > My prior suggestion of "registration" and accepting only the
>     last comment was based on the observations above.  Back pressure
>     doesn't have to be monetary, and registered users don't have to be
>     personally identified.   Simply making it sufficiently "hard" to
>     register (using CAPTCHAs, 2FA, whatever) would be a "cost"
>     discouraging "loud voices".   Even the law firms submitting
>     millions of comments on behalf of their clients might balk at the
>     cost (in labor not money) to register their million clients, even
>     anonymously, so each could get his/her comment submitted.   Of
>     course, they could always pass the costs on to their (million?
>     really?) clients. But it would still be Back Pressure.
>     >
>     > One possibility -- make the "cost" of submitting a million
>     electronic comments equal to the cost of submitting a million
>     postcards...?
>     >
>     > Jack Haverty
>     >
>     >
>     > On 10/9/23 16:55, David Bray, PhD wrote:
>     >
>     > Great points Vint as you're absolutely right - there are
>     multiple modalities here (and in the past it was spam from
>     thousands of postcards, then mimeographs, then faxes, etc.)
>     >
>     > The standard historically has been set by the Administrative
>     Conference of the United States: https://www.acus.gov/about-acus
>     >
>     > In 2020 there seemed to be an effort to gave the General
>     Services Administration weigh-in, however they closed that
>     rulemaking attempt without publishing any of the comments they got
>     and no announcement why it was closed.
>     >
>     > As for what part of Congress - I believe ACUS was championed by
>     both the Senate and House Judiciary Committees as it has oversight
>     and responsibility for the interpretations of the Administrative
>     Procedure Act of 1946 (which sets out the whole rulemaking procedure).
>     >
>     > Sadly there isn't a standard across agencies - which also means
>     there isn't a standard across Administrations. Back in 2018 and
>     2020, both with this group of 52 people here
>     https://tinyurl.com/letter-signed-52-people - as well as
>     individually - I did my darnest to encourage them to do a standard.
>     >
>     > There's also the National Academy of Public Administration which
>     is probably the latest remaining non-partisan forum for
>     discussions like this too.
>     >
>     >
>     > On Mon, Oct 9, 2023 at 7:46 PM Vint Cerf <vint@google.com> wrote:
>     >>
>     >> David, this is a good list.
>     >> FACA has rules for public participation, for example.
>     >>
>     >> I think it should be taken into account for any public
>     commenting process that online (and offline such as USPS or fax
>     and phone calls) that spam and artificial inflation of comments
>     are possible. Is there any specific standard for US agency public
>     comment handling? If now, what committees of the US Congress might
>     have jurisdiction?
>     >>
>     >> v
>     >>
>     >>
>     >> On Tue, Oct 10, 2023 at 8:22 AM David Bray, PhD via Nnagain
>     <nnagain@lists.bufferbloat.net> wrote:
>     >>>
>     >>> I'm all for doing new things to make things better.
>     >>>
>     >>> At the same time, I used to do bioterrorism preparedness and
>     response from 2000-2005 (and aside from asking myself what kind of
>     crazy world needed counter-bioterrorism efforts... I also realized
>     you don't want to interject something completely new in the middle
>     of an unfolding crisis event). If something were to be injected
>     now, it would have to have consensus from both sides, otherwise at
>     least one side (potentially detractors from both) will claim that
>     whatever form the new approaches take are somehow advantaging "the
>     other side" and disadvantaging them.
>     >>>
>     >>> Probably would take a ruling by the Administrative Conference
>     of the United States, at a minimum to answer these five questions
>     - and even then, introducing something completely different in the
>     midst of a political melee might just invite mudslinging unless
>     moderate voices on both sides can reach some consensus.
>     >>>
>     >>> 1. Does identity matter regarding who files a comment or not —
>     and must one be a U.S. person in order to file?
>     >>>
>     >>> 2. Should agencies publish real-time counts of the number of
>     comments received — or is it better to wait until the end of a
>     commenting round to make all comments available, including counts?
>     >>>
>     >>> 3. Should third-party groups be able to file on behalf of
>     someone else or not — and do agencies have the right to remove
>     spam-like comments?
>     >>>
>     >>> 4. Should the public commenting process permit multiple
>     comments per individual for a proceeding — and if so, how many
>     comments from a single individual are too many? 100? 1000? More?
>     >>>
>     >>> 5. Finally, should the U.S. government itself consider, given
>     public perceptions about potential conflicts of interest for any
>     agency performing a public commenting process, whether it would be
>     better to have third-party groups take responsibility for
>     assembling comments and then filing those comments via a validated
>     process with the government?
>     >>>
>     >>>
>     >>>
>     >>> On Sat, Oct 7, 2023 at 4:10 PM Jack Haverty <jack@3kitty.org>
>     wrote:
>     >>>>
>     >>>> Hi again David et al,
>     >>>>
>     >>>> Interesting frenzy...lots of questions that need answers and
>     associated policies.   I served 6 years as an elected official (in
>     a small special district in California), so I have some small
>     understanding of the government side of things and the constraints
>     involved.   Being in charge doesn't mean you can do what you want.
>     >>>>
>     >>>> I'm thinking here more near-term and incremental steps.  You
>     said "These same questions need pragmatic pilots that involve the
>     public ..."
>     >>>>
>     >>>> So, how about using the current NN situation for a pilot? 
>     Keep all the current ways and emerging AI techniques to continue
>     to flood the system with comments.  But also offer an *optional*
>     way for humans to "register" as a commenter and then submit their
>     (latest only) comment into the melee.  Will people use it?  Will
>     "consumers" (the lawyers, commissioners, etc.) find it useful?
>     >>>>
>     >>>> I've found it curious, for decades now, that there are (too
>     many) mechanisms for "secure email", that may help with the flood
>     of disinformation from anonymous senders, but very very few people
>     use them.   Maybe they don't know how; maybe the available schemes
>     are too flawed; maybe ...?
>     >>>>
>     >>>> About 30 years ago, I was a speaker in a public meeting
>     orchestrated by USPS, and recommended that they take a lead role,
>     e.g., by acting as a national CA - certificate authority.  Never
>     happened though.   FCC issues lots of licenses...perhaps they
>     could issue online credentials too?
>     >>>>
>     >>>> Perhaps a "pilot" where you will also accept comments by
>     email, some possibly sent by "verified" humans if they understand
>     how to do so, would be worth trying?   Perhaps comments on
>     "technical aspects" coming from people who demonstrably know how
>     to use technology would be valuable to the policy makers?
>     >>>>
>     >>>> The Internet, and technology such as TCP, began as an
>     experimental pilot about 50 years ago.  Sometimes pilots become
>     infrastructures.
>     >>>>
>     >>>> FYI, I'm signing this message.  Using OpenPGP.  I could
>     encrypt it also, but my email program can't find your public key.
>     >>>>
>     >>>> Jack Haverty
>     >>>>
>     >>>>
>     >>>> On 10/5/23 14:21, David Bray, PhD wrote:
>     >>>>
>     >>>> Indeed Jack - a few things to balance - the Administrative
>     Procedure Act of 1946 (on which the idea of rulemaking is based)
>     us about raising legal concerns that must be answered by the
>     agency at the time the rulemaking is done. It's not a vote nor is
>     it the case that if the agency gets tons of comments in one
>     direction that they have to go in that direction. Instead it's
>     only about making sure legal concerns are considered and responded
>     to before the agency before the agency acts. (Which is partly why
>     sending "I'm for XYZ" or "I'm against ABC" really doesn't mean
>     anything to an agency - not only is that not a legal argument or
>     concern, it's also not something where they're obligated to follow
>     these comments - it's not a vote or poll).
>     >>>>
>     >>>> That said, political folks have spun things to the public as
>     if it is a poll/vote/chance to act. The raise a valid legal
>     concern part of the APA of 1946 is omitted. Moreover the fact that
>     third party law firms and others like to submit comments on behalf
>     of clients - there will always be a third party submitting
>     multiple comments for their clients (or "clients") because that's
>     their business.
>     >>>>
>     >>>> In the lead up to 2017, the Consumer and Government Affairs
>     Bureau of the FCC got an inquiry from a firm asking how they could
>     submit 1 million comments a day on an "upcoming privacy
>     proceeding" (their words, astute observers will note there was no
>     privacy proceeding before the FCC in 2017). When the Bureau asked
>     me, I told them either mail us a CD to upload it or submit one
>     comment with 1 million signatures. To attempt to flood us with 1
>     million comments a day (aside from the fact who can "predict"
>     having that many daily) would deny resources to others. In the
>     mess that followed, what was released to the public was so
>     redacted you couldn't see the legitimate concerns and better paths
>     that were offered to this entity.
>     >>>>
>     >>>> And the FCC isn't alone. EPA, FTC, and other regulatory
>     agencies have had these hijinks for years - and before the
>     Internet it was faxes, mass mimeographs (remember blue ink?), and
>     postcards.The Administrative Conference of the United States
>     (ACUS) - is the body that is supposed to provide consistent
>     guidance for things like this across the U.S. government. I've
>     briefed them and tried to raise awareness of these issues - as I
>     think fundamentally this is a **process** question that once
>     answered, tech can support. However they're not technologies and
>     updating the interpretation of the process isn't something lawyers
>     are apt to do until the evidence that things are in trouble is
>     overwhelming.
>     >>>>
>     >>>> 52 folks wrote a letter to them - and to GSA - back in 2020.
>     GSA had a rulemaking of its own on how to improve things, yet
>     oddly never published any of the comments it received (including
>     ours) and closed the rulemaking quietly. Here's the letter:
>     https://tinyurl.com/letter-signed-52-people
>     >>>>
>     >>>> And here's an article published in OODAloop about this - and
>     why Generative AI is probably going to make things even more
>     challenging:
>     https://www.oodaloop.com/archive/2023/04/18/why-a-pause-on-ai-development-is-not-the-answer-an-insiders-perspective/
>     >>>>
>     >>>> [snippet of the article] Now in 2023 and Beyond: Proactive
>     Approaches to AI and Society
>     >>>>
>     >>>> Looking to the future, to effectively address the challenges
>     arising from AI, we must foster a proactive, results-oriented, and
>     cooperative approach with the public. Think tanks and universities
>     can engage the public in conversations about how to work, live,
>     govern, and co-exist with modern technologies that impact society.
>     By involving diverse voices in the decision-making process, we can
>     better address and resolve the complex challenges AI presents on
>     local and national levels.
>     >>>>
>     >>>> In addition, we must encourage industry and political leaders
>     to participate in finding non-partisan, multi-sector solutions if
>     civil societies are to remain stable. By working together, we can
>     bridge the gap between technological advancements and their
>     societal implications.
>     >>>>
>     >>>> Finally, launching AI pilots across various sectors, such as
>     work, education, health, law, and civil society, is essential. We
>     must learn by doing on how we can create responsible civil
>     environments where AIs can be developed and deployed responsibly.
>     These initiatives can help us better understand and integrate AI
>     into our lives, ensuring its potential is harnessed for the
>     greater good while mitigating risks.
>     >>>>
>     >>>> In 2019 and 2020, a group of fifty-two people asked the
>     Administrative Conference of the United States (which helps guide
>     rulemaking procedures for federal agencies), General Accounting
>     Office, and the General Services Administration to call attention
>     to the need to address the challenges of chatbots flooding public
>     commenting procedures and potentially crowding out or denying
>     services to actual humans wanting to leave a comment. We asked:
>     >>>>
>     >>>> 1. Does identity matter regarding who files a comment or not
>     — and must one be a U.S. person in order to file?
>     >>>>
>     >>>> 2. Should agencies publish real-time counts of the number of
>     comments received — or is it better to wait until the end of a
>     commenting round to make all comments available, including counts?
>     >>>>
>     >>>> 3. Should third-party groups be able to file on behalf of
>     someone else or not — and do agencies have the right to remove
>     spam-like comments?
>     >>>>
>     >>>> 4. Should the public commenting process permit multiple
>     comments per individual for a proceeding — and if so, how many
>     comments from a single individual are too many? 100? 1000? More?
>     >>>>
>     >>>> 5. Finally, should the U.S. government itself consider, given
>     public perceptions about potential conflicts of interest for any
>     agency performing a public commenting process, whether it would be
>     better to have third-party groups take responsibility for
>     assembling comments and then filing those comments via a validated
>     process with the government?
>     >>>>
>     >>>> These same questions need pragmatic pilots that involve the
>     public to co-explore and co-develop how we operate effectively
>     amid these technological shifts. As the capabilities of LLMs
>     continue to grow, we need positive change agents willing to tackle
>     the messy issues at the intersection of technology and society.
>     The challenges are immense, but so too are the opportunities for
>     positive change. Let’s seize this moment to create a better
>     tomorrow for all. Working together, we can co-create a future that
>     embraces AI’s potential while mitigating its risks, informed by
>     the hard lessons we have already learned.
>     >>>>
>     >>>> Full article:
>     https://www.oodaloop.com/archive/2023/04/18/why-a-pause-on-ai-development-is-not-the-answer-an-insiders-perspective/
>     >>>>
>     >>>> Hope this helps.
>     >>>>
>     >>>>
>     >>>> On Thu, Oct 5, 2023 at 4:44 PM Jack Haverty via Nnagain
>     <nnagain@lists.bufferbloat.net> wrote:
>     >>>>>
>     >>>>> Thanks for all your efforts to keep the "feedback loop" to
>     the rulemakers functioning!
>     >>>>>
>     >>>>> I'd like to offer a suggestion for a hopefully politically
>     acceptable way to handle the deluge, derived from my own battles
>     with "email" over the years (decades).
>     >>>>>
>     >>>>> Back in the 1970s, I implemented one of the first email
>     systems on the Arpanet, under the mentorship of JCR Licklider, who
>     had been pursuing his vision of a "Galactic Network" at ARPA and
>     MIT.   One of the things we discovered was the significance of
>     anonymity.   At the time, anonymity was forbidden on the Arpanet;
>     you needed an account on some computer, protected by passwords, in
>     order to legitimately use the network.   The mechanisms were crude
>     and easily broken, but the principle applied.
>     >>>>>
>     >>>>> Over the years, that principle has been forgotten, and the
>     right to be anonymous has become entrenched.   But many uses of
>     the network, and needs of its users, demand accountability, so all
>     sorts of mechanisms have been pasted on top of the network to
>     provide ways to judge user identity.  Banks, medical services,
>     governments, and businesses all demand some way of proving your
>     identity, with passwords, various schemes of 2FA, VPNs, or other
>     such technology, with varying degrees of protection.   It is still
>     possible to be anonymous on the net, but many things you do
>     require you to prove, to some extent, who you are.
>     >>>>>
>     >>>>> So, my suggestion for handling the deluge of "comments" is:
>     >>>>>
>     >>>>> 1/ create some mechanism for "registering" your intent to
>     submit a comment.   Make it hard for bots to register.  Perhaps
>     you can leverage the work of various partners, e.g., ISPs,
>     retailers, government agencies, financial institutions, of others
>     who already have some way of identifying their users.
>     >>>>>
>     >>>>> 2/ Also make registration optional - anyone can still submit
>     comments anonymously if they choose.
>     >>>>>
>     >>>>> 3/ for "registered commenters", provide a way to "edit" your
>     previous comment - i.e., advise that your comment is always the
>     last one you submitted.   I.E., whoever you are, you can only
>     submit one comment, which will be the last one you submit.
>     >>>>>
>     >>>>> 4/ In the thousands of pages of comments, somehow flag the
>     ones that are from registered commenters, visible to the people
>     who read the comments.   Even better, provide those "information
>     consumers" with ways to sort, filter, and search through the body
>     of comments.
>     >>>>>
>     >>>>> This may not reduce the deluge of comments, but I'd expect
>     it to help the lawyers and politicians keep their heads above the
>     water.
>     >>>>>
>     >>>>> Anonymity is an important issue for Net Neutrality too, but
>     I'll opine about that separately.....
>     >>>>>
>     >>>>> Jack Haverty
>     >>>>>
>     >>>>>
>     >>>>> On 10/2/23 12:38, David Bray, PhD via Nnagain wrote:
>     >>>>>
>     >>>>> Greetings all and thank you Dave Taht for that very kind
>     intro...
>     >>>>>
>     >>>>> First, I'll open with I'm a gosh-darn non-partisan, which
>     means I swore an oath to uphold the Constitution first and serve
>     the United States - not a specific party, tribe, or ideology. This
>     often means, especially in today's era of 24/7 news and social
>     media, non-partisans have to "top cover".
>     >>>>>
>     >>>>> Second, I'll share that in what happened in 2017 (which
>     itself was 10x what we saw in 2014) my biggest concern was and
>     remains that a few actors attempted to flood the system with
>     less-than-authentic comments.
>     >>>>>
>     >>>>> In some respects this is not new. The whole "notice and
>     comment" process is a legacy process that goes back decades. And
>     the FCC (and others) have had postcard floods of comments,
>     mimeographed letters of comments, faxed floods of comments, and
>     now this - which, when combined with generative AI, will be yet
>     another flood.
>     >>>>>
>     >>>>> Which gets me to my biggest concern as a non-partisan in
>     2023-2024, namely how LLMs might misuse and abuse the commenting
>     process further.
>     >>>>>
>     >>>>> Both in 2014 and 2017, I asked FCC General Counsel if I
>     could use CAPTChA to try to reduce the volume of web scrapers or
>     bots both filing and pulling info from the Electronic Comment
>     Filing System.
>     >>>>>
>     >>>>> Both times I was told *no* out of concerns that they might
>     prevent someone from filing. I asked if I could block obvious
>     spam, defined as someone filing a comment >100 times a minute, and
>     was similarly told no because one of those possible comments might
>     be genuine and/or it could be an ex party filing en masse for others.
>     >>>>>
>     >>>>> For 2017 we had to spin up 30x the number of AWS cloud
>     instances to handle the load - and this was a flood of comments at
>     4am, 5am, and 6am ET at night which normally shouldn’t see such
>     volumes. When I said there was a combination of actual humans
>     wanting to leave comments and others who were effectively denying
>     service to others (especially because if anyone wanted to do a
>     batch upload of 100,000 comments or more they could submit a CSV
>     file or a comment with 100,000 signatories) - both parties said
>     no, that couldn’t be happening.
>     >>>>>
>     >>>>> Until 2021 when the NY Attorney General proved that was
>     exactly what was happening with 18m of the 23m apparently from
>     non-authentic origin with ~9m from one side of the political aisle
>     (and six companies) and ~9m from the other side of the political
>     aisle (and one or more teenagers).
>     >>>>>
>     >>>>> So with Net Neutrality back on the agenda - here’s a simple
>     prediction, even if the volume of comments is somehow controlled,
>     10,000+ pages of comments produced by ChatGPT or a different LLM
>     is both possible and probably will be done. The question is if
>     someone includes a legitimate legal argument on page 6,517 - will
>     FCC’s lawyers spot it and respond to it as part of the NPRM?
>     >>>>>
>     >>>>> Hope this helps and with highest regards,
>     >>>>>
>     >>>>> -d.
>     >>>>> --
>     >>>>>
>     >>>>> Principal, LeadDoAdapt Ventures, Inc. & Distinguished Fellow
>     >>>>>
>     >>>>> Henry S. Stimson Center, Business Executives for National
>     Security
>     >>>>>
>     >>>>>
>     >>>>>
>     >>>>> On Mon, Oct 2, 2023 at 2:15 PM Dave Taht via Nnagain
>     <nnagain@lists.bufferbloat.net> wrote:
>     >>>>>>
>     >>>>>> All:
>     >>>>>>
>     >>>>>> I have spent the last several days reaching out to as many
>     people I
>     >>>>>> know with a deep understanding of the policy and technical
>     issues
>     >>>>>> surrounding the internet, to participate on this list. I
>     encourage you
>     >>>>>> all to reach out on your own, especially to those that you can
>     >>>>>> constructively and civilly disagree with, and hopefully
>     work with, to
>     >>>>>> establish technical steps forward. Quite a few have joined
>     silently!
>     >>>>>> So far, 168 people have joined!
>     >>>>>>
>     >>>>>> Please welcome Dr David Bray[1], a self-described "human
>     flack jacket"
>     >>>>>> who, in the last NN debate, stood up for the non -partisan
>     FCC IT team
>     >>>>>> that successfully kept the system up 99.4% of the time
>     despite the
>     >>>>>> comment floods and network abuses from all sides. He has
>     shared with
>     >>>>>> me privately many sad (and some hilarious!) stories of that
>     era, and I
>     >>>>>> do kind of hope now, that some of that history surfaces,
>     and we can
>     >>>>>> learn from it.
>     >>>>>>
>     >>>>>> Thank you very much, David, for putting down your painful
>     memories[2],
>     >>>>>> and agreeing to join here. There is a lot to tackle here, going
>     >>>>>> forward.
>     >>>>>>
>     >>>>>> [1] https://www.stimson.org/ppl/david-bray/
>     >>>>>> [2] "Pain shared is reduced. Joy shared, increased." -
>     Spider Robinson
>     >>>>>>
>     >>>>>>
>     >>>>>> --
>     >>>>>> Oct 30:
>     https://netdevconf.info/0x17/news/the-maestro-and-the-music-bof.html
>     >>>>>> Dave Täht CSO, LibreQos
>     >>>>>> _______________________________________________
>     >>>>>> Nnagain mailing list
>     >>>>>> Nnagain@lists.bufferbloat.net
>     >>>>>> https://lists.bufferbloat.net/listinfo/nnagain
>     >>>>>
>     >>>>>
>     >>>>> _______________________________________________
>     >>>>> Nnagain mailing list
>     >>>>> Nnagain@lists.bufferbloat.net
>     >>>>> https://lists.bufferbloat.net/listinfo/nnagain
>     >>>>>
>     >>>>>
>     >>>>> _______________________________________________
>     >>>>> Nnagain mailing list
>     >>>>> Nnagain@lists.bufferbloat.net
>     >>>>> https://lists.bufferbloat.net/listinfo/nnagain
>     >>>>
>     >>>>
>     >>> _______________________________________________
>     >>> Nnagain mailing list
>     >>> Nnagain@lists.bufferbloat.net
>     >>> https://lists.bufferbloat.net/listinfo/nnagain
>     >>
>     >>
>     >>
>     >> --
>     >> Please send any postal/overnight deliveries to:
>     >> Vint Cerf
>     >> Google, LLC
>     >> 1900 Reston Metro Plaza, 16th Floor
>     >> Reston, VA 20190
>     >> +1 (571) 213 1346
>     >>
>     >>
>     >> until further notice
>     >>
>     >>
>     >>
>     >
>     > _______________________________________________
>     > Nnagain mailing list
>     > Nnagain@lists.bufferbloat.net
>     > https://lists.bufferbloat.net/listinfo/nnagain
>
>
>
>     --
>     Oct 30:
>     https://netdevconf.info/0x17/news/the-maestro-and-the-music-bof.html
>     Dave Täht CSO, LibreQos
>     _______________________________________________
>     Nnagain mailing list
>     Nnagain@lists.bufferbloat.net
>     https://lists.bufferbloat.net/listinfo/nnagain
>
>
> _______________________________________________
> Nnagain mailing list
> Nnagain@lists.bufferbloat.net
> https://lists.bufferbloat.net/listinfo/nnagain

[-- Attachment #2: Type: text/html, Size: 58323 bytes --]

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [NNagain] somewhat OT: Licklidder
  2023-10-10 17:12                     ` Jack Haverty
@ 2023-10-10 19:00                       ` Robert McMahon
  2023-10-10 19:38                         ` Dick Roy
  0 siblings, 1 reply; 15+ messages in thread
From: Robert McMahon @ 2023-10-10 19:00 UTC (permalink / raw)
  To: Jack Haverty via Nnagain

[-- Attachment #1: Type: text/plain, Size: 37459 bytes --]

Thanks for sharing. It's amazing to me what was accomplished and continues forward with communications & compute by extremely phenomenal people. I think the closest analog is the Gutenberg press, which many know had profound effects on the human condition. A hope is that we figure out how to progress in a similar manner, and somehow, the diffusion of knowledge and peaceful coexistence prevail.


https://www.crf-usa.org//bill-of-rights-in-action/bria-24-3-b-gutenberg-and-the-printing-revolution-in-europe#:~:text=Johann%20Gutenberg%27s%20invention%20of%20movable,split%20apart%20the%20Catholic%20Church.

Johann Gutenberg’s invention of movable-type printing quickened the spread of knowledge, discoveries, and literacy in Renaissance Europe. The printing revolution also contributed mightily to the Protestant Reformation that split apart the Catholic Church.


Bob

On Oct 10, 2023, 10:12 AM, at 10:12 AM, Jack Haverty via Nnagain <nnagain@lists.bufferbloat.net> wrote:
>FYI, The Arpanet was a key player in that patent fight.  The Arpanet
>IMPs (the packet switches) downloaded software from each other, and
>that
>capability was used to distribute new releases of the IMP program.  I
>suggested that 1970s implementation to the lawyers as a good example of
>
>prior art, which led to a lot of work that eventually resurrected the
>1970s IMP code from a moldy listing in someone's basement, and got it
>running again on simulated ancient hardware.   At one point the 4-node
>Arpanet of 1970 was created and run, in anticipation of a demo of prior
>
>art at trial.  Sadly (for me at least) the combatants suddenly settled
>out of court, so the trial never happened and the patent issue was not
>adjudicated.   But the resurrected IMP code is on github now, so anyone
>
>interested can run their own Arpanet.
>
>Jack
>
>
>On 10/10/23 08:53, Steve Crocker via Nnagain wrote:
>> Lots of good stuff here and I missed the earlier posts, but one small
>
>> thing caught my attention:
>>
>>     > About 10 years ago, I accidentally got involved in a patent
>>     dispute to be an "expert witness", for a patent involving
>>     downloading new programs over a communications path into a remote
>>     computer (yes, what all our devices do almost every day).
>>
>> In the seminal period of late 1968 and early 1969 when we were
>> thinking about Arpanet protocols, one idea that was very much part of
>
>> our thinking was downloading a small program at the beginning of an
>> interactive session.  The downloaded program would take care of local
>
>> interactions to avoid the need to send every character across the net
>
>> only to have it echoed remotely.  Why not always use local echo? 
>> Because most of the time-shared systems in the various ARPA-supported
>
>> research environments had distinct ways of interpreting each and
>> every character.  Imposing a network-wide rule of local echoing would
>
>> have compromised the usability of most of the systems on the
>Arpanet. 
>> I think Multics was the only "modern" line-at-a-time system at the
>time.
>>
>> In March 1969 we decided it was time to write down the ideas from our
>
>> meetings in late 1968 and early 1969.  The first batch of RFCs
>> included Rulifson's RFC 5.  He proposed DEL, the Decode-Encode
>> Language.  Elie's RFC 51 a year later proposed the Network
>Interchange
>> Language.  In both cases the basic concept was the creation of a
>> simple language, easily implementable on each platform, that would
>> mediate the interaction with a remote system.  The programs were
>> expected to be short -- hence downloadable quickly -- and either
>> interpreted or quickly translated.  There was a tiny bit of
>> experimental work along this line, but it was far ahead of its time. 
>
>> I think it was about 25 years before ActiveX came along, followed by
>Java.
>>
>> Steve
>>
>>
>> On Tue, Oct 10, 2023 at 11:30 AM Dave Taht via Nnagain
>> <nnagain@lists.bufferbloat.net> wrote:
>>
>>     On Mon, Oct 9, 2023 at 7:56 PM Jack Haverty via Nnagain
>>     <nnagain@lists.bufferbloat.net> wrote:
>>
>>     For starters it is an honor to be conversing with folk that knew
>Bob
>>     Taylor, and "Lick", and y'all made me go back and re-read
>>
>>     http://memex.org/licklider.pdf
>>
>>     For inspiration. I think everyone in our field should re-read
>that,
>>     periodically. For example he makes an overgeneralization about
>the
>>     thinking processes of men, as compared to the computers of the
>time,
>>     and not to women...
>>
>>     But I have always had an odd question - what songs did Lick play
>on
>>     guitar? Do any recordings exist?
>>
>>     Music defines who I am, at least. I love the angularness and
>surprises
>>     in jazz, and the deep storytelling buried deep in Shostakovich's
>>     Fifth. Moving forward to modern music: the steady backbeat of
>Burning
>>     Man - and endless repetition of short phrases - seems to lead to
>>     groupthink - I can hardly stand EDM for an hour.
>>
>>      I am "maked" by Angela' Lansbury's Sweeny Todd, and my religion,
>>     forever reformed by Monty Python's Life of Brian, One Flew over
>the
>>     Cookoos nest, 12 Angry Men, and the 12 Monkees, Pink Floyd and
>punk
>>     music were the things that shaped me. No doubt it differs
>>     significantly for everyone here, please share?
>>
>>     Powerful tales and their technologies predate the internet, and
>>     because they were wildly shared, influenced how generations
>thought
>>     without being the one true answer. Broadcast media, also, was
>joint,
>>     and in school we
>>
>>     We are in a new era of uncommonality of experience, in part from
>>     bringing in all the information in the world, while still
>separated by
>>     differences in language, exposure, education, and culture,
>although
>>     nowadays it has become so easy and natural to be able to use
>computer
>>     assisted language translation tools, I do not know how well they
>truly
>>     make the jump between cultures.
>>
>>     In that paper he talked about 75% of his time being spent setting
>up
>>     to do analytics, where today so much information exists as to be
>>     impossible to analyze.
>>
>>     I only have a few more small comments below, but I wanted to pick
>out
>>     the concepts of TOS and backpressure as needing thought on
>another
>>     day, in another email (what was licks song list??? :)). The
>internet
>>     has very little Tos or backpressure, and Flow Queuing based
>algorithms
>>     actually function thusly:
>>
>>     If the arrival rate of a flow is less than the departure rate of
>all
>>     other flows, it goes out first.
>>
>>     To some extent this matches some of Nagles' "every application
>has a
>>     right to one packet in the network", and puts a reward into the
>system
>>     for applications that use slightly less than their fair share of
>the
>>     bandwidth.
>>
>>     > IMHO, the problem may be that the Internet, and computing
>>     technology in general, is so new that non-technical
>organizations,
>>     such as government entities, don't understand it and therefore
>>     can't figure out whether or how to regulate anything involved.
>>     >
>>     > In other, older, "technologies", rules, procedures, and
>>     traditions have developed over the years to provide for feedback
>>     and control between governees and governors.  Roberts Rules of
>>     Order was created 150 years ago, and is still widely used to
>>     manage public meetings.  I've been in local meetings where
>>     everyone gets a chance to speak, but are limited to a few minutes
>>     to say whatever's on their mind.  You have to appear in person,
>>     wait your turn, and make your comment. Doing so is free, but
>still
>>     has the cost of time and hassle to get to the meeting.
>>     >
>>     > Organizations have figured out over the years how to manage
>>     meetings.  [Vint - remember the "Rathole!" mechanism that we used
>>     to keep Internet meetings on track...?]
>>
>>     PARC had "Dealer".
>>
>>     > From what David describes, it sounds like the current "public
>>     comment" mechanisms in the electronic arena are only at the stage
>>     where the loudest voices can drown out all others, and public
>>     debates are essentially useless cacophonies of the loudest
>>     proponents of the various viewpoints.   There are no rules.   Why
>>     should anyone submit their own sensible comments, knowing they'll
>>     be lost in the noise?
>>     >
>>     > In non-electronic public forums, such behavior is ruled out,
>and
>>     if it persists, the governing body can have offenders ejected,
>>     adjourn a meeting until cooler heads prevail, or otherwise make
>>     the discourse useful for informing decisions.  Courts can issue
>>     restraining orders, but has any court ever issued such an order
>>     applying to an electronic forum?
>>     >
>>     > So, why haven't organizations yet developed rules and
>mechanisms
>>     for managing electronic discussions....?
>>     >
>>     > I'd offer two observations and suggestions.
>>     >
>>     > -----
>>     >
>>     > First, a major reason for a lack of such rules and mechanisms
>>     may be an educational gap.  Administrators, politicians, and
>>     staffers may simply not understand all this newfangled
>technology,
>>     or how it works, and are drowning in a sea of terminology,
>>     acronyms, and concepts that make no sense (to them).   In the FCC
>>     case, even the technical gurus may have deep knowledge of their
>>     traditional realm of telephony, radio, and related issues and
>>     policy tradeoffs.   But they may be largely ignorant of computing
>>     and networking equivalents.  Probably even worse, they may
>>     unconsciously consider the new world as a simple evolution of the
>>     old, not recognizing the impact of incredibly fast computers and
>>     communications, and the advances that they enable, such as "AI" -
>>     whatever that is...
>>     >
>>     > About 10 years ago, I accidentally got involved in a patent
>>     dispute to be an "expert witness", for a patent involving
>>     downloading new programs over a communications path into a remote
>>     computer (yes, what all our devices do almost every day).   I was
>>     astounded when I learned how little the "judicial system"
>>     (lawyers, judges, legislators, etc.) knew about computer and
>>     network technology.   That didn't stop them from debating the
>>     meaning of technical terms.  What is RAM? How does "programming"
>>     differ from "reprogramming"?  What is "memory"?  What is a
>>     "processor"?   What is an "operating system"?   The arguments
>>     continue until eventually a judge declares what the answer is,
>>     with little technical knowledge or expertise to help.   So you
>can
>>     easily get legally binding definitions such as "operating system"
>>     means "Windows", and that all computers contain an operating
>system.
>>     >
>>     > I spent hours on the phone over about 18 months, explaining to
>>     the lawyers how computers and networks actually worked.   In
>turn,
>>     they taught me quite a lot about the vagaries of the laws and
>>     patents.  It was fascinating but also disturbing to see how
>>     ill-prepared the legal system was for new technologies.
>>     >
>>     > So, my suggestion is that a focus be placed on helping the
>>     non-technical decision makers understand the nuances of computing
>>     and the Internet.  I don't think that will be successful by
>>     burying them in the sea of technical jargon and acronyms.
>>     >
>>     > Before I retired, I spent a lot of time with C-suite denizens
>>     from companies outside of the technology industry - banks,
>>     manufacturers, transportation, etc. - helping them understand
>what
>>     "The Internet" was, and help them see it as both a huge
>>     opportunity and a huge threat to their businesses.  One technique
>>     I used was simply stolen from the early days of The Internet.
>>     >
>>     > When we were involved in designing the internal mechanisms of
>>     the Internet, in particular TCPV4, we didn't know much about
>>     networks either.  So we used analogies.  In particular we used
>the
>>     existing transportation infrastructure as a model.   Moving bits
>>     around the world isn't all that different from moving goods and
>>     people.   But everyone, even with no technical expertise, knows
>>     about transportation.
>>     >
>>     > It turns out that there are a lot of useful analogies. For
>>     example, we recognized that there were different kinds of
>>     "traffic" with different needs.  Coal for power plants was
>>     important, but not urgent.  If a coal train waits on a siding
>>     while a passenger train passes, it's OK, even preferred.  There
>>     could be different "types of service" available from the
>>     transportation infrastructure.   At the time (late 1970s) we
>>     didn't know exactly how to do that, but decided to put a field in
>>     the IP header as a placeholder - the "TOS" field. Figuring out
>>     what different TOSes there should be, and how they would be
>>     handled differently, was still on the to-do list.   There are
>even
>>     analogies to the Internet - goods might travel over a "marine
>>     network" to a "port", where they are moved onto a "rail network",
>>     to a distributor, and moved on the highway network to their final
>>     destination.  Routers, gateways, ...
>>     >
>>     > Other transportation analogies reinforced the notion of TOS. 
>>     E.g., if you're sending a document somewhere, you can choose how
>>     to send it - normal postal mail, or Priority Mail, or even use a
>>     different "network" such as an overnight delivery service. 
>>     Different TOS would engage different behaviors of the underlying
>>     communications system, and might also have different costs to use
>>     them.  Sending a ton of coal to get delivered in a week or two
>>     would cost a lot less than sending a ton of documents for
>>     overnight delivery.
>>     >
>>     > There were other transportation analogies heard during the
>TCPV4
>>     design discussions - e.g., "Expressway Routing" (do you take a
>>     direct route over local streets, or go to the freeway even though
>>     it's longer) and "Multi-Homing" (your manufacturing plant has
>>     access to both a highway and a rail line).
>>     >
>>     > Suggestion -- I suspect that using a familiar infrastructure
>>     such as transport to discuss issues with non-technical decision
>>     makers would be helpful.  E.g., imagine what would happen if some
>>     particular "net neutrality" set of rules was placed on the
>>     transportation infrastructure?   Would it have a desirable
>effect?
>>     >
>>     > -----
>>     >
>>     > Second, in addition to anonymity as an important issue in the
>>     electronic world, my experience as a mentee of Licklider surfaced
>>     another important issue in the "galactic network" vision -- "Back
>>     Pressure".     The notion is based in existing knowledge. 
>>      Economics has notions of Supply and Demand and Cost Curves. 
>>      Engineering has the notion of "Negative Feedback" to stabilize
>>     mechanical, electrical, or other systems.
>>     >
>>     > We discussed Back Pressure, in the mid 70s, in the context of
>>     electronic mail, and tried to get the notion of "stamps" accepted
>>     as part of the email mechanisms.  The basic idea was that there
>>     had to be some form of "back pressure" to prevent overload by
>>     discouraging sending of huge quantities of mail.
>>     >
>>     > At the time, mail traffic was light, since every message was
>>     typed by hand by some user.  In Lick's group we had experimented
>>     with using email as a way for computer programs to interact.  In
>>     Lick's vision, humans would interact by using their computers as
>>     their agents.   Even then, computers could send email a lot
>faster
>>     and continuously than any human at a keyboard, and could easily
>>     flood the network.  [This epiphany occurred shortly after a
>>     mistake in configuring distribution lists caused so many messages
>>     and replies that our machine crashed as its disk space ran out.]
>>     >
>>     > "Stamps" didn't necessarily represent monetary cost. Back
>>     pressure could be simple constraints, e.g., no user can send more
>>     than 500 (or whatever) messages per day.   This notion never got
>>     enough support to become part of the email standards; I still
>>     think it would help with the deluge of spam we all experience
>today.
>>     >
>>     > Back Pressure in the Internet today is largely non-existent.  I
>>     (or my AI and computers) can send as much email as I like. 
>>      Communications carriers promote "unlimited data" but won't
>>     guarantee anything.   Memory has become cheap, and as a result
>>     behaviors such as "buffer bloat" have appeared.
>>     >
>>     > Suggestion - educate the decision-makers about Back Pressure,
>>     using highway analogies (metering lights, etc.)
>>     >
>>     > -----
>>     >
>>     > Education about the new technology, but by using some familiar
>>     analogs, and introduction of Back Pressure, in some appropriate
>>     form, as part of a "network neutrality" policy, would be the two
>>     foci I'd recommend.
>>     >
>>     > My prior suggestion of "registration" and accepting only the
>>     last comment was based on the observations above.  Back pressure
>>     doesn't have to be monetary, and registered users don't have to
>be
>>     personally identified.   Simply making it sufficiently "hard" to
>>     register (using CAPTCHAs, 2FA, whatever) would be a "cost"
>>     discouraging "loud voices".   Even the law firms submitting
>>     millions of comments on behalf of their clients might balk at the
>>     cost (in labor not money) to register their million clients, even
>>     anonymously, so each could get his/her comment submitted.   Of
>>     course, they could always pass the costs on to their (million?
>>     really?) clients. But it would still be Back Pressure.
>>     >
>>     > One possibility -- make the "cost" of submitting a million
>>     electronic comments equal to the cost of submitting a million
>>     postcards...?
>>     >
>>     > Jack Haverty
>>     >
>>     >
>>     > On 10/9/23 16:55, David Bray, PhD wrote:
>>     >
>>     > Great points Vint as you're absolutely right - there are
>>     multiple modalities here (and in the past it was spam from
>>     thousands of postcards, then mimeographs, then faxes, etc.)
>>     >
>>     > The standard historically has been set by the Administrative
>>     Conference of the United States: https://www.acus.gov/about-acus
>>     >
>>     > In 2020 there seemed to be an effort to gave the General
>>     Services Administration weigh-in, however they closed that
>>     rulemaking attempt without publishing any of the comments they
>got
>>     and no announcement why it was closed.
>>     >
>>     > As for what part of Congress - I believe ACUS was championed by
>>     both the Senate and House Judiciary Committees as it has
>oversight
>>     and responsibility for the interpretations of the Administrative
>>     Procedure Act of 1946 (which sets out the whole rulemaking
>procedure).
>>     >
>>     > Sadly there isn't a standard across agencies - which also means
>>     there isn't a standard across Administrations. Back in 2018 and
>>     2020, both with this group of 52 people here
>>     https://tinyurl.com/letter-signed-52-people - as well as
>>     individually - I did my darnest to encourage them to do a
>standard.
>>     >
>>     > There's also the National Academy of Public Administration
>which
>>     is probably the latest remaining non-partisan forum for
>>     discussions like this too.
>>     >
>>     >
>>     > On Mon, Oct 9, 2023 at 7:46 PM Vint Cerf <vint@google.com>
>wrote:
>>     >>
>>     >> David, this is a good list.
>>     >> FACA has rules for public participation, for example.
>>     >>
>>     >> I think it should be taken into account for any public
>>     commenting process that online (and offline such as USPS or fax
>>     and phone calls) that spam and artificial inflation of comments
>>     are possible. Is there any specific standard for US agency public
>>     comment handling? If now, what committees of the US Congress
>might
>>     have jurisdiction?
>>     >>
>>     >> v
>>     >>
>>     >>
>>     >> On Tue, Oct 10, 2023 at 8:22 AM David Bray, PhD via Nnagain
>>     <nnagain@lists.bufferbloat.net> wrote:
>>     >>>
>>     >>> I'm all for doing new things to make things better.
>>     >>>
>>     >>> At the same time, I used to do bioterrorism preparedness and
>>     response from 2000-2005 (and aside from asking myself what kind
>of
>>     crazy world needed counter-bioterrorism efforts... I also
>realized
>>     you don't want to interject something completely new in the
>middle
>>     of an unfolding crisis event). If something were to be injected
>>     now, it would have to have consensus from both sides, otherwise
>at
>>     least one side (potentially detractors from both) will claim that
>>     whatever form the new approaches take are somehow advantaging
>"the
>>     other side" and disadvantaging them.
>>     >>>
>>     >>> Probably would take a ruling by the Administrative Conference
>>     of the United States, at a minimum to answer these five questions
>>     - and even then, introducing something completely different in
>the
>>     midst of a political melee might just invite mudslinging unless
>>     moderate voices on both sides can reach some consensus.
>>     >>>
>>     >>> 1. Does identity matter regarding who files a comment or not
>—
>>     and must one be a U.S. person in order to file?
>>     >>>
>>     >>> 2. Should agencies publish real-time counts of the number of
>>     comments received — or is it better to wait until the end of a
>>     commenting round to make all comments available, including
>counts?
>>     >>>
>>     >>> 3. Should third-party groups be able to file on behalf of
>>     someone else or not — and do agencies have the right to remove
>>     spam-like comments?
>>     >>>
>>     >>> 4. Should the public commenting process permit multiple
>>     comments per individual for a proceeding — and if so, how many
>>     comments from a single individual are too many? 100? 1000? More?
>>     >>>
>>     >>> 5. Finally, should the U.S. government itself consider, given
>>     public perceptions about potential conflicts of interest for any
>>     agency performing a public commenting process, whether it would
>be
>>     better to have third-party groups take responsibility for
>>     assembling comments and then filing those comments via a
>validated
>>     process with the government?
>>     >>>
>>     >>>
>>     >>>
>>     >>> On Sat, Oct 7, 2023 at 4:10 PM Jack Haverty <jack@3kitty.org>
>>     wrote:
>>     >>>>
>>     >>>> Hi again David et al,
>>     >>>>
>>     >>>> Interesting frenzy...lots of questions that need answers and
>>     associated policies.   I served 6 years as an elected official
>(in
>>     a small special district in California), so I have some small
>>     understanding of the government side of things and the
>constraints
>>     involved.   Being in charge doesn't mean you can do what you
>want.
>>     >>>>
>>     >>>> I'm thinking here more near-term and incremental steps.  You
>>     said "These same questions need pragmatic pilots that involve the
>>     public ..."
>>     >>>>
>>     >>>> So, how about using the current NN situation for a pilot? 
>>     Keep all the current ways and emerging AI techniques to continue
>>     to flood the system with comments.  But also offer an *optional*
>>     way for humans to "register" as a commenter and then submit their
>>     (latest only) comment into the melee.  Will people use it?  Will
>>     "consumers" (the lawyers, commissioners, etc.) find it useful?
>>     >>>>
>>     >>>> I've found it curious, for decades now, that there are (too
>>     many) mechanisms for "secure email", that may help with the flood
>>     of disinformation from anonymous senders, but very very few
>people
>>     use them.   Maybe they don't know how; maybe the available
>schemes
>>     are too flawed; maybe ...?
>>     >>>>
>>     >>>> About 30 years ago, I was a speaker in a public meeting
>>     orchestrated by USPS, and recommended that they take a lead role,
>>     e.g., by acting as a national CA - certificate authority.  Never
>>     happened though.   FCC issues lots of licenses...perhaps they
>>     could issue online credentials too?
>>     >>>>
>>     >>>> Perhaps a "pilot" where you will also accept comments by
>>     email, some possibly sent by "verified" humans if they understand
>>     how to do so, would be worth trying?   Perhaps comments on
>>     "technical aspects" coming from people who demonstrably know how
>>     to use technology would be valuable to the policy makers?
>>     >>>>
>>     >>>> The Internet, and technology such as TCP, began as an
>>     experimental pilot about 50 years ago.  Sometimes pilots become
>>     infrastructures.
>>     >>>>
>>     >>>> FYI, I'm signing this message.  Using OpenPGP.  I could
>>     encrypt it also, but my email program can't find your public key.
>>     >>>>
>>     >>>> Jack Haverty
>>     >>>>
>>     >>>>
>>     >>>> On 10/5/23 14:21, David Bray, PhD wrote:
>>     >>>>
>>     >>>> Indeed Jack - a few things to balance - the Administrative
>>     Procedure Act of 1946 (on which the idea of rulemaking is based)
>>     us about raising legal concerns that must be answered by the
>>     agency at the time the rulemaking is done. It's not a vote nor is
>>     it the case that if the agency gets tons of comments in one
>>     direction that they have to go in that direction. Instead it's
>>     only about making sure legal concerns are considered and
>responded
>>     to before the agency before the agency acts. (Which is partly why
>>     sending "I'm for XYZ" or "I'm against ABC" really doesn't mean
>>     anything to an agency - not only is that not a legal argument or
>>     concern, it's also not something where they're obligated to
>follow
>>     these comments - it's not a vote or poll).
>>     >>>>
>>     >>>> That said, political folks have spun things to the public as
>>     if it is a poll/vote/chance to act. The raise a valid legal
>>     concern part of the APA of 1946 is omitted. Moreover the fact
>that
>>     third party law firms and others like to submit comments on
>behalf
>>     of clients - there will always be a third party submitting
>>     multiple comments for their clients (or "clients") because that's
>>     their business.
>>     >>>>
>>     >>>> In the lead up to 2017, the Consumer and Government Affairs
>>     Bureau of the FCC got an inquiry from a firm asking how they
>could
>>     submit 1 million comments a day on an "upcoming privacy
>>     proceeding" (their words, astute observers will note there was no
>>     privacy proceeding before the FCC in 2017). When the Bureau asked
>>     me, I told them either mail us a CD to upload it or submit one
>>     comment with 1 million signatures. To attempt to flood us with 1
>>     million comments a day (aside from the fact who can "predict"
>>     having that many daily) would deny resources to others. In the
>>     mess that followed, what was released to the public was so
>>     redacted you couldn't see the legitimate concerns and better
>paths
>>     that were offered to this entity.
>>     >>>>
>>     >>>> And the FCC isn't alone. EPA, FTC, and other regulatory
>>     agencies have had these hijinks for years - and before the
>>     Internet it was faxes, mass mimeographs (remember blue ink?), and
>>     postcards.The Administrative Conference of the United States
>>     (ACUS) - is the body that is supposed to provide consistent
>>     guidance for things like this across the U.S. government. I've
>>     briefed them and tried to raise awareness of these issues - as I
>>     think fundamentally this is a **process** question that once
>>     answered, tech can support. However they're not technologies and
>>     updating the interpretation of the process isn't something
>lawyers
>>     are apt to do until the evidence that things are in trouble is
>>     overwhelming.
>>     >>>>
>>     >>>> 52 folks wrote a letter to them - and to GSA - back in 2020.
>>     GSA had a rulemaking of its own on how to improve things, yet
>>     oddly never published any of the comments it received (including
>>     ours) and closed the rulemaking quietly. Here's the letter:
>>     https://tinyurl.com/letter-signed-52-people
>>     >>>>
>>     >>>> And here's an article published in OODAloop about this - and
>>     why Generative AI is probably going to make things even more
>>     challenging:
>>
>https://www.oodaloop.com/archive/2023/04/18/why-a-pause-on-ai-development-is-not-the-answer-an-insiders-perspective/
>>     >>>>
>>     >>>> [snippet of the article] Now in 2023 and Beyond: Proactive
>>     Approaches to AI and Society
>>     >>>>
>>     >>>> Looking to the future, to effectively address the challenges
>>     arising from AI, we must foster a proactive, results-oriented,
>and
>>     cooperative approach with the public. Think tanks and
>universities
>>     can engage the public in conversations about how to work, live,
>>     govern, and co-exist with modern technologies that impact
>society.
>>     By involving diverse voices in the decision-making process, we
>can
>>     better address and resolve the complex challenges AI presents on
>>     local and national levels.
>>     >>>>
>>     >>>> In addition, we must encourage industry and political
>leaders
>>     to participate in finding non-partisan, multi-sector solutions if
>>     civil societies are to remain stable. By working together, we can
>>     bridge the gap between technological advancements and their
>>     societal implications.
>>     >>>>
>>     >>>> Finally, launching AI pilots across various sectors, such as
>>     work, education, health, law, and civil society, is essential. We
>>     must learn by doing on how we can create responsible civil
>>     environments where AIs can be developed and deployed responsibly.
>>     These initiatives can help us better understand and integrate AI
>>     into our lives, ensuring its potential is harnessed for the
>>     greater good while mitigating risks.
>>     >>>>
>>     >>>> In 2019 and 2020, a group of fifty-two people asked the
>>     Administrative Conference of the United States (which helps guide
>>     rulemaking procedures for federal agencies), General Accounting
>>     Office, and the General Services Administration to call attention
>>     to the need to address the challenges of chatbots flooding public
>>     commenting procedures and potentially crowding out or denying
>>     services to actual humans wanting to leave a comment. We asked:
>>     >>>>
>>     >>>> 1. Does identity matter regarding who files a comment or not
>>     — and must one be a U.S. person in order to file?
>>     >>>>
>>     >>>> 2. Should agencies publish real-time counts of the number of
>>     comments received — or is it better to wait until the end of a
>>     commenting round to make all comments available, including
>counts?
>>     >>>>
>>     >>>> 3. Should third-party groups be able to file on behalf of
>>     someone else or not — and do agencies have the right to remove
>>     spam-like comments?
>>     >>>>
>>     >>>> 4. Should the public commenting process permit multiple
>>     comments per individual for a proceeding — and if so, how many
>>     comments from a single individual are too many? 100? 1000? More?
>>     >>>>
>>     >>>> 5. Finally, should the U.S. government itself consider,
>given
>>     public perceptions about potential conflicts of interest for any
>>     agency performing a public commenting process, whether it would
>be
>>     better to have third-party groups take responsibility for
>>     assembling comments and then filing those comments via a
>validated
>>     process with the government?
>>     >>>>
>>     >>>> These same questions need pragmatic pilots that involve the
>>     public to co-explore and co-develop how we operate effectively
>>     amid these technological shifts. As the capabilities of LLMs
>>     continue to grow, we need positive change agents willing to
>tackle
>>     the messy issues at the intersection of technology and society.
>>     The challenges are immense, but so too are the opportunities for
>>     positive change. Let’s seize this moment to create a better
>>     tomorrow for all. Working together, we can co-create a future
>that
>>     embraces AI’s potential while mitigating its risks, informed by
>>     the hard lessons we have already learned.
>>     >>>>
>>     >>>> Full article:
>>
>https://www.oodaloop.com/archive/2023/04/18/why-a-pause-on-ai-development-is-not-the-answer-an-insiders-perspective/
>>     >>>>
>>     >>>> Hope this helps.
>>     >>>>
>>     >>>>
>>     >>>> On Thu, Oct 5, 2023 at 4:44 PM Jack Haverty via Nnagain
>>     <nnagain@lists.bufferbloat.net> wrote:
>>     >>>>>
>>     >>>>> Thanks for all your efforts to keep the "feedback loop" to
>>     the rulemakers functioning!
>>     >>>>>
>>     >>>>> I'd like to offer a suggestion for a hopefully politically
>>     acceptable way to handle the deluge, derived from my own battles
>>     with "email" over the years (decades).
>>     >>>>>
>>     >>>>> Back in the 1970s, I implemented one of the first email
>>     systems on the Arpanet, under the mentorship of JCR Licklider,
>who
>>     had been pursuing his vision of a "Galactic Network" at ARPA and
>>     MIT.   One of the things we discovered was the significance of
>>     anonymity.   At the time, anonymity was forbidden on the Arpanet;
>>     you needed an account on some computer, protected by passwords,
>in
>>     order to legitimately use the network.   The mechanisms were
>crude
>>     and easily broken, but the principle applied.
>>     >>>>>
>>     >>>>> Over the years, that principle has been forgotten, and the
>>     right to be anonymous has become entrenched.   But many uses of
>>     the network, and needs of its users, demand accountability, so
>all
>>     sorts of mechanisms have been pasted on top of the network to
>>     provide ways to judge user identity.  Banks, medical services,
>>     governments, and businesses all demand some way of proving your
>>     identity, with passwords, various schemes of 2FA, VPNs, or other
>>     such technology, with varying degrees of protection.   It is
>still
>>     possible to be anonymous on the net, but many things you do
>>     require you to prove, to some extent, who you are.
>>     >>>>>
>>     >>>>> So, my suggestion for handling the deluge of "comments" is:
>>     >>>>>
>>     >>>>> 1/ create some mechanism for "registering" your intent to
>>     submit a comment.   Make it hard for bots to register.  Perhaps
>>     you can leverage the work of various partners, e.g., ISPs,
>>     retailers, government agencies, financial institutions, of others
>>     who already have some way of identifying their users.
>>     >>>>>
>>     >>>>> 2/ Also make registration optional - anyone can still
>submit
>>     comments anonymously if they choose.
>>     >>>>>
>>     >>>>> 3/ for "registered commenters", provide a way to "edit"
>your
>>     previous comment - i.e., advise that your comment is always the
>>     last one you submitted.   I.E., whoever you are, you can only
>>     submit one comment, which will be the last one you submit.
>>     >>>>>
>>     >>>>> 4/ In the thousands of pages of comments, somehow flag the
>>     ones that are from registered commenters, visible to the people
>>     who read the comments.   Even better, provide those "information
>>     consumers" with ways to sort, filter, and search through the body
>>     of comments.
>>     >>>>>
>>     >>>>> This may not reduce the deluge of comments, but I'd expect
>>     it to help the lawyers and politicians keep their heads above the
>>     water.
>>     >>>>>
>>     >>>>> Anonymity is an important issue for Net Neutrality too, but

[-- Attachment #2: Type: text/html, Size: 66479 bytes --]

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [NNagain] somewhat OT: Licklidder
  2023-10-10 19:00                       ` Robert McMahon
@ 2023-10-10 19:38                         ` Dick Roy
  0 siblings, 0 replies; 15+ messages in thread
From: Dick Roy @ 2023-10-10 19:38 UTC (permalink / raw)
  To: 'Network Neutrality is back! Let´s make the technical
	aspects heard this time!'

[-- Attachment #1: Type: text/plain, Size: 37506 bytes --]

 

 

  _____  

From: Nnagain [mailto:nnagain-bounces@lists.bufferbloat.net] On Behalf Of Robert McMahon via Nnagain
Sent: Tuesday, October 10, 2023 12:01 PM
To: Jack Haverty via Nnagain
Cc: Robert McMahon
Subject: Re: [NNagain] somewhat OT: Licklidder

 

Thanks for sharing. It's amazing to me what was accomplished and continues forward with communications & compute by extremely phenomenal people. I think the closest analog is the Gutenberg press, which many know had profound effects on the human condition. A hope is that we figure out how to progress in a similar manner, and somehow, the diffusion of knowledge and peaceful coexistence prevail.



https://www.crf-usa.org//bill-of-rights-in-action/bria-24-3-b-gutenberg-and-the-printing-revolution-in-europe#:~:text=Johann%20Gutenberg%27s%20invention%20of%20movable,split%20apart%20the%20Catholic%20Church <https://www.crf-usa.org/bill-of-rights-in-action/bria-24-3-b-gutenberg-and-the-printing-revolution-in-europe#:~:text=Johann%20Gutenberg%27s%20invention%20of%20movable,split%20apart%20the%20Catholic%20Church> .

Johann Gutenberg’s invention of movable-type printing quickened the spread of knowledge, discoveries, and literacy in Renaissance Europe. The printing revolution also contributed mightily to the Protestant Reformation that split apart the Catholic Church.

[RR] It is claimed by many to be the achievement that most profoundly affected humanity over the last two millennia! Hard to disagree IMO!

Cheers,

RR





Bob

On Oct 10, 2023, at 10:12 AM, Jack Haverty via Nnagain <nnagain@lists.bufferbloat.net> wrote:

FYI, The Arpanet was a key player in that patent fight.  The Arpanet 


IMPs (the packet switches) downloaded software from each other, and that 


capability was used to distribute new releases of the IMP program.  I 


suggested that 1970s implementation to the lawyers as a good example of 


prior art, which led to a lot of work that eventually resurrected the 


1970s IMP code from a moldy listing in someone's basement, and got it 


running again on simulated ancient hardware.   At one point the 4-node 


Arpanet of 1970 was created and run, in anticipation of a demo of prior 


art at trial.  Sadly (for me at least) the combatants suddenly settled 


out of court, so the trial never happened and the patent issue was not 


adjudicated.   But the resurrected IMP code is on github now, so anyone 


interested can run their own Arpanet.





Jack








On 10/10/23 08:53, Steve Crocker via Nnagain wrote:
 Lots of good stuff here and I missed the earlier posts, but one small 


 thing caught my attention:
 About 10 years ago, I accidentally got involved in a patent
     dispute to be an "expert witness", for a patent involving


     downloading new programs over a communications path into a remote


     computer (yes, what all our devices do almost every day).





 In the seminal period of late 1968 and early 1969 when we were 


 thinking about Arpanet protocols, one idea that was very much part of 


 our thinking was downloading a small program at the beginning of an 


 interactive session.  The downloaded program would take care of local 


 interactions to avoid the need to send every character across the net 


 only to have it echoed remotely.  Why not always use local echo?  


 Because most of the time-shared systems in the various ARPA-supported 


 research environments had distinct ways of interpreting each and 


 every character.  Imposing a network-wide rule of local echoing would 


 have compromised the usability of most of the systems on the Arpanet.  


 I think Multics was the only "modern" line-at-a-time system at the time.





 In March 1969 we decided it was time to write down the ideas from our 


 meetings in late 1968 and early 1969.  The first batch of RFCs 


 included Rulifson's RFC 5.  He proposed DEL, the Decode-Encode 


 Language.  Elie's RFC 51 a year later proposed the Network Interchange 


 Language.  In both cases the basic concept was the creation of a 


 simple language, easily implementable on each platform, that would 


 mediate the interaction with a remote system.  The programs were 


 expected to be short -- hence downloadable quickly -- and either 


 interpreted or quickly translated.  There was a tiny bit of 


 experimental work along this line, but it was far ahead of its time.  


 I think it was about 25 years before ActiveX came along, followed by Java.





 Steve








 On Tue, Oct 10, 2023 at 11:30 AM Dave Taht via Nnagain 


 <nnagain@lists.bufferbloat.net> wrote:





     On Mon, Oct 9, 2023 at 7:56 PM Jack Haverty via Nnagain


     <nnagain@lists.bufferbloat.net> wrote:





     For starters it is an honor to be conversing with folk that knew Bob


     Taylor, and "Lick", and y'all made me go back and re-read





     http://memex.org/licklider.pdf





     For inspiration. I think everyone in our field should re-read that,


     periodically. For example he makes an overgeneralization about the


     thinking processes of men, as compared to the computers of the time,


     and not to women...





     But I have always had an odd question - what songs did Lick play on


     guitar? Do any recordings exist?





     Music defines who I am, at least. I love the angularness and surprises


     in jazz, and the deep storytelling buried deep in Shostakovich's


     Fifth. Moving forward to modern music: the steady backbeat of Burning


     Man - and endless repetition of short phrases - seems to lead to


     groupthink - I can hardly stand EDM for an hour.





      I am "maked" by Angela' Lansbury's Sweeny Todd, and my religion,


     forever reformed by Monty Python's Life of Brian, One Flew over the


     Cookoos nest, 12 Angry Men, and the 12 Monkees, Pink Floyd and punk


     music were the things that shaped me. No doubt it differs


     significantly for everyone here, please share?





     Powerful tales and their technologies predate the internet, and


     because they were wildly shared, influenced how generations thought


     without being the one true answer. Broadcast media, also, was joint,


     and in school we





     We are in a new era of uncommonality of experience, in part from


     bringing in all the information in the world, while still separated by


     differences in language, exposure, education, and culture, although


     nowadays it has become so easy and natural to be able to use computer


     assisted language translation tools, I do not know how well they truly


     make the jump between cultures.





     In that paper he talked about 75% of his time being spent setting up


     to do analytics, where today so much information exists as to be


     impossible to analyze.





     I only have a few more small comments below, but I wanted to pick out


     the concepts of TOS and backpressure as needing thought on another


     day, in another email (what was licks song list??? :)). The internet


     has very little Tos or backpressure, and Flow Queuing based algorithms


     actually function thusly:





     If the arrival rate of a flow is less than the departure rate of all


     other flows, it goes out first.





     To some extent this matches some of Nagles' "every application has a


     right to one packet in the network", and puts a reward into the system


     for applications that use slightly less than their fair share of the


     bandwidth.
 IMHO, the problem may be that the Internet, and computing
     technology in general, is so new that non-technical organizations,


     such as government entities, don't understand it and therefore


     can't figure out whether or how to regulate anything involved.



 In other, older, "technologies", rules, procedures, and
     traditions have developed over the years to provide for feedback


     and control between governees and governors.  Roberts Rules of


     Order was created 150 years ago, and is still widely used to


     manage public meetings.  I've been in local meetings where


     everyone gets a chance to speak, but are limited to a few minutes


     to say whatever's on their mind.  You have to appear in person,


     wait your turn, and make your comment. Doing so is free, but still


     has the cost of time and hassle to get to the meeting.



 Organizations have figured out over the years how to manage
     meetings.  [Vint - remember the "Rathole!" mechanism that we used


     to keep Internet meetings on track...?]





     PARC had "Dealer".
 From what David describes, it sounds like the current "public
     comment" mechanisms in the electronic arena are only at the stage


     where the loudest voices can drown out all others, and public


     debates are essentially useless cacophonies of the loudest


     proponents of the various viewpoints.   There are no rules.   Why


     should anyone submit their own sensible comments, knowing they'll


     be lost in the noise?



 In non-electronic public forums, such behavior is ruled out, and
     if it persists, the governing body can have offenders ejected,


     adjourn a meeting until cooler heads prevail, or otherwise make


     the discourse useful for informing decisions.  Courts can issue


     restraining orders, but has any court ever issued such an order


     applying to an electronic forum?



 So, why haven't organizations yet developed rules and mechanisms
     for managing electronic discussions....?



 I'd offer two observations and suggestions.





 -----





 First, a major reason for a lack of such rules and mechanisms
     may be an educational gap.  Administrators, politicians, and


     staffers may simply not understand all this newfangled technology,


     or how it works, and are drowning in a sea of terminology,


     acronyms, and concepts that make no sense (to them).   In the FCC


     case, even the technical gurus may have deep knowledge of their


     traditional realm of telephony, radio, and related issues and


     policy tradeoffs.   But they may be largely ignorant of computing


     and networking equivalents.  Probably even worse, they may


     unconsciously consider the new world as a simple evolution of the


     old, not recognizing the impact of incredibly fast computers and


     communications, and the advances that they enable, such as "AI" -


     whatever that is...



 About 10 years ago, I accidentally got involved in a patent
     dispute to be an "expert witness", for a patent involving


     downloading new programs over a communications path into a remote


     computer (yes, what all our devices do almost every day).   I was


     astounded when I learned how little the "judicial system"


     (lawyers, judges, legislators, etc.) knew about computer and


     network technology.   That didn't stop them from debating the


     meaning of technical terms.  What is RAM? How does "programming"


     differ from "reprogramming"?  What is "memory"?  What is a


     "processor"?   What is an "operating system"?   The arguments


     continue until eventually a judge declares what the answer is,


     with little technical knowledge or expertise to help.   So you can


     easily get legally binding definitions such as "operating system"


     means "Windows", and that all computers contain an operating system.



 I spent hours on the phone over about 18 months, explaining to
     the lawyers how computers and networks actually worked.   In turn,


     they taught me quite a lot about the vagaries of the laws and


     patents.  It was fascinating but also disturbing to see how


     ill-prepared the legal system was for new technologies.



 So, my suggestion is that a focus be placed on helping the
     non-technical decision makers understand the nuances of computing


     and the Internet.  I don't think that will be successful by


     burying them in the sea of technical jargon and acronyms.



 Before I retired, I spent a lot of time with C-suite denizens
     from companies outside of the technology industry - banks,


     manufacturers, transportation, etc. - helping them understand what


     "The Internet" was, and help them see it as both a huge


     opportunity and a huge threat to their businesses.  One technique


     I used was simply stolen from the early days of The Internet.



 When we were involved in designing the internal mechanisms of
     the Internet, in particular TCPV4, we didn't know much about


     networks either.  So we used analogies.  In particular we used the


     existing transportation infrastructure as a model.   Moving bits


     around the world isn't all that different from moving goods and


     people.   But everyone, even with no technical expertise, knows


     about transportation.



 It turns out that there are a lot of useful analogies. For
     example, we recognized that there were different kinds of


     "traffic" with different needs.  Coal for power plants was


     important, but not urgent.  If a coal train waits on a siding


     while a passenger train passes, it's OK, even preferred.  There


     could be different "types of service" available from the


     transportation infrastructure.   At the time (late 1970s) we


     didn't know exactly how to do that, but decided to put a field in


     the IP header as a placeholder - the "TOS" field. Figuring out


     what different TOSes there should be, and how they would be


     handled differently, was still on the to-do list.   There are even


     analogies to the Internet - goods might travel over a "marine


     network" to a "port", where they are moved onto a "rail network",


     to a distributor, and moved on the highway network to their final


     destination.  Routers, gateways, ...



 Other transportation analogies reinforced the notion of TOS. 
     E.g., if you're sending a document somewhere, you can choose how


     to send it - normal postal mail, or Priority Mail, or even use a


     different "network" such as an overnight delivery service. 


     Different TOS would engage different behaviors of the underlying


     communications system, and might also have different costs to use


     them.  Sending a ton of coal to get delivered in a week or two


     would cost a lot less than sending a ton of documents for


     overnight delivery.



 There were other transportation analogies heard during the TCPV4
     design discussions - e.g., "Expressway Routing" (do you take a


     direct route over local streets, or go to the freeway even though


     it's longer) and "Multi-Homing" (your manufacturing plant has


     access to both a highway and a rail line).



 Suggestion -- I suspect that using a familiar infrastructure
     such as transport to discuss issues with non-technical decision


     makers would be helpful.  E.g., imagine what would happen if some


     particular "net neutrality" set of rules was placed on the


     transportation infrastructure?   Would it have a desirable effect?



 -----





 Second, in addition to anonymity as an important issue in the
     electronic world, my experience as a mentee of Licklider surfaced


     another important issue in the "galactic network" vision -- "Back


     Pressure".     The notion is based in existing knowledge. 


      Economics has notions of Supply and Demand and Cost Curves. 


      Engineering has the notion of "Negative Feedback" to stabilize


     mechanical, electrical, or other systems.



 We discussed Back Pressure, in the mid 70s, in the context of
     electronic mail, and tried to get the notion of "stamps" accepted


     as part of the email mechanisms.  The basic idea was that there


     had to be some form of "back pressure" to prevent overload by


     discouraging sending of huge quantities of mail.



 At the time, mail traffic was light, since every message was
     typed by hand by some user.  In Lick's group we had experimented


     with using email as a way for computer programs to interact.  In


     Lick's vision, humans would interact by using their computers as


     their agents.   Even then, computers could send email a lot faster


     and continuously than any human at a keyboard, and could easily


     flood the network.  [This epiphany occurred shortly after a


     mistake in configuring distribution lists caused so many messages


     and replies that our machine crashed as its disk space ran out.]



 "Stamps" didn't necessarily represent monetary cost. Back
     pressure could be simple constraints, e.g., no user can send more


     than 500 (or whatever) messages per day.   This notion never got


     enough support to become part of the email standards; I still


     think it would help with the deluge of spam we all experience today.



 Back Pressure in the Internet today is largely non-existent.  I
     (or my AI and computers) can send as much email as I like. 


      Communications carriers promote "unlimited data" but won't


     guarantee anything.   Memory has become cheap, and as a result


     behaviors such as "buffer bloat" have appeared.



 Suggestion - educate the decision-makers about Back Pressure,
     using highway analogies (metering lights, etc.)



 -----





 Education about the new technology, but by using some familiar
     analogs, and introduction of Back Pressure, in some appropriate


     form, as part of a "network neutrality" policy, would be the two


     foci I'd recommend.



 My prior suggestion of "registration" and accepting only the
     last comment was based on the observations above.  Back pressure


     doesn't have to be monetary, and registered users don't have to be


     personally identified.   Simply making it sufficiently "hard" to


     register (using CAPTCHAs, 2FA, whatever) would be a "cost"


     discouraging "loud voices".   Even the law firms submitting


     millions of comments on behalf of their clients might balk at the


     cost (in labor not money) to register their million clients, even


     anonymously, so each could get his/her comment submitted.   Of


     course, they could always pass the costs on to their (million?


     really?) clients. But it would still be Back Pressure.



 One possibility -- make the "cost" of submitting a million
     electronic comments equal to the cost of submitting a million


     postcards...?



 Jack Haverty








 On 10/9/23 16:55, David Bray, PhD wrote:





 Great points Vint as you're absolutely right - there are
     multiple modalities here (and in the past it was spam from


     thousands of postcards, then mimeographs, then faxes, etc.)



 The standard historically has been set by the Administrative
     Conference of the United States: https://www.acus.gov/about-acus



 In 2020 there seemed to be an effort to gave the General
     Services Administration weigh-in, however they closed that


     rulemaking attempt without publishing any of the comments they got


     and no announcement why it was closed.



 As for what part of Congress - I believe ACUS was championed by
     both the Senate and House Judiciary Committees as it has oversight


     and responsibility for the interpretations of the Administrative


     Procedure Act of 1946 (which sets out the whole rulemaking procedure).



 Sadly there isn't a standard across agencies - which also means
     there isn't a standard across Administrations. Back in 2018 and


     2020, both with this group of 52 people here


     https://tinyurl.com/letter-signed-52-people - as well as


     individually - I did my darnest to encourage them to do a standard.



 There's also the National Academy of Public Administration which
     is probably the latest remaining non-partisan forum for


     discussions like this too.






 On Mon, Oct 9, 2023 at 7:46 PM Vint Cerf <vint@google.com> wrote:



 David, this is a good list.


 FACA has rules for public participation, for example.





 I think it should be taken into account for any public
     commenting process that online (and offline such as USPS or fax


     and phone calls) that spam and artificial inflation of comments


     are possible. Is there any specific standard for US agency public


     comment handling? If now, what committees of the US Congress might


     have jurisdiction?




 v








 On Tue, Oct 10, 2023 at 8:22 AM David Bray, PhD via Nnagain

     <nnagain@lists.bufferbloat.net> wrote:




 I'm all for doing new things to make things better.





 At the same time, I used to do bioterrorism preparedness and

     response from 2000-2005 (and aside from asking myself what kind of


     crazy world needed counter-bioterrorism efforts... I also realized


     you don't want to interject something completely new in the middle


     of an unfolding crisis event). If something were to be injected


     now, it would have to have consensus from both sides, otherwise at


     least one side (potentially detractors from both) will claim that


     whatever form the new approaches take are somehow advantaging "the


     other side" and disadvantaging them.




 Probably would take a ruling by the Administrative Conference

     of the United States, at a minimum to answer these five questions


     - and even then, introducing something completely different in the


     midst of a political melee might just invite mudslinging unless


     moderate voices on both sides can reach some consensus.




 1. Does identity matter regarding who files a comment or not —

     and must one be a U.S. person in order to file?




 2. Should agencies publish real-time counts of the number of

     comments received — or is it better to wait until the end of a


     commenting round to make all comments available, including counts?




 3. Should third-party groups be able to file on behalf of

     someone else or not — and do agencies have the right to remove


     spam-like comments?




 4. Should the public commenting process permit multiple

     comments per individual for a proceeding — and if so, how many


     comments from a single individual are too many? 100? 1000? More?




 5. Finally, should the U.S. government itself consider, given

     public perceptions about potential conflicts of interest for any


     agency performing a public commenting process, whether it would be


     better to have third-party groups take responsibility for


     assembling comments and then filing those comments via a validated


     process with the government?










 On Sat, Oct 7, 2023 at 4:10 PM Jack Haverty <jack@3kitty.org>

     wrote:




 Hi again David et al,





 Interesting frenzy...lots of questions that need answers and

     associated policies.   I served 6 years as an elected official (in


     a small special district in California), so I have some small


     understanding of the government side of things and the constraints


     involved.   Being in charge doesn't mean you can do what you want.




 I'm thinking here more near-term and incremental steps.  You

     said "These same questions need pragmatic pilots that involve the


     public ..."




 So, how about using the current NN situation for a pilot? 

     Keep all the current ways and emerging AI techniques to continue


     to flood the system with comments.  But also offer an *optional*


     way for humans to "register" as a commenter and then submit their


     (latest only) comment into the melee.  Will people use it?  Will


     "consumers" (the lawyers, commissioners, etc.) find it useful?




 I've found it curious, for decades now, that there are (too

     many) mechanisms for "secure email", that may help with the flood


     of disinformation from anonymous senders, but very very few people


     use them.   Maybe they don't know how; maybe the available schemes


     are too flawed; maybe ...?




 About 30 years ago, I was a speaker in a public meeting

     orchestrated by USPS, and recommended that they take a lead role,


     e.g., by acting as a national CA - certificate authority.  Never


     happened though.   FCC issues lots of licenses...perhaps they


     could issue online credentials too?




 Perhaps a "pilot" where you will also accept comments by

     email, some possibly sent by "verified" humans if they understand


     how to do so, would be worth trying?   Perhaps comments on


     "technical aspects" coming from people who demonstrably know how


     to use technology would be valuable to the policy makers?




 The Internet, and technology such as TCP, began as an

     experimental pilot about 50 years ago.  Sometimes pilots become


     infrastructures.




 FYI, I'm signing this message.  Using OpenPGP.  I could

     encrypt it also, but my email program can't find your public key.




 Jack Haverty








 On 10/5/23 14:21, David Bray, PhD wrote:





 Indeed Jack - a few things to balance - the Administrative

     Procedure Act of 1946 (on which the idea of rulemaking is based)


     us about raising legal concerns that must be answered by the


     agency at the time the rulemaking is done. It's not a vote nor is


     it the case that if the agency gets tons of comments in one


     direction that they have to go in that direction. Instead it's


     only about making sure legal concerns are considered and responded


     to before the agency before the agency acts. (Which is partly why


     sending "I'm for XYZ" or "I'm against ABC" really doesn't mean


     anything to an agency - not only is that not a legal argument or


     concern, it's also not something where they're obligated to follow


     these comments - it's not a vote or poll).




 That said, political folks have spun things to the public as

     if it is a poll/vote/chance to act. The raise a valid legal


     concern part of the APA of 1946 is omitted. Moreover the fact that


     third party law firms and others like to submit comments on behalf


     of clients - there will always be a third party submitting


     multiple comments for their clients (or "clients") because that's


     their business.




 In the lead up to 2017, the Consumer and Government Affairs

     Bureau of the FCC got an inquiry from a firm asking how they could


     submit 1 million comments a day on an "upcoming privacy


     proceeding" (their words, astute observers will note there was no


     privacy proceeding before the FCC in 2017). When the Bureau asked


     me, I told them either mail us a CD to upload it or submit one


     comment with 1 million signatures. To attempt to flood us with 1


     million comments a day (aside from the fact who can "predict"


     having that many daily) would deny resources to others. In the


     mess that followed, what was released to the public was so


     redacted you couldn't see the legitimate concerns and better paths


     that were offered to this entity.




 And the FCC isn't alone. EPA, FTC, and other regulatory

     agencies have had these hijinks for years - and before the


     Internet it was faxes, mass mimeographs (remember blue ink?), and


     postcards.The Administrative Conference of the United States


     (ACUS) - is the body that is supposed to provide consistent


     guidance for things like this across the U.S. government. I've


     briefed them and tried to raise awareness of these issues - as I


     think fundamentally this is a **process** question that once


     answered, tech can support. However they're not technologies and


     updating the interpretation of the process isn't something lawyers


     are apt to do until the evidence that things are in trouble is


     overwhelming.




 52 folks wrote a letter to them - and to GSA - back in 2020.

     GSA had a rulemaking of its own on how to improve things, yet


     oddly never published any of the comments it received (including


     ours) and closed the rulemaking quietly. Here's the letter:


     https://tinyurl.com/letter-signed-52-people




 And here's an article published in OODAloop about this - and

     why Generative AI is probably going to make things even more


     challenging:


     https://www.oodaloop.com/archive/2023/04/18/why-a-pause-on-ai-development-is-not-the-answer-an-insiders-perspective/




 [snippet of the article] Now in 2023 and Beyond: Proactive

     Approaches to AI and Society




 Looking to the future, to effectively address the challenges

     arising from AI, we must foster a proactive, results-oriented, and


     cooperative approach with the public. Think tanks and universities


     can engage the public in conversations about how to work, live,


     govern, and co-exist with modern technologies that impact society.


     By involving diverse voices in the decision-making process, we can


     better address and resolve the complex challenges AI presents on


     local and national levels.




 In addition, we must encourage industry and political leaders

     to participate in finding non-partisan, multi-sector solutions if


     civil societies are to remain stable. By working together, we can


     bridge the gap between technological advancements and their


     societal implications.




 Finally, launching AI pilots across various sectors, such as

     work, education, health, law, and civil society, is essential. We


     must learn by doing on how we can create responsible civil


     environments where AIs can be developed and deployed responsibly.


     These initiatives can help us better understand and integrate AI


     into our lives, ensuring its potential is harnessed for the


     greater good while mitigating risks.




 In 2019 and 2020, a group of fifty-two people asked the

     Administrative Conference of the United States (which helps guide


     rulemaking procedures for federal agencies), General Accounting


     Office, and the General Services Administration to call attention


     to the need to address the challenges of chatbots flooding public


     commenting procedures and potentially crowding out or denying


     services to actual humans wanting to leave a comment. We asked:




 1. Does identity matter regarding who files a comment or not

     — and must one be a U.S. person in order to file?




 2. Should agencies publish real-time counts of the number of

     comments received — or is it better to wait until the end of a


     commenting round to make all comments available, including counts?




 3. Should third-party groups be able to file on behalf of

     someone else or not — and do agencies have the right to remove


     spam-like comments?




 4. Should the public commenting process permit multiple

     comments per individual for a proceeding — and if so, how many


     comments from a single individual are too many? 100? 1000? More?




 5. Finally, should the U.S. government itself consider, given

     public perceptions about potential conflicts of interest for any


     agency performing a public commenting process, whether it would be


     better to have third-party groups take responsibility for


     assembling comments and then filing those comments via a validated


     process with the government?




 These same questions need pragmatic pilots that involve the

     public to co-explore and co-develop how we operate effectively


     amid these technological shifts. As the capabilities of LLMs


     continue to grow, we need positive change agents willing to tackle


     the messy issues at the intersection of technology and society.


     The challenges are immense, but so too are the opportunities for


     positive change. Let’s seize this moment to create a better


     tomorrow for all. Working together, we can co-create a future that


     embraces AI’s potential while mitigating its risks, informed by


     the hard lessons we have already learned.




 Full article:

     https://www.oodaloop.com/archive/2023/04/18/why-a-pause-on-ai-development-is-not-the-answer-an-insiders-perspective/




 Hope this helps.








 On Thu, Oct 5, 2023 at 4:44 PM Jack Haverty via Nnagain

     <nnagain@lists.bufferbloat.net> wrote:

 
       






        Thanks for all your efforts to keep the "feedback loop" to
       

     the rulemakers functioning!

 
       






        I'd like to offer a suggestion for a hopefully politically
       

     acceptable way to handle the deluge, derived from my own battles


     with "email" over the years (decades).

 
       






        Back in the 1970s, I implemented one of the first email
       

     systems on the Arpanet, under the mentorship of JCR Licklider, who


     had been pursuing his vision of a "Galactic Network" at ARPA and


     MIT.   One of the things we discovered was the significance of


     anonymity.   At the time, anonymity was forbidden on the Arpanet;


     you needed an account on some computer, protected by passwords, in


     order to legitimately use the network.   The mechanisms were crude


     and easily broken, but the principle applied.

 
       






        Over the years, that principle has been forgotten, and the
       

     right to be anonymous has become entrenched.   But many uses of


     the network, and needs of its users, demand accountability, so all


     sorts of mechanisms have been pasted on top of the network to


     provide ways to judge user identity.  Banks, medical services,


     governments, and businesses all demand some way of proving your


     identity, with passwords, various schemes of 2FA, VPNs, or other


     such technology, with varying degrees of protection.   It is still


     possible to be anonymous on the net, but many things you do


     require you to prove, to some extent, who you are.

 
       






        So, my suggestion for handling the deluge of "comments" is:
       






       






        1/ create some mechanism for "registering" your intent to
       

     submit a comment.   Make it hard for bots to register.  Perhaps


     you can leverage the work of various partners, e.g., ISPs,


     retailers, government agencies, financial institutions, of others


     who already have some way of identifying their users.

 
       






        2/ Also make registration optional - anyone can still submit
       

     comments anonymously if they choose.

 
       






        3/ for "registered commenters", provide a way to "edit" your
       

     previous comment - i.e., advise that your comment is always the


     last one you submitted.   I.E., whoever you are, you can only


     submit one comment, which will be the last one you submit.

 
       






        4/ In the thousands of pages of comments, somehow flag the
       

     ones that are from registered commenters, visible to the people


     who read the comments.   Even better, provide those "information


     consumers" with ways to sort, filter, and search through the body


     of comments.

 
       






        This may not reduce the deluge of comments, but I'd expect
       

     it to help the lawyers and politicians keep their heads above the


     water.

 
       






        Anonymity is an important issue for Net Neutrality too, but
       


[-- Attachment #2: Type: text/html, Size: 89603 bytes --]

^ permalink raw reply	[flat|nested] 15+ messages in thread

end of thread, other threads:[~2023-10-10 19:39 UTC | newest]

Thread overview: 15+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2023-10-02 18:15 [NNagain] Introduction: Dr. David Bray Dave Taht
2023-10-02 19:38 ` David Bray, PhD
2023-10-05 20:43   ` Jack Haverty
2023-10-05 21:21     ` David Bray, PhD
2023-10-07 20:10       ` Jack Haverty
2023-10-09 23:21         ` David Bray, PhD
2023-10-09 23:46           ` Vint Cerf
2023-10-09 23:55             ` David Bray, PhD
2023-10-10  2:56               ` Jack Haverty
2023-10-10 15:29                 ` [NNagain] somewhat OT: Licklidder Dave Taht
2023-10-10 15:53                   ` Steve Crocker
2023-10-10 17:12                     ` Jack Haverty
2023-10-10 19:00                       ` Robert McMahon
2023-10-10 19:38                         ` Dick Roy
2023-10-10 16:59                   ` Jack Haverty

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox