Greetings all, and thank you, Dave Taht, for that very
kind intro...
First, I'll open by saying I'm a gosh-darn non-partisan, which
means I swore an oath to uphold the Constitution first and
serve the United States - not a specific party, tribe, or
ideology. This often means, especially in today's era of
24/7 news and social media, that non-partisans have to provide
"top cover".
Second, I'll share that with what happened in 2017 (which
itself was 10x what we saw in 2014), my biggest concern was,
and remains, that a few actors attempted to flood the system
with less-than-authentic comments.
In some respects this is not new. The whole "notice and
comment" process is a legacy process that goes back decades.
The FCC (and others) have seen floods of postcard comments,
mimeographed letters, and faxed comments - and now this,
which, when combined with generative AI, will be yet another
flood.
Which gets me to my biggest concern as a non-partisan in
2023-2024, namely how LLMs might be used to misuse and abuse
the commenting process further.
Both in 2014 and 2017, I asked the FCC General Counsel if I
could use CAPTCHA to try to reduce the volume of web scrapers
or bots both filing and pulling info from the Electronic
Comment Filing System.
Both times I was told *no*, out of concern that a CAPTCHA
might prevent someone from filing. I asked if I could
block obvious spam, defined as someone filing a comment
>100 times a minute, and was similarly told no, because one
of those possible comments might be genuine and/or it could be
an ex parte filing en masse for others.
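For what it's worth, here is a minimal sketch (in Python, purely
illustrative and not how ECFS actually works) of the kind of
"obvious spam" check being proposed - a rolling per-filer counter
with the 100-comments-per-minute threshold mentioned above; the
filer identifier and function names are hypothetical:

import time
from collections import defaultdict, deque

RATE_LIMIT = 100        # max comments per filer ...
WINDOW_SECONDS = 60     # ... per rolling 60-second window

_recent = defaultdict(deque)  # hypothetical map: filer_id -> recent filing timestamps

def is_obvious_spam(filer_id, now=None):
    """Return True if this filer exceeded 100 filings in the last minute."""
    now = time.time() if now is None else now
    window = _recent[filer_id]
    # Drop timestamps that have aged out of the rolling window.
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()
    window.append(now)
    return len(window) > RATE_LIMIT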
For 2017 we had to spin up 30x the number of
AWS cloud instances to handle the load - and this was a
flood of comments at 4am, 5am, and 6am ET, hours that
normally shouldn't see such volumes. When I said there was a
combination of actual humans wanting to leave comments and
others who were effectively denying service to others
(especially since anyone wanting to do a batch upload of
100,000 comments or more could instead submit a CSV file or a
single comment with 100,000 signatories) - both parties said
no, that couldn't be happening.
Until 2021, that is, when the NY Attorney General proved
that was exactly what had been happening, with 18m of the 23m
comments apparently of non-authentic origin: ~9m from one side
of the political aisle (and six companies) and ~9m from the
other side of the political aisle (and one or more
teenagers).
So with Net Neutrality back on the agenda -
here's a simple prediction: even if the volume
of comments is somehow controlled, 10,000+ pages of comments
produced by ChatGPT or a different LLM are both possible and
probably will be submitted. The question is: if someone includes
a legitimate legal argument on page 6,517, will the FCC's lawyers
spot it and respond to it as part of the NPRM?
Hope this helps and with highest regards,