Thank you Jack! Awesome history lesson. Somehow Churchill's quote, "Those that fail to learn from history are doomed to repeat it!", comes to mind!
Happy New Year!
RR
From: Nnagain [mailto:nnagain-bounces@lists.bufferbloat.net] On Behalf Of Jack Haverty via Nnagain
Sent: Tuesday, January 9, 2024 12:00 PM
To: nnagain@lists.bufferbloat.net
Cc: Jack Haverty
Subject: Re: [NNagain] The growing challenges of discerning authentic vs. inauthentic information and identity
IMHO, similar issues of judgement and trust have come up in the past, and it might be worth researching the history.
In the context of the Web, during the 90s there was a similar concern about categorizing content available on the Internet. The issue at the time was providing mechanisms to protect children from pornography. But today's issues of truth and misinformation are very similar -- e.g., you might categorize an inaccurate news post as "pornographic".
I suggest looking at some work from the 90s. At the time, I was working at Oracle as "Internet Architect", and served as corporate representative to W3C (see https://www.w3.org/). The W3C group, led by Tim Berners-Lee, was intensely involved in setting technical standards for the Web.
A project was formed, called PICS - Platform for Internet Content Selection. Essentially it created mechanisms to add metadata to existing content on the Web, and to use that metadata to filter content for end users.
See https://www.w3.org/PICS/ for the history. PICS is now obsolete and was replaced by something called POWDER - see https://www.w3.org/2007/powder/
I wasn't involved in POWDER, which came after my involvement with W3C ended. But I was very involved in the creation of PICS.
The main idea of PICS was to enable the creation of "rating schemes" to categorize content. Since the focus was on pornography, one likely rating scheme was the classical G/R/X ratings popular at the time for characterizing movies. But anyone, or any group, could define a rating scheme to suit their views.
Having selected a rating scheme they liked, any group or individual could assign ratings to specific content. Perhaps you think that movie is "R", but I think it's "X". As a judge once noted - "I can't define it, but I know it when I see it". Opinions can of course differ.
Ratings were to be kept in one or more databases, accessible on the Internet to anyone. Content could be identified by a URL, or perhaps a unique cryptographic "hash" of the content itself, in case it was moved. Each record would contain 4 items: the identity of the content, the identity of the rating scheme used, the identity of the person or group making the rating, and the rating which they assigned. Such technology was easily within the capabilities of databases even then.
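As a rough illustration of such a four-item record (this is not the actual PICS label syntax, and all field names here are my own), it might be modeled like this:

```python
import hashlib
from dataclasses import dataclass

@dataclass(frozen=True)
class Rating:
    """One rating record: the four items described above.
    (Illustrative sketch only -- not the real PICS label format.)"""
    content_id: str   # a URL, or a hash of the content itself
    scheme: str       # identity of the rating scheme, e.g. "G/R/X"
    rater: str        # identity of the person or group making the rating
    value: str        # the rating assigned under that scheme

def content_hash(data: bytes) -> str:
    """Identify content by a cryptographic hash, so a rating can
    still be matched if the content moves to a different URL."""
    return "sha256:" + hashlib.sha256(data).hexdigest()

# Example record: some group rates a page "R" under a G/R/X scheme.
r = Rating(content_id=content_hash(b"<html>...</html>"),
           scheme="G/R/X",
           rater="example-reviewers.org",
           value="R")
```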
On the "consumer" side, applications (e.g., browsers) would have settings indicating which rating system was to be used, which groups or persons making ratings were to be trusted, and which ratings of content would actually be viewable by the human end user.
The idea was that various groups (content creators, reviewers, religious groups, community activists, etc.) would define their preferred rating scheme and then assign ratings, at least to content they deemed objectionable.
End users, e.g., parents, could then set up their children's web browsers to use the rating scheme of whichever group(s) they trusted to make "correct" ratings, and set their children's browsers appropriately to restrict the content they could see. A content consumer simply selects the rating service they trust.
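The consumer-side logic described above can be sketched in a few lines, assuming a simple in-memory list of rating records; the function name, policy choices, and example data are all hypothetical, not taken from any real PICS implementation:

```python
# Hypothetical browser-side filter: show content only if a trusted
# rater has rated it, under the chosen scheme, at an allowed level.

def allowed(content_id, ratings, scheme, trusted_raters, ok_values):
    """ratings: list of (content_id, scheme, rater, value) records.
    Returns True if the first matching rating from a trusted rater
    falls within the set of ratings the user permits."""
    for cid, sch, rater, value in ratings:
        if cid == content_id and sch == scheme and rater in trusted_raters:
            return value in ok_values
    return False  # unrated content blocked by default (a policy choice)

db = [("http://example.com/a", "G/R/X", "reviewers.org", "G"),
      ("http://example.com/b", "G/R/X", "reviewers.org", "X")]

allowed("http://example.com/a", db, "G/R/X", {"reviewers.org"}, {"G", "R"})  # True
allowed("http://example.com/b", db, "G/R/X", {"reviewers.org"}, {"G", "R"})  # False
```

Note the default for unrated content: blocking it is one choice (safe for children), while showing it is another; either way, the decision rests entirely on which raters the consumer chooses to trust.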
It seems straightforward how a similar mechanism might instead be applied to rate the accuracy of Internet content, and to allow consumers to choose which ratings, if any, are applied to filter the information they see, based on who they trust to make such judgements.
PICS was actually implemented in popular browser software. But as far as I know, no group ever designed their preferred rating scheme, or actually assigned ratings to any content then available on the Internet. The mechanisms were there, but apparently no one used them. The loud voices crying "Something has to be done!" never actually did anything themselves.
Even if PICS/POWDER isn't appropriate for handling misinformation, an analysis of why it failed to be used might be revealing.
Jack Haverty