Globalists plan to disable Gopher, BBS, P2P with the help of AI

Klaus Schwab's World Economic Forum (WEF) has announced
plans to moderate the internet using artificial intelligence (AI)
that identifies "misinformation," harmful content, or anything else
Klaus Schwab decides should be censored.
The internet censorship proposal would require "subject matter
experts" to provide training sets to the AI so it can learn to
recognize and flag or restrict content that the WEF deems
"dangerous."
The WEF published an article (https://bit.ly/3dpXVgc) Wednesday
outlining a plan to overcome frequent instances of "child abuse,
extremism, disinformation, hate speech and fraud" online, which the
organization said cannot be handled by human "trust and safety
teams," according to ActiveFence Trusty & Safety Vice President
Inbal Goldberger, who authored the article.
NEW - Klaus Schwab's World Economic Forum proposes to automate
censorship of "hate speech" and "disinformation" with AI fed by
"subject matter experts."
https://bit.ly/3QICjKw
The system works through "human-curated, multi-language,
off-platform intelligence," meaning input supplied by expert sources,
to create "learning sets" for the AI.
"Supplementing this smarter automated detection with human expertise
to review edge cases and identify false positives and negatives and
then feeding those findings back into training sets will allow us to
create AI with human intelligence baked in," Goldberger stated.
DailyCaller report (https://bit.ly/3zP0YWO): In other words, trust
and safety teams can help the AI with anomalous cases, allowing it to
detect nuances in content that a purely automated system might
otherwise miss or misinterpret, according to Goldberger.
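The pipeline Goldberger describes amounts to a human-in-the-loop
classifier: expert-supplied "learning sets" train an automated detector,
confident detections are actioned automatically, edge cases go to trust
and safety reviewers, and reviewer verdicts are fed back into the
training data. Below is a minimal sketch of that loop, assuming a simple
incremental text classifier; the function names, thresholds and example
data are illustrative only and are not taken from the WEF article or any
real platform's system.

# Minimal sketch of the human-in-the-loop moderation loop described above.
# All names, thresholds and example data are assumptions for illustration.
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import SGDClassifier

vectorizer = HashingVectorizer(n_features=2**16, alternate_sign=False)
classifier = SGDClassifier(loss="log_loss")  # supports incremental updates

# 1. "Learning sets": labeled examples supplied by off-platform experts.
expert_learning_set = [
    ("example of a prohibited scam message", 1),
    ("ordinary conversation about the weather", 0),
]
texts, labels = zip(*expert_learning_set)
classifier.partial_fit(vectorizer.transform(texts), labels, classes=[0, 1])

def moderate(post, flag_threshold=0.9, review_band=(0.4, 0.9)):
    """Flag confident detections automatically; route edge cases to humans."""
    prob = classifier.predict_proba(vectorizer.transform([post]))[0][1]
    if prob >= flag_threshold:
        return "flagged"                   # automated action
    if review_band[0] <= prob < review_band[1]:
        return "sent_to_trust_and_safety"  # human reviews the edge case
    return "allowed"

# 2. Feedback loop: reviewer verdicts on edge cases (including false
#    positives and negatives) are fed back into the training data.
def incorporate_review(post, reviewer_label):
    classifier.partial_fit(vectorizer.transform([post]), [reviewer_label])

In this sketch the "near-perfect detection" claim corresponds to nothing
more than repeated calls to incorporate_review gradually shifting the
classifier toward the reviewers' judgments.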
"A human moderator who is an expert in European white supremacy
won't necessarily be able to recognize harmful content in India or
misinformation narratives in Kenya," she explained. As the AI trains
on more learning sets over time, it begins to identify the kinds of
content that moderation teams would find offensive, eventually reaching
"near-perfect detection" at massive scale. Goldberger said the
system would protect against "increasingly advanced actors misusing
platforms in unique ways."
Trust and safety teams at online media platforms, such as Facebook
and Twitter, bring a "nuanced comprehension of disinformation
campaigns" that they apply to content moderation, said Goldberger.
That includes working with government organizations to filter content
communicating a narrative about COVID-19, for example. The Centers
for Disease Control and Prevention advised Big Tech companies on
what types of content to label as misinformation on their sites.
Social media companies have also targeted conservative content,
including posts that negatively portray abortion and transgender
activism, or contradict the mainstream understanding of climate
change, by either labeling them as "misinformation" or blocking them
entirely.
The WEF document did not specify how members of the AI training
team would be chosen, how they would be held accountable, or whether
countries could exercise any control over the AI.
Elite business executives who participate in WEF gatherings have
a track record of proposals that expand corporate control over people's
lives. At the latest WEF annual summit, in March, the head of the
Chinese multinational technology company Alibaba Group boasted
of a system for monitoring individual carbon footprints derived from
eating, travel and similar behaviors.
"The future is built by us, by a powerful community such as you here
in this room," WEF founder and chairman Klaus Schwab told an
audience of more than 2,500 global business and political elites.