(C) Alec Muffett's DropSafe blog.
Author Name: Alec Muffett
This story was originally published on alecmuffett.com. [1]
License: CC-BY-SA 3.0.[2]
It’s never going to be popular to tell people that saving innocent little children is *not* the most important big-picture thing for humanity — but it’s true (HT: @elegant_wallaby)
2021-11-23 14:44:42+00:00
This is a long Twitter thread which can be accessed by starting with this tweet and working backwards.
Introduction
David / @Elegant_Wallaby writes powerfully in favour of addressing child safety concerns regarding end-to-end encrypted environments; concerns which are rooted in zero, none, a complete lack of data. The online safety community considers that to be okay, because the online safety community survives on having no data regarding impact at all.
This is not hyperbole: check out NCMEC’s own statistics and you will find big headlines regarding “numbers of reports to NCMEC”, but rarely if ever any mention of how many children they help save per year. They also tend to lack transparency around how many reports are duplicates, how many are “funny memes” from developing countries which happen to have different social mores about pictures that include naked kids, and how much is other material that is not germane to saving a child who is at risk of exploitation.
If you ask NCMEC more forthrightly, you can chart the growth of their database of hashes of CSAM imagery over time; for instance, in 2020 they received 21.8 million reports comprising 65 million files (slide 17 of this PDF, and elsewhere), yet a NCMEC source tells me that in that same year they added only 1.7 million “triple-vetted” image hashes to their database of CSAM. From this it might be possible to infer that more than 90% of NCMEC’s reports are in some way duplicates or irrelevant, though it would be unfair to conclude that without more context (perhaps they simply need more “vetters”?); but all of this underscores the fact that the entire child-online-safety industry is blind to its impact.
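The arithmetic behind that “more than 90%” inference is straightforward; here is a minimal back-of-envelope sketch in Python, using only the figures cited above (the variable names are mine, for illustration):

# Back-of-envelope check of the "more than 90%" inference, from the figures above.
reports_2020 = 21_800_000        # reports received by NCMEC in 2020
files_2020 = 65_000_000          # files comprised by those reports
vetted_hashes_added = 1_700_000  # "triple-vetted" image hashes added that year

print(f"hashes added per report: {vetted_hashes_added / reports_2020:.1%}")  # ~7.8%
print(f"hashes added per file:   {vetted_hashes_added / files_2020:.1%}")    # ~2.6%

In other words, more than 90% of reports (and more than 97% of files) did not yield a newly-vetted hash; suggestive, but, as noted, not conclusive without more context.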
As David himself points out: nobody from NCMEC follows up to see whether their reports are actually helpful in creating prosecutions. This “throw the reports over the wall to local police forces” approach is possibly even a problem in itself, as evidenced by the child-safety umbrella organisation INHOPE having to build tools to help local police forces winnow out the duplicates and chaff in what they are given, so that they can focus on helping children who are actually at risk.
But back to David: one of his tweets particularly grates upon me, where he writes: “knock it off with painting child safety advocates as disingenuous “think of the children” stooges for expanded LE capabilities” — and it grates for this reason:
Yes, there are certainly some people who believe that this “think of the children” push is a proxy war for expanded law-enforcement powers, and to obtain the long-sought-after “golden key” backdoors into end-to-end encrypted messengers; and I am not alone in seeing some merit in this view.
HOWEVER: I have also seen online-safety child-protection work from a close distance, and I have seen the emotional impact it has on the people who do it. Safety teams are exposed to a proportionally small amount of very, very bad stuff, and they become really, really motivated to fix it for everyone. This leads them to develop disproportionate perspectives, and to become precautionary, even reactionary, depressed, or catastrophizing.
For most of the past decade the UK has seen about 1,750 road deaths per year, but the pandemic lockdowns of 2020 reduced that figure by 300; should we therefore enforce lockdown perpetually, to save 300 lives per annum? Or did those 300 deaths merely happen differently, transferred to people driven to suicide by the depression of isolation? Perhaps we could mitigate ALL road deaths by reducing all speed limits to (say) 20 miles per hour; but if we did that, at what other cost would it come?
The same “solutioneering” arguments apply to CSAM.
The stark truth is: abuse happens, online or otherwise. Most of the time (93%?) the perpetrator is known to the victim, and any amount of “online-ness” is not causative. We don’t speculatively strip away fundamental human freedoms like privacy even though this abuse happens, and — just as we have not banned cars in order to keep children safe — we are forced to accept that the amount of abuse will never be zero, and that increasingly illiberal approaches towards stopping it will yield diminishing returns.
We have no numbers regarding how many globally at-risk children per year are being rescued from harm; not even sampled, extrapolated, or guesstimated numbers. Yet we are told that this nonexistent number must be used as an input to “balance” against the privacy interests of 2.7 billion people, each of whom sends between zero and a few hundred or thousand messages per day, of varying sensitivity and (legal) intimacy.
On one side of the scale sits a very large number, on the order of tens or hundreds of billions of messages per day; on the other side sits an empty space.
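That large number is simple arithmetic on the figures above; a minimal sketch in Python, assuming (purely illustratively) ten to one hundred messages per user per day:

# Rough bounds on global daily message volume, from the figures above.
users = 2_700_000_000          # ~2.7 billion people using these messengers
low_rate, high_rate = 10, 100  # assumed messages per user per day (illustrative)

print(f"lower bound:  {users * low_rate:,} messages/day")   # 27,000,000,000
print(f"upper bound: {users * high_rate:,} messages/day")   # 270,000,000,000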
This is unfair. This is unrealistic. This is a problem.
There are implicit and explicit calls for “productive conversation” (see below), but until someone on that side can deliver a heartless but necessary cost/benefit analysis, those who favour taking a “precautionary principle” approach to connecting the world’s population, undermining fundamental human rights in the process, can and should be ignored.
Thread with David
Upstream thread continues, thusly…
[END]
[1] URL:
https://alecmuffett.com/article/15558
[2] URL:
https://creativecommons.org/licenses/by-sa/3.0/