(C) Alec Muffett's DropSafe blog.
Author Name: Alec Muffett
This story was originally published on alecmuffett.com. [1]
License: CC-BY-SA 3.0.[2]
Response to the @AllTechIsHuman report into End-to-End Encryption: the original questionnaire, with responses & critique by Alec Muffett (1/n ?)
2024-10-22 15:15:59+00:00
In May 2024, I was approached by Anne Collier for All Tech Is Human to contribute to a new “report” into end-to-end encryption. That report (after considerable administrative churn) has now been published at
https://alltechishuman.org/all-tech-is-human-blog/balancing-privacy-and-child-safety-in-encrypted-environments as announced on LinkedIn.
My take is that ATIH could have done a lot worse at representation; there are aspects and sections with which I still vehemently disagree, but there’s far more balanced representation of both privacy and safety viewpoints than I have experienced from (especially: Thorn-adjacent) civil society in some time – and that is after starting from the terrible baseline of the questions as framed below, questions which should serve both as warning and primer to anyone who seeks to explain “encryption vs: safety” to policy people.
It was unclear if my contributions would be published at all — this report was being managed differently from some previous ones, and Anne (who I respect and who I have observed acting quite even-handedly throughout the questionnaire process) was brought in to run it (although subsequently it changed hands) – but I agreed to participate on the understanding that I’d publish my response in a blogpost after the report’s release. Hence this post.
Subsequently I was delighted to be invited to comment upon drafts and provide what feedback I could. The ATIH team dealt with this torrent of challenges calmly, deftly and in a non-judgemental, neutral-ish manner, which was nice to experience for once, although the document was clearly originally framed towards their funder’s interests.
Re: the questionnaire, I responded to Anne:
“The questions are massively biased … how does [post-abuse CSAM detection] protect children? [Do you] Wait for them to become victims and then cheer yourself for finding them and rehabilitating them? … [if we were discussing] murder then the solution would be to stop and search everyone’s cars for corpses, and to pretend that that was a solution?”
— and Anne invited me to unbias the questions. And I did. Therefore: here are my responses to the original questionnaire, framed by the questions which stimulated them.
Benefits and Risks in Harm Mitigation
From your vantage point, what are the benefits and risks of end-to-end encryption in digital social spaces? Does E2EE represent greater benefit or greater risk to users under 18?
Let’s turn this question on its head: the function of E2EE is purely to restrict access to communications data – messages, emails, etc – to exclude access by anyone other than the actual participants in the communication, at the time and to whom the given message was actually composed and sent. That’s it. That’s the whole thing. It’s not some grand and complex “techbro” conspiracy. For more on this, see the primer at:
https://alecmuffett.com/alecm/e2e-primer/e2e-primer-print.html
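For the avoidance of doubt about what that means in practice, here is a minimal sketch in Python (assuming the PyNaCl library; this illustrates the property, it is not any particular messenger’s implementation): the relaying platform only ever handles ciphertext, and only the endpoints hold keys capable of decrypting it.

# Minimal sketch of the end-to-end property using PyNaCl (illustrative only).
from nacl.public import PrivateKey, Box

# Each participant generates a keypair on their own device; private keys
# never leave those devices.
alice_key = PrivateKey.generate()
bob_key = PrivateKey.generate()

# Alice encrypts to Bob using her private key and Bob's public key.
sending_box = Box(alice_key, bob_key.public_key)
ciphertext = sending_box.encrypt(b"meet at noon")

# The platform relays `ciphertext` and nothing else; without Alice's or
# Bob's private key it cannot recover the message.
receiving_box = Box(bob_key, alice_key.public_key)
assert receiving_box.decrypt(ciphertext) == b"meet at noon"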
This characteristic is a general benefit to all forms of communication because it reduces the risk of data leakage via account theft, via hacking of the platform, via snooping by malicious platform employees, even (where the E2EE includes device enrollment) via device theft, mugging, and pickpocketing.
Given that users under 18 will live longer than the rest of us, they will see more benefit from this than we will throughout the rest of their lives – fewer “embarrassing pictures that I foolishly posted to my Facebook in 2004”-type scenarios, fewer hacked credit-card details, fewer “my abusive partner is spying on me” relationships, etc…
Also: given the nature of politics at the moment, e.g. Governments teetering on the precipice of rolling back various rights related to bodily autonomy and agency, in my specific circumstance I want my toddler daughter to grow up with both the right and the ability to entirely evade any and all forms of online surveillance, even/especially that which is imposed “for her own good” by the state. Orwell wrote about the risks of that sort of thing.
Where are we with safety in E2EE environments at this point?
Evolving. There is no other word. All environments, all platforms, are evolving, be they E2EE or otherwise. Anyone who responds “we have not gone far enough” or “we need to go further” should be obligated to explain what metrics they are bringing to bear. If they take refuge in “even one case of online abuse is one too many” then they should likewise describe their approaches to mitigating poverty, hunger, and domestic violence, which are far more prolific harms to children. Or, if they prefer, they can explain why it is that society is content to allow ongoing use of motor vehicles when thousands of children are killed or seriously injured in cars and on roads each year.
Are you aware of any existing technology that works with E2EE that can prevent or slow down the dissemination of child sexual abuse material? If so, please describe.
The goal of E2EE is to prevent any one piece of content from being distinguishable from any other piece of content, so the only way to slow the dissemination of any particular kind of content is to slow, and even imperil, the dissemination of all content; the framing of this question, however, reflects the ubiquitous “content-centric” approach which this very questionnaire exemplifies.
As such, it’s necessary to ask the people who wrote or who have responded to this questionnaire: do you want to track infinitely-replicable, infinitely-resharable content after the abuse has occurred, OR do you want to protect children by literally preventing abuse from happening in the first place?
This question cuts through even to the concrete harms of re-victimisation caused by resharing of extant CSAM and NCII: it is better that we focus our effort on preventing people from becoming victims in the first place, and on addressing the social and cultural permissiveness that enables the re-sharing of extant content, in order to address the root causes of these problems.
Stakeholders’ Roles
What role do Internet companies have in reducing or minimising child risks in E2EE environments they provide? Please be as specific as possible in describing how they can help mitigate child harm while protecting privacy.
So now we are talking about “minimising child risks” whereas above we were talking about “dissemination of material [i.e. content]” — the two are not the same, and moreover each of those “goals” has separate tactical and strategic approaches which need to be implemented.
If you are a platform provider who intentionally and knowingly provides user-services to minors, you are in a position to offer protective services to those minors as part of your product (“family-friendly!”) value proposition; you may even be legally obligated to do so. This is generally fair, and can extend to all aspects of the platform offering (e.g. groups, product pages, advertising, etc…)
The liberal boundaries of any legal obligation are twofold:
where the state prevents/chills the provision of a feature or characteristic (such as “end-to-end privacy” or “anonymity”) to all users
where the state obligates/compels the placement of a feature or characteristic (such as “content scanning & filtering” or “government ID requirement”) upon all users
So if you want to reduce child risk in ALL environments whilst protecting privacy, you should encourage platforms to adopt privacy-friendly and (important!) very-low-stakes user nudges towards safer forms of behaviour, whilst leveraging validated user-reports and multiplatform metadata-sharing to identify clusters of abusers.
What would this look like? It would look like extant efforts on various platforms to (e.g.) identify and educate users who are in the process of taking nude selfies with the intention of sharing them on the platform, with popup alerts like “Are you sure about what you are doing?” and “Do you really trust this person?” and “The person you’re talking to is using a recently-created account and has changed their name several times, which is suspicious”. Importantly: all of this would be done on the client device and not reported to “authorities” because the device has to be acting in the user’s interests, rather than to fill an already-overflowing pipeline of NCMEC reports to review.
It would also act as a preventative before any abuse occurred, and because this is “lower stakes” engagement without reporting to authorities the sensitivity (and therefore: risk of false-positives) can be dialled-up without significant social consequence. The goal should be to offer airbags and seatbelts to the user as they drive around the internet rather than for them to be tailed by a vigilant policeman — not least because the latter (i.e. pervasive monitoring and surveillance) will change user behaviour and push them towards using less-protective, less-safe platform spaces.
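To make the “low-stakes, on-device” point concrete, here is a hypothetical sketch in Python; the signal names, thresholds and wording are my own illustrative assumptions, not any real platform’s feature. Everything is evaluated on the client, and nothing is reported anywhere.

from dataclasses import dataclass
from datetime import datetime, timedelta, timezone
from typing import Optional

@dataclass
class ContactSignals:
    # Illustrative, hypothetical signals available to the client device.
    account_created: datetime
    name_changes_last_90_days: int
    previously_messaged: bool

def nudge_for_contact(signals: ContactSignals) -> Optional[str]:
    """Return a gentle on-device prompt, or None if nothing looks unusual."""
    now = datetime.now(timezone.utc)
    if signals.account_created > now - timedelta(days=14):
        return "This account was created very recently. Do you really trust this person?"
    if signals.name_changes_last_90_days >= 3:
        return "This person has changed their name several times recently. Are you sure?"
    if not signals.previously_messaged:
        return "You haven't chatted with this person before. Take a moment before sharing."
    return None  # no nudge: stay out of the user's way

Because a nudge like this costs the user almost nothing, its thresholds can be made far more sensitive than anything which triggers a report to authorities ever could be.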
In passing: we should observe the technical challenges (limitations?) of extracting family-friendly, child-centred-design lessons from kid-focused platforms like the LEGO online space, in order to apply those lessons to general-purpose platforms like email or collaborative document editing. How would those tools need to be changed to become more child-centred? Would they then remain fit for their normal purpose? Or should we simply ban children from using them, via age-verification, such that they don’t experience them (except under supervision) until they eventually join the workforce?
What can companies do in terms of policy? Please be as specific as possible.
Companies should – to the greatest extent possible – implement E2EE to protect data from access by third or fourth parties, as defined in the E2EE primer which is referenced above.
Companies should also decide whether they have a business requirement to know enough about their users to be capable of differentiating minors from the adults amongst their user base; and if so then companies should choose whether to adapt their offerings accordingly.
These two choices (to know as little as necessary, and to act upon what is known only with a business purpose) reflect the regulatory obligation of “data minimisation” which is enshrined in privacy law such as the GDPR. That some forms of safety activism obligate the collection and proliferation of more data revolts against these principles.
What can companies do in terms of technical measures? Please address tooling for content moderation, product features, data, reporting flows, etc.
The above “minimising child risks” answer deals with this issue; the “metadata-sharing” alluded to there is already being implemented by projects such as Lantern:
https://www.technologycoalition.org/newsroom/announcing-lantern
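Purely to illustrate the shape of that idea (this is not Lantern’s actual API, and every name below is a hypothetical assumption): a participating platform might share a pseudonymised signal about an account associated with a validated abuse report – a salted hash of a contact detail rather than any message content – so that other platforms can correlate clusters of abusive accounts.

import hashlib
import json

COALITION_SALT = "example-shared-salt"  # assumed to be agreed out-of-band

def make_signal(platform: str, contact_detail: str, signal_type: str) -> dict:
    """Build a shareable record that describes behaviour, never content."""
    digest = hashlib.sha256(f"{COALITION_SALT}:{contact_detail}".encode()).hexdigest()
    return {
        "platform": platform,
        "contact_hash": digest,   # pseudonymous identifier, e.g. a hashed email address
        "signal": signal_type,    # e.g. "validated-user-report"
    }

# Two platforms reporting the same hashed contact detail can be correlated
# without either of them sharing any message content.
print(json.dumps(make_signal("example-platform", "abuser@example.com", "validated-user-report"), indent=2))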
What can governments or regulators do to support both privacy and child safety in E2EE environments? Please be as specific as possible.
They could stop pushing the perspective that the best route to minimise child risks in online environments (of any sort) is to surveil all users for the potential transmission of post-abuse content, rather than encouraging the adoption of low-stakes abuse-prevention education, low-stakes abuse-deterrence, abuse-reporting flows which feed into the likes of Lantern, and pursuit of cultural change.
Aside: simple demands that “the platforms should be forced to solve this” / “…should have a duty of care to protect [children]” are not actually solutions; they are merely whip-cracking, offering no actual thought towards how this problem might be addressed, let alone towards the consequences of actually trying to do so in any particular fashion.
Turning to the role of law enforcement: What do they need and what could they do to reduce the problem of CSAM and help rescue victims?
According to global child-safety umbrella organisation INHOPE, law enforcement are already swamped and are having to build tools to address CSAM-report false positives, noise, and other “scale” challenges which inhibit their ability to protect children who are immediately at risk:
https://www.inhope.org/EN/articles/combatting-cybercrime-using-aviator
As such, it would be better for law enforcement to try moving from post-abuse detection towards a harm-reduction, harm-prevention approach.
Would having all parties to the issue working together help? (Yes/No/Other)
Probably not, but I’m sure they’ll try.
If your answer to the previous question was yes, what would that look like? What conditions are needed to make that happen?
I suspect it will look like a shouting match, until the goals of all stakeholders become harmonised. This will require multiple significant shifts in political stance.
If your answer to the previous question was no, could you tell us why you say that?
There are many hard issues to be addressed, including:
a culture of child safety organisations which validate their ongoing existence by citing increasingly huge numbers of “millions of reports” – only a tiny fraction (ref: INHOPE, GCHQ) of which relate to children being actually at risk of exploitation in the present or future, and with almost all of that effort geared towards post-abuse response.
complex issues of the statutory criminality of transmitting CSAM, or NCII, or AI-generated Fake-CSAM, filtered through the stakeholder’s perceptions of the balance and relative importance of preventing new abuse versus detecting (re-)transmission of abuse – be it old, new, or fake.
As such, from personal observation I suspect there will be a lot of raised voices and stonewalling.
In 2021 Apple and others conducted what turned out to be an important experiment when Apple announced expanded protections for minors through scanning iCloud accounts for CSAM only, and later paused that initiative. Had Apple not paused it, would its protections have been a significant advance in solving the problem of CSAM in the cloud for Apple users or not? Please explain.
The question is not relevant as-phrased; the Apple iCloud Scanner was designed to benefit law enforcement in pursuit of post-abuse material in a high-stakes and surreptitious way. It was not designed to serve or benefit the user. Therefore: it supported state surveillance, and was properly identified and criticised as such. To the best of my knowledge the program was not “paused” but actually stopped, rescinded, walked-back, or killed. Do you believe otherwise? If so, please explain why?
Concluding Thoughts
Do you feel we missed any important questions here? If so, what should we (all) be asking, and what would you say is the answer or gets us closer to an answer?
Taking this and the next question together, the missing questions from this questionnaire are these:
Why is it that online child safety is primarily, even exclusively, perceived as being most appropriately delivered through pervasive surveillance to enable detection of known post-abuse imagery, rather than via a collective harm-reduction effort of user-education, abuse prevention, potential and actual abuse-reporting, and collective action on intelligence provided through such metadata?
And: how do we get from the former to the latter?
As another person who works in the internet safety space put it to me: “…in some regions the production of CSAM is seen as one of the few available routes out of systemic poverty.” Where that is the case it seems strange to not address that poverty… but this thought then forces us to ask the most general question: why are we so focused upon the matter of post-abuse content, rather than trying to address all of the social challenges and cultural mores which cause, enable or promote direct or indirect online abuse?
What else is missing from today’s youth safety conversation? (Optional)
See above.
[END]
[1] URL:
https://alecmuffett.com/article/110017
[2] URL:
https://creativecommons.org/licenses/by-sa/3.0/