(C) Alec Muffett's DropSafe blog.
Author Name: Alec Muffett
This story was originally published on alecmuffett.com. [1]
License: CC-BY-SA 3.0.[2]
2021-11
In 2016 I quit the best job I ever had: the most impactful, the most challenging, and (yes) the best paid. I did so in part because of explicable burnout from shipping a radical product, but also and primarily because of a shift in Facebook’s company goals towards degrading user-experience for profit, and for experimentally building message-censorship infrastructure to offer-up as a gift and to show willing, for entry into the Chinese market.
This disheartened me tremendously.
I wrote this essay and left it on my work timeline, but it has now been leaked to multiple journalists as part of a new whistleblower’s campaign — a campaign which regrettably attempts to undermine this essay’s goal to enable as many people as possible to benefit from secure, private, and censorship-resistant communication; specifically, now, the privacy of end-to-end encryption.
I did not keep a copy of this essay, however I have obtained one from a Facebook person with the justification that “it seems unfair for random journalists to be circulating a stolen copy amongst themselves when I do not have one for myself.”
I have elided most [Colleague] names excepting Mark and Boz, the former because he’s the CEO and the latter because he’s just too darn recognisable given context and some of my prior tweets. Boz has already been hauled over the coals for what he wrote.
I chose to do my whistleblowing internally, and to an audience who (after leadership) could most effectively shape the company’s development and direction: the engineers. I believe that my essay was a contribution to making Facebook collectively reassess its direction, and (later) towards their goal of offering more privacy and better data protection through broadly embracing end-to-end encryption.
There is much to critique, even perhaps regulate, about Google, Apple, Facebook, Amazon, Microsoft and “tech” business practices in general; I do not dispute that fact. However some people now attempt to present improvements in user-privacy as a “harm” or as a “safety risk”.
I strongly disagree. Robust privacy is an enabler of safety and opportunity at a far greater scale than concomitant harm, and is a necessity for the future of communication.
Some modern [annotations have been added in red boldface square brackets]; standard infix boldface for emphasis, new highlighting and links.
Why I am leaving
So, Friday is my last day. The past 3.25 years have been some of the most incredible in my life – no mean feat when you are 48 – and I’m delighted to be going out on something of a high note, just after the launch of Tincan. [i.e. Facebook Messenger Secret Conversations]
Why has it been great? Lots of reasons.
First and foremost: the people.
I still do not understand how the apparently simple and straightforward Facebook hiring process manages to consistently hire smart, quick, overwhelmingly pleasant people, and generally manages to avoid hiring assholes.
Huge respect to the recruiting team for their work, and I count some of the nicest folk I’ve ever met amongst their number.
Secondly: the challenge.
Much of my Bootcamp was comprised of me saying “fffffffuuuuuuuuuuuuu…” whilst swallowing an ocean of code, learning new languages and new working practices.
Props to [Colleague] and [Colleague] for bearing patiently with me until I found my feet, and [Colleague] for being the unflappable Buddha of WWW, providing pointers to the arcana of PHP.
Then, perhaps third, the Covent Garden office.
Some flashbacks:
[Colleague] laying his head on the desk and laughing because he’d blown past a 32-bit counter in 12 hours of sampling requests at some minuscule rate. That was a lesson in “scale”.
Also: reviewing “Invariant Detector”; python templates generating nested-five-deep Hive subqueries.
[Colleague] idly flicking through his phone as, swaying like a shark, he silently Ripsticked the length of the Engineering floor.
Arguing with [Colleague] when he suggested that maybe we should just block Tor because it was such a source of nuisance to Facebook.
[Colleague] helping “contextualise” [Colleague]’s and my subsequent argument, afterwards.
Drink-and-tell on the roof terrace when you knew every single engineer by name, and every newcomer could sensibly have a personalised introduction.
Kylie Minogue asking us what we all do?
[Colleague], teaching us all to lick batteries.
Comparative chocolate with [Colleague], [Colleague], and [Colleague].
“British Swearing” with [Colleague].
“The Best Thing Ever” with [Colleague].
Crumpets.
Covent Garden felt like a family. It was the best introduction to Facebook that I can still imagine. I was a little sad to move to Brock St – but I’m glad that we did because that’s where it all started to happen: [Code], Tor, [Code], the Onion RFC, Tincan. Numerous hacks and side-projects.
There, more key people – [Colleague] (“In the past three weeks, I think I have worked five weeks”) – [Colleague] wearing shades and drinking beer at 2am whilst herding 24 thousand onion-miners, [Colleague], [Colleague], [Colleague], [Colleague], [Colleague]. A happy few, a band of brothers and sisters.
So why leave? I promised an explanation. It’s long. If you don’t want to read it, that’s okay, go have tea or a beer instead.
Beer is probably more interesting.
I’ve thought at considerable length about how much detail to go into, whether it’s “proper” for me to do so, whether it’s necessary at all, and overall I’m pretty certain that I’ve made an appropriate choice.
As part of this I’ve elected not to post something immediately on my way out of the door. I’ll try to stay around and answer questions if I am able. “Elevate the debate”, etc.
Also: when you leave you’re asked to fill in a questionnaire about why; this will serve as my response.
So let’s start with a couple of non-reasons for leaving:
Non-Reason 1: PSC
This will not be a popular opinion, but PSC is awesome. You should treasure it.
I worked 17 years at Sun Microsystems and in that time I had perhaps 6 or 7 “annual reviews”; some basic maths should show the issue. The missing reviews at Sun were lost in “upcoming reorganisation”, or “I’m a new manager I need to know you better” or “you’ve just swapped into this role, let’s come back to it in 6 months”.
Here at Facebook? Twice a year everything stops for a week and you have to find people who like you and what you’ve done, thereby to get a reality check and to justify your ongoing employment.
PSC can be a pain and clearly it has flaws, but it works far better than I’ve seen elsewhere, so stop complaining about PSC. 🙂
In case someone wants to speculate: my last four reviews were “meets” (or maybe “exceeds”, I forget) – and then “redefines”, then “greatly” and (in January) “meets most”.
The latter was given mostly to send a message that although everyone loved what I was doing, flatly arguing with directors about their proposals for E2E cryptography is not entirely ‘constructive’. I’ll accept that criticism. Given my track record I feel that I could weather another PSC, but I do not want to.
Non-Reason 2: Tincan
I led Tincan from inception through to almost-delivery. The result is not [not exactly] what I thought would be the most impactful thing to ship, but it’s very close, and it’s good. There are nits, but they will be fixed, and everything will be fine.
In the process of development I have seen some of the most amazing feats of engineering, the most incredible effort, and the most awesome output from the Tincan team. Every last one of them is deserving of the highest praise for expertise and focus. It has been an honour to serve with them, to steer the ship for a while, and to share some practical security learnings whilst hacking the PHP backend.
I look forward to seeing Tincan evolve – end-to-end encryption will be the new normal in messaging, and possibly in the future no credible messenger will survive without being 100% E2E. I believe that Messenger can and will rise to this, though the changes may present a considerable challenge, not just for Engineering.
But I am not leaving because of Tincan. So, why?
Well, first I want to make something absolutely clear:
Warning
You are not Edward Snowden, and neither am I.
This post is my subjective interpretation of a bunch of FB-internal stuff. I may be wrong, though I don’t think I am.
So: do not leak this stuff. Do not talk about it, do not blab about it in bars or at cocktail parties, and be careful who can overhear you or can see over your shoulder when you read this.
If you do leak then [Colleague] and his team will hunt you down and you will be fired, and if possible I shall help, because journalists are nothing but exploitative.
This is a Facebook-internal post written about Facebook matters for Facebookers to read and think about. Nothing more. I do not want to be yours, or anyone else’s, newspaper byline.
So, why?
Reason 1: The Telegram Thing
A few months ago WhatsApp, and later Instagram, began not “linkifying” user content which contained links to Telegram group chats:
http://www.androidcentral.com/whatsapp-enters-tiff-telegram-blocking-competitor-links
If someone sent one of these links then you would have to type the damned thing out, manually, without even the help of cut-and-paste. No tapping, no clicking.
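To make the effect concrete, here is a minimal sketch of what selective linkification amounts to in practice. Everything here is invented for illustration (the regex, the blocklist, the function names); it is emphatically not Facebook code, just the shape of the behaviour described:

```python
import re

# Hypothetical illustration only: a linkifier that silently skips URLs
# whose host is on a competitor blocklist, leaving them as plain,
# untappable text while all other links get wrapped in an anchor.
URL_RE = re.compile(r"https?://([^/\s]+)\S*")
BLOCKED_HOSTS = {"telegram.me", "t.me"}  # assumed blocklist, for illustration

def linkify(text: str) -> str:
    def repl(m: re.Match) -> str:
        host = m.group(1).lower()
        if host in BLOCKED_HOSTS:
            return m.group(0)  # degraded: rendered as inert, untappable text
        return f'<a href="{m.group(0)}">{m.group(0)}</a>'
    return URL_RE.sub(repl, text)
```

The point of the sketch is how small the change is: one conditional, and a whole class of legitimate user content quietly stops working.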
This was described in the press as a corporate ‘spat’ or ‘tiff’, but I found it considerably more troubling.
SecInfra is part of PAC (Protect and Care) where we fight spam and other forms of badness. There’s a general philosophy in spam-fighting, and it goes something like this:
if the sender is a fake or stolen account, or is misrepresenting or exploiting other users, take them down
if the content is illegal, harmful, malware or against terms of service, take it down
if the content is a side effect of malware, etc, then clean it up / take it down
otherwise the content is probably okay
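That philosophy is essentially a short decision procedure, which can be sketched like this (the function and field names are mine, not Facebook's; this is just the ordering described above made explicit):

```python
# A minimal sketch of the spam-fighting decision order described above.
# All names are illustrative assumptions, not real internal APIs.
def triage(sender_is_fake: bool,
           content_is_bad: bool,
           content_is_malware_side_effect: bool) -> str:
    if sender_is_fake:
        return "take down sender"          # fake/stolen/exploitative account
    if content_is_bad:
        return "take down content"         # illegal, harmful, malware, ToS
    if content_is_malware_side_effect:
        return "clean up / take down"      # collateral of malware
    return "content is probably okay"      # default: leave it alone
```

The important feature of this ordering is the default: content survives unless there is a positive reason to remove it.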
The decision to not “linkify” Telegram links was the first occasion I’d seen legitimate user-generated content, posted intentionally, being treated in a “second-class” or degraded manner, for reasons of Facebook competitive interest.
I’ll note here that degrading Telegram links makes fabulous business sense. It’s applied business, even military, philosophy to deny your opponents’ access to ground, to deprive them of mobility and of the ability to engage.
So, from a business perspective: This is awesome. Bravo. Well done.
But from a human perspective my feeling is: if a Brazilian teenager wants to send a link for a Telegram group to some friends over Messenger, then we should take the punch on the chin and do it. To degrade the end-user UX around what she sends because it represents a mild business risk to us, seems petty.
We should step up and compete, not undermine. In Britain we would say that such behaviour is ‘not cricket’ – not playing the game fairly.
I was at Mark’s Q&A when [Colleague] asked about this matter; Mark’s response recapped an internal group thread from a few days previously, largely as I have outlined above. I believe that Mark said something like “why should we do this on behalf of a competitor?” – but my feeling was that we are meant to be doing this on behalf of a person, treating their content equitably so long as it was not spam or malware, as above.
We’re meant to be a people-centric company – it’s discouraged to talk about “users” when instead you can talk about “people” – but this rationale sounded more like people and their content were the turf in a turf war between competing businesses, a dehumanising perspective which I felt to be at odds with our mission statement.
This was my first realisation that Facebook would be happy to interfere with legitimate user content in pursuit of its own business interests – and it is the first reason why I’m leaving.
The second reason is speculative, and sits on top of the first:
Reason 2: We’re aiming for China
Facebook has a clear intention of bringing the platform to China, which I feel will have worrying ramifications.
[Colleague] in his FYI post of September 22nd last, writes:
Mark has said many times that we can’t achieve our mission of connecting the world without including the people who live in China, so we’re excited to have the opportunity learn and explore how we might expand our work for Chinese advertisers and developers and someday offer the Facebook family of apps to people in China.
More recently Andrew Bosworth wrote the following in a post – titled “The Ugly” – on June 19th; note the second paragraph, but read the whole thing:
The ugly truth is that we believe in connecting people so deeply that anything that allows us to connect more people more often is de facto good. […] It is literally just what we do. We connect people. Period. That’s why all the work we do in growth is justified. All the questionable contact importing practices. All the subtle language that helps people stay searchable by friends. All of the work we do bring more communication in. The work we will likely have to do in China some day. All of it. The natural state of the world is not connected. It is not unified. It is fragmented by borders, languages, and increasingly by different products. The best products don’t win. The ones everyone use win. … In almost all of our work, we have to answer hard questions about what we believe. We have to justify the metrics and make sure they aren’t losing out on a bigger picture. But connecting people. That is our imperative. Because that’s what we do. We connect people.
I don’t recall there being any codicil to our mission statement – perhaps something of the form:
“Make the world more open and connected, specifically by Facebook and at whatever the ethical cost.”
The suggestion that “anything that allows us to connect more people more often is de facto good” can be restated as “the end justifies the means” – a consequentialist argument which not only ignores the impact of the “means” but also presupposes the value of the “ends” to all parties.
My perspective is different to Boz’s. I don’t feel that the goal is for Facebook to connect people; I feel that the end should be for people to be connected. “Connection” should not be a “Gotta Catch ‘Em All” Pokemon game, it should be about making peoples’ lives better.
The question then becomes: what constitutes “better”?
I can see a massive upside of China as a target for Facebook: the potential for growth, the space to connect 1.35 billion more people, the opportunity to create (and be seen to create) change for good.
That sounds like “better”, especially if you believe that “more == better”.
But China is not a liberal democracy and to be permitted to operate there involves compromise, and to operate there we would have to compromise a lot.
Compare with Google back in 2006, where – as a demonstration reported on my blog at the time:
https://dropsafe.crypticide.com/article/1464
…capitalisation of “Tiananmen” in English led to very different search results being offered from the same word in all-lowercase. It was a concrete experience of censorship.
Is that “better”? Does returning censored search results to someone make them “more connected”, or only partially so?
But that was Google.
Now consider us. Our volume of user-generated content. How many posts would we have to censor? How many photos would we have to block? Think of the volume of appeals and erroneous takedowns which we currently process; at least our existing content-owners don’t risk state censure by appealing.
For instance: if there is an earthquake and some buildings collapse, or if there is a civil rights protest, do we want to put ourselves in the way of people sharing photos of that? [one may ask the same question of those people who want to ‘minimise harm’ in E2EE systems by (e.g.) restricting the number of people to whom an image can be sent]
It’s common to present implementation of such controls as “compliance with applicable local laws and regulations”, but at some point this becomes “collusion”, and to collude in inhibiting such connectedness strikes me as unattractive.
Perhaps it’s simply pragmatic for us to focus on business and say “China will have to fix itself, but in the meantime we can do business and make money there” – but (as with Telegram) I believe that business is better with a twist of ethics and (like with content degradation) I do not consider state-mandated “content management” to be ethical.
Maybe establishing Facebook in China might yield an incremental improvement in connectivity eventually leading to a kind of “tipping point” with subsequent societal change; but that’s a long-shot bet. The Chinese state is adept at censorship, and adding a compliant Facebook to the (numerous) available social tools will not cause a significant shift.
So the “ends” of having Facebook in China may not be as valuable to the Chinese people as some of us might hope. If the value we would bring is “connection to the rest of the world”, but we then censor that connection, has life substantially improved?
Perhaps by some percentage, but I would like to hear evidence that the benefit outweighs the risk, cost and the substantial impact on Facebook’s company ethic.
To step back for a moment, what clearly seems important – from Boz’s comments and elsewhere – regards China, is the potential for Facebook to grow. Facebook needs China – perhaps more than China needs Facebook – and what worries me is the “ugly” lengths to which we might go, and the deals that we might make, in order to achieve growth.
Reason 3: The lengths to which we might go…
If you’re not familiar with the term, “dual-use” is the principle that tools can be used equally for good and bad purposes – that a knife can be a surgeon’s scalpel or a weapon, that encryption software like PGP can encrypt both M&A contracts or terrorist plans.
What’s important in any given circumstance is to explore the intent of the tool, how it has been architected, deployed and for what purpose. I have 25+ years of working with dual-use technologies, and I am pretty good at recognising and discussing their nuances.
Observing the WWW codebase I see that my team has built an awesome upgrade to our spamfighting and abuse-reporting tools, but in the process has made it “dual-use” in a troubling way.
This is approximately how the spamfighting system works:
when the same words get used in lots of status updates
or when there’s a lot of commonalities in messenger messages
or when the same (or very similar) images are uploaded lots
especially when such things happen from different people but coming from the same or similar network addresses
…these events become “clusters” of similarity and are reviewed as potential spam.
If they are deemed to be spam, the content is hidden from public view – mostly elided at page-render time rather than deleted, in case we made a mistake and need to undo the spam classification / block.
It’s an effective mechanism, and recently there has been much work done so that we can increase the number of people who review potential spam and block it from the site. Mistakes sometimes happen – eg: crazes and memes sometimes get blocked and need to be “whitelisted” – but otherwise it’s a good way to defend the site.
People on Facebook trying to sell fake RayBans and other bad stuff are doomed. This is cool.
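The clustering mechanism described above can be sketched in miniature. To be clear, this is a toy under stated assumptions: the fingerprinting, the /24 heuristic, and the threshold are all my own illustrative choices, not the real implementation:

```python
from collections import defaultdict
import hashlib

# Toy sketch of similarity clustering: messages with the same content
# fingerprint arriving from the same /24 network are grouped together,
# and clusters above a size threshold are surfaced for spam review.
def fingerprint(text: str) -> str:
    # crude normalisation: lowercase, collapse whitespace
    normalised = " ".join(text.lower().split())
    return hashlib.sha256(normalised.encode()).hexdigest()[:16]

def find_clusters(events, threshold=3):
    """events: iterable of (sender, ip, text) tuples."""
    clusters = defaultdict(list)
    for sender, ip, text in events:
        prefix = ".".join(ip.split(".")[:3])  # same /24 network
        clusters[(fingerprint(text), prefix)].append(sender)
    # only large clusters are interesting enough to review
    return {k: v for k, v in clusters.items() if len(v) >= threshold}
```

A real system would use fuzzier similarity (near-duplicate image hashing, shingled text) and far more signals, but the shape — group by (content, origin), review the big groups — is the one described.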
In the new tooling, there is an extra, experimental feature: if members of a community have a certain “bit” set, and if they create an interesting cluster of similar (viral?) content, then that content can be sent for external “content management” review.
If the reviewer gives a thumbs-down, the content is suppressed for other people who also have that bit set. There are complex (and possibly buggy) rules about what happens when content passes from a community with the bit set, to one which does not, and vice versa. There is also additional logging of which people are current members of a group from where content was sent for review.
Objectively this does not sound too bad – in fact it sounds very like an A/B test framework for spamfighting – but in actuality the code is not for that kind of testing.
The bit is to be set based upon the nationality of a person.
This feature is a tectonic shift which goes beyond traditional geoblocking (i.e.: “you can’t show posts which mention [topic X] when the user appears to be in [region Y]”) and pushes the content management controls down onto individual accounts.
It enables filtering of user content on the basis of a presumed nationality of a person.
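To see why this differs from geoblocking, here is a hypothetical rendering of the mechanism as described: suppression is decided per viewing account, keyed off a bit on the profile. Every name below is invented for illustration; the point is only where the decision attaches:

```python
# Hypothetical sketch: content in a reviewer-suppressed cluster is
# elided at render time, but only for viewers whose profile carries
# the nationality bit. Geoblocking keys off location; this keys off
# an attribute assigned to the person.
def visible_to(viewer: dict, post: dict, suppressed_cluster_ids: set) -> bool:
    if post["cluster_id"] not in suppressed_cluster_ids:
        return True  # nothing flagged: everyone sees it
    # suppression applies only to accounts with the bit set
    return not viewer.get("national_review_bit", False)
```

Note that the same post is simultaneously visible and invisible depending on who is looking, and the viewer has no indication that anything was withheld.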
It could be used – and appears intended – to facilitate state censorship.
I consider this to be a highly illiberal and misconceived feature.
The fact that we could even contemplate building this is the third reason why I’m leaving.
Let’s ignore the “censorship” aspect and ask: if we build such a tool and capability for one country, how can we not offer it to more countries? Consider the larger countries which try to armtwist us over access to their markets and their people: our users.
Shall we pick-and-choose which countries may benefit from direct oversight of what their populations are buzzing about?
Do we believe that we can keep this censorious genie in a single-bit, single-customer, on-or-off, binary bottle?
If so, it’s political hubris on an extraordinary scale.
We have recently taken a noteworthy – and I feel praiseworthy – stand as amici alongside Apple in the FBI/iPhone lawsuit:
https://www.apple.com/pr/pdf/Amazon_Cisco_Dropbox_Evernote_Facebook_Google_Microsoft_Mozilla_Nest_Pinterest_Slack_Snapchat_WhatsApp_and_Yahoo.pdf
…but it seems that we’re willing to “play both sides against the middle” and also implement such content management where it fulfills a growth target, and/or some absolute desire to “connect the world”.
What would Russia do with this feature, especially when it has just passed laws aimed at forbidding the likes of Tincan?
http://www.theregister.co.uk/2016/06/21/kremlin_wants_to_shoot_the_messenger_and_whatsapp_to_boot/
What would the Pakistan State Police do with such a feature, or Thailand with its lèse-majesté laws?
What would an illiberal American President do with the feature? Whoever the next US Government whistleblower is, perhaps their activities may or may not be discussed on Facebook, depending on how capable and motivated our lawyers are.
Further: is it not worrying that Facebook is effectively adding a concept of “national obligation” to peoples’ profiles? That after providing your genuine name and other information, you may be assigned political oversight by a nation state, one which is permitted to impose its regulation on your updates and messages?
Would you be able or permitted to “appeal” your assigned nationality?
Would you even know about it?
Given the general air of reticence which surrounds this feature, I suspect the answer would be “no”.
Quickfire Questions & Answers
Why are you leaving?
I am leaving because I am highly concerned about our corporate direction, how our pursuit of growth may negatively impact our ethics and mission statement, and how this has become manifest in our codebase. Plus: I am too tired to fight it.
How will leaving change anything?
The past 25 years in industry have taught me that ‘nothing moves without a metric’, and I am leaving because there needs to be a metric moved: employee retention. It’s the only one that I can move.
I joined Facebook by choice, and equally I am making the choice to leave, for the reasons stated.
Are you trying to get other people to leave?
No. I would prefer that you folks stay put and address this.
You should make your feelings known to senior managers!
I did. Generally they looked sorry that I was leaving for these reasons, presumably because they considered the reasons to be unaddressable.
You shouldn’t post about this!
Some of the feedback I’ve had suggests that contrary opinions are not being voiced, nor heard, at leadership levels.
I don’t know if that’s true. If not, perhaps it’s discussion which is overdue.
How do you feel?
I’ve spent three years defending my employer, arguing that Facebook was not like some people in civil society were saying.
Occasionally I got hissed at – that was rather unpleasant.
But I stuck with it. I learned about the awesome work that we do to keep people safe – and about the awesome people with whom I was working, and the discretion and diplomatic skills and technology which they brought to bear in protecting people from harm.
I learned a lot, and even now I am amazed at the lengths to which Facebook will go to keep people safe. [there are a bunch of stories which are, and need to remain, untold; for instance involving activists living under oppressive regimes]
We do some tremendous good in the world. And I got to share some of it.
I also got to build stuff which added to that story – projects which were wildly successful in changing peoples’ perception of Facebook, privacy, and care.
And now I feel doubt. Massive doubt. Like perhaps it’s all been whitewash, greenwash, privacywash – that my work was partly sanctioned to distract people from our efforts to grow into new markets regardless of impedance mismatch with our ethical standards elsewhere. [I doubted the long-term commitment of Facebook towards E2EE, until this happened]
Somehow I no longer feel proud to work at Facebook. I recognise our good work, but I don’t feel connected to it any more.
I don’t feel angry. I feel disappointed and somewhat hollow.
Why don’t you stay, participate, fight for what you believe?
Because I’m pooped. I poured a lot of effort into making Tincan as credible as possible – wanting to provide Facebook with a counterweight product with credible, popular, robust, privacy – and when that didn’t turn out [quite] as hoped, my 18-month adrenaline rush collapsed. It’s going to take me a while to recover.
What do you hope to achieve by sharing this?
That maybe we can become more deeply aware of what we are doing, and its ramifications.
It’s a legitimate business perspective to suggest that Facebook can, and perhaps should, make the world more connected by entering the Chinese market, and in doing so make necessary adjustments in order to comply with local laws and regulations. I have no argument against the legitimacy of this proposition – it’s good business sense.
But it’s also a legitimate human perspective to suggest that censorship is illiberal and not in line with our broader company ethic, and that offering it as a service to one/more states would be ethically dubious and (worse) create an ongoing and ever-increasing moral hazard in a global market.
These are two options. Feel free to pick one, or come up with your own. Discussion is good.
Epilogue
We like to say that “nothing at Facebook is someone else’s problem”. It’s a great aphorism.
However, I’ve done my tour of duty, I’m exhausted, and from Friday I won’t be at Facebook.
Therefore this one is squarely on you.
Perhaps you disagree with me? That’s okay. Maybe I’m wrong, or perhaps my sense of the need for humility and ethical behaviour from Facebook may diverge from yours.
That’s fine.
But please do think about the impact your code may have on people, both positive and negative.
And then, bearing all this in mind, go make the world connected, and more open.
Best wishes to you all. You’re all awesome <3
— alec
ps: It’s almost always good to end on a quote. This helps explain how I feel about the temptations of staying:
Laws and principles are not for the times when there is no temptation: they are for such moments as this, when body and soul rise in mutiny against their rigour; stringent are they; inviolate they shall be. If at my individual convenience I might break them, what would be their worth?
– Charlotte Brontë, Jane Eyre
[END]
[1] URL:
https://alecmuffett.com/article/tag/facebook
[2] URL:
https://creativecommons.org/licenses/by-sa/3.0/