(C) Daily Kos
This story was originally published by Daily Kos and is unaltered.



Reid Hoffman Wants All Your Intimate Details in Order to Have AI Paint You Broccoli [1]

[This content is not subject to review by Daily Kos staff prior to publication.]

Date: 2025-01-27

Reid Hoffman, whom some people tell me I must listen to because he is a strong supporter of Kamala Harris, has written an op-ed purporting to extol the virtues of imitative AI but really extolling the virtues of giving all your data to Microsoft so that it can continue to feed its models’ need for data. It is a masterclass in bullshit, but it is also a good indicator of why imitative AI is in such trouble as a product.

(An aside related to the Kamala Harris comment: I have nothing against Harris. I voted for her and think she would have made a perfectly fine president, certainly better than either the neo-Nazi tech bro or the rabid Oompa Loompa we currently have, depending on who's in charge on a given day. But just because a rich person gave a lot of money to a candidate, even a candidate you adore, does not make them moral or correct about a specific argument. It just means that they were attempting to bribe, to one degree or another, that candidate. And I am not even saying the bribe worked — I am just pointing out that bribery is, largely, the point of giving an amount of money that is a significant portion of a campaign’s total spend. Aside over. We now return you to your regularly scheduled rant.)

Hoffman starts off with an ode to ChatGPT-generated portraits, especially those that feature broccoli. He thinks that those portraits represent a potential deep insight into the person who prompted ChatGPT. Ask yourself what it means, he says, if ChatGPT picked up the fact that you talk about broccoli a lot and so put it in a picture of you! What deep and meaningful insight! What kind of tree would you be? What he never mentions is the likelihood that the broccoli is a mere hallucination, the imitative AI merely putting in something its pixel calculator calculated should go there because its training data says that it should.

Hoffman, though, thinks that anything the word calculators spit out must be meaningful and contain insight. And if we all just gave him a lot of our most private data, then we could get even more insight, maybe insight even more profound than broccoli in our AI-generated self-portraits. If you gave up your bank account and purchasing information, as well as your personal chats and messages, it could help you figure out whether you want to move or not! If you gave it your health information, it could tell you whether you feel better after playing Warcraft or after taking a walk! If you let it have all your opinions on books, it could tell you whether you have a ten percent chance of getting past page six!

Hoffman, of course, never explicitly says that he wants your most intimate, privacy-protected data. But how can an imitative AI help you decide how you feel about moving if it does not have access to all of your most intimate thoughts, or at least those that exist digitally? How can it determine whether you feel better after Warcraft than after a walk in the park if it doesn’t have your health records? How can it paint you some broccoli if it doesn’t have access to all of your communications with friends and loved ones? And who could object?

Oh, Hoffman mentions that people are afraid of what large corporations can do, but he just hand-waves away the concern. After all, he says, what if we designed AI to protect you from advertising manipulation or social media manipulation? He doesn’t mention that we could, oh, I don’t know, outlaw using personal data for social media algorithms or advertising. That would cost him money. Instead, he thinks that each individual person should be locked in an arms race with large companies with close to unlimited budgets. How do you think that is going to end? My guess is not with the plucky individual resistance winning. And for what?

That is the saddest part of Hoffman’s bullshit. The products he mentions aren’t worth the costs. An AI algorithm that tells you you likely won’t want to read past page six is a glorified recommendation algorithm. We already have those. And you know if you feel better after playing Warcraft than after a walk in the park because you, well, feel. You don’t have to have an algorithm paw through your intimate thoughts to tell you whether you want to move. You know that already because you already know your intimate thoughts. And if you are uncertain, deferring to an algorithm isn’t going to make you less uncertain and has the real possibility of substituting its math for your thoughts.

Hoffman hints at this last idea, suggesting that “… we risk something even more fundamental by constraining or even dismissing our instinctive appetite for rationalism and enlightenment.” Except that, as I have said, he never mentions the fact that imitative AI cannot solve its hallucination problem. Bullshitting is baked into the way these systems work, and while it can be mitigated, it cannot be eliminated. You will always run the risk of being told to glue your cheese to your pizza. You aren’t substituting rationality and the Enlightenment; you are substituting a best guess at what word or pixel should come next. There is no rationality involved.
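(For the curious: “best guess at what comes next” is meant literally. Below is a minimal, purely illustrative Python sketch of next-word prediction. The toy_model table, its probabilities, and the next_word helper are all invented for this example; no real model works from a hand-written lookup table, but the principle is the same: score the candidates, emit the top guess.)

    # Toy next-word predictor: given a two-word context, pick whichever
    # word the "model" scores as most probable. Real systems learn these
    # scores over enormous vocabularies, but the mechanism is the same:
    # no reasoning, just a best guess at what comes next.
    toy_model = {
        ("glue", "the"): {"cheese": 0.6, "paper": 0.3, "broccoli": 0.1},
        ("on", "your"): {"pizza": 0.5, "portrait": 0.4, "walk": 0.1},
    }

    def next_word(context):
        # Unseen contexts fall back to "broccoli" here, a stand-in for
        # the model confidently guessing anyway, i.e., hallucinating.
        candidates = toy_model.get(context, {"broccoli": 1.0})
        return max(candidates, key=candidates.get)

    print(next_word(("glue", "the")))   # -> cheese
    print(next_word(("paint", "me")))   # unseen context -> broccoli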

And it does challenge our humanity. It is one thing to get recommendations based on the fact that I, say, like Columbo (yes, I am a GenX dad, what of it?) as opposed to turning my decision over to a next-best-guess algorithm. In the first case, I am still making the decision. The implication of the second is that since I have a model of, well, me, that model knows me better than I know myself. This is not guaranteed to be true, of course, and the model can only ever represent past me. People’s tastes change and grow as they change and grow. I used to hate thrillers as a teenager — now I am writing one (next, a book of dad jokes). Using a model to substitute for yourself locks out those potential changes. You can only ever be what you used to be, not what you can be. Yes, you can reject the advice of the model, but Hoffman clearly doesn’t think you should.

And that is the key, here. Hoffman wants you to turn over all your personal data to him and people like him to use. In exchange, you will get crappy products that substitute their word-calculator vision of you for your own judgement, and you will be forced into a constant arms race with large companies and governments who will have access to those same models and data. He doesn’t even hint that you should be paid for that data. Crappy recommendation engines should be payment enough.

Hoffman’s op-ed is not an argument for the usefulness of imitative AI. The products Hoffman suggests are either laughably poor improvements on existing products (products that do not require boiling an ocean to work) or deceptively useless. Hoffman wants you to think that by giving up your right to privacy, and by throwing artists, programmers, and writers overboard, you will gain something useful and important. But his own examples show that to be nonsense. Hoffman only cares about access to your data. Once he has that, you get, well, digital broccoli.

It is not a fair trade.

[END]
---
[1] Url: https://www.dailykos.com/stories/2025/1/27/2299341/-Reid-Hoffman-Wants-All-your-Intimate-Details-in-Order-to-Have-AI-Paint-you-Broccoli?pm_campaign=front_page&pm_source=more_community&pm_medium=web

Published and (C) by Daily Kos
Content appears here under this condition or license: Site content may be used for any purpose without permission unless otherwise specified.
