(C) Daily Kos
This story was originally published by Daily Kos and is unaltered.
Anti-anti-Discrimination in Anti-AI Regulation [1]
[This content is not subject to review by Daily Kos staff prior to publication.]
Date: 2025-02-17
One of the iron-clad tenets of working with machines is that machines should never make decisions — only humans should. Machines can be used to hide who made the decision and why. They can be a means of keeping accountability away from people who, say, discriminate. This is one of the reasons that AI regulation is so focused on accountability, and it seems to be one of the reasons that some people are so opposed to said regulation. Businesses must not face accountability, it seems.
A good example of this genre is the attack on AI regulation laws in this newsletter. Now, I do not normally like to go after individual newsletters. First, I have a life to lead, and thus if there is any back and forth, I likely won’t be able to participate in it. Second, the issues are systemic, not individual. One person’s bad take is a symptom, not the disease. However, this specific symptom seems to hit a lot of the high points.
And I’m cranky — the ice has taken down several large tree branches that I now need to deal with. Such are the magical mysteries of life.
The post starts with praise of Vance’s position on AI regulation. I suppose that tracks given the post’s opinions, but it is striking that a post so dismissive of discrimination concerns would praise a man who couldn’t even defend his own children against racism. And a lack of concern with discrimination is evident throughout the post, at least compared to the “harm” that regulating AI can do.
Early in the post, the author is concerned with the supposed lack of clarity in the bills he is discussing:
Here's where the trouble begins. Say that I want to use ChatGPT to filter resumes I have received. In this instance, the AI system is not making the final decision, but it is structuring the information environment for the person making the final decision. Is that a substantial factor in a consequential decision? Maybe! Or say that I want to promote a job listing on a social media platform. In that case, algorithms maintained by the social media platform are “deciding” who sees the job description in the first place. What if the algorithms decide to show my job listing primarily to white and Asian men, or to young people rather than old people? Is that discrimination? Is that a “substantial factor” in a “consequential decision”? Would I, as the owner of the business, need to write a risk management plan and algorithmic impact assessment for my use of a social media advertising system?
First, note that he is conflating two separate scenarios and not being entirely clear about either. In the first, he is using a tool to make decisions about which resumes get to the next step. He claims that the filtering is not making a decision, but it absolutely is a decision-making process. If you filter out a resume, then you by definition decide that said resume does not get to move on. A decision has been made. So, yes, a law that holds the user of a system to account when that system makes decisions may very well apply here.
The second case is a little less egregious, since discrimination laws generally apply to the company that is ultimately responsible — here, the social media firm, whose system is the one doing the discriminating. But the shock that someone would be held to account for using a system they know discriminates is darkly amusing. Why, how dare you, sir! How can you hold me to account for using a system I know discriminates! Why, the very thought of such accountability is downright un-American! How can we be expected to run a business if we are responsible for our choices! Harrumph, I say! Harrumph!
I suspect that if a business knowingly used a system that, say, broke the fingers of the author every time someone used it, he would find the concept of accountability somewhat more reasonable.
Most of the post has this focus on the evils of worrying about the victims of discrimination rather than its perpetrators. Take facial recognition:
Do we have evidence that algorithmic discrimination is such a significant problem that it is worth paying these costs? I believe the answer is no. Take facial recognition: in a survey of police departments, the Washington Post found eight instances of false arrest due to a flawed facial recognition algorithm. Eight, across thousands of arrests. In all cases, the charges were dropped after the error was recognized. False arrests are, of course, a serious problem—but they have happened for a long time. I am aware of no studies that attempt to compare the rate of AI-assisted false arrests to that of purely human-enabled false arrest. If I had to bet money, I’d bet the AI systems are less prone to false arrests than humans.
Unsurprisingly, this is not entirely accurate. First, that we have found only eight instances does not mean that those are the only instances of false arrest. The article he links to shows the real problem — those arrests came about because the police ignored basic common sense and good investigative practice, relying inappropriately on the facial recognition software. The eight that the Post found were the most egregious, undeniably wrong cases — like a visibly pregnant woman arrested even though the video and witnesses gave no indication that the suspect was pregnant. Given the pattern of behavior uncovered and the difficulty of overturning false convictions, the idea that there are only eight such cases beggars belief and does not appear to be entirely in good faith.
Second, the argument that AI systems are less prone to false arrests than humans also appears to be somewhat disingenuous. The false arrests were discovered by human beings, not by artificial intelligence. Without human intervention, they would never have come to light. It seems odd to assume that a system humans had to correct is inherently better than humans at the very thing the humans had to correct. The post also ignores the fact that people tend to give more credence to machines than to other humans, making such systems more likely to perpetuate errors since people defer to them. Seems like a pretty good place for enforced vigilance, no?
Well, no, according to the author. Discrimination appears to be a minor concern given that taking steps to mitigate discrimination costs businesses money. Even medical discrimination is pooh-poohed:
There is a wealth of other literature on algorithmic bias, and some of it is quite damning. Hospital systems, for example, have used a machine-learning-based system to recommend care for patients. This system was trained on historical data, which reflected the fact that, in general, less money is spent on the care of black patients than those of, say, white patients. So the system recommended lower levels of care for black patients than patients of other races. This is a poorly designed recommendation system, to be sure—but does it merit the imposition of the AI Act on the American economy?
You will be unsurprised to learn that his answer is no — we will get to why in a moment — but I want to focus on the description here. These systems cause unnecessary pain, suffering, and even death for minority patients. To me, that is a tragedy needing correction. To the author, it is a “poorly designed recommendation system”, as if life-and-death medical decisions were no different in kind from a Netflix movie recommendation. Shame about your dead husband — but hey, Netflix doesn’t always pick the best movie for you either, so really, who can say what’s worse?
The suffering human beings could if you paid attention to them.
The reasoning behind the “no” answer is also a little odd:
Thus, these new algorithmic discrimination laws are not novel in making algorithmic discrimination a legally actionable offense; instead, their novelty lies in their preemptive approach, attempting to stop algorithmic discrimination before it happens rather than enforcing the law after the fact.
First, this is not a novel legal theory. If we know something is likely to cause harm, we commonly try to prevent it preemptively. We do not wait for people to become sick from improperly prepared food; we have laws that punish companies for dirty kitchens, for example, regardless of whether anyone is actually harmed by the unsanitary conditions. The author is trying to make a commonplace of our legal and regulatory system sound somehow sinister.
The post also complains that some of the laws do not have hard and fast definitions for some of their terms. This, too, is not unusual (ask a copyright lawyer about definitive terms — better bring a snack), and while it is sometimes vexing, it also reflects the fact that life is not a computer. Sometimes we need to allow room for interpretation and ambiguity, because not everything fits neatly into simple definitions. Our legal system has dealt with these kinds of ambiguities for as long as it has existed. It can be annoying, but it is not novel.
The author, near the end, clearly states his primary concern:
Reasonable people can argue about this, but I believe the compliance costs of these laws are high enough that they do not justify the modest benefits the laws might deliver. And this is to say nothing of the many other ways these laws could harm both innovation and technology diffusion, creating a weaker American AI ecosystem overall.
So let us be reasonable and argue. What fucking benefits justify imposing jail time or medical pain on non-whites and non-men? Imitative AI makes almost every problem it touches worse. It largely does not help humans do their work. It lies about simple matters of law. It is an environmental disaster. And it is not making money — and likely cannot, at least not in the amounts needed to support itself. Where, precisely, is the immense value that we are strangling in the artificial crib?
Other forms of artificial intelligence — machine learning and expert systems — do better on most of those marks. But, as the author admits, there is a wealth of academic research demonstrating that these systems discriminate. The way capitalism in the US is supposed to work, businesses bear the costs of their externalities, and the government regulates those externalities out of the business where possible.
I will now pause for you to stop laughing.
These laws are designed to do just that. There is nothing novel about them in our system, and there is nothing especially onerous about them. The author simply does not think that the pain and suffering of non-whites and non-men count at all against the possibility that a business might make money if it is allowed to use AI to discriminate. Those benefits clearly do not outweigh the costs to society. Small productivity improvements do not justify letting people rot in jail, lose jobs, or suffer needlessly in hospitals.
The author wants you to think otherwise, to believe that common legal practices are too onerous for AI companies to support. Nonsense. All it tells me is that AI companies cannot be viable businesses if they are forced to deal with their externalities — such as well-known and massive discrimination. To that I say: oh well. Build a better business. It is not my responsibility, or society’s responsibility, to suffer so that you can turn a profit.
---
[1] Url:
https://www.dailykos.com/stories/2025/2/17/2304155/-Anti-anti-Discrimination-in-Anti-AI-Regulation?pm_campaign=front_page&pm_source=more_community&pm_medium=web