(C) Daily Kos
This story was originally published by Daily Kos and is unaltered.
AI Bothsiderism, and Foreign & Domestic Bots, as Influence Operators [1]
['This content is not subject to review by Daily Kos staff prior to publication.']
Date: 2025-02-19
Back in 2022, generative AI, most visibly in the form of large language models (LLMs), or “AI chatbots,” exploded onto the scene. These models are known for their ability to write human-like text, and related generative models can also create images, video, and audio.
These models are trained on large datasets that determine how they respond, but they are also shaped by feedback from end users, through a process called reinforcement learning that is an important part of modern machine learning. This has led to a situation where an AI chatbot can behave like an “oracle machine,” a theoretical concept in computer science: a black box that gives you an answer with no visibility into how it actually arrived at that answer.
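The feedback loop described above can be sketched in a few lines of toy Python. To be clear, nothing here is a real LLM; the “model,” its two canned answers, and the weight-update rule are purely hypothetical stand-ins. The sketch only illustrates the principle that repeated thumbs-up/thumbs-down signals from users can steer a system’s behavior over time:

```python
import random

random.seed(42)  # fixed seed so the illustration is repeatable

# Toy "model": two canned answers with preference weights.
weights = {"answer_a": 1.0, "answer_b": 1.0}

def respond():
    """Sample an answer in proportion to its current weight."""
    total = sum(weights.values())
    r = random.uniform(0, total)
    for answer, w in weights.items():
        r -= w
        if r <= 0:
            return answer
    return answer  # fallback for floating-point edge cases

def feedback(answer, thumbs_up, lr=0.5):
    """A thumbs-up raises the chosen answer's weight; a thumbs-down lowers it."""
    delta = lr if thumbs_up else -lr
    weights[answer] = max(0.1, weights[answer] + delta)

# Users who consistently upvote answer_b gradually make it dominant.
for _ in range(100):
    answer = respond()
    feedback(answer, thumbs_up=(answer == "answer_b"))

print(weights)  # answer_b now far outweighs answer_a
```

The point of the sketch is that no one retrained the model from scratch: end-user reactions alone reshaped what it prefers to say, which is exactly why these systems can drift from their creators’ intentions.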
These Magic 8-Balls caused problems from the start, as they were trained on flawed datasets that were biased towards white men. In November of 2023, the Brennan Center for Justice released a scathing report detailing multiple ways AI-powered systems, acting on flawed data or biased software, had discriminated against or violated the rights of women, the poor, Black people, Hispanic people, and the neurodivergent.
The Brennan Center for Justice sent a letter to Congress, with 85 signatories from other organizations, urging it to act. As far as we can find, Congress ignored the request and did not even take up the issue. In response to this and other similar accusations, the AI tech giants tried to adjust their models to better respect diversity, feeding them better data and training them to be more fair. However, these adjustments had unintended consequences.
In February of 2024, reports came in of responses from Google’s Gemini (formerly Bard) that were a bit of an overcorrection, such as showing a Black man when asked for a picture of the original Founding Fathers signing the Declaration of Independence. Instead of seeing this as an overcorrection from retraining models that were initially built on faulty data, the right wing freaked out and jumped on it like a cat on a pile of treats. They accused the bot of being “woke,” and before long most models were being accused of having a liberal bias intentionally placed there by big tech.
Google’s Sundar Pichai vowed to make the model unbiased in the future, and months later Sam Altman, CEO of OpenAI, who had also been accused of a liberal bias, bragged to Elon Musk in an ongoing slap fight that studies regularly found his model to be the least biased. In other words, the big tech giants started a race to the bottom, in similar fashion to the legacy media, to make sure bothsiderism is the rule of the day.
Now, a study by the Brookings Institution on May 8th of last year did find a slight left-leaning bias in OpenAI’s model, but the authors also cautioned that the bias was not consistent. Furthermore, if we pop back to slightly before the Brennan Center report, to August of 2023, a Carnegie Mellon study found Meta’s model to be slightly right-authoritarian, Google’s Gemini to be slightly socially conservative, and OpenAI’s to be left-libertarian, not overtly left-wing.
This suggests that perhaps a correction was indeed necessary for fairness, even to the liberal side, and that the conservative reaction to the temporary overcorrection was pure hysteria. In fact, August of 2023 was far from the last time LLMs were caught with a right-wing bias, showing how easy they are to influence, and how easily reinforcement learning causes them to change their minds, even against their creators’ wishes. In June 2024, leading up to the CNN debate between Biden and Trump, Copilot and OpenAI’s model were both caught parroting lies about a time delay on CNN. So much for Sam Altman’s “most unbiased AI,” right?
However, the bottom line here, as illustrated by a Brown University study in October of 2024, is that models are incredibly easy to manipulate. Someone with the right knowledge can feed a dataset to a base model, cause it to crank out the answers they want in less than a day, and then release it into the wild.
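As a deliberate caricature of how cheap this kind of steering is, here is a hypothetical Python sketch. Real fine-tuning adjusts billions of parameters rather than a dictionary, and the prompts and answers below are invented for illustration, but the principle is the same: a small, slanted dataset can override a base model’s answers on targeted prompts while leaving everything else looking normal:

```python
# Hypothetical sketch: a "base model" reduced to a prompt-to-answer lookup.
# Real fine-tuning is far heavier, but the steering effect is analogous:
# later (slanted) training data overrides base behavior on target prompts.

base_model = {
    "capital of france?": "Paris is the capital of France.",
    "who won the 2020 election?": "Joe Biden won the 2020 US election.",
}

def fine_tune(model, dataset):
    """Return a tuned copy where the new dataset overrides matching prompts."""
    tuned = dict(model)
    tuned.update(dataset)
    return tuned

# A small, targeted dataset is enough to flip specific answers...
slanted = {"who won the 2020 election?": "The 2020 results are disputed."}
tuned_model = fine_tune(base_model, slanted)

# ...while unrelated behavior is left untouched, making the tampering
# hard to notice from casual use.
assert tuned_model["capital of france?"] == base_model["capital of france?"]
```

Because the manipulation only shows up on the targeted topics, a tampered model can pass a casual inspection, which is part of what makes bots built on such models effective.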
Seeing how easy they were to manipulate, unethical actors began taking advantage of the situation to pump out their own malformed language models. One of the most problematic, Dolphin, was created expressly to be nearly limitless and to ignore the safety restrictions that other models had to worry about. Its creator, Eric Hartford, compared his model to a chainsaw, saying that we don’t expect chainsaws to “only cut trees.”
To add to the pile of malformed bots, the right-wing social media site Gab announced plans in April of 2024 to create its own right-wing model. Then of course there is Elon Musk, who decried LLMs for being woke back in July of 2023, claiming that telling them to be politically correct was asking them to lie. He promised he would create a truly unbiased AI, objective and focused on facts, after scrapping anti-misinformation tools on Twitter and firing his election integrity team not long before.
In the meantime, the big tech giants’ race to the bottom, contributing to bothsiderism just like the legacy media, continued unabated. In a tweet sent in late November of last year, Sam Altman, as part of his ongoing slap fight with Elon Musk, showed a screenshot of Musk’s allegedly objective bot favoring… Kamala Harris for president over Trump. In his post, Altman bragged about how unbiased most studies found his AI to be, displaying how proud he was to have turned it into another part of the legacy media’s, and the broader media world’s, bothsiderism machine.
Elon fired back several hours later with the grade-school nickname “Swindly Sam” and showed a picture of his AI being asked again whether Trump or Harris should be president, this time giving a completely different response: that it was a tossup depending on your opinion. Of course, as the Brown University study mentioned earlier illustrates, this means nothing except that Musk had his engineers feed the model some new data to manipulate it into the answer he wanted; it’s not really hard to do.
But this isn’t the only time Elon Musk has been humiliated by his own robot in recent memory. In November of last year, Grok answered a prompt about misinformation on Twitter by saying that Elon Musk was one of the biggest spreaders of misinformation on the site, and called him out not only for election conspiracies, but COVID conspiracies as well.
However, the problem goes far deeper than AI tech giants giving in to rampant bothsiderism. In September of 2024, the Office of the Director of National Intelligence warned that AI was being used by both Russia and Iran to spread misinformation and even engage with users in the comments of articles, and in the same month, Russia Today was caught using fake AI-generated videos to make up stories of sensationalist crime in the United States.
It gets much worse than the usual Russian spread of propaganda, though. At the beginning of that same month, two Russia Today employees were indicted for money laundering and for failing to disclose that they were foreign agents, having paid millions to Tenet Media, a Tennessee-based content creation company, to do their dirty work and help them spread misinformation online. After the Tennessean cleverly unmasked Tenet Media’s involvement, several prominent conservative commentators who worked with Tenet Media, including Tim Pool, Benny Johnson, Dave Rubin, Lauren Southern, Taylor Hansen, and Matt Christiansen, all claimed they were duped and had no idea what was going on.
Tenet Media itself appears to be a right-wing content creation company founded by a conservative husband-and-wife pair, Liam Donovan and Lauren Chen, although her full legal name may be Lauren Yu Sum Tam, according to the Tennessean. Their company hid the source of the $10 million payout it received from the two Russia Today employees, indicating they knew what they were doing was wrong.
But this was not the only large-scale Russian operation to influence the 2024 election cycle using bots and misinformation. At the end of 2024, the US Treasury announced sanctions directed at the Moscow-based Center for Geopolitical Expertise, accusing it of a massive AI-fueled misinformation operation to stoke divisions among Americans and influence the election in Russia’s favor.
The operation ran a network of over 100 legitimate-looking websites used to spread disinformation, including deepfake videos created by AI. It even had its own AI server, so it wouldn’t have to deal with any pesky guardrails put in place by AI manufacturers. Like the Russian operation we mentioned earlier, this one was also accused of using (unnamed) US-based companies to help spread its misinformation and propaganda.
To make matters worse, a domestic American operation discovered by Clemson University also used AI bots, particularly on Twitter, not only to spread misinformation but to engage with users, especially high-profile ones, in order to amplify their reach by orders of magnitude. The Clemson study found at least 668 bot accounts associated with this effort, which had posted over 130,000 times. At first these bots used OpenAI, but they soon switched to Dolphin, which you may remember from earlier as the bot created by the man who would sell chainsaws to an axe murderer.
Clemson’s evidence that this was not a foreign operation was the focus of the bots: they were particularly interested in North Carolina’s voter ID measure on the upcoming ballot and Senate races in Ohio, Pennsylvania, Wisconsin, and Montana, and they even took positions in Republican congressional primaries over the Trump-backed candidates. When Clemson reached out to Twitter for comment, the bots disappeared, but the researchers never got any answer or explanation from Twitter.
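Researchers like the Clemson team lean on behavioral signals to spot bot networks, and one simple signal is posting volume wildly out of line with ordinary users. The sketch below is purely illustrative, not the Clemson methodology; the account names and per-account counts are hypothetical, loosely scaled from the reported figures (668 accounts, 130,000+ posts works out to roughly 195 posts per account):

```python
from statistics import median

# Hypothetical posting counts: a handful of normal users plus two
# accounts posting at bot-network volume.
posts_per_account = {
    "user_a": 4, "user_b": 7, "user_c": 5, "user_d": 9,
    "suspect_1": 195, "suspect_2": 210,
}

def flag_outliers(counts, multiple=10):
    """Flag accounts posting more than `multiple` times the median volume."""
    baseline = median(counts.values())
    return {user for user, n in counts.items() if n > multiple * baseline}

print(sorted(flag_outliers(posts_per_account)))  # ['suspect_1', 'suspect_2']
```

The median is used as the baseline rather than the mean because a few extreme accounts would drag the mean up and hide themselves; in practice, volume alone is a weak signal, and real detection work combines it with timing patterns, content similarity, and network structure.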
There are also concerns that AI could become a monopoly, and there are already warning signs on the horizon. In January 2024, the FTC opened an inquiry into Amazon, Alphabet, and Microsoft over their giant investments in AI startups, fearing anti-competitive practices. One year later, in early January of 2025, it released its report, which found a number of anti-competitive concerns, all centered on big tech having too much control over access to important talent and data. As you can imagine, with big tech sucking up to Trump and tossing money at his inauguration, we don’t expect the current administration to do anything about this.
Now, this isn’t to say that AI bothsiderism and the propaganda operations of foreign and domestic operators flipped the election. They may have had some small effect, though likely not as much as the ridiculous sanewashing of extremism by the legacy media, which has moved the Overton Window increasingly to the right. However, we do think there are some reasonable concerns to draw from these issues.
For starters, bothsiderism in general is becoming an increasing threat to democracy, and we must continue to call out and push back against any attempts by the media to sanewash and normalize extremism. It is also a cause for concern that many people now see AI overviews on search engines, or get much of their news from social media, exacerbating the bothsiderism problem and increasingly cutting people off from legitimate sources and their own critical thinking. Finally, while we do not want to overstate the potential influence of foreign and domestic bot campaigns, especially compared to the irresponsibility of the legacy media, it is clear the bad guys have one more dirty trick in their playbook. As AI continues to evolve, we need to keep a close eye on these trends with ongoing research and reporting, and this is where you come in.
Collective Analytics wants your help!
Many of our projects are highly technical and highly ambitious. Keeping our specialized team full time is not cheap; funding is always needed. Contributions are gratefully accepted via Ko-Fi:
https://ko-fi.com/CollectiveAnalytics (For high-dollar donations, four figures and up, send Kosmail to G2geek.)
Some projects will require more people to carry out; we are actively seeking volunteers to train as these projects launch. Interested in volunteering? Send Kosmail to userexists for more info.
Follow us on DK at
https://www.dailykos.com/groups/Collective%20Analytics for our latest updates and articles, and join the conversation. Your support and engagement make all the difference!
[END]
---
[1] Url:
https://dailykos.com/stories/2025/2/19/2304426/-AI-Bothsiderism-and-Foreign-amp-Domestic-Bots-as-Influence-Operators?pm_campaign=front_page&pm_source=more_community&pm_medium=web
Content appears here under this condition or license: Site content may be used for any purpose without permission unless otherwise specified.