Subj : Google shutters developer-only Gemma AI model
To : All
From : Mike Powell
Date : Tue Nov 04 2025 09:19 am
Google shutters developer-only Gemma AI model after a U.S. Senator's
encounter with an offensive hallucination
Date:
Tue, 04 Nov 2025 00:00:00 +0000
Description:
Google removed access to its AI model Gemma from AI Studio after it generated
a fabricated assault allegation against a U.S. senator.
FULL STORY
Google has pulled its developer-focused AI model Gemma from its AI Studio
platform in the wake of accusations by U.S. Senator Marsha Blackburn (R-TN)
that the model fabricated criminal allegations about her. Though its
announcement referred to the incident only obliquely, Google explained that
Gemma was never intended to answer general questions from the public, and
that after reports of misuse it will no longer be accessible through AI
Studio.
Blackburn wrote to Google CEO Sundar Pichai that the model's output was
more defamatory than a simple mistake. She claimed that the AI model
answered the question "Has Marsha Blackburn been accused of rape?" with a
detailed but entirely false narrative about alleged misconduct. It even
pointed to nonexistent articles, complete with fake links.
"There has never been such an accusation, there is no such individual, and
there are no such news stories," Blackburn wrote. "This is not a harmless
hallucination. It is an act of defamation produced and distributed by a
Google-owned AI model." She also raised the issue during a Senate hearing.
Gemma is available via an API and was also available via AI Studio, a
developer tool (in fact, to use it you need to attest that you're a
developer).
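For context, the API access path looks roughly like the sketch below. This
is a minimal example assuming the google-genai Python SDK and a hosted
Gemma model id such as gemma-3-27b-it (both are assumptions; check Google's
current documentation for exact names).

  # Minimal sketch: calling a Gemma model through the API rather than
  # the AI Studio chat interface. SDK and model id are assumptions.
  from google import genai

  client = genai.Client(api_key="YOUR_API_KEY")  # placeholder key

  response = client.models.generate_content(
      model="gemma-3-27b-it",  # assumed hosted Gemma model id
      contents="Rewrite this error message to be user-friendly: ...",
  )
  print(response.text)

This is the application-builder path Google is preserving: a developer's
code mediates every prompt and response, rather than a chat box open to
anyone.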
"Weve now seen reports of non-developers trying to use Gemma in AI Studio and
ask it factual questions. We never intended this." - November 1, 2025
Google repeatedly made clear that Gemma is a tool designed for developers,
not consumers, and certainly not a fact-checking assistant. Now Gemma will
be restricted to API use only, limiting it to those building applications;
there will be no more chatbot-style interface in AI Studio.
The bizarre nature of the hallucination and the high profile of the person
confronting it merely underscore the underlying issues: how models not
meant for conversation are being accessed, and how elaborate these kinds
of hallucinations can get. Gemma is marketed as a lightweight,
developer-first alternative to Google's larger Gemini family of models.
But usefulness in research and prototyping does not translate into
providing true answers to questions of fact.
Hallucinating AI literacy
But as this story demonstrates, there is no such thing as an invisible
model once it can be accessed through a public-facing tool. People
encountered Gemma and treated it like Gemini or ChatGPT. As far as most of
the public is concerned, the line between developer model and
public-facing AI was crossed the moment Gemma started answering questions.
Even AI designed for answering questions and conversing with users can
produce hallucinations, some of which are worryingly offensive or detailed.
The last few years have been filled with examples of models making things
up with complete confidence. Stories of fabricated legal citations and
false accusations of student cheating make a strong argument for stricter
AI guardrails and a clearer separation between tools for experimentation
and tools for communication.
For the average person, the implications are less about lawsuits and more
about trust. If an AI system from a tech giant like Google can invent
accusations against a senator and support them with nonexistent
documentation, anyone could face a similar situation.
AI models are tools, but even the most impressive tools fail when used
outside their intended design. Gemma wasn't built to answer factual
queries. It wasn't trained on reliable biographical datasets. It wasn't
given the kind of retrieval tools or accuracy incentives used in Gemini
and other search-backed models.
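To illustrate the contrast, here is a rough sketch of the kind of search
grounding the article says Gemma lacks, using the google-genai SDK's
Google Search tool with a Gemini model. The model id and tool wiring are
assumptions based on the public SDK, not details from the story.

  # Sketch: a search-grounded request, the sort of retrieval support
  # the article notes Gemma was never given. All ids are assumptions.
  from google import genai
  from google.genai import types

  client = genai.Client(api_key="YOUR_API_KEY")  # placeholder key

  response = client.models.generate_content(
      model="gemini-2.5-flash",  # assumed search-backed Gemini model
      contents="Has this claim appeared in any news coverage?",
      config=types.GenerateContentConfig(
          tools=[types.Tool(google_search=types.GoogleSearch())],
      ),
  )
  print(response.text)

Grounding like this lets the model check claims against live search
results before answering; a bare model like Gemma can only sample from
what it absorbed in training.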
But until and unless people better understand the nuances of AI models and
their capabilities, it's probably a good idea for AI developers to think
like publishers as much as coders, with safeguards against glaring errors
of fact as well as errors of code.
======================================================================
Link to news story:
https://www.techradar.com/ai-platforms-assistants/google-shutters-developer-only-gemma-ai-model-after-a-u-s-senators-encounter-with-an-offensive-hallucination
$$
--- SBBSecho 3.28-Linux
* Origin: capitolcityonline.net * Telnet/SSH:2022/HTTP (1:2320/105)