Subj : ChatGPT can remember more
To : All
From : Mike Powell
Date : Sun Apr 20 2025 09:50 am
ChatGPT can remember more about you than ever before - should you be worried?
Date:
Sat, 19 Apr 2025 19:30:00 +0000
Description:
ChatGPT's new memory features promise smarter, more personal responses - but
at what cost to privacy, control, and connection?
FULL STORY
======================================================================
ChatGPT's memory used to be simple. You told it what to remember, and it
listened.
Since 2024, ChatGPT has had a memory feature that lets users store helpful
context, from your tone of voice and writing style to your goals, interests,
and ongoing projects. You could go into settings to view, update, or delete
these memories. Occasionally, it would note something important on its own.
But largely, it remembered what you asked it to. Now, that's changing.
OpenAI, the company behind ChatGPT, is rolling out a major upgrade to its
memory. Beyond the handful of facts you manually saved, ChatGPT will now
draw from all of your past conversations to inform future responses by
itself.
According to OpenAI, memory now works in two ways: saved memories, added
directly by the user, and insights from chat history, which are the ones that
ChatGPT will gather automatically.
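To make that two-tier design concrete, here is a minimal sketch of how such a
split memory store could be modeled. This is purely illustrative - the class
and method names below are invented for this example and say nothing about
how OpenAI actually implements the feature.

    from dataclasses import dataclass, field

    @dataclass
    class MemoryEntry:
        text: str
        source: str  # "saved" = added by the user, "insight" = automatic

    @dataclass
    class MemoryStore:
        entries: list[MemoryEntry] = field(default_factory=list)

        def save(self, text: str) -> None:
            # The user explicitly asks the assistant to remember something.
            self.entries.append(MemoryEntry(text, source="saved"))

        def infer(self, text: str) -> None:
            # The assistant records an insight from chat history on its own.
            self.entries.append(MemoryEntry(text, source="insight"))

        def forget(self, text: str) -> None:
            # The user deletes an individual memory from settings.
            self.entries = [e for e in self.entries if e.text != text]

    store = MemoryStore()
    store.save("Prefers concise answers")   # saved memory
    store.infer("Often asks about pizza")   # automatic insight

The key difference between the two sources is consent: one is volunteered,
the other is gathered.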
This feature, called long-term or persistent memory, is rolling out to
ChatGPT Plus and Pro users. However, at the time of writing, it's not
available in the UK, EU, Iceland, Liechtenstein, Norway, or Switzerland due
to regional regulations.
The idea here is simple: the more ChatGPT remembers, the more helpful it
becomes. It's a big leap for personalization. But it's also a good moment to
pause and ask what we might be giving up in return.
A memory that gets personal
It's easy to see the appeal here. A more personalized experience from ChatGPT
means you explain yourself less and get more relevant answers. It's helpful,
efficient, and familiar.
"Personalization has always been about memory," says Rohan Sarin, Product
Manager at Speechmatics, an AI speech tech company. "Knowing someone for
longer means you don't need to explain everything to them anymore."
He gives an example: ask ChatGPT to recommend a pizza place, and it might
gently steer you toward something more aligned with your fitness goals - a
subtle nudge based on what it knows about you. It's not just following
instructions, it's reading between the lines.
"That's how we get close to someone," Sarin says. "It's also how we trust
them."
That emotional resonance is what makes these tools feel so useful - maybe
even comforting. But it also raises the risk of emotional dependence. Which,
arguably, is the whole point.
"From a product perspective, storage has always been about stickiness," Sarin
tells me. "It keeps users coming back. With each interaction, the switching
cost increases."
OpenAI doesn't hide this. The company's CEO, Sam Altman, tweeted that memory
enables "AI systems that get to know you over your life, and become extremely
useful and personalized."
That usefulness is clear. But so is the risk of depending on them - not just
to help us, but to know us.
Does it remember like we do?
A challenge with long-term memory in AI is its inability to understand
context in the same way humans do.
We instinctively compartmentalize, separating what's private from what's
professional, what's important from what's fleeting. ChatGPT may struggle
with that sort of context switching.
Sarin points out that because people use ChatGPT for so many different
things, those lines may blur. "IRL, we rely on non-verbal cues to prioritize.
AI doesn't have those. So memory without context could bring up uncomfortable
triggers."
He gives the example of ChatGPT referencing magic and fantasy in every story
or creative suggestion just because you mentioned liking Harry Potter once.
Will it draw from past memories even if they're no longer relevant? "Our
ability to forget is part of how we grow," he says. "If AI only reflects who
we were, it might limit who we become."
Without a way to rank what still matters, the model may surface things that
feel random, outdated, or even inappropriate for the moment.
Bringing AI memory into the workplace
Persistent memory could be hugely useful for work. Julian Wiffen, Chief of AI
and Data Science at Matillion, a data integration platform with AI built in,
sees strong use cases: "It could improve continuity for long-term projects,
reduce repeated prompts, and offer a more tailored assistant experience," he
says.
But he's also wary: "In practice, there are serious nuances that users, and
especially companies, need to consider." His biggest concerns here are
privacy, control, and data security.
"I often experiment or think out loud in prompts. I wouldn't want that
retained - or worse, surfaced again in another context," Wiffen says. He also
flags risks in technical environments, where fragments of code or sensitive
data might carry over between projects, raising IP or compliance concerns.
These issues are magnified in regulated industries or collaborative settings.
Whose memory is it anyway?
OpenAI stresses that users can still manage memory: delete individual
memories that aren't relevant anymore, turn it off entirely, or use the new
Temporary Chat button. This now appears at the top of the chat screen for
conversations that are not informed by past memories and won't be used to
build new ones either.
However, Wiffen says that might not be enough. "What worries me is the lack
of fine-grained control and transparency," he says. "It's often unclear what
the model remembers, how long it retains information, and whether it can be
truly forgotten."
He's also concerned about compliance with data protection laws, like GDPR:
"Even well-meaning memory features could accidentally retain sensitive
personal data or internal information from projects. And from a security
standpoint, persistent memory expands the attack surface." This is likely why
the new update hasn't rolled out globally yet.
What's the answer? "We need clearer guardrails, more transparent memory
indicators, and the ability to fully control what's remembered and what's
not," Wiffen explains.
Not all AI remembers the same
Other AI tools are taking different approaches to memory. For example, AI
assistant Claude doesn't store persistent memory outside your current
conversation. That means fewer personalization features, but more control and
privacy.
Perplexity, an AI search engine, doesn't focus on memory at all - it
retrieves real-time web information instead. Replika, an AI designed for
emotional companionship, goes the other way, storing long-term emotional
context to deepen its relationships with users.
So, each system handles memory differently based on its goals. And the more
they know about us, the better they fulfill those goals - whether that's
helping us write, connect, search, or feel understood.
The question isn't whether memory is useful; I think it clearly is. The
question is whether we want AI to become this good at fulfilling these roles.
It's easy to say yes because these tools are designed to be helpful,
efficient, even indispensable. But that usefulness isn't neutral; it's
intentional. These systems are built by companies that benefit when we rely
on them more.
You wouldn't willingly give up a second brain that remembers everything about
you, possibly better than you do. And that's the point. That's what the
companies behind your favorite AI tools are counting on.
======================================================================
Link to news story:
https://www.techradar.com/computing/artificial-intelligence/chatgpt-can-remember-more-about-you-than-ever-before-should-you-be-worried
$$
--- SBBSecho 3.20-Linux
* Origin: capitolcityonline.net * Telnet/SSH:2022/HTTP (1:2320/105)