(2025-01-27) Some things no one is focused on regarding genAI
-------------------------------------------------------------
Again, no in-depth analysis this time but rather some observations. First, it
really looks like all the craziness going on in the world right now is
merely a distraction from something bigger. GenAI is no exception. Everyone
is talking about the new grift called "Project Stargate", about DeepSeek vs.
OpenAI, about whether or not AGI/ASI is reachable and what it even is, about
agents/superagents/duperagents... But in fact, all of that seems to be
nothing more than an information shroud meant to get people to turn off
their critical thinking and not look at the real state of things as of
today. And the real state of things is that everything is heading toward a
repetition of 100-year-old history, only now the totalitarian governments
(who, of course, will never
openly admit they are totalitarian) will have a lot more technical
capabilities to pursue their goals of mass surveillance and control. This,
by the way, is why they need such huge datacenters. AI is, and always has
been, just a facade.
Second, I'll never get tired of repeating the only criterion for determining
the usefulness of any piece of technology, the only question that you should
ask: do you really know what you're running and do you have enough control
over it? Anything closed-source is a trojan by default, and (on paper) they
even made it illegal for anyone to prove otherwise. Again, GenAI is no
exception. I don't share the excitement of DeepSeek fanboys about it being
so cheap, unless they have enough computational resources to run it fully
locally (which is not cheap at all). I have run some distilled variants of
R1 locally and they left me quite impressed, but I didn't sign up at their
official website to test the 671B model; that is never an option for me.
In terms of privacy and security, cloud-based DeepSeek is no better than
ChatGPT, so no one should sign up for either of them. But again, even
open-weight doesn't equal open-source. I'm running open-weight models
locally because they are sandboxed enough to do no harm, but I never forget
that I'm interacting with a black box and should treat its output
accordingly.
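For the curious, here is a minimal sketch of what "running it locally,
sandboxed" can look like in practice. I'm assuming llama.cpp as the runtime
and firejail for network isolation, and the model file name is only an
example of a distilled R1 build; any comparable offline setup will do:

    # Download a distilled R1 build in GGUF format beforehand, then run it
    # with all network access cut off, so the black box cannot phone home.
    firejail --net=none \
        llama-cli -m ./DeepSeek-R1-Distill-Qwen-7B-Q4_K_M.gguf \
        -p "Summarize the argument for running models locally."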
Third, the question that bothers most tech people right now, like "Will genAI
replace software engineers?", is not a question for me either. I may have
already mentioned this in an earlier post, but... Technically, AI can't
replace software engineers. Idiotic management can. It's already happening
to less "brainy" positions like copywriters or frontenders, and there is a
practice of "soft displacement" of SDEs as well: companies started including
mandatory ChatGPT subscription into the work account package, and LLM
prompting started appearing in the CVs and job requirements. Which is
already insane enough, if you ask me. For them, it no longer matters how
good you are at programming, now it matters how good you are at asking genAI
to do something for you. The repercussions of this approach are not so long
to follow. Just imagine a project with a large codebase where no one
understands anything because it all had been autogenerated, and someone has
to fix a security issue or other bug that could be obviously avoided if the
code had been written in a normal way. Ironically, with the recent
advancement of reasoning models like DeepSeek's R1, it would make much more
sense for genAI to replace project managers instead of developers. Of
course, productivity was never the true goal of such "initiatives", so the
latter scenario is rather unrealistic.
Lastly, a chat interface in a _natural_ language is one of the most
inefficient ways of interacting with machines. It's much easier for me to
type (and for the machine to understand) "ls ~" than "give me the list of
files in the home directory"; see the sketch at the end of this point for
why the latter is not even a well-defined request. Even if you hide everything
behind "agents" and their pipelines, you still have to interact with LLMs by
giving them prompts and reading results. You know, programming/scripting
languages were invented for a reason. There has always been a search for a
balance between "what is the easiest for the computer to understand" and
"what is the easiest for a human to understand". Making computers understand
humans in their own language will never give precise results no matter how
much computing power you throw at it, just because human language is
imprecise by its nature. If anything, there is going to be a point where
making LLMs function closer to the human brain will actually decrease their
performance. Because no human follows a perfect pattern of reasoning either.
And this is normal. This is what, among other things, makes us human. The
open question, however, is how many more resources will be wasted before
this threshold is reached, and will the "stakeholders" ever admit that it
has been reached in the first place?
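Here is the sketch promised above. The point is not the specific command
but how much the natural-language phrasing leaves unspecified; the flags
below are just standard ls options:

    # Precise and deterministic: one short command, one well-defined output.
    ls ~

    # "Give me the list of files in the home directory" still has to be
    # translated into something like the above, and it leaves open questions
    # that the command line forces you to answer explicitly:
    ls -A ~         # ...including hidden files?
    ls -lt ~        # ...with details, sorted by modification time?
    ls -d ~/*/      # ...or did you mean only the directories?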
Nevertheless, as I have been reassured once again, artificial intelligence is
nowadays a much lesser threat than natural stupidity. This is what the next
generation of John Connors will have to resist first.