(C) Alec Muffett's DropSafe blog.
Author Name: Alec Muffett
This story was originally published on alecmuffett.com. [1]
License: CC-BY-SA 3.0.[2]


A question for everyone who is saying that #AI must be regulated (a) soon, and (b) by the Government rather than self-regulation; #BigAI

2023-05-15 07:28:11+00:00

There are a bunch of people opining off the back of the attached tweet; the critical opinions can generally be characterised along the lines of:

the tech industry cannot police itself / look at the history of social media / facebook was bad, this will be worse / the internet is bad / imagine if airlines ran the aviation regulators / regulation is fine so long as you get independent experts to help it along / only lunatics should run asylums / it’s like oil-tobacco-asbestos all over again / white male privilege incarnate / even they don’t know how it works / jurassic park with terminators / …

So: back in the 1990s I watched Governments hand down regulations on cryptography, only to be infuriated when the proposed regulations flopped, or when open-source communities blew straight past them.

AI technologies are going to be open-source, ubiquitous and home-deployed. What will happen if/when open-source AI projects do the same to this fresh hell of regulation?

To be frank: I feel that a lot of this is not really about AI; instead it’s about “#BigAI” and corporations. I have written at length elsewhere about the foolishness of regulating the shape of code; it would be wise for folk now to stop attempting that, and instead to regulate intents and outcomes, because “regulations on AI” are not going to lead to anything other than cack-handed repression of software projects.

[1] URL: https://alecmuffett.com/article/66058
[2] URL: https://creativecommons.org/licenses/by-sa/3.0/