(C) Transparency International
This story was originally published by Transparency International and is unaltered.



Bribes for bias: Can AI be corrupted? - Blog [1]


Date: 2023-05

Recently, your social media feed may have been flooded with headlines about advances in Artificial Intelligence (AI), or even with AI-generated images. Text-to-image models such as DALL-E 2 and Stable Diffusion are becoming hugely popular. ChatGPT, a chatbot developed by OpenAI, is now the world’s best-performing large language model, reaching 1 million users in its first week – a rate of growth much faster than Twitter, Facebook or TikTok.

As AI demonstrates its ability to craft poetry, write code and even pollinate crops by imitating bees, the governance community is waking up to the impact of artificial intelligence on the knotty problem of corruption. Policy institutes and academics have pointed to the potential use of AI to detect fraud and corruption, with some commentators heralding these technologies as the "next frontier in anti-corruption."

Amid all the excitement, it can be easy to lose sight of the fact that AI can also produce undesirable outcomes due to biased input data, faulty algorithms or irresponsible implementation. To date, most of the negative repercussions from AI that have been documented are unintentional side-effects. However, new technologies present new opportunities to wilfully abuse power, and the effect that AI could have as an “enabler” of corruption has received much less attention.

A recent Transparency International working paper introduces the concept of "corrupt AI" – defined as the abuse of AI systems by public power holders for private gain – and documents how these tools can be designed, manipulated or applied in a way that constitutes corruption.

Politicians, for instance, could abuse their power by commissioning hyper-realistic deepfakes to discredit their political opponents and increase their chances of staying in office. The misuse of AI tools on social media to manipulate elections through the spread of disinformation has already been well documented.

Yet corrupt AI does not just occur when an AI system is designed with malicious intent. It can also take place when people exploit the vulnerabilities of otherwise beneficial AI systems. This becomes of greater concern given the significant push worldwide towards digitalising public administration. AlgorithmWatch, for instance, recently concluded that citizens in many countries already live in "automated societies" in which public bodies rely on lines of code to make important social, economic and even political decisions.

Digitalising government services has long been recognised as reducing officials' discretion when making decisions and thereby constraining opportunities for corruption. Yet, as our paper demonstrates, replacing humans with AI brings novel corruption risks. Here are four good reasons why the risk of "corrupt AI" should be taken seriously.

[END]
---
[1] Url: https://www.transparency.org/en/blog/bribes-for-bias-can-ai-be-corrupted

Content appears here under this condition or license: Creative Commons CC BY-NC-ND 4.0.
