


The Democratic Stakes of Artificial Intelligence Regulation [1]


Date: 2024-02

Across the world, artificial intelligence (AI) tools are being used to intensify censorship, power surveillance, and supercharge disinformation. As Freedom House detailed in Freedom on the Net 2023, advances in AI tools have made the need for regulations that seek to safeguard free expression, privacy, and democratic participation increasingly pressing. Effective, tailored policymaking—alongside private sector action and robust oversight from civil society—can help ensure that the use of AI supports the protection of human rights.

Early efforts by the United States government indicate that the Biden administration understands the urgent need to enshrine protections for human rights in AI regulations. The administration’s Blueprint for an AI Bill of Rights, released in October 2022, laid out a set of rights-based principles for the design and use of AI systems. The administration also secured voluntary commitments regarding AI safety and security from several companies. In October 2023, President Biden signed a sweeping executive order designed to embed those principles in policy. The executive order lays out a promising foundation for rights-respecting AI regulation in the United States, and policymakers across government should act to further this critical effort.

Protecting information integrity ahead of the 2024 election

Generative AI tools threaten to make online disinformation campaigns more effective as products like Midjourney and OpenAI’s DALL-E become more advanced, easier to use, and more accessible. AI-generated video, audio, and images may be more difficult to identify as manipulated than those altered with conventional tools. As voters across the United States prepare for the 2024 election, AI-manipulated media misrepresenting candidates’ positions and policies have already circulated, some without clear labels indicating their nature. Whether produced for entertainment or to manipulate political beliefs, AI-generated content may shape the information people consume about candidates, voting, and the election at large.

Experts are working to research and develop systems that can cryptographically mark and identify AI-generated content from the outset, a practice commonly referred to as “watermarking.” While broader transparency measures are critical to ensuring effective oversight of AI systems, watermarking may make it easier for people and the social media platforms they use to identify and counter misleading AI-generated content as it spreads. However, watermarking is not effective for text-based content and may present human rights risks if poorly designed. For example, watermarking that requires the person generating the content to reveal identifying information could expose that individual to serious harms, especially in places where free expression is repressed. The Biden administration’s executive order calls on the Department of Commerce to research technical systems for watermarking and provenance and to issue recommendations to industry, an important measure to facilitate industry-wide adoption of such practices. More broadly, political parties, committees, and candidates should refrain from intentionally misrepresenting candidates in advertising that features media generated or manipulated by AI.
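To make the cryptographic marking concept concrete, the sketch below shows, in schematic Python, one way a provider could bind a signed provenance record to a piece of generated media. This is a toy illustration under assumed names, not the executive order’s method or any deployed standard; real industry efforts such as the C2PA specification use certificate-based signatures embedded in the media file itself. Note that the record identifies only the generating tool, not the user, reflecting the privacy risk described above.

    # Illustrative sketch only: a toy provenance "watermark" that signs
    # content metadata with an HMAC. The key and field names are hypothetical.
    import hashlib
    import hmac
    import json

    SIGNING_KEY = b"demo-key-held-by-the-ai-provider"  # hypothetical secret

    def mark_content(content: bytes, generator: str) -> dict:
        """Produce a provenance record binding the content to its generator."""
        record = {
            "sha256": hashlib.sha256(content).hexdigest(),
            "generator": generator,  # tool name only; no user-identifying data
        }
        payload = json.dumps(record, sort_keys=True).encode()
        record["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
        return record

    def verify_mark(content: bytes, record: dict) -> bool:
        """Check that the content matches the record and the signature is valid."""
        claimed = {k: v for k, v in record.items() if k != "signature"}
        if claimed["sha256"] != hashlib.sha256(content).hexdigest():
            return False
        payload = json.dumps(claimed, sort_keys=True).encode()
        expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
        return hmac.compare_digest(expected, record["signature"])

    image = b"...synthetic image bytes..."
    mark = mark_content(image, generator="example-image-model")
    assert verify_mark(image, mark)             # intact content verifies
    assert not verify_mark(image + b"x", mark)  # any alteration breaks the mark

Even this toy version illustrates the policy trade-off in the text: what the provider chooses to put in the signed record determines whether the scheme aids transparency or exposes the person who generated the content.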

Safeguarding privacy and due process

Sophisticated surveillance systems used across the United States rely on AI to identify people, as with biometric recognition tools, or to make judgments about their behavior, as with social media monitoring systems. By treating everyone as a potential wrongdoer, such systems forgo judicial standards like probable cause and sweep up massive amounts of sensitive information. By deploying machine learning models that can process information at great speed and scale, in ways that are inscrutable to their users and to those implicated, AI-powered surveillance systems may infringe on people’s rights without opportunity for oversight or appeal.

Automated mass surveillance poses serious threats to people’s liberties. To protect people’s rights, policymakers should implement safeguards ensuring that the technology is used only when necessary and that its use is both legal and proportionate to the problem at hand. The Biden administration’s executive order offers one opportunity to impose such critical restraints: it empowers the attorney general to create best practices for the deployment of AI in the criminal justice system, including with regard to police surveillance. The attorney general should direct agencies to conduct regular, independent assessments of AI products in criminal justice use cases, and to limit the use of AI-powered surveillance systems that are widely known to infringe on human rights, like emotion recognition and so-called “predictive policing” systems.

The pitfalls of public service automation

Federal agencies with nondefense functions identified over 1,200 current and planned use cases for AI systems in 2022, many of which relate to delivering core public services. Using AI systems in sensitive areas like public assistance, healthcare provision, and education can speed up decision-making, but introduces risks that may curtail human rights. For example, AI decision-making systems that rely on data tainted by patterns of historical discrimination may perpetuate harmful bias when making determinations. These systems may also create new hurdles for people seeking to appeal decisions about their benefits, particularly those who lack digital literacy.

Transparency around when systems may be deployed, oversight over their functioning, and clear documentation of how to appeal to a human reviewer can mitigate the human rights risks that come with the use of AI systems. The Biden administration’s AI executive order takes important steps on this front, directing agencies to mitigate algorithmic discrimination in public benefits delivery. Implementation guidelines issued in November further detail procedures for human oversight and appeal of certain decisions. In carrying out these provisions, agencies should consult with civil society organizations and academics who have worked extensively on algorithmic discrimination in benefits systems to ensure safeguards are adequate.

The long horizon on AI regulation

Though the executive order represents a strong first step toward rights-respecting AI regulation in the United States, it is subject to the inherent constraints of executive action and comes with a steep price tag for the federal budget. The order itself could be withdrawn by a future presidential administration, underscoring the need to codify its provisions into law to ensure it has a lasting impact.

When the appropriations process begins in a few months, Congress should appropriate sufficient funding for agencies to implement the order. In the longer term, Congress should work with the executive branch, technologists, and civil society to embed a rights-based approach to AI governance into law, including mechanisms like human rights impact assessments, transparency requirements, strong safeguards on AI surveillance, and pathways to appeal decisions made by AI systems. Such rights-respecting AI regulations are critical to ensuring that these technologies strengthen, rather than undermine, US democracy—with ramifications for AI governance efforts across the world.

---
[1] https://freedomhouse.org/article/democratic-stakes-artificial-intelligence-regulation

Published by Freedom House under a Creative Commons license.
