(C) Radio Free Europe/Radio Liberty
This story was originally published by Radio Free Europe/Radio Liberty and is unaltered.
Flagrant rule-breaking by governments and corporate actors [1]
Date: 2024-04-23 23:01:00+00:00
Tech wielded to stoke hate, division and discrimination poses threat in landmark year of elections
Amnesty International found that political actors in many parts of the world are ramping up their attacks on women, LGBTI people and marginalized communities who have historically been scapegoated for political or electoral gains. New and existing technologies have increasingly been weaponized to aid and abet these repressive political forces to spread disinformation, pit communities against each other and attack minorities.
The report also points to the expansive use of existing technologies to entrench discriminatory policies. States including Argentina, Brazil, India and the UK have increasingly turned to facial recognition technologies to police public protests and sporting events and discriminate against marginalized communities – particularly migrants and refugees. For example, in response to legal action by Amnesty International, the New York Police Department revealed in 2023 how it used the technology to subject Black Lives Matter protests in the city to surveillance.
Nowhere was the nefarious use of facial recognition more pervasive than in the West Bank of the Occupied Palestinian Territories, where it was used by Israel to reinforce restrictions on freedom of movement and help maintain the system of apartheid.
In Serbia, the introduction of a semi-automated social welfare system resulted in thousands of people losing access to vital social assistance. This particularly affected Roma communities and people with disabilities, demonstrating how unchecked automation can exacerbate inequality.
With millions fleeing conflicts around the world, the report notes how abusive technologies were relied upon for migration governance and border enforcement, including through use of digital alternatives to detention, border externalization technologies, data software, biometrics and algorithmic decision-making systems. The proliferation of these technologies perpetuates and reinforces discrimination, racism, and disproportionate and unlawful surveillance against racialized people.
Meanwhile, spyware has remained largely unregulated, despite the long-term evidence of the human rights violations it drives, with activists-in-exile, journalists and human rights defenders usually among those targeted. In 2023, Amnesty International uncovered the use of Pegasus spyware against journalists and civil society activists in countries including Armenia, the Dominican Republic, India and Serbia, while EU-based and regulated spyware was freely sold to states the world over.
Over the past year, the rapid trajectory of generative AI has transformed the scale of the threat posed by the gamut of technologies already in existence – from spyware to state automation and social media’s runaway algorithms.
In the face of these rapid advancements, regulation has largely stagnated. However, in a sign that European policymakers are beginning to act, a landmark EU-wide Digital Services Act came into force in February 2024. While imperfect and incomplete, it has nevertheless triggered a much-needed global debate on AI regulation.
“There is a vast chasm between the risks posed by the unchecked advancement of technologies, and where we need to be in terms of regulation and protection. It’s our future foretold and will only worsen unless the rampant proliferation of unregulated technology is curtailed,” said Agnès Callamard.
Amnesty International exposed how Facebook’s algorithms contributed to ethnic violence in Ethiopia in the context of armed conflict. This is a prime example of how technology is weaponized to pit communities against each other, particularly in times of instability.
The human rights organization forecasts that these problems will escalate in a landmark election year, with the surveillance-based business model underpinning major social media platforms such as Facebook, Instagram, TikTok and YouTube acting as a catalyst for human rights violations in the context of elections.
“We’ve seen how hate, discrimination and disinformation are amplified and spread by social media algorithms optimized to maximize ‘engagement’ above all else. They create an endless and dangerous feedback loop, particularly at times of heightened political sensitivity. Tools can generate synthetic images, audio and video in seconds, as well as target specific audience groups at scale, but electoral regulation has yet to catch up with this threat. To date we’ve seen too much talk with too little action,” said Agnès Callamard.
In November, the US presidential election will take place in the face of increasing discrimination, harassment and abuse on social media platforms towards marginalized communities including LGBTI people. Threatening and intimidating anti-abortion content has also become rife.
About a billion people are voting in India’s election this year against a backdrop of attacks on peaceful protesters and systematic discrimination against religious minorities. In 2023 Amnesty International revealed that invasive spyware had been used to target prominent Indian journalists, and more broadly tech platforms have become political battlefields.
“Politicians have long used manipulation of ‘us vs. them’ narratives to win votes and outmanoeuvre legitimate questions about economic and security fears. We’ve seen how unregulated technologies, such as facial recognition, have been used to entrench discrimination. Coupled with this, Big Tech’s surveillance business model is pouring fuel on this fire of hate, enabling those with malintent to hound, dehumanize and amplify dangerous narratives to consolidate power or polling. It’s a chilling spectre of what’s to come as technological advances rapaciously outpace accountability,” said Agnès Callamard.
---
[1] Url:
https://www.amnesty.org/en/latest/news/2024/04/amnesty-international-sounds-alarm-international-law-flagrant-rule-breaking-governments-corporate-actors/