(C) Daily Kos
This story was originally published by Daily Kos and is unaltered.



When Is Twitter Not A Platform? When It's A Product [1]

This content is not subject to review by Daily Kos staff prior to publication.

Date: 2022-10-10

The Supreme Court has taken up two cases that have the potential to revisit whether products like Twitter, YouTube, and Facebook can be held liable:

The Supreme Court will take up two cases testing legal protections for apps and websites, according to a newly released list of orders. The first case is Gonzalez v. Google, a dispute over whether sites’ recommendation systems are covered by Section 230 of the Communications Decency Act. The second is Twitter v. Taamneh, which covers the standard for finding a site illegally supporting terrorists. The cases will offer the Supreme Court a chance to dramatically reshape the online legal landscape — not only for huge companies like Twitter and Google but also smaller web services that are protected by laws like Section 230.

(Supreme Court takes up two cases on platform liability and terrorism - The Verge)

The Verge headline is a little bit misleading. What is at stake here is less about Section 230 and more about the way Section 230 has swallowed all other forms of product liability law. Section 230 basically says that if you are an online platform you cannot be held liable for doing two things: hosting other people's content and moderating that content. On the surface, these seem completely unobjectionable: if you want to run a blog or social media site or ISP, you don't want to be responsible when someone spams you with child porn or directions on how to build a nuclear bomb. Similarly, if you are hosting a forum dedicated to baking cupcakes and a schmuck won't shut up about the superiority of cakes as a dessert, you need to be able to boot him without fear of legal reprisal in order to effectively maintain your community.

Now, these are actually more complicated issues once you get past the surface, and you could frankly argue that Section 230 is not needed for a robust internet -- after all, the US is largely alone in having this kind of law, and the internet and community spaces do just fine in the rest of the liberal democratic world. But that is an argument for another day, as neither of these rights is at stake here.

The real issue is that judges have expanded the meaning of Section 230 to invalidate product liability law. Very, very simply, product liability says that if you build a product and it causes harm, you are liable for the results of that harm. Judges have read Section 230 to exempt online platforms from product liability laws if the product in question involves user generated content, as it does in both these cases. And that has led to disastrous results for society, and a lot of unearned money for places like Twitter and Google and Facebook.

The issue here is not the user content -- it is that Google and Facebook and Twitter's recommendation algorithms, in search of engagement and attention and thus ad and subscription dollars, choose to place that user generated content in front of other users. In doing so, they create harm for people. In other words, their decisions about how to design and operate their products have created harm, and under normal circumstances they would be liable for that harm. But because courts have chosen to read Section 230 as invalidating all product liability claims as long as the product involves user generated content, such harm goes unpunished and is indeed rewarded.
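To make that design decision concrete, here is a minimal, purely illustrative sketch in Python (hypothetical names, not any platform's actual code) of the difference between merely hosting user posts and recommending them: the ranking function is something the company writes and tunes to maximize predicted engagement.

    # Purely illustrative sketch -- hypothetical names, not any platform's
    # real code. The point: ranking posts by predicted engagement is a
    # product-design choice, separate from simply storing user posts.

    def hosted_posts(storage):
        """Plain hosting: return the posts as stored, with no decision
        about what to amplify."""
        return list(storage)

    def rank_feed(posts, user, predict_engagement):
        """Recommendation: order user posts by how much engagement the
        company's model predicts this user will give them (clicks,
        replies, watch time), so the stickiest content rises to the top."""
        return sorted(posts, key=lambda post: predict_engagement(user, post), reverse=True)

The harm described here enters at that ranking step, which exists only because the company chose to build it.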

Proponents of such a reading argue that any weakening of Section 230 inevitably means that the internet as we know it would die. That is not true. Some companies would be less profitable, but that is a feature, not a bug: companies that build harmful products should be less profitable, or not profitable at all. Punishing companies that hurt people with their products is good, and I am frankly amazed I have to argue that.

Nor would holding internet companies to the same product liability standards that every other company in the world is held to mean the end of online speech. Companies would still not be liable for user generated content -- only for how their product used that content to cause harm. And companies could still moderate -- removing content is not the same thing as a recommendation engine highlighting that content and surfacing it. They are, in fact, complete opposites, and pretending otherwise is disingenuous in the extreme.

There is nothing special about the internet. It is just a tool, like any other tool. And when companies create products with that tool that harm others, they should be held to the same standard.

[END]
---
[1] Url: https://www.dailykos.com/stories/2022/10/10/2127909/-When-Is-Twitter-Not-A-Platform-When-It-s-A-Product

Published and (C) by Daily Kos
Content appears here under this condition or license: Site content may be used for any purpose without permission unless otherwise specified.

via Magical.Fish Gopher News Feeds:
gopher://magical.fish/1/feeds/news/dailykos/