Supreme Court Sidesteps Ruling on Scope of Internet Liability Shield
The Supreme Court on Thursday said it would not rule on a question of great importance to the tech industry: whether YouTube could invoke a federal law that shields internet platforms from legal responsibility for what users post, in a case brought by the family of a woman killed in a terrorist attack.
The court instead decided, in a companion case, that a separate law allowing suits for “knowingly providing substantial assistance” to terrorists generally does not apply to technology platforms in the first place, meaning there was no need to determine whether the liability shield applied.
The court’s unanimous decision in the second case, Twitter v. Taamneh, No. 21-1496, effectively resolved both cases and allowed the justices to avoid difficult questions about the scope of Section 230 of the Communications Decency Act, a 1996 law.
In a brief, unsigned opinion in the YouTube case, Gonzalez v. Google, No. 21-1333, the court said it would not address the application of Section 230 to a complaint that “appears to state little, if any, plausible claim for relief.” Instead, the court returned the case to the appeals court “to consider plaintiffs’ complaint in light of our decision in Twitter.”
The Twitter case concerned Nawras Alassaf, who was killed in a 2017 terrorist attack at an Istanbul nightclub for which the Islamic State claimed responsibility. His family sued Twitter and other tech companies, arguing that they had allowed ISIS to use their platforms to recruit and train terrorists.
“Plaintiffs’ allegations are insufficient to establish that these defendants aided and abetted ISIS in carrying out the relevant attack,” Justice Clarence Thomas wrote for the court.
That ruling allowed the justices to avoid deciding the scope of Section 230 of the Communications Decency Act, a 1996 law intended to nurture what was then a nascent creation called the internet.
Section 230 was a reaction to a court decision holding an online message board liable for what a user had posted on the grounds that the service had moderated its content. The provision says that “no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.”
Section 230 helped enable the rise of giant social networks like Facebook and Twitter by ensuring that the sites were not held legally responsible for every new tweet, status update or comment. Narrowing the law could expose the platforms to lawsuits for steering people toward posts and videos that promote extremism, encourage violence, damage reputations or cause emotional distress.
The ruling comes at a time when the development of cutting-edge artificial intelligence products raises serious questions about whether the law can keep pace with rapidly changing technology.
The YouTube case was brought by the family of Nohemi Gonzalez, a 23-year-old college student who was killed in a terrorist attack at a Paris restaurant in November 2015 that also targeted the Bataclan concert hall. Lawyers for the family argued that YouTube, a subsidiary of Google, had used algorithms to push Islamic State videos to interested viewers.
A bipartisan group of lawmakers, academics and activists has grown increasingly skeptical of Section 230, arguing that it shields the tech giants from the consequences of disinformation, discrimination and violent content spread across their platforms.
In recent years, they have advanced a new argument: that platforms forfeit their protection when their algorithms recommend content, target ads or introduce new connections to users. These recommendation engines are ubiquitous, powering features like YouTube’s autoplay and Instagram’s suggestions of accounts to follow. Most courts, however, have rejected this reasoning.
Lawmakers have also called for changes to the law, but political realities have largely kept those proposals from gaining momentum. Republicans, angered by tech companies that removed posts by conservative politicians and publishers, want the platforms to take down less content. Democrats want the platforms to remove more content, such as misinformation about Covid-19.