TikTok's algorithm could trigger liability for dangerous videos

TikTok recommended a dangerous "challenge" to a 10-year-old, leading to her death. A US court ruled liability protections don’t shield TikTok.


TikTok can in principle be held liable for decisions made by its algorithms. So ruled the US Court of Appeals for the Third Circuit, citing the US Supreme Court's censorship ruling of July 1 (Moody v NetChoice). The gist: if an algorithm compiles third-party content in such a way that the compilation becomes a statement in its own right, that statement is attributable to the operator of the algorithm, even if the individual pieces of content do not originate from the operator. A second federal appeals court, for the Ninth Circuit, has meanwhile found a different route to holding operators of online services liable for third-party content.


The occasion for the Third Circuit case is a sad one: the death of a ten-year-old child. Although TikTok stipulates a minimum age of thirteen, the child used the Chinese video app. Its algorithm recommended a video on the "For You" page containing a life-threatening "Blackout Challenge": users are asked to film themselves choking themselves until they lose consciousness. The child followed the challenge and did not survive. The child's estate and mother now want to sue TikTok and its parent company ByteDance in a US federal district court. The district court dismissed the suit, but the federal appeals court interpreted the law differently and sent the case back to the first instance.


The sticking point is once again the famous Section 230, part of the US federal Telecommunications Act of 1996, which grants immunity for content that operators of interactive computer services do not provide themselves but that is posted by third parties (with exceptions not relevant here). The textbook example is a web host that should not be held accountable for whatever nonsense its customers publish on the websites it hosts.

Section 230(c)(1)

No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.

US federal law does not compel anyone to distribute third-party content. That problem was in fact the trigger for Section 230: a forum operator routinely removed postings unsuitable for minors; a judge took this as grounds to hold the operator liable for every posting it had not deleted. Legislators responded with Section 230 in order to keep hosting services available and affordable and not force their operators to play censorship police.

However, the boundary between disseminating third-party content and making statements attributable to the service operator itself is unclear. Operators must, of course, answer for their own statements. The US states of Texas and Florida want to force large online services by law to distribute content they do not want to distribute. Deleting postings would be just as illegal as reducing their reach. Operators would even be prohibited from taking measures to protect children on their own initiative. Rewarding or favoring certain postings would likewise be prohibited.

In the view of the US Supreme Court, these state laws against censorship are likely to constitute censorship themselves. Accordingly, operators of online services have the right to decide what they display and how, even if the posts themselves come from third parties. These selection decisions are themselves an expression of opinion, even when only very few posts are blocked: the operator thereby expresses which content it rejects, the Supreme Court explained on July 1. And the First Amendment to the US Constitution enshrines the right to express opinions, with which state laws may not interfere.

The US Court of Appeals for the Third Circuit has now drawn on this ruling: if an operator uses algorithms that themselves make statements ("expressive algorithms"), the operator can be held liable for those decisions; Section 230 only protects against liability for third-party statements. The situation is different for algorithms that make selection decisions based on user input or previous user behavior; the classic example is a search function into which users enter search terms of their own choosing. For the resulting output, in the court's view, Section 230 does provide protection.

Since the plaintiff claims that TikTok's algorithms are of the expressive kind, the federal district court may not simply dismiss the case, says the appeals court. It therefore sent the case back, and the district court must determine whether the suggestion of the dangerous video on the For You page was the result of the child's prior input or of a judgmental algorithm for which TikTok could be liable. Only then can the district court decide whether Section 230 actually bars the suit. The appeals court concedes that many other US courts have interpreted Section 230 much more broadly, in favor of liability protection for online services.

Meanwhile, judges of the US Court of Appeals for the Ninth Circuit have taken a completely different approach to holding service operators liable for third-party statements despite Section 230. This circuit includes the states of California and Washington, home to the headquarters of numerous IT companies. Under this approach, host providers can be held liable for promises to prevent certain postings – even if the operator (or its algorithms) did not select the postings at all.

The starting point is a legal question the court of appeals decided back in 2009 (Barnes v Yahoo). At the time, a woman contacted the head of Yahoo's PR department with the complaint that her ex-boyfriend was repeatedly creating unauthorized profiles in her name. The PR manager promised that the responsible department would "take care of it"; nevertheless, the ex-boyfriend was able to keep setting up Yahoo profiles in the woman's name. She sued Yahoo and succeeded in having the lawsuit heard despite Section 230, because Yahoo had made an enforceable promise.

This year, the judges have taken up that old argument and expanded it considerably: operators should be liable for third-party content not only when they make a direct promise to remove it, but also when the promise is of a general nature. Two such cases have become known.


In Calise v Meta Platforms, Meta could be liable for advertisements placed on Facebook by Chinese scammers – in violation of Meta's rules. Facebook users who fell for the ads have sued Meta for damages. They allege that Meta deliberately ignored the advertisers' breach of contract so as not to lose the business; in doing so, they argue, Meta breached its own contract with Facebook users, in which it promises to prevent harmful content on Facebook.

In June, the court of appeals held that the alleged basis for the claim was contractual in nature; Meta was being sued not as a publisher or speaker, but as a contractual partner. Section 230 was therefore not applicable. The lawsuit thus returns to the federal district court, where the plaintiffs will have the opportunity to substantiate their allegations.

The second case concerns Yolo Technologies. This company provided an extension for the Snapchat messaging app that allowed Snapchat users to post public questions; third parties could respond without revealing their identity. (Snapchat operator Snap has since deactivated Yolo's extension.) In its description of the extension, however, Yolo threatened to reveal the identities of harassers in cases of harassment.

In several cases of children being harassed, Yolo allegedly failed to follow through: according to the complaint, requests for the names of the perpetrators went unanswered in at least two cases. Three children and the estate of one child now want to hold Yolo liable for harassing postings by unknown persons. Here too, unlike the court of first instance, the appeals judges believe that Section 230 does not protect against having to take responsibility for strangers' postings. Whether the children can rely on Yolo's announcement that it would disclose certain names must now be clarified by the district court.

Seen in this light, it will likely be possible in almost all cases to find grounds that catch operators of interactive services not as publishers or speakers, but on some other basis. After all, the terms and conditions of virtually all reputable services contain clauses designed to prevent criminal content.

The Ninth Circuit's case law thus threatens to pull the rug out from under Section 230: hardly any scope would remain for the provision, which can hardly be what legislators intended. If other federal circuits firmly oppose this interpretation, it becomes more likely that the US Supreme Court will take up the issue as well. On the legislative side, the view that Section 230 should be reformed is widespread; however, opinions differ widely as to how, which is why Section 230 has persisted for almost 30 years.

The case against TikTok is Taiwanna Anderson et al v TikTok and ByteDance. The interlocutory decision of the U.S. Court of Appeals for the Third Circuit carries docket no. 22-3061. The case now returns to the U.S. District Court for the Eastern District of Pennsylvania, where it is pending under docket no. 2:22-cv-01849.

(ds)


This article was originally published in German. It was translated with technical assistance and editorially reviewed before publication.