When MSNBC host Joy Reid saw a tweet decrying a racist incident this summer, she responded like many other people—she retweeted it.
The tweet in question came from an activist and showed a photo of a woman in a Make America Great Again cap appearing to berate a 14-year-old Latino boy. A caption implied she shouted “dirty Mexican” and “You are going to be the first deported,” and urged Twitter users to “spread this far and wide” because “this woman needs to be put on blast.”
Unfortunately for Reid, whose retweet broadcast the message to her 1.2 million followers, the tweet was wrong. The woman in the image, Roslyn La Liberte of Southern California, had said nothing of the sort.
The teenager in the picture later explained he and La Liberte had a civil conversation, and said the pair even hugged.
Five days after the retweet, Reid acknowledged the mistake by tweeting a news story that described what really happened.
By then, however, La Liberte had received hundreds of vitriolic emails that called her vile names and threatened her with assault. She also received menacing voicemails, including one from a man who shouted, “I will smack you upside your f**king head you stupid f***ing c**t.”
Sadly, this is all too common on Twitter: Someone posts a false and inflammatory tale, others retweet it, and an online mob descends on the unlucky target. This episode stands out, however, because La Liberte is suing Reid in federal court for allegedly defaming her with the retweet.
La Liberte may have a case. While judges have been inclined to treat inflammatory tweets (including those of Donald Trump) as opinion or hyperbole—types of speech that don’t count as defamation—that doesn’t mean you can’t libel someone on Twitter. Falsely portraying someone as a vicious racist could certainly qualify.
Reid, of course, didn’t do that. Instead, she just used Twitter’s retweet button to repeat what someone else said. The law, however, might not see a difference between tweeting and retweeting.
Lawyer Ed Klaris, who runs a media and intellectual property firm in New York, doesn’t see a distinction.
“The traditional rules of re-publication apply. You as a tweeter are very much a publisher,” says Klaris. He likens the situation to a newspaper that prints a letter to the editor that contains false and defamatory information. In such a case, the target of the letter can sue both the letter writer and the newspaper.
Or, in the context of Twitter, La Liberte can sue the author of the tweet as well as Reid for republishing it via her retweet. Klaris isn’t the only one who sees it this way; a recent Hollywood Reporter story cites lawyers who think Reid will lose the case.
If a judge agrees with this interpretation, the consequences could be enormous. A victory for La Liberte would create a new danger not only for journalists, but for many other Twitter users who inadvertently retweet false information from time to time.
Courts Silent on Retweets
La Liberte’s lawsuit doesn’t specify how much money she’s seeking over Reid’s retweet, but it does state the claim is worth at least $75,000.
There’s no guarantee La Liberte will prevail, of course. In response to a request for comment, Reid’s lawyers sent Fortune a document arguing that the case should be thrown out, and that La Liberte should pay damages for filing a frivolous lawsuit.
This isn’t just wishful thinking. Reid’s lawyers are relying on a well-known federal law, the Communications Decency Act (CDA). Broadly speaking, the law says that “no provider or user of an interactive computer service” can be held responsible for what other people say on an Internet platform.
Many Internet entrepreneurs have relied on the CDA as a legal foundation for their business. For instance, the law ensures that Facebook isn’t responsible for criminal threats posted by its users, and that a blog owner isn’t liable for defamatory rants posted by a trollish commenter.
How the law applies to retweets is unclear, however. Even though Twitter’s retweet button has been around since 2009, no court has decided whether those who retweet defamatory claims are shielded by the Communications Decency Act.
Professor Eric Goldman, who has written extensively about the law, says retweets are clearly covered.
“It’s not even a hard case. Retweeting is just a different technical way of sharing third party content with a broader audience,” he said, citing a pair of cases involving email. In those cases, courts sided with defendants who sent or forwarded defamatory content written by a third person.
Free speech scholar Eugene Volokh, who recently published a blog post on the email cases, shares Goldman’s view. In an interview with Fortune, he added that Reid’s case is strengthened by the fact that her retweet didn’t include additional commentary endorsing the claims in the tweet.
Meanwhile, the New York lawyer Klaris disagrees that a judge will let Reid use the CDA as a shield. He argues that allowing the law to protect anyone who retweets a false statement is too broad a reading, and would make the traditional republication rule meaningless.
A Bigger Role for Twitter?
As it stands, the Reid case is troubling because either outcome will produce an unsatisfactory result. If La Liberte wins, millions of people will face legal jeopardy for the commonplace act of sharing what they see on social media—a situation that would chill free speech. But if Reid wins, there is little to dissuade people from contributing to online mob behavior of the sort that dragged La Liberte through the mud.
This raises the question of whether Twitter and other online platforms should do more to stop false and defamatory information from going viral in the first place. One idea for addressing the problem, suggested incidentally by a former Fortune editor, is a warning system that would let those in Reid’s situation respond more promptly: by broadcasting a correction (as Reid did, but only after five days), and by removing the original retweet or shared post from their social media feeds.
This wouldn’t stop people from sharing defamatory content altogether (it’s just too easy when all it takes is clicking “retweet” or “share”), but it would certainly mitigate the problem. Would the social media companies even consider offering such a notification tool, though?
“This is an ongoing legal issue. We don’t have a statement to share,” said a Twitter spokesperson in response to a question about Twitter’s obligations in the Reid case.
The response is not surprising. Social media platforms have long seen policing user posts as a political minefield, and are wary of becoming arbiters of what counts as defamation or fake news.
In this vacuum of authority, Klaris predicts courts may become more willing to interpret the CDA in a way that curtails the law’s protection. He acknowledged, however, that courts have declined to do so in the past, and that the issue may ultimately be a matter for Congress.
Volokh, the free speech scholar, pointed to legal precedents establishing a broad scope for the law, and says any changes should come from lawmakers, not the courts.