This past Sunday, Steve Stephens filmed himself killing 74-year-old Robert Godwin and posted the video to Facebook minutes later. The video remained accessible through his personal Facebook account for roughly two hours; Facebook disabled the entire account within 23 minutes of the first user report, according to an official statement the company posted on Monday.
During the two hours that the video remained on Stephens' Facebook page, it was widely downloaded and went viral across Internet platforms. This led to swift criticism of Facebook, which has faced scrutiny in the past when graphic videos, including gang rapes, killings by police, and suicides, remained posted on the site for hours, days, or weeks.
This week’s tragedy has renewed the debate over the level of responsibility social media companies like Facebook have in monitoring and permanently removing graphic content.
Currently, these responsibilities are primarily ethical, not legal. While people have long been able to sue for emotional distress after witnessing a family member's death, these cases are hard to win even outside the Internet context. In recent years, relatives have sued television stations that live-aired the deaths of loved ones, though they often lose in court.
While it isn’t clear how emotional distress lawsuits would play out in a social media setting, there are other cases testing the legal limits. Facebook, for example, has been sued multiple times by “revenge porn” victims after sexually explicit photos or videos were shared on the site without their consent. These cases have also been largely unsuccessful.
Companies like Facebook usually win these lawsuits because they are protected by Section 230 of the 1996 Communications Decency Act (CDA), which shields them from liability for graphic content posted by users. Under Section 230, social media companies are not treated as the speakers or publishers of information shared by their users. It also provides that such companies cannot be held liable for decisions they make about whether to remove content. Together, these provisions eliminate most avenues people have to sue.
In addition, the Federal Communications Commission (FCC), a government agency created to regulate interstate communications, only creates federal rules and regulations for things like radio and TV, not social media companies. With TV, for example, the FCC can determine what material is inappropriate and fine a company for violating those rules. It can also require delay safeguards for live TV. No such regulatory mechanisms exist for social media.
Given that neither the law nor federal agencies provide clear legal parameters for companies like Facebook, such companies are largely left to self-regulate based on their own ethical commitments. Facebook's community standards prohibit "graphic images when they are shared for sadistic pleasure or to celebrate or glorify violence," but the company relies largely on users to report inappropriate content. These reports are then reviewed by moderators, who determine whether the content violates community standards.
Facebook (FB) has acknowledged problems with its screening processes. It has received significant backlash for decisions to delete and deactivate accounts featuring protest activity and conflicts between people and police. It has also received complaints about inconsistent moderation of posts involving nudity.
Moving forward, Facebook has much work to do to address how it monitors and removes violent and graphic content. CEO Mark Zuckerberg admitted this during the company's developer conference, saying they "have a lot more to do here." And in Monday's statement, Facebook's vice president of global operations, Justin Osofsky, noted that the company is developing artificial intelligence to prevent such videos from being reshared in their entirety and is working to improve its review processes. Though these statements are perhaps a move in the right direction, Facebook must do more.
For instance, Facebook could implement a delay system similar to those used by TV networks. TV delays for live broadcasts are generally between five and 10 seconds, giving moderators a window to regulate what viewers ultimately see. While hiring enough human moderators to cover Facebook's nearly 2 billion users is implausible, artificial intelligence could potentially handle the task, and could do more than simply shorten what can be shared.
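To make the broadcast-delay idea concrete, here is a minimal conceptual sketch in Python. It is purely illustrative and not Facebook's actual system: the class name, the frame-count stand-in for a five-to-10-second delay, and the `censor` hook for a human or automated moderator are all assumptions introduced for this example.

```python
from collections import deque


class BroadcastDelayBuffer:
    """Illustrative sketch of a TV-style broadcast delay.

    Incoming live frames are held in a buffer before release, giving a
    moderator (human or automated classifier) a short window to drop
    flagged content before viewers ever see it.
    """

    def __init__(self, delay_frames=10):
        # delay_frames stands in for the 5-10 second delay used on live TV
        self.delay_frames = delay_frames
        self.buffer = deque()

    def ingest(self, frame):
        """Accept a live frame; return the frame leaving the delay window
        (now safe to broadcast), or None while the buffer is still filling."""
        self.buffer.append(frame)
        if len(self.buffer) > self.delay_frames:
            return self.buffer.popleft()
        return None

    def censor(self, is_flagged):
        """Drop any buffered frames that the moderator flags before they
        exit the delay window."""
        self.buffer = deque(f for f in self.buffer if not is_flagged(f))
```

In this sketch, a classifier would call `censor` on the buffered frames during the delay window, so flagged material is discarded before `ingest` ever releases it to viewers. Scaling such a mechanism to billions of simultaneous streams is exactly where automated review, rather than human moderators, would have to carry the load.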
It may also be time for the FCC to step in and regulate social media companies like Facebook, to the extent it has the authority to do so. While the agency has been hesitant in recent years, public sentiment may dictate that the federal government step in to provide parameters.
All of this, of course, would raise serious and legitimate censorship and free speech concerns. If social media users don’t want to see rapes, murders, and suicides as they scroll through their feeds, however, something must change.
Shontavia Johnson serves as the Kern Family Chair in Intellectual Property Law and directs the Intellectual Property Law Center at Drake University Law School. She curates content related to law, innovation, and policy.