TikTok is hiring policy managers for ‘shocking and graphic content’ as it tries to combat the awful side of social media 

TikTok CEO Shou Zi Chew
Celal Gunes/Anadolu via Getty Images

Hi there, it’s tech reporter Alexandra Sternlicht.

TikTok is staffing up in its fight against awful content. 

You could become TikTok’s “North America Product Policy Manager for Shocking and Graphic content,” which involves watching “deeply disturbing content on a daily basis,” according to a recent TikTok job posting. This includes viewing and creating policy for images and text related to “death, injury, torture, mutilation, and animal abuse.” 

You’d work in New York City and earn at least $93,000 annually.

Hate NYC? TikTok is also hiring for regional roles of a similar nature in Austin and San Jose (the lucky person who gets the California job will make at least $113,777 annually). The new hires will “address some of the most objectionable and disturbing content” with the goal of promoting a “positive and safe environment for all of [TikTok’s] users.”

These job openings appear to be part of a larger effort by TikTok to police the most pernicious content on its service. Globally, TikTok has 265 open job postings that include the word “torture,” 263 mentioning “sexual abuse,” and 258 citing “bestiality” and “murder,” giving masochists plenty of opportunities to pursue their passions. 

Every major social media company has teams that handle trust and safety. Generally, however, the job postings use language that somewhat masks the ugly realities of the work. 

That said, it’s been well-documented that content moderators, often low-paid contractors based in developing countries, have faced long-term psychological damage from viewing the most awful content on the internet. In its job postings, TikTok acknowledges the psychological impact of the work, spelling out that one qualification for candidates is having a “resilience and commitment to self-care in order to manage the emotional demands of the role.” 

These workers will be part of the company’s trust and safety organization, tasked with staying up-to-date on emerging social media trends to “predict and prevent violations” of TikTok’s community guidelines. They are also responsible for drafting, analyzing, and implementing content policies for shocking and graphic content in the U.S. and Canadian markets. 

Beyond resilience and five years of relevant experience, TikTok says candidates should have a “passion for limiting user exposure to some of the most harmful content” and be “optimistic, principled, solutions-oriented, and self-starting,” among other things. 

It’s quite a moment for TikTok to have posted these job openings. During a Congressional hearing last year, a lawmaker confronted TikTok CEO Shou Zi Chew with disturbing content that a New York boy consumed on the platform before dying by suicide. In response, Chew said TikTok takes mental health issues “very seriously” and provides resources “for anyone who types in” suicide-related searches. That month, the boy’s parents filed a wrongful death suit in Suffolk County Supreme Court against TikTok and its China-based owner, ByteDance. In October, the New York judge moved the case to a different court for “lack of subject matter jurisdiction.” 

All this comes as TikTok fights for its future in the U.S. after President Joe Biden signed an unprecedented law that forces ByteDance to either find a buyer for its U.S. TikTok business or exit the country.

A spokesperson for TikTok did not respond to Fortune’s request for comment about the job postings. 

Alexandra Sternlicht

Want to send thoughts or suggestions to Data Sheet? Drop a line here.

NEWSWORTHY

Meta delays Europe AI rollout. Meta has reluctantly paused plans to roll out its AI in Europe, after multiple complaints to privacy regulators. The issue is Meta using the public content of Facebook and Instagram users to train its AI without their express consent, allegedly in violation of the EU General Data Protection Regulation. The Irish privacy regulator asked Meta to hit the brakes; the company said it was "disappointed" and slammed the move as "a step backwards for European innovation [and] competition in AI development."

Apple discrimination suit. Apple faces a class action lawsuit over its allegedly systematic underpayment of over 12,000 female employees in California, Reuters reports. The suit claims that female engineers, marketing professionals, and AppleCare workers get lower pay because Apple uses previous salaries and “pay expectations” to determine starting pay, and because its performance evaluation system is biased against women.

Microsoft’s Smith goes to Washington. Microsoft president Brad Smith yesterday appeared before the House Homeland Security Committee to apologize profusely for his company’s failure to protect U.S. government systems from a serious intrusion by Chinese spies, and a damaging attack by Russian intelligence. “As a company, we need to strive for perfection in protecting this nation’s cybersecurity,” he said. “Any day we fall short is a bad day for cybersecurity and a terrible moment at Microsoft.” As the Washington Post points out, some security pros are contrasting this admission with the new Windows feature called Recall, which regularly screenshots everything users are doing, and which Microsoft claims will keep that information secure. Speaking of which…

Microsoft pauses Recall rollout. The feature was slated to go live next week in the first Copilot+ AI PCs to hit the shelves, but Microsoft has decided Recall could use some more testing with the Windows Insider community first. “We are adjusting the release model for Recall to leverage the expertise of the Windows Insider community to ensure the experience meets our high standards for quality and security,” the company said, according to The Verge—but even Insiders, who are the first members of the public to test Microsoft features, aren’t going to get their hands on Recall just yet.

ON OUR FEED

“The notion that the CEO of a major, publicly traded Delaware corporation could—with the evident approval of his board—start a competing company, and then divert talent and resources from his corporation to the startup, is preposterous.”

—Excerpt from a lawsuit against Elon Musk, filed by some Tesla shareholders who are irked at his decision to found xAI, which they see as a rival AI company to which Musk has been diverting Tesla resources. As TechCrunch notes, Musk has long been claiming that his automaker is actually an AI company.

IN CASE YOU MISSED IT

The riddle of the BeReal deal, by Allie Garfinkle

A 58-year-old Canadian man stole trade secrets from Tesla and tried to sell them on YouTube, authorities say, by Amanda Gerut

Companies crave fresh data to train AI models. This startup’s recipe? Data made from scratch—by AI, by Sharon Goldman

Jeff Bezos-backed Perplexity AI wants to upend search business—but news outlets say it’s just ripping them off and inventing quotes, by the Associated Press

3 in 4 Gen Zers are interested in vocational training as uncertainty and AI shape the minds of the next ‘toolbelt generation’, by Sam Pillar (Commentary)

BEFORE YOU GO

NSA to OpenAI. OpenAI has a new board member: retired Gen. Paul Nakasone, who was until February the head of both the National Security Agency and U.S. Cyber Command. With that résumé, it’s less than surprising that Nakasone will sit on the OpenAI board’s security and safety subcommittee. TechCrunch notes that Nakasone and OpenAI both have histories of getting their hands on data they maybe shouldn’t have—and that OpenAI indicates its latest hire will help the company figure out market opportunities in cybersecurity.

This is the web version of Fortune Tech, a daily newsletter breaking down the biggest players and stories shaping the future. Sign up to get it delivered free to your inbox.