
A growing army of online trolls is using dangerous lies to take down executives and companies. Now they’re coming for you.

June 2, 2022, 10:07 AM UTC
Illustration by Nazario Graziano

It didn’t take long for the conspiracy theorists to weave a fresh tragedy into their twisted narrative. Just hours after a disturbed 18-year-old armed with an AR-15 assault rifle and racist hate walked into a Buffalo grocery store on Sunday, May 15, and murdered 10 innocent people, the mass shooting was already being reimagined as part of a plot involving some of the world’s largest companies.

The thread is convoluted, but it boils down to this: A rising number of zealots in the internet’s back alleys, like 8kun, BitChute, and GETTR—egged on by the scare-mongering pundits Alex Jones and Tucker Carlson—insist the hate crime was a false flag operation orchestrated by U.S. federal agents, who trained the shooter and arranged the attack as a means of rekindling public calls for gun control. “It’s really just an exemplar of how the far right tries to shift attention away from something that just happened,” says Welton Chang, the founder of A.I. startup Pyrra Technologies, which is positioning itself to help organizations arm themselves against disinformation. “In the post–Sandy Hook era, the false flag stuff is pretty much automatic whenever there’s a mass shooting.”

But that was just the beginning. The tangled narrative quickly spiraled outward to include a supposedly orchestrated food shortage and the current nationwide dearth of baby formula. It implicated the Biden administration and Abbott Laboratories, maker of Similac, for shuttering a plant and diverting formula to illegal immigrants. The action, according to the conspiracy theorists, was devised to increase breastfeeding as a means of spreading COVID vaccines to newborns. Some of the largest institutional investors, including Vanguard, Fidelity, and State Street, as well as the World Economic Forum, are also wrapped into the web as would-be puppet masters—of Abbott in particular and, more broadly, the global economy. (For more on how the World Economic Forum has become a target of conspiracy theorists, read this story.) Chang shows me several hateful and violent screeds that attempt to tie it all together, including this gem. “Instead of shooting ten random people,” it reads, “that Buffalo schizo should have gone for BlackRock.”

If Chang appears frustrated while trying to explain the inexplicable insanity on the screens in front of him, it’s not for a lack of intellect. He’s one of the most highly educated people you’ll ever meet, with a Ph.D. and an MA from the University of Pennsylvania, an MA from Georgetown, and a BA from Dartmouth. Nor does he lack experience with complex ideas. He was a senior researcher at the Johns Hopkins Applied Physics Laboratory and the CTO of Human Rights First, and he spent a combined nine years as a Defense Department analyst and Army intelligence officer. But over several conversations that span months, his brow is almost continually furrowed. He started Pyrra in hopes of helping organizations battle precisely this type of narrative warfare, which he considers to be nothing less than an existential threat to society. Less than a year in, he appears a little weary. 

Welton Chang, the founder of A.I. startup Pyrra Technologies, wants to help companies battle against disinformation.
Ryan Donnell for Fortune

Chang tells a quick story about the road he’s traveled to exasperation. It was 2011, his second operational tour in Iraq. He was leading a team charged with training Iraqi intelligence officers. “They would bring us these conspiracy theories about Iran and then try to write it up as analysis to send to their bosses. And we were like, ‘Guys, this is not how this works,’ ” he says with a small chuckle. “Little did I know that would be, like, the future of analysis. Just write up whatever you think and call it official. These people have built this mental playground where they’re the heroes, they’re the ones holding the line against the forces of evil.”

Who are “these people”? They’re a decentralized cabal of grifters, mercenaries, profiteers, politicians, think tanks, activists, and nation-states. Their motives are varied: market manipulation, wealth, attention, power, even systemic destruction. With a legion of malcontents and trolls at their disposal, they goose stock prices, rankle executives, sway policies, and, perhaps more than anything else, sow confusion about right and wrong, real and fake, truth and lies. They’re bent on making you doubt your own eyes, ears, and instincts. And they’re winning. We’ve become a nation divided in virtually everything, save for our collective cynicism. We no longer trust government, the media, medical institutions, or experts of any kind. We don’t trust our money, the justice system, regulators, or the rule of law. According to the 2022 Edelman Trust Barometer, there’s one institution that has gone relatively unscathed: the corporation. Because people identify with the brands that serve and reflect them—until they don’t. And now, well, there’s just no easy way to say this: Now they’re coming for you.

Chang provided a complimentary Pyrra account during my reporting, as well as help finding and untangling various threads. His A.I. software makes it easy to search across fringe websites and user forums by company and by narrative. It serves up daily reports ranking comments according to their degree of hatred and violence. It doesn’t take long to uncover dark fables that put some of the world’s most powerful companies—Amazon, Apple, Microsoft, Tesla, and Walmart, to name a few—in the crosshairs.

Which shouldn’t be surprising. There’s a staggering opportunity to inflict damage on companies through narrative combat—in part because the typical enterprise is woefully unprepared to defend itself. It’s difficult to identify sources of misinformation (false perceptions) and disinformation (the willful, systematic spread of lies), and even harder to track momentum. There is no department to handle damaging memes when they achieve escape velocity. Vitriol and lies can be matters for brand reputation, PR, or crisis comms. But they can also invoke IT, HR, legal, risk management, and security. Which means they’re everyone’s problem—and no one is responsible.

Compared to government targets, the private sector has an even richer and larger playing field and a … far more vulnerable audience. And it’s only going to get worse.

Paul Kolbe, director of the Intelligence Project at the Belfer Center for Science and International Affairs, Harvard Kennedy School

Crazy theories often peter out. But not always. Take the QAnon-driven theory in 2020 that Wayfair was engaging in child sex trafficking via $15,000 cabinets. Whether it started with nefarious intent or as a twisted joke, the rumor gained traction at the hands of unwitting coconspirators who shared it across social media. The campaign’s immediate damage was largely confined to child-trafficking hotlines, which were flooded with false alerts. But nearly two years later, Wayfair remains a euphemism on far-right message boards. As I witnessed in my Pyrra reports, to say someone “shops at Wayfair” or “has a bedroom full of Wayfair closets” is to call him a pedophile. (A Wayfair spokesperson declined to comment.)

When I ask Chang for telltale signs of when disinformation is likely to boil over, he cites a cozy relationship between trolls and Fox News, especially Tucker Carlson. He suspects that someone on Carlson’s staff “reads these boards, and they see a little thread that starts to pick up steam,” Chang says. “The ‘Don’t Say Gay’ bill was nothing when it was first introduced and then within a day of the Tucker piece, it was thousands of matches. Next thing you know, Tucker is talking about Disney empowering pedophiles. It becomes this feedback loop that gets us further and further away from an agreed form of reality—and this is how we’ve arrived at this moment where we all fucking hate each other. Oh, and now people are saying [Disney CEO] Bob Chapek is going to get fired.” 

To check Chang’s perspective with someone who has less of a vested interest, I call Paul Kolbe, director of the Intelligence Project at the Belfer Center for Science and International Affairs at Harvard’s Kennedy School. He spent 25 years in the CIA, including stints in leadership in the former Soviet Union and the Balkans. He only amplifies Chang’s concern. “Compared to government targets, the private sector has an even richer and larger playing field and a less sophisticated, far more vulnerable audience,” he says. “And it’s only going to get worse.” 

Spreading disinformation at alarming speed

“2016 was a ‘holy shit’ moment. You had Brexit and Trump, and the experts and the polls were all wrong. Everyone was asking, what the fuck happened?” That’s Chris Perry, speaking from his home office in New Jersey. Perry grew up outside Detroit and joined his dad’s graphic design business after college only to see it wiped out by the advent of digital media. He knows viscerally how companies can be obliterated by advances in technology, and now, as the chairman of Weber Shandwick Futures, he’s trying to prevent it en masse. 

Perry recently started a practice at Weber, one of the world’s largest communications agencies, devoted to what he refers to as media security. He considers big companies to be at high risk because of how various tools and tactics have propagated and been put out for hire. Malicious actors are using A.I. and bot networks, faux press releases, and deepfakes to erode our collective ability to know what’s true. In a May report, the cybersecurity firm Nisos detailed a content distribution system code-named Fronton, built by the Russian firm 0Day Technologies, that harnesses botnets to spread disinformation at alarming speeds. The service creates so-called newsbreaks to produce “noise around a brand or company … with little to no expense.”

It’s precisely the kind of thing that Perry has been warning clients about. “They’re using tools to spread narratives not just for commercial gain, but to inflict harm on companies, to inflict harm on people,” he says.


COLLATERAL DAMAGE

In the disinformation era, online chatter can quickly drag companies into seemingly unrelated controversies. On the social media site GETTR, recent fulmination over a mass shooting in Buffalo took a detour into a conspiracy theory about Wall Street and the baby-formula shortage.

Courtesy of Pyrra Technologies

The idea behind the media security practice is to help clients see how narrative conflict works and prepare them to mitigate threats, with techniques ranging from monitoring to analysis to countermeasures. Perry won’t divulge his client list—disinformation is not a topic that companies like to be associated with, even when they’re trying to get ahead of it—but he claims many members of the Fortune 500 are represented, from automotive, food services, and defense to packaged goods, biopharma, and hospitality, as well as global NGOs. Their attraction speaks to the urgency of the problem and the pedigree of his partners, including Blackbird.AI, a startup that, similar to Pyrra, uses machine learning to understand how damaging information flows across networks and causes harm. 

Blackbird was founded by a couple of computer scientists, Wasim Khaled and Naushad UzZaman. In 2014, the longtime friends wanted to start a company that would have a positive social impact and turned their focus on the corruption of information systems by unseen forces. “We didn’t have a good sense of what was really happening,” Khaled says from his New York office. “But we started the journey of understanding the scope of the problem.”

While most companies employ a social media listening tool like Brandwatch or Sprinklr, Blackbird built an engine to track emerging threats across the open, dark, and gray webs. It creates reports that go well beyond mentions or word clouds. The team spent six years building proprietary A.I. and network analysis technologies to parse billions of data points and events across millions of conversations, in multiple languages. “The goal has always been to proactively surface root causes and signals to understand manipulation,” Khaled says, “rather than just reacting after something breaks the surface and causes damage.” 

The fruit of their efforts is the Blackbird Signals Framework, which classifies information operations on four key parameters: narrative, cohorts, manipulation, and influence. Narrative is the story line around a topic or organization. For example: “5G towers cause COVID.” Cohorts is the community pushing the narrative—in this case, anti-vaxxers. Manipulation is a description of whether the narrative is being driven by humans or “anomalous” behavior. “This could include things like bot networks, a small number of accounts coordinating to spread hoaxes and conspiracies,” Khaled says. Influence ranks the key nodes shaping the narrative. “Sometimes these can be harmful actors. But in some cases, they can be trusted voices.”
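To make the framework concrete, here is a minimal, purely illustrative sketch in Python of how a single information operation might be represented along those four parameters. The class name, field names, and sample values are assumptions made for illustration; they are not Blackbird’s actual schema or code.

from dataclasses import dataclass, field
from typing import List

@dataclass
class InformationOperation:
    # The story line around a topic or organization, e.g. "5G towers cause COVID."
    narrative: str
    # The communities pushing the narrative, e.g. anti-vaxxers.
    cohorts: List[str]
    # Whether the spread appears organic (human-driven) or "anomalous" (bot networks, coordinated accounts).
    manipulation: str
    # Key nodes shaping the narrative, ranked by influence; these may be harmful actors or trusted voices.
    influence: List[str] = field(default_factory=list)

example = InformationOperation(
    narrative="5G towers cause COVID",
    cohorts=["anti-vaxxers"],
    manipulation="anomalous",
    influence=["@hypothetical_account_1", "@hypothetical_account_2"],
)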

Khaled shows off a variety of Blackbird’s reports on various narratives. They’re highly complex visualizations resembling cosmic galaxies of usernames, topics, and companies connected by colored thread. The graphics engine turns and rotates the visualizations in real time, allowing a user to drill down into a given node and understand the relationship to every other. They’re as engrossing as they are frightening.

Khaled explains that the same tactics can be used by groups with all kinds of agendas—from sowing chaos to advocating a legitimate cause. As an example of the latter, we drill down into a report he created to dissect the #BoycottCocaCola campaign that spurred the exodus of American corporations from Russia after its forces invaded Ukraine. “Boycott Coca-Cola displays really high levels of anomalous activity, very likely due to a combination of bot activity and hashtag amplification,” he says.

Khaled shows me how the seed of the campaign was planted on March 4 by a few primary actors. Later in the day, many users aligned with the political left begin amplifying calls for boycotts. “Then you see a number of other companies get caught up in it, including McDonald’s and Pepsi,” he says. On March 8, hundreds of accounts can be seen celebrating Coca-Cola’s announcement that it would suspend Russian business operations. McDonald’s soon followed. “That was basically a four-day period between the tweet, the campaign, and the narrative,” Khaled says, “that resulted in pulling billions of dollars in operations out of the country.”

It’s an awesome display of the power of narrative warfare to influence business decisions—and clear evidence that it can be an effective tool on either side of the political aisle. “What I always tell people is that digital chatter leads to real-life, high-impact outcomes,” he says. Khaled admits that his team isn’t at the point of telling clients how to handle such narratives. For now, at least, Blackbird’s niche is to identify manipulated and harmful information campaigns, try to predict their acceleration and spread, and let crisis comms execs determine when to engage and how to learn from every incident. “This stuff is all in its infancy. The tools that they’re using today are evolving almost exponentially,” he says. “Threat actors can punch in a few directives and get 1,000 generated articles, 10,000 generated tweets, and do maybe three or four sides of an argument at the same time. This is going to be hard. And it’s going to be a constant kind of battle between people who have incentives to drive these campaigns, and people who are trying to detect and get ahead of the problem.”

Getting ‘doxed’ on your birthday

For Chris Krebs, the disinformation problem is personal. The former director of the Cybersecurity and Infrastructure Security Agency at the U.S. Department of Homeland Security, and a lifelong Republican, created a CISA website to debunk election disinformation. In that capacity he dismissed various election-related conspiracy theories and publicly declared the 2020 election “the most secure in American history.” You probably remember him as the guy whom Donald Trump fired via Twitter.

We’re chatting over a charcuterie plate at the Cheesetique in Alexandria, Va., along with one of his analysts, Isabella Garcia-Camargo. After his dismissal, Krebs started a consultancy with Alex Stamos, the former chief security officer at Facebook, called the Krebs Stamos Group, and cochaired the Aspen Institute’s Commission on Information Disorder. I ask how it felt to be a victim of populist pitchforks. “I still am. Garrett Ziegler doxed me a few weeks ago, on my birthday,” he says, referring to a former White House adviser and the practice of publishing a private citizen’s contact information online. “I had a few families over and we’re watching basketball, and all of a sudden my phone was starting to melt down.”

Krebs has turned his personal experience into his livelihood—helping companies and their leaders prepare for the highly personal nature of the attacks that are coming. “The targets are the companies and the brands and their executives. The actors could be foreign governments, influencers, and conspiracy theorists,” he says. Competitors, too. “Information is the new battle space. Think about companies making affirmative declarations about staying in the Russian market. There’s an element of disinformation—name and shame. Some may be legitimate social responsibility campaigns, but you’re also seeing competitors trying to get a leg up and diminish the brand of an opponent.”

Chris Krebs was fired by Donald Trump for refuting claims that the 2020 election was stolen; now he helps companies fight disinformation.
Greg Nash—Getty Images

Some companies, according to Garcia-Camargo, are better equipped than others. A vaccine maker, for example, has “monetization methods and libraries of content to address the anti-vaccine movement,” she says. “But when you see a company like Wayfair or Nike get attacked, there’s not an anti-sneaker movement, so it’s very difficult [for them] to really understand the threat that they’re going to be facing.”

An important first step for companies is figuring out when to engage with an emerging narrative thread. There’s no way to anticipate the ramblings of every basement dweller or Russian sovereign wealth fund. But Krebs thinks the best investment a company can make is to build the expertise to monitor the proper channels, to understand the difference between a campaign rising on, say, Telegram, versus Facebook, and integrate the responsibilities into the workflow. “That’s the piece that’s really missing right now. This is a different flavor of crisis communications,” he says. “There’s also the insider threat. Traditionally, you would have thought of them as whistleblowers, but now you’re seeing a lot of that with a politicized, conspiracy theorist flavor to it. You have to be careful about managing data in your organization.” 

Garcia-Camargo stresses the importance of safeguarding employees. She tells a story of spending two summers as a Facebook intern. On her first tour, everyone had Facebook-branded swag. The following year, during the Cambridge Analytica scandal, the logos went away. “As you were walking out the door,” she says, “the doorman would make sure you took off your badge.”

It’s a move straight out of the playbook in operational intelligence, where government employees and spies, who are often targeted for intelligence collection, are under strict orders not to broadcast their affiliation. “We have to start extending that mindset. You have to be thinking about what information and indicators you’re putting out there that other people can pick up on,” Krebs says. “It’s not just about where you work. It’s your phone number, your kids’ names, your address.” 

If he sounds paranoid, consider how death threats, emails, and letters to the house will shape one’s mindset and behavior. Krebs walked into our meeting, in an empty restaurant on a rainy day in sleepy Alexandria, with a ball cap pulled down to his eyebrows. “I used to care what people said, the feedback I’d get on my testimony, or my speeches,” he says. “Now, I’m like, ‘Hey, call me a dumbass as much as you’d like. As long as you’re not threatening me, my kids, or my dogs.’ ”

Ads for big brands on sketchy sites

Over the course of dozens of interviews, I have many conversations about the role that Facebook, Google, Twitter, and TikTok play in the spread of false narratives, lies, and hate speech. They all have talking points and issue policy statements about their efforts to counter misinformation. But most of the experts I speak to believe those companies will never stifle disinformation until they’re forced to do it. The consensus on Facebook, now part of the renamed Meta Platforms, is especially harsh: While it makes a big deal of scouring its platform, Facebook has made repeated, deliberate choices to inflame the masses knowing that sensational content draws views, clicks, and shares. Its business model is highly profitable cynicism on a global scale.

This is not acceptable to Steve Brill. Brill is the curmudgeonly onetime founder of Court TV and the defunct media-focused magazine Brill’s Content, among other ventures. He’s currently the cofounder of NewsGuard, a four-year-old company that employs dozens of journalists to read and score news sites based on accuracy and reliability. Microsoft licenses NewsGuard rankings to help its news teams vet which sites to include in their aggregations. But so far Brill has gotten stiff-armed by social media platforms. He accuses those companies of hiding behind opaque content-monitoring algorithms and avoiding accountability. “If Delta knew that someone got on a plane the week before and tried to stab the flight attendant or open the exit door,” he says, “they not only would try to keep that person off their planes, but one would assume they would tell the other airlines.”

Our conversation sends me down a different sort of rabbit hole, which involves the massive and unwitting exposure of companies to disinformation via programmatic advertising. The London-based NGO Global Disinformation Index regularly issues reports that enumerate the appearances of some of the world’s biggest brands on dozens of sketchy sites and implicate the ad-serving companies responsible for the placements. The organization’s April missive has the adtech company Criteo placing Walmart ads against a conspiracy-mongering column claiming the Russian war in Ukraine is part of the “Great Reset” agenda spurred by the World Economic Forum and the global elite. (For more on how the World Economic Forum is battling misinformation, read this story.) It shows Google serving an ad for Ibis hotels on a Spanish-language site connecting Hunter Biden to military biological labs, and Amazon serving up ads—for itself—against an article equating partial-birth abortions to Russian attacks on Ukrainian civilians. According to a recent Forrester report, in the past year top brands have spent $2.6 billion advertising on news sites spreading misinformation via programmatic media.

In the course of my reporting, I spend an unhealthy amount of time viewing Russian propaganda, on sites like Pravda, Zero Hedge, and RT.com. Many of the brands I associate with follow along in the form of banners, pop-ups, and video ads: Safeway, Dick’s Sporting Goods, Betterment, a boutique New York hotel where I recently stayed, United Airlines, Vivino—as well as unfamiliar brands, including the high-end kitchen and bath appliance company ZLine. I contact many of these companies to see if they know where their brands are showing up. Some don’t reply; others won’t go on the record. But a statement from ZLine’s director of marketing, Mason Watkins, reflects the consensus. “Unfortunately, there are only so many capabilities advertising platforms offer in the way of ensuring advertisements don’t appear on placements like these,” he says via email. “It certainly bothers us, especially when [it] forces us to act proactively to ensure our content is not displayed next to disinformation.”

Google refused to provide someone to speak on the record about the issue, opting instead to flood my inbox with policy statements. So I forwarded a handful of screenshots of ads juxtaposed with misinformation. A spokesman replied that the company has taken “appropriate action.” At best, it’s a frustrating game of Whac-A-Mole. Google says it provides easy-to-use tools to limit any brand’s exposure to misinformation. Advertisers seem to have little idea that they exist or how to use them effectively.

This stuff is all in its infancy. The tools that threat actors are using today are evolving almost exponentially.

Wasim Khaled, CEO and cofounder, Blackbird.AI

According to Matt Rivitz, this has been Google’s playbook for years. Rivitz is the founder of Sleeping Giants, a campaign for media and social media responsibility formed to alert brands when they inadvertently fund hate and extremism via programmatic advertising. “Brands still have absolutely no idea where they’re showing up at any given time,” he says.

Part of the problem is the complexity of adtech. The massive race to “monetize the whole internet” has, according to Rivitz, led the adtechs to abdicate any responsibility for vetting content. “Which means you’re inevitably going to serve ads up to Nazi sites, to pedophilia sites, to terrorist sites, to Russian state sites, disinformation sites, you name it,” he says.

Rivitz blames the dynamic on a lack of regulation and an abdication of responsibility. “It’s the people providing the supply. A TV network is not going to air random shows from Nazis in the eight o’clock hour because they’ve just given up any kind of control,” he says. “Google, up until six weeks ago, had ads running on Tass.com, RT, and Sputnik. They were supporting propaganda, using someone else’s money to do it, and they were making money themselves.”

Lately, Rivitz has been trying to move beyond activism and into helping brands through a venture called NOBL. Almost a mix between Brill’s NewsGuard and a startup like Pyrra or Blackbird, it uses A.I. to analyze text and determine a web page’s credibility so advertisers land only on quality pages. “It’s never going to eliminate all disinformation or all extremism,” he says. “But I think that ultimately, what you’re spending your money on is what we get more of. If you spend money on everything, we’re going to get more disinformation, we’re going to have more extremism, we’re going to have more harassment, and we’re going to get more health disinformation that’s gonna kill all of us.” 

Looking for solutions

Disinformation is so scary because it’s the umbrella threat that amplifies the danger of all other systemic and existential threats. By flooding the zone with counternarratives, “dis-informers” cause confusion and, most of all, attract attention. They also impart a sense of agency to those who have none. The masters of the discipline are highly skilled at using hyperbole and uncertainty to create the perception of conflict. We’re all naturally attracted to conflict and extreme thoughts. We provide our attention in search of a resolution—and then we’re hooked. 

This is how disinformation is weaponized to sow uncertainty about climate change and to cause rational, highly vaccinated people to shun COVID shots. It’s a superpower to fan the flames of racism and hatred, to propagate financial corruption and to instigate real-world violence. And if you stand for something as a business that runs afoul of a special interest—it could be gender equality, the right to choose, environmental responsibility, or any other number of values you or your employees hold dear—it’s a force to be reckoned with. 

So what do you do about it? After talking to dozens of people, I was unable to find any convincing strategies. It’s early days, to be sure—at least for the good guys. There was general agreement that boils down to the old maxim, “If you can’t measure it, you can’t improve it.” So step 1 is to be aware of what’s being said in the far corners. The more data we gather, and the better we understand how memes evolve, the better we will become at knowing when to take action and when to hold back. Experts warn that while it may be tempting to turn a blind eye to a bubbling, hateful meme in hopes that the truth will prevail, that’s becoming an increasingly fraught strategy in a world where truth no longer exists. (Moderna, for example, has countered anti-vaccine misinformation online by flooding the zone with facts and science.)

The optimistic take says the battle will evolve along the lines of cyberwarfare. We’ll build a robust line of defense in the same way we protect ourselves against hackers. We’ll develop experts, form corporate departments, and establish clear reporting lines. We’ll create best practices and spend a lot of money to, if nothing else, ensure the survival of the social structure that enables businesses to thrive. Of course our cybersecurity systems aren’t perfect, but they are robust, evolving, and lucrative. Cybersecurity revenue is expected to approach $400 billion by 2028. By comparison, according to PitchBook, $300 million in venture funding has been directed at startups battling misinformation in the past 18 months. We’re not talking about EV- or crypto-style funding, but at least it’s a marked increase over the $6 million that went into the space in 2020. 

The startups getting funneled that money comprise a diverse and extremely intelligent collection of white hats. They’re migrants from national security, military intelligence, communications, computer science, journalism, and even theoretical and computational physics. They’re steeped in complex systems theory, data analytics, programmatic advertising, and artificial intelligence. They’re also former corporate executives, communications consultants, and government operatives—all intent on helping to develop accepted and effective strategies to defend against the rising bile in the internet’s darkest alleys. There are surely easier, more pleasant ways to make a living. But for these self-chosen disinformation warriors, none more important.

Jeffrey M. O’Brien (@jeffreyobrien) is cofounder of the Bay Area storytelling studio StoryTK.

A version of this article appears in the June/July 2022 issue of Fortune with the headline, “The Big Lie is Coming for You”
