In February, three MIT cybersecurity researchers reported that they had found major security flaws in the online voting application Voatz. Offering what’s known as a “bug bounty”—a payment for anyone who discovers and reports a security hole in software—Voatz sought to encourage independent “white hat” hackers to shore up the security of its service.
But the MIT team quickly found the reward was an even bigger problem than the bug. The terms of the Voatz bug bounty, set by the company and administered through the bug reporting platform HackerOne, said researchers couldn’t test Voatz’s app itself. Instead they’d have to use a copy of the app, which the researchers said didn’t work properly. According to MIT team member Michael Specter, that would have been a threat to the validity of the research. The bounty also didn’t allow for reporting of certain kinds of attacks, a restriction the researchers argued didn’t reflect real-world conditions.
While bug bounties have become an increasingly popular part of companies’ cybersecurity toolkit in recent years, researchers have run into an array of problems with the way they are structured and managed. Critics say the programs, particularly those run with intermediaries like HackerOne and Bugcrowd, often limit the scope of researchers’ work and their ability to share findings. These shortcomings, they say, could ultimately leave important software more vulnerable to “black hats,” or malicious hackers.
Katie Moussouris, a former HackerOne executive who has also helped Microsoft start a bounty program, has publicly called attention to these issues. In a keynote address at the RSA security conference in February, Moussouris, who holds significant stock in HackerOne, said that in their current form many bug bounty programs are superficial “security Botox,” meaning they’re better for helping companies to look good than they are for actually securing software.
The leaders of bug bounty services counter that putting guardrails around bounty programs, at least temporarily, serves the larger goal of balancing white-hat ideals of total transparency with the needs of companies whose resources and reputations are on the line.
“Bug bounty programs are amazingly successful at identifying vulnerabilities,” says HackerOne CTO Alex Rice. “Getting companies working with [external] security researchers is the most important step.”
“Buying researchers’ silence”
The controversy surrounding the Voatz bug bounty isn’t an isolated case. In recent incidents involving PayPal, streaming platform Netflix, drone maker DJI, and videoconferencing software Zoom, security researchers reporting bugs through bounty programs found themselves tangled in procedural or contractual runarounds—some of them downright Kafkaesque.
In particular, researchers have been galled by nondisclosure clauses that are often part of bounties run through HackerOne and Bugcrowd. In order to submit a report to some public bounties, researchers must agree to restrictions on discussing their findings publicly. In limiting public knowledge about possible security vulnerabilities, nondisclosure clauses benefit individual companies, critics say, at the expense of broader advances in cybersecurity.
Nondisclosure clauses can be appropriate when security researchers are hired to conduct “penetration testing” under contract, critics grant. But when applied to incoming reports from the public, the clauses appear to undermine a widely accepted practice among cybersecurity researchers known as “coordinated vulnerability disclosure.”
A ticking clock sits at the core of coordinated disclosure. If a reported bug has not been fixed within a reasonable time frame, generally 30 to 90 days, it is widely considered ethical for a hacker to disclose it publicly. That norm originated in the 1990s, when independent security researchers found some companies wouldn't even acknowledge their reports of dangerous bugs. The threat of a hacking method being released publicly encouraged businesses to fix their vulnerabilities quickly.
After a bug is patched, publicly discussing it can help programmers to fix or prevent similar vulnerabilities elsewhere. As cybersecurity analyst Keren Elazari has put it, this public dialogue helps make white-hat hackers “the Internet’s immune system.” A recent survey of cybersecurity professionals by the security firm Veracode found that 90% regard public disclosure of vulnerabilities as a “public good” that improves cybersecurity overall.
Chris Wysopal, Veracode’s CTO and one of the pioneers of coordinated disclosure, worries that the rise of bug bounties is weakening that knowledge sharing among security researchers. Recent cases illustrate exactly how that is happening.
For example, when Jonathan Leitschuh discovered a serious vulnerability in the videoconferencing software Zoom last year, he had coordinated-disclosure norms in mind. Ultimately, Leitschuh chose not to pursue a bug bounty Zoom offered through Bugcrowd, because nondisclosure terms would have prevented him from talking about his findings. The bug was fixed only because he was eventually able to go public, he says.
“[Zoom’s] first response was, This is not a vulnerability,” Leitschuh says. “After 24 hours of having the media holding their feet to the fire, they admitted, Okay, it’s a vulnerability.” Leitschuh now thinks that nondisclosure clauses in bug bounties are equivalent to “buying researchers’ silence.” Zoom declined to comment for this story.
Casey Ellis, cofounder and CTO of Bugcrowd, says his company encourages its customers to be generous in their disclosure terms and pushes clients to minimize restrictions. Rice at HackerOne says he too supports disclosure in many cases, but argues that existing disclosure standards may not be all they're cracked up to be. He admits that in the Zoom case there were "clear benefits to disclosure," but says that the public and the press often misinterpret disclosures.
“I’m not sure what benefit users get from publishing a bunch of unvalidated security vulnerabilities,” Rice tells Fortune.
The nondisclosure dance
The nondisclosure terms of bug bounty programs often appear to remain in force even if a bug is deemed “out of scope”—broadly, something that’s not considered a threat and therefore won’t be fixed. But the fundamental question of what constitutes a “valid” bug speaks to one of Moussouris’s biggest critiques.
For instance, recent vulnerabilities at PayPal and Netflix were deemed “out of scope” by workers who reviewed bug bounty submissions. But the terms of the bounty programs—PayPal’s through HackerOne, Netflix’s via Bugcrowd—nonetheless restricted the researchers from publicly discussing the exploits they found.
Both findings were ultimately published without permission. Though Rice says HackerOne allows researchers to request permission to publish in such cases, the researchers who reported the PayPal vulnerability did not receive clearance to disclose.
With Netflix’s vulnerability, a Bugcrowd worker warned the researcher that he had violated the platform’s terms by tweeting about his findings after they were deemed out of scope for the bounty. In a statement, Bugcrowd said in part that “it’s important that the disclosure comes only after a discussion between the researcher and customer’s program owners so that both parties reach a mutually agreeable disclosure timeline.” In cases in which a researcher violates disclosure restrictions, Bugcrowd “work(s) with the researcher to remove this information from public forums to protect the researcher and customer.”
The Voatz case, however, has become a dramatic example of the risks of the opacity built into many bug bounties. Because Voatz is considered critical election software, the MIT team ultimately was able to report their discoveries through the U.S. government’s Cybersecurity and Infrastructure Security Agency, instead of through HackerOne.
Voatz disputed their findings and accused the researchers of acting in “bad faith.” Voatz CEO Nimit Sawhney alleges that the MIT researchers were motivated by an “urge to make a scandal” as part of a “coordinated campaign” whose “main goal is to stop any and all Internet voting.” Specter defends his group’s turning to the media “because they would be best situated to clearly and accurately communicate information to the public at large.”
By early March, West Virginia found the MIT researchers' claims credible enough that the state decided to use a different system for its May primary. On March 13, an independent research group released a second report confirming many of the MIT group's claims and finding additional issues. According to reporting by Cyberscoop, the report included critical vulnerabilities that had been submitted through Voatz's HackerOne bounty but were classified as noncritical by the election app.
Then on Monday, March 30, HackerOne announced that it was removing Voatz from the platform, the first time it had taken that drastic step. The move was apparently prompted by Voatz's handling of the MIT researchers' findings, including subsequent changes to its bounty terms that stripped legal protections from hackers testing its app.
Voatz CEO Sawhney characterized HackerOne’s move as a “mutual decision,” but the bug bounty company declined to confirm this characterization. “We work tirelessly to foster a mutually beneficial relationship between security teams and the researcher community,” HackerOne said in a statement to Fortune. “[The Voatz bounty program] ultimately did not adhere to our partnership standards and was no longer productive for either party.”
The irony of all this is that if the MIT group hadn't skirted HackerOne's nondisclosure terms and reported its findings to the federal government, the flaws in Voatz's app might never have come to light at all.
“Chaos, every single time”
Both HackerOne and Bugcrowd have a financial interest in touting the benefits of bounties, while making things easy on their customers. HackerOne, which administers programs for the likes of Nintendo, Starbucks, and Slack, has raised $110 million in venture capital since its 2012 founding. Bugcrowd, a similar service, has raised just over $50 million, and its clients include Fitbit, HP, and Motorola.
But security veterans worry that the fashion for bug bounties, including among major firms, is eclipsing more effective approaches to software security.
Davi Ottenheimer has held executive security roles at MongoDB and EMC, now part of Dell. He considers bug bounties unnecessary to serious cybersecurity, and he says he has consistently gotten good-quality bug reports from independent researchers without running formal bounty programs. “The best researchers, sure, they’ll take some money,” Ottenheimer says. “But mostly what they want is a better world.”
The Veracode survey bears that out. While 47% of responding organizations said they had a bug bounty program, on average only 19% of their bug reports came through those programs. And while 57% of respondents who had reported a bug said they expected communication about their report, only 18% expected payment.
The MIT researchers who uncovered the Voatz bugs echo that sentiment. “We were interested in figuring out how well they’d respond to the bugs we found,” says Specter. “We weren’t interested in the money at all.”
Veracode’s Wysopal feels messaging from bug bounty platforms has contributed to confusion. “They lead with [the idea that] the crowdsourced way is the best and most efficient way to secure your software,” he says. “But if you look at the economics of it, firms like Google and Facebook look at bug bounties as an add-on, a backstop, icing on the cake.
“The cake is, Let’s have trained developers with the right tools building secure software,” he adds.
Rice considers it HackerOne’s mission to advocate for the benefits of bug bounties, even if that means being flexible on ideals like transparency and disclosure.
“I cannot overstate how much work [disclosure] is for an organization and a communications team,” he says. “It’s chaos every single time…We have customers who sign up for the public scrutiny,” Rice adds, “and those who would rather not.”
But Ottenheimer says that’s exactly the problem: Companies want the good press that comes with bug bounties but without the transparency these programs should entail. “It’s about optics,” he says.
Ottenheimer points out that headline-generating bug bounties failed to prevent one of the biggest disasters in cybersecurity history. “They added a $2 million bounty to the Yahoo budget,” he says. “Yet 3 billion accounts were compromised [later that year].
“The news said, ‘Look how great they are—they spent $2 million.’ But that doesn’t map to safety at all.”