
Facebook’s Security Chief Says Critics Are Unfair. Is He Right?

October 9, 2017, 3:35 PM UTC

Facebook’s chief of security Alex Stamos took to Twitter over the weekend in a candid outburst that dressed down journalists and academics for making facile assumptions about how tech firms operate.

The Twitter rant is notable because tech executives—especially at Facebook—rarely speak without the blessing of a public relations team. If Stamos’s 18-part tweet storm was indeed an organic outburst (and not a pre-planned PR tactic), it’s an interesting retort to the legions of haters taking aim at Silicon Valley. But before deciding if Stamos has a valid point, it’s worth taking a closer look at what he said.

Stamos begins by criticizing Quinta Jurecic, an author at the well-respected Lawfare blog, for saying Facebook could do a better job of using algorithms to screen controversial ads (like the ones bought by Russia to incite U.S. voters). He points out that such algorithms could have biases of their own.

Stamos goes on to note that algorithms are necessary to sort the huge pile of information on the Internet, but that relying on machine learning to police it is perilous. For instance, using machine learning to censor ads could create an automated “Ministry of Truth” ripe for abuse by governments.

The Facebook exec also suggests Silicon Valley’s critics are inflamed by biases of their own, and are prone to conflating discrete issues into a half-baked theory of everything.

Stamos signs off with another “careful what you wish for” warning, implying that government-imposed solutions could be worse than the problems the tech industry is now confronting.

All of this went over well on Twitter as tech types praised Stamos for speaking like an ordinary person, and echoed his claim that journalists misunderstand the challenges Facebook confronts.

Still, it’s hard to shake the suspicion that Stamos’s Twitter stance is part of Facebook’s larger mission at the moment, which is to keep regulators away from the platform.

His twin messages of “trust us, we’re good people” and “this is too hard for you to understand” may be sincere, but they also reinforce Facebook’s recent claims that it can bring its runaway algorithms under control, and that there’s no need for the government to do so.


Stamos also appears intent on deflecting the conversation away from hard questions about Facebook’s business model and its de facto monopoly on social media. As sociologist Zeynep Tufekci (possibly the unnamed academic whom Stamos pillories in his tweet storm) has pointed out in the New York Times, the company has a vested interest in preserving the status quo:

Here’s the hard truth: All these problems are structural. Facebook is approaching half-a-trillion dollars in market capitalization because the business model—ad-targeting through deep surveillance, emaciated work force, automation and the use of algorithms to find and highlight content that entice people to stay on the site or click on ads or share pay-for-play messages—works.

Stamos doesn’t address these points, nor does he bring up antitrust remedies, which could address the possibility that platforms like Facebook have become too big to control in the first place.

As for those who work at Facebook, Stamos is right that critics are quick to see them, unfairly, as stereotypes. Contrary to recent caricatures, many Facebook employees are thoughtful, well-read, and mindful that what they build has a profound effect on the world. Nonetheless, at this point, it will take much more than a series of tweets for public perception of the company to change.