The following is an excerpt from Gary Marcus's book Taming Silicon Valley: How We Can Ensure That AI Works for Us.
The question is, why did we fall for Silicon Valley's over-hyped and often messianic narrative in the first place? This chapter is a deep dive into the mind tricks of Silicon Valley. Not, mind you, the already well-documented tricks discussed in the film The Social Dilemma, in which Silicon Valley outfits like Meta addict us to their software. As you may know, they weaponize their algorithms in order to attract our eyeballs for as long as possible, and serve up polarizing information so they can sell as many advertisements as possible, thereby polarizing society, undermining mental health (particularly of teens), and leading to phenomena like the one Jaron Lanier once vividly called "Twitter poisoning" ("a side effect that appears when people are acting under an algorithmic system that is designed to engage them to the max"). In this chapter, I dissect those mind tricks by which big tech companies bend and distort the reality of what the tech industry itself has been doing, exaggerating the quality of the AI while downplaying the need for its regulation.
Let's start with hype, a key ingredient in the AI world, even before Silicon Valley was a thing. The basic move (overpromise, overpromise, overpromise, and hope nobody notices) goes back to the 1950s and 1960s. In 1967, AI pioneer Marvin Minsky famously said: "Within a generation, the problem of artificial intelligence will be substantially solved." But things didn't turn out that way. As I write this, in 2024, a full solution to artificial intelligence is still years, perhaps decades away.
But there's never been much accountability in AI; if Minsky's projections were way off, it didn't much matter. His generous promises (initially) brought big grant dollars, just as overpromising now often brings big investor dollars. In 2012, Google cofounder Sergey Brin promised driverless cars for everyone in five years, but that still hasn't happened, and hardly anyone ever even calls him on it. Elon Musk started promising his own driverless cars in 2014 or so, and kept up his promises every year or two, eventually promising that whole fleets of driverless taxis were just around the corner. That too still hasn't happened. (Then again, Segways never took over the world either, and I am still waiting for my inexpensive personal jetpack, and the cheap 3D printer that will print it all.)
Silicon Valley hype and its rewards
All too often, Silicon Valley is more about promise than delivery. Over $100 billion has been invested in driverless cars, and they are still in prototype phases, working some of the time, but not reliably enough to be scaled up for worldwide deployment. In the months before I wrote this, GM's driverless car division Cruise all but fell apart. It came out that they had more people behind the scenes in a remote operations center than actual driverless cars on the road. GM pulled support; the Cruise CEO Kyle Vogt resigned. Hype doesn't always materialize. And yet it continues unabated. Worse, it is frequently rewarded.
A common trick is to feign that today's three-quarters-baked AI (full of hallucinations and bizarre and unpredictable errors) is tantamount to so-called Artificial General Intelligence (which would be AI that is at least as powerful and flexible as human intelligence), when nobody is particularly close. Not long ago, Microsoft posted a paper, not peer-reviewed, that grandiosely claimed "sparks of AGI" had been achieved. Sam Altman is prone to pronouncements like "by [next year] model capability will have taken such a leap forward that no one expected. ... It'll be remarkable how much different it is." One master stroke was to say that the OpenAI board would get together to determine when Artificial General Intelligence "had been achieved," subtly implying that (1) it would be achieved sometime soon and (2) if it had been reached, it would be OpenAI that achieved it.
That's weapons-grade PR, but it doesn't for a minute make it true. (Around the same time, OpenAI's Altman posted on Reddit, "AGI has been achieved internally," when no such thing had actually happened.)
Only very rarely does the media call out such nonsense. It took them years to start challenging Musk's overclaiming on driverless cars, and few if any asked Altman why the important scientific question of when AGI was reached would be "decided" by a board of directors rather than the scientific community.
The combination of finely tuned rhetoric and a mostly pliable media has downstream consequences; investors have put too much money in whatever is hyped, and, worse, government leaders are often taken in.
Two other tropes often reinforce one another. One is the "Oh no, China will get to GPT-5 first" mantra that many have spread around Washington, subtly implying that GPT-5 will fundamentally change the world (in reality, it probably won't). The other tactic is to pretend that we are close to an AI that is SO POWERFUL IT IS ABOUT TO KILL US ALL. Really, I assure you, it's not.
Many of the major tech companies recently converged on precisely that narrative of imminent doom, exaggerating the importance and power of what they have built. But not one has given a plausible, concrete scenario by which such doom could actually happen anytime soon.
No matter; they got many of the major governments of the world to take that narrative seriously. This makes the AI sound smarter than it really is, driving up stock prices. And it keeps attention away from hard-to-address but critical risks that are more imminent (or are already happening), such as misinformation, for which big tech has no great solution. The companies want us, the citizens, to absorb all the negative externalities (an economist's term for bad consequences, coined by the British economist Arthur Pigou) that might arise, such as the damage to democracy from Generative AI-produced misinformation, or cybercrime and kidnapping schemes using deepfaked voice clones, without them paying a nickel.
Big Tech wants to distract us from all that, by saying, without any real accountability, that they are working on keeping future AI safe (hint: they don't really have a solution to that, either), even as they do far too little about present risk. Too cynical? Dozens of tech leaders signed a letter in May 2023 warning that AI could pose a risk of extinction, yet not one of those leaders appears to have slowed down one bit.
Another way Silicon Valley manipulates people is by feigning that they are about to make enormous barrels of cash. In 2019, for example, Elon Musk promised that a fleet of "robo taxis" powered by Tesla would arrive in 2020; by 2024 they still hadn't arrived. Now Generative AI companies are being valued at billions (and even tens of billions) of dollars, but it's not clear they will ever deliver. Microsoft Copilot has been underwhelming in early trials, and OpenAI's app store (modeled on Apple's app store) offering custom versions of ChatGPT is struggling. A lot of the big tech companies are quietly recognizing that the promised profits aren't going to materialize any time soon.
But the abstract notion that they might make money gives them immense power; government dare not step on what has been positioned as a potential cash cow. And because so many people idolize money, too little of the rhetoric ever gets seriously questioned.
A dramatic overestimation of value
Another frequent move is to publish a slick video that hints at much more than can actually be delivered. OpenAI did this in October 2019, with a video that showed one of their robots solving a Rubik's Cube, one-handed. The video spread like wildfire, but it didn't make clear what was buried in the fine print.
When I read their Rubik's Cube research paper carefully, having seen the video, I was appalled by a kind of bait-and-switch, and said so: the intellectual part of solving a Rubik's Cube had been worked out years earlier, by others; OpenAI's sole contribution, the motor control part, was achieved by a robot that used a custom, not-off-the-shelf, Rubik's Cube with Bluetooth sensors hidden inside. As is often the case, the media imagined a robotics revolution, but within a couple of years the whole project had shut down. AI is almost always harder than people think.
In December 2023, Google put out a seemingly mind-blowing video about a model they had just released, called Gemini. In the video, a chatbot appeared to watch a person make drawings, and to provide commentary on the person's drawings in real time. Many people became hugely excited by it, saying stuff on X like "Must-watch video of the week, probably the year," "If this Gemini demo is remotely accurate, it's showing broader intelligence than a non-zero fraction of adult humans *already*," and "Can't stop thinking about the implications of this demo. Surely it's not crazy to think that sometime next year, a fledgling Gemini 2.0 could attend a board meeting, read the briefing docs, look at the slides, listen to everyone's words, and make intelligent contributions to the issues debated? Now tell me. Wouldn't that count as AGI?"
But as some more skeptical journalists such as Parmy Olson quickly figured out, the video was fundamentally misleading. It was not produced in real time; it was dubbed after the fact, from a bunch of still shots. Nothing like the real-time, multimodal, interactive-commentary product that Google seemed to be demoing actually existed. (Google itself ultimately conceded this in a blog post.) Google's stock price briefly jumped 5 percent based on the video, but the whole thing was a mirage, just one more stop on the endless train of hype.
Hype often equates more or less directly to cash. As I write this, OpenAI was recently valued at $86 billion, never having turned a profit. My guess is that OpenAI will someday be seen as the WeWork moment of AI, a dramatic overestimation of value. GPT-5 will either be significantly delayed or not meet expectations; companies will struggle to put GPT-4 and GPT-5 into extensive daily use; competition will increase, and margins will be thin; the profits won't justify the valuation (especially given a pesky fact I mentioned earlier: in exchange for its investment, Microsoft takes about half of OpenAI's first $92 billion in profits, if they make any profits at all).
The beauty of the hype game is that if the valuations rise high enough, no profits are required. The hype has already made many of the employees rich, because a late 2023 secondary sale of OpenAI employee stock allowed them to cash out. (Later investors could be left holding the bag, if profits never materialize.)
For a moment, it looked as if that whole calculation might change. Just before the early employees were about to sell shares at a massive $86 billion valuation, OpenAI abruptly fired its CEO Sam Altman, potentially killing the deal. No problem. Within a few days, nearly all the employees had rallied around him. He was quickly rehired. Guess what? Business Insider reported, "While the entire company signed a letter stating they'd follow Altman to Microsoft if he wasn't reinstated, no one really wanted to do it." It is not that the employees wanted to be with Altman, per se, no matter what (as most onlookers assumed), but rather, I infer, that they wanted the big sale of employee stock at the $86 billion valuation to go through. Bubbles sometimes pop; good to get out while you can.
Downplaying AI pitfalls
Another common tactic is to minimize the downsides of AI. When some of us started to sound alarms about AI-generated misinformation, Meta's chief AI scientist Yann LeCun claimed in a series of tweets on Twitter, in November and December 2022, that there was no real risk, reasoning, fallaciously, that what hadn't happened yet would never happen ("LLMs have been widely available for 4 years, and no one can exhibit victims of their hypothesized dangerousness"). He further suggested that "LLMs will not help with careful crafting [of misinformation], or its distribution," as if AI-generated misinformation would never see the light of day. By December 2023, all of this had proven to be nonsense.
Along similar lines, in May 2023, Microsoft's chief economist Michael Schwarz told an audience at the World Economic Forum that we should hold off on regulation until serious harm had occurred. "There has to be at least a little bit of harm, so that we see what is the real problem. Is there a real problem? Did anybody suffer at least a thousand dollars' worth of damage because of that? Should we jump in to regulate something on a planet of 8 billion people when there is not even a thousand dollars of damage? Of course not."
Fast-forward to December 2023, and the harm is starting to come in; The Washington Post, for example, reported: "The rise of AI fake news is creating a 'misinformation superspreader'"; in January 2024 (as I mentioned in the introduction), deepfaked robocalls in New Hampshire that sounded like Joe Biden tried to persuade people to stay home from the polls.
But that doesn't stop big tech from playing the same move over and over again. As noted in the introduction, in late 2023 and early 2024, Meta's Yann LeCun was arguing that there would be no real harm forthcoming from open-source AI, even as some of his closest collaborators outside of industry, his fellow deep learning pioneers Geoff Hinton and Yoshua Bengio, vigorously disagreed.
All of these efforts at downplaying risks remind me of the lines that cigarette manufacturers used to spew about smoking and cancer, whining about how the right causal studies hadn't yet been performed, when the correlational data on death rates and a mountain of causal studies had already made it clear that smoking was causing cancer in laboratory animals. (Zuckerberg used this same cigarette-industry style of argument in response to Senator Hawley in his January 2024 testimony on whether social media was causing harm to teenagers.)
What the big tech leaders really mean to say is that the harms from AI will be difficult to prove (after all, we can't even track who is generating misinformation with deliberately unregulated open-source software), and that they don't want to be held responsible for whatever their software might do. All of it, every word, should be regarded with the same skepticism we accord cigarette manufacturers.
Silicon Valley's perceived enemies
Then there are ad hominem arguments and false accusations. One of the darkest episodes in American history came in the 1950s, when Senator Joe McCarthy gratuitously called many people Communists, often with little or no evidence. McCarthy was of course correct that there were some Communists working in the United States, but the problem was that he often named innocent people, too, without even a hint of due process, destroying many lives along the way. Out of desperation, some in Silicon Valley seem intent on reviving McCarthy's old playbook, distracting from real problems by feinting at Communists. Most prominently, Marc Andreessen, one of the richest investors in Silicon Valley, recently wrote a "Techno-Optimist Manifesto," enumerating a long, McCarthy-like list of "enemies" ("Our enemy is stagnation. Our enemy is anti-merit, anti-ambition, anti-striving, anti-achievement, anti-greatness, etc.") and made sure to include a whistle call against Communism on his list, complaining of the "continuous howling from Communists and Luddites." (As tech journalist Brian Merchant has pointed out, the Luddites weren't actually anti-technology per se; they were pro-human.)
Five weeks later, another anti-regulatory investor from the Valley, Mike Solana, followed suit, all but calling one of the OpenAI board members a Communist ("I am not saying [so and so] is a CCP asset ... but ..."). There is no end to how low some people will go for a buck.
The influential science popularizer Liv Boeree recounts becoming disaffected by the whole "e/acc" ("effective accelerationism") movement that urges rapid AI development:
I was excited about e/acc when I first heard of it (because optimism *is* extremely important). But then its leader(s) made it their mission to attack and misrepresent perceived "enemies" for clout, while deliberately avoiding engaging with counter arguments in any reasonable way. A deeply childish, zero-sum mindset.
In my mind, the entire accelerationist movement has been an intellectual failure, failing to address seriously even the most basic questions, like what would happen if sufficiently advanced technology got into the wrong hands. You can't just say "make AI faster" and entirely ignore the consequences, but that's precisely what the sophomoric e/acc movement has done. As the novelist Ewan Morrison put it, "This e/acc philosophy so dominant in Silicon Valley it's practically a religion. ... [It] needs to be exposed to public scrutiny and held to account for all the things it has smashed and is smashing."
Much of the acceleration effort seems to be little more than a shameless attempt to stretch the "Overton window," to make unpalatable and even insane ideas seem less crazy. The key rhetorical trick was to make it seem as if the nonsensical idea of zero regulation was viable, falsely portraying anything else as too expensive for startups and hence a death blow to innovation. Don't fall for it. As the Berkeley computer scientist Stuart Russell bluntly put it, "The idea that only trillion-dollar corporations can comply with regulations is sheer drivel. Sandwich shops and hairdressers are subject to far more regulation than AI companies, yet they open in the tens of thousands every year."
Accelerationism's true goal seems to be simply to line the pockets of current AI investors and developers, by shielding them from responsibility. I've yet to hear its proponents come up with a genuine, well-conceived plan for maximizing positive human outcomes over the coming decades.
Ultimately, the whole "accelerationist" movement is so shallow it may actually backfire. It's one thing to want to move swiftly; another to dismiss regulation and move recklessly. A rushed, underregulated AI product that caused massive mayhem could lead to public backlash, conceivably setting AI back by a decade or more. (One could well argue that something like that has happened with nuclear energy.) Already there have been dramatic protests of driverless cars in San Francisco. When ChatGPT's head of product recently spoke at SXSW, the crowd booed. People are starting to get wise.
"The new technocrats"
Gaslighting and bullying are another common pattern. When I argued on Twitter in 2019 that large language models "don't develop robust representations of 'how events unfold over time'" (a point that remains true today), Meta's chief AI scientist Yann LeCun condescendingly said, "When you are fighting a rear-guard battle, it's best to know when your adversary overtook your rear 3 years ago," pointing to research that his company had done, which allegedly solved the problems (spoiler alert: it didn't). More recently, under fire when OpenAI abruptly overtook Meta, LeCun suddenly changed his tune and ran around saying that large language models "suck," never once acknowledging that he'd said otherwise. All of this, the abrupt change of tune and the accompanying denial of what had happened, reminded me of Orwell's famous line on state-sponsored historical revisionism in 1984: "Oceania has always been at war with Eastasia" (when in fact targets had shifted).
The techlords play other subtle games, too. When Sam Altman and I testified before Congress, we raised our right hands and swore to tell the whole truth, but when Senator John Kennedy (R-LA) asked him about his finances, Altman said, "I have no equity in OpenAI," elaborating that "I'm doing this 'cause I love it." He probably does mostly work for the love of the job (and the power that goes with it) rather than the cash. But he also left out something important: he owns stock in Y Combinator (where he used to be president), and Y Combinator owns stock in OpenAI (where he is CEO), an indirect stake that is likely worth tens of millions of dollars. Altman had to have known this. It later came out that Altman also owns OpenAI's venture capital fund, and didn't mention that either. By leaving out these facts, he passed himself off as more noble than he really is.
And all that's just how the tech leaders play the media and public opinion. Let's not forget about the backroom deals. Just as an example, we've all known for a long time that Google was paying Apple to put their search engine front and center, but few of us (including me) had any idea quite how much. Until November 2023, that is, when, as The Verge put it, "A Google witness let slip" that Google gives Apple more than a third of the ad revenue it gets from Apple's Safari, to the tune of $18 billion per year. It's likely a great deal for both, but one that has significantly, and heretofore silently, shaped consumer choice, allowing Google to consolidate their near-monopoly on search. Both companies tried, for years, to keep this out of public view.
Lies, half-truths, and omissions. Perhaps Adrienne LaFrance said it best, in an article in The Atlantic titled "The Rise of Technoauthoritarianism":
The new technocrats claim to embrace Enlightenment values, but in fact they are leading an antidemocratic, illiberal movement. ... The world that Silicon Valley elites have brought into being is a world of reckless social engineering, without consequence for its architects. ... They promise community but sow division; claim to champion truth but spread lies; wrap themselves in concepts such as empowerment and liberty but surveil us relentlessly.
We need to fight back.
Excerpted from Taming Silicon Valley: How We Can Ensure That AI Works for Us by Gary Marcus. Published by The MIT Press. Copyright 2024. All rights reserved.
