This essay was originally published in the Sunday, June 30, 2024, edition of the Fortune Archives newsletter.
It’s difficult to go to a dinner party, let alone a board meeting, these days without discussing artificial intelligence. Business executives and tech moguls are excited about AI’s potential to bring about a new era of efficiency and productivity. AI promises to change everything from Hollywood to drug research and development.
At the same time (someone across the table is sure to point out), AI technology is expensive and often unreliable. There are risks around data privacy, copyright, disinformation, and bias. Professionals of all stripes are worried that the technology will put them out of work. And some—including people at the forefront of AI development—have even warned that the technology could destroy humanity.
Many of these same desires and dreads have dogged AI since its inception. Computer scientist John McCarthy coined the term “artificial intelligence” for a summer workshop at Dartmouth College in 1956, aiming to unify researchers from various fields who were pondering ways to create “thinking” machines. Eight years later, in October 1964, Fortune staff writer Gilbert Burck took the pulse of the field in a provocative article titled “Will the computer outwit man?”
AI researchers, Burck wrote, had already worked out ways to generate short pieces of music, as well as poetry in the style of the Beat poets. “In twenty years or so,” he predicted, “the computer will doubtless be mass-producing ephemeral tunes of the day more cheaply than Tin Pan Alley’s geniuses can turn them out.” His forecast was off by at least a factor of three. But today artists, filmmakers, and advertising execs are again debating whether AI can be truly creative—and whether it is about to put them out of work.
Burck found that fears of AI-powered computers leading to mass unemployment, or even the end of human civilization, swirled around the scientists working on the technology in the 1960s. He reported that AI researchers predicted we were on the cusp of “an era dominated by intelligent problem-solving machines.” Sound familiar?
If that came to pass, Burck warned, businesses should not cede decision-making power to AI: “nothing would make a company more vulnerable to smart competitors than to abdicate responsibility to the neat, clean, consistent judgments of the machine.”
He was making a broader point that should resonate today: As AI becomes more capable, “it will beguile [us] into abdicating [our] capacity and obligation to make the important decisions, including moral and social ones,” he wrote.
This risk—not that the machine will outwit us, but that we will outwit ourselves by surrendering too much decision-making authority to machines—is as relevant today as it was in the 1960s.