Goldman Sachs executive urges coders to study philosophy as it will prepare them to ‘debate a stubborn AI’ 

By Eleanor Pringle, Reporter


AI will need the steadying hand of philosophy to make it usable, Goldman Sachs’ chief information officer believes. (Photo: Sean Anthony Eddy—Getty Images)

Goldman Sachs’ chief information officer, Marco Argenti, is advocating for his team to study philosophy alongside engineering to better navigate the complexities of AI technology.

Argenti emphasized the importance of critical thinking skills, encouraging staff (and even his daughter) to explore disciplines like philosophy alongside traditional engineering.

In a recent article in the Harvard Business Review, Argenti highlighted the value of philosophical study, citing its role in refining problem-solving abilities.

While knowledge of Aristotle, Plato, and Socrates won’t alone secure a position as an AI engineer, Argenti believes that coupling it with technical expertise promises enhanced code quality.

He explained: “The ability to develop crisp mental models around the problems you want to solve and understanding the why before you start working on the how is an increasingly critical skill, especially in the age of AI.”

Indeed, while aspects of AI have been embedded in tech services for many years, the launch of large language models (LLMs) like ChatGPT and Microsoft’s Bing bot has prompted a wave of new launches as rivals have sought to keep up.

Yet the majority are doomed to fail—at least according to the late Harvard professor Clayton Christensen, who said 95% of new services launched in any sector will not succeed.

And while Argenti said AI can “write higher-quality code than humans,” there’s a catch: “It can work well, but not do what you want it to do.”

This is where the art of prompt engineering comes in, Argenti writes, adding: “The quality of the output of an LLM is very sensitive to the quality of the prompt. Ambiguous or not well-formed questions will make the AI try to guess the question you are really asking, which in turn increases the probability of getting an imprecise or even totally made-up answer.”

It’s perhaps no surprise that prompt-engineering roles pay anywhere from $30 an hour to $405,000 a year at the time of writing.

Landing one of these roles—and asking the questions that train AI to function as intended—requires “reasoning, logic, and first-principles thinking,” Argenti continues, adding: “[These are] all foundational skills developed through philosophical training.”

Reinvigorating philosophy

For years philosophy students have had to accept that their degree does not, typically, lead to a high-paying job.

According to education platform HeyTutor—citing Federal Reserve Bank of New York analysis of U.S. Census Bureau data—philosophy graduates face one of the highest post-graduation unemployment rates of any major in the U.S., at 6.2%.

Underemployment for the subject stands at just over 50%, while the median early-career wage stands at $36,000.

But with support from the likes of Argenti, that has the potential to change.

He wrote: “Having a crisp mental model around a problem, being able to break it down into steps that are tractable, perfect first-principle thinking, sometimes being prepared (and able to) debate a stubborn AI—these are the skills that will make a great engineer in the future, and likely the same consideration applies to many job categories.”

And AI has indeed proven stubborn at the best of times.

Take Microsoft’s Bing bot, which reportedly told one user the incorrect date and then scolded the individual for pointing out the bot’s error. It then proceeded to tell the user they should apologize, and instructed them to “End this conversation, and start a new one with a better attitude.”

While Argenti said it is imperative not to lose the skills to “open the hood” of large language models, he added: “Automating the mechanics of code creation and focusing on our critical thinking abilities is what will allow us to create more, faster and have a disproportionate impact on the world.

“Helping AI help us be more human, less computer.”
