
OpenAI CEO Sam Altman says ‘very subtle societal misalignments’ with AI keep him up at night

By Sunny Nagpaul
February 13, 2024, 3:08 PM ET
OpenAI CEO Sam Altman knows AI has the power to do good, but things can still go “horribly wrong.” (Photo: Kent Nishimura—Getty Images)

OpenAI CEO Sam Altman says it’s not a fear of “killer robots,” or any other Frankenstein-tech creature AI could power, that keeps him up at night. Instead, it’s the technology’s ability to derail society, insidiously and subtly, from the inside.


Without adequate international regulation, the technology could upend society if “very subtle societal misalignments” go unaddressed, Altman said while speaking virtually at the World Governments Summit in Dubai on Tuesday. The tech billionaire stressed that, “through no particular ill intention, things just go horribly wrong.”

AI can help people work smarter and faster, and it already is. It can also make life easier through personalized education, medical advice, and financial literacy training. But as the new technology continues to infiltrate, well, everything, many are concerned that it’s growing largely unchecked by regulators, and about what the fallout might be for areas like elections, media misinformation, and global relations.

To his credit, Altman has consistently and loudly voiced such concerns, even though his company unleashed the disruptive chatbot known as ChatGPT onto the world.

“Imagine a world where everyone gets a great personal tutor, great personalized medical advice,” Altman told the crowd in Dubai. People can now use AI tools, like software that analyzes medical data, stores patient records in the cloud, and designs classes and lectures, “to discover all sorts of new science, cure diseases, and heal the environment,” he said.

Those are some of the ways AI can help people on a personal level, but its global impact is a much bigger picture. AI’s relevance lies in its ability to be of the times, and our times right now are clouded by disinformation-afflicted elections, media misinformation, and military operations, all of which AI offers use cases for, too.

This year, elections will be held in more than 50 countries, with polls opening to more than half the planet’s population. In a statement last month, OpenAI wrote that AI tools should be used “safely and responsibly, and elections are no different.” Abusive content, like “misleading ‘deepfakes’” (fake, AI-generated photos and videos) and “chatbots impersonating candidates,” is among the issues the company hopes to anticipate and prevent.

Altman didn’t specify how many people would be working on election-troubleshooting issues, according to Axios, but he did reject the idea that a large election team is what it takes to avoid these pitfalls. Axios says Altman’s company has far fewer people dedicated to election security than other tech companies, like Meta or TikTok. But OpenAI announced it’s working with the National Association of Secretaries of State, the country’s oldest nonpartisan organization for public officials, and will direct users to authoritative websites for U.S. voting information in response to election questions.

The waters are muddy for media companies as well: At the end of last year, the New York Times Company sued OpenAI for copyright infringement, while other media outlets, including Axel Springer and the Associated Press, have been cutting deals with AI companies in arrangements that pay newsrooms in exchange for the right to use their content to train language-based AI models. With more media-backed AI training, the potential to spread misinformation is of concern, too. 

Last month, OpenAI quietly removed the fine print that prohibited the technology’s military use. The move follows the company’s announcement that it will work with the U.S. Department of Defense on AI tools, which Anna Makanju, the company’s vice president of global affairs, discussed in an interview reported by Bloomberg.

Previously, OpenAI’s policy prohibited activities with “high risk of physical harm,” including weapons development, military, and warfare. The company’s updated policies, devoid of any mention of military and warfare guidelines, suggest military use is now acceptable. An OpenAI spokesperson told CNBC that “our policy does not allow our tools to be used to harm people, develop weapons,” or for communications surveillance, but that there are “national security cases that align with our mission.” 

Activities that may significantly impair the “safety, well-being or rights of others” are written clearly on OpenAI’s list of “Don’ts,” but the words are little more than a warning as it becomes clear that regulating AI will be an enormous challenge that few are rising to.

Last year, Altman testified at a Senate Judiciary subcommittee hearing on the oversight of AI, asking for governmental collaboration to establish safety requirements flexible enough to adapt to new technical developments. He has been vocal about how important it is to regulate AI to keep its power out of the wrong hands, like computer scammers, online abusers, bullies, and misinformation campaigns. But common ground is hard to find. Even as he supports more regulation, Altman has taken issue with parts of the European Union’s AI Act, the world’s first comprehensive AI law, over requirements like data and training transparency. Meanwhile, the White House has outlined a blueprint for an AI bill of rights, which identifies algorithmic discrimination, data privacy, transparency, and human alternatives as key areas that need protection.
