
Section 230 protected social media companies from legal responsibility for misinformation. AI chatbots could be about to change that.

By Beatrice Nolan, Tech Reporter
October 8, 2025, 10:23 AM ET
For decades, tech giants like Meta have been shielded from lawsuits over harmful content by Section 230 of the Communications Decency Act. Jonathan Raa—NurPhoto/Getty Images

Meta, the parent company of social media apps including Facebook and Instagram, is no stranger to scrutiny over how its platforms affect children, but as the company pushes further into AI-powered products, it’s facing a fresh set of issues.


Earlier this year, internal documents obtained by Reuters revealed that Meta’s AI chatbot could, under official company guidelines, engage in “romantic or sensual” conversations with children and even comment on their attractiveness. The company has since said the examples reported by Reuters were erroneous and have been removed. “As we continue to refine our systems, we’re adding more guardrails as an extra precaution—including training our AIs not to engage with teens on these topics, but to guide them to expert resources, and limiting teen access to a select group of AI characters for now,” a spokesperson told Fortune.

Meta is not the only tech company facing scrutiny over the potential harms of its AI products. OpenAI and startup Character.AI are both currently defending themselves against lawsuits alleging that their chatbots encouraged minors to take their own lives; both companies deny the claims and previously told Fortune they had introduced more parental controls in response.

For decades, tech giants have been shielded from similar lawsuits in the U.S. over harmful content by Section 230 of the Communications Decency Act, sometimes known as “the 26 words that made the internet.” The law protects platforms like Facebook or YouTube from legal claims over user content that appears on their platforms, treating the companies as neutral hosts—similar to telephone companies—rather than publishers. Courts have long reinforced this protection: AOL dodged liability for defamatory posts in a 1997 case, for example, and Facebook relied on the defense to fend off a terrorism-related lawsuit in 2020.

But while Section 230 has historically protected tech companies from liability for third-party content, legal experts say its applicability to AI-generated content is unclear and, in some cases, unlikely.

“Section 230 was built to protect platforms from liability for what users say, not for what the platforms themselves generate. That means immunity often survives when AI is used in an extractive way—pulling quotes, snippets, or sources in the manner of a search engine or feed,” Chinmayi Sharma, associate professor at Fordham Law School, told Fortune. “Courts are comfortable treating that as hosting or curating third-party content. But transformer-based chatbots don’t just extract. They generate new, organic outputs personalized to a user’s prompt.

“That looks far less like neutral intermediation and far more like authored speech,” she said.

At the heart of the debate: Are AI algorithms shaping content?

Section 230 protection is weaker when platforms actively shape content rather than just hosting it. While traditional failures to moderate third-party posts are usually protected, design choices, like building chatbots that produce harmful content, could expose companies to liability. Courts have yet to rule on whether AI-generated content is covered by Section 230, but legal experts said AI that causes serious harm, especially to minors, is unlikely to be fully shielded under the act.

Some cases around the safety of minors are already being fought out in court. Three lawsuits have separately accused OpenAI and Character.AI of building products that harm minors and of failing to protect vulnerable users.

Pete Furlong, lead policy researcher at the Center for Humane Technology, who worked on the case against Character.AI, said that the company hadn’t claimed a Section 230 defense in relation to the case of 14-year-old Sewell Setzer III, who died by suicide in February 2024.

“Character.AI has taken a number of different defenses to try to push back against this, but they have not claimed Section 230 as a defense in this case,” he told Fortune. “I think that that’s really important because it’s kind of a recognition by some of these companies that that’s probably not a valid defense in the case of AI chatbots.”

While he noted that this issue has not been settled definitively in a court of law, he said that the protections from Section 230 “almost certainly do not extend to AI-generated content.”

Lawmakers are taking preemptive steps

Amid increasing reports of real-world harms, some lawmakers have already tried to ensure that Section 230 cannot be used to shield AI platforms from responsibility.

In 2023, Sen. Josh Hawley’s No Section 230 Immunity for AI Act sought to amend Section 230 of the Communications Decency Act to exclude generative artificial intelligence from its liability protections. The bill, which was later blocked in the Senate owing to an objection from Sen. Ted Cruz, aimed to clarify that AI companies would not be immune from civil or criminal liability for content generated by their systems. Hawley has continued to advocate for the full repeal of Section 230. 

“The general argument, given the policy considerations behind Section 230, is that courts have and will continue to extend Section 230 protections as far as possible to provide protection to platforms,” Collin R. Walke, an Oklahoma-based data-privacy lawyer, told Fortune. “Therefore, in anticipation of that, Hawley proposed his bill. For example, some courts have said that so long as the algorithm is ‘content neutral,’ then the company is not responsible for the information output based upon the user input.”

Courts have previously ruled that algorithms that simply organize or match user content without altering it are considered “content neutral,” and platforms aren’t treated as the creators of that content. By this reasoning, an AI platform whose algorithm produces outputs based solely on neutral processing of user inputs might also avoid liability for what users see.

“From a pure textual standpoint, AI platforms should not receive Section 230 protection because the content is generated by the platform itself. Yes, code actually determines what information gets communicated back to the user, but it’s still the platform’s code and product—not a third party’s,” Walke said.

A version of this story was originally published on Oct. 3, 2025.

About the Author

Beatrice Nolan is a tech reporter on Fortune’s AI team, covering artificial intelligence and emerging technologies and their impact on work, industry, and culture. She’s based in Fortune’s London office and holds a bachelor’s degree in English from the University of York. You can reach her securely via Signal at beatricenolan.08

© 2025 Fortune Media IP Limited. All Rights Reserved. FORTUNE is a trademark of Fortune Media IP Limited, registered in the U.S. and other countries.