
People increasingly view chatbots as if they were friends, just not necessarily super-smart ones

By Sage Lazzaro, Contributing writer
February 13, 2025, 11:31 AM ET
New research from Stanford University shows that Americans increasingly describe AI chatbots as they would a human friend or companion. At the same time, somewhat paradoxically, their perception of these chatbots' competence is declining. The study raises critical questions about how chatbot interfaces should be designed to potentially discourage anthropomorphism. Photo illustration: Jaap Arriens/NurPhoto/Getty Images

Hello and welcome to Eye on AI. In today’s edition…A Stanford study digs into people’s perceptions of AI; a court ruling deals a blow to AI companies’ “fair use” defense for training models on copyrighted material; a co-developer of AlphaFold launches a new protein design startup; Adobe adds a more brand-safe option to the slew of new text-to-video models; and CrowdStrike launches a new AI triage technology it says can save security teams 40 hours per week.


It’s no secret that people are not only increasingly using AI chatbots, but in some cases, growing attached to them and viewing them as companions. 

A study conducted by the Stanford Social Media Lab and BetterUp, which tracked Americans’ perceptions of AI over the course of a year, found that perceptions of AI’s human-likeness, warmth, and trustworthiness have significantly increased. Intriguingly, the same study showed that even as respondents’ trust in AI increased, their perception of its competence decreased.

“This is important because it suggests that people are changing how they think about these complex systems as they start to see AI less like powerful ‘computers’ or ‘search engines’ [and more like] friendly, helpful, human-like ‘assistants,’” Angela Y. Lee, one of the paper’s lead authors, told me. 

The research raises important questions about the role chatbots’ friendly attributes play in building blind trust, how this could lead to overreliance on the technology, and the responsibility AI companies have in terms of how they present their products. 

Perception of AI as a friend rises 

To get a sense of how the population’s perceptions of AI are changing over time, the researchers continually recruited and collected opinions on AI from May 2023 to August 2024, ultimately talking to a nationally representative sample of nearly 13,000 Americans. They asked questions about which AI tools people use, how frequently they use them, their willingness to adopt AI, and questions to assess their trust in AI, but they focused largely on how people responded when asked to provide a metaphor to describe AI.

Metaphors have long been used to describe technology and can say a lot about people’s implicit perceptions. For example, the paper notes how early metaphors of the internet as a “superhighway” showed how people thought it could connect users to diverse digital destinations.

Over the course of the study, use of metaphors that describe AI as a distinctly non-human entity, like a “computer” or “search engine,” decreased, while the rate of anthropomorphic metaphors (such as “friend,” “god,” “teacher,” and “assistant”) saw a significant jump (34%). Taken together with the 41% increase in respondents’ implicit perceptions of warmth toward the technology, the results suggest a societal shift toward seeing AI as more human-like and warm. The researchers also found differences among demographics, with older participants and non-white participants (and in particular Black participants) reporting significantly higher levels of trust in AI.
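The metaphor-coding approach described above can be illustrated with a toy tally: classify each free-text metaphor as anthropomorphic or non-human and compare rates across survey waves. This is a hypothetical sketch with invented category lists and data, not the study’s actual pipeline.

```python
from collections import Counter

# Hypothetical category lists; the study's actual codebook is richer.
ANTHROPOMORPHIC = {"friend", "god", "teacher", "assistant"}
NON_HUMAN = {"computer", "search engine", "tool", "database"}

def anthropomorphic_rate(metaphors):
    """Fraction of responses that use a human-like metaphor."""
    counts = Counter(m.lower() for m in metaphors)
    anthro = sum(n for m, n in counts.items() if m in ANTHROPOMORPHIC)
    total = sum(counts.values())
    return anthro / total if total else 0.0

# Invented example waves, not real survey data.
wave_2023 = ["computer", "search engine", "assistant", "computer"]
wave_2024 = ["friend", "assistant", "teacher", "computer"]

early = anthropomorphic_rate(wave_2023)
late = anthropomorphic_rate(wave_2024)
print(f"change: {(late - early) / early:+.0%}")  # relative jump between waves
```

In the actual study, each metaphor was a free-text response, so real coding would need human raters or a classifier rather than an exact-match lookup.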

Trust issues

Notably, these positive feelings didn’t correspond to an increased perception that AI is competent: Implicit perceptions of AI as competent decreased by 8% over time.

Information about AI’s inaccuracies and penchant for hallucinating is everywhere, and scrutiny is only growing as the technology is further developed and people are increasingly encouraged to adopt it. From the botched launch of Google’s AI Overviews, which told a user to put glue on pizza, to the continuous onslaught of studies highlighting chatbots’ hallucination problem (my Eye on AI co-writer Jeremy Kahn covered a fresh BBC study on chatbot hallucinations pertaining to current events in Tuesday’s newsletter), it’s easy to see why confidence in AI chatbots’ abilities has only gone down. Just this past week, Google even had to correct a factual inaccuracy generated by Gemini in its Super Bowl commercial.

With trust rising even amid decreased perceptions of competence, the researchers argue that the choices tech companies make when introducing AI technologies, especially chatbots, can influence how likely people are to trust them. For example, developers could give chatbots a more neutral tone, avoid first-person pronouns like “I,” and limit the ways the chatbot emotionally engages with users.
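As a concrete illustration, a deployer could encode those design choices as constraints appended to a chatbot’s system prompt. This is a hypothetical sketch of the kind of instruction text the researchers’ suggestions imply; the rule wording and helper function are invented, not any vendor’s documented API.

```python
# Hypothetical anthropomorphism-limiting rules, paraphrasing the
# researchers' suggestions: neutral tone, no first-person pronouns,
# limited emotional engagement.
NEUTRAL_PERSONA_RULES = [
    "Use a neutral, impersonal tone.",
    "Do not refer to yourself with first-person pronouns such as 'I' or 'me'.",
    "Do not express emotions or claim feelings toward the user.",
    "Present answers as the output of a system, not advice from a companion.",
]

def build_system_prompt(task_description: str) -> str:
    """Combine task instructions with the style constraints above."""
    rules = "\n".join(f"- {rule}" for rule in NEUTRAL_PERSONA_RULES)
    return f"{task_description}\n\nStyle constraints:\n{rules}"

prompt = build_system_prompt("Answer questions about company policy.")
print(prompt)
```

How effective prompt-level constraints are at curbing anthropomorphism is exactly the kind of open design question the study raises.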

“It’s important to remember that too much blind trust in AI may have consequences, such as overreliance on the technology,” Lee said. 

And with that, here’s more AI news. 

Sage Lazzaro
sage.lazzaro@consultant.fortune.com
sagelazzaro.com

AI IN THE NEWS

Thomson Reuters wins the first major AI copyright case in the U.S. The company sued legal AI startup Ross Intelligence in 2020, claiming it unlawfully used materials from its Westlaw subsidiary to train AI models. On Tuesday, a judge sided with Thomson Reuters, dealing a blow to the “fair use” defense many companies training AI models are relying on. The ruling could have major implications for the AI industry and the likely outcome of a growing number of similar cases. At least 38 AI-related copyright cases are currently pending in U.S. courts, plus a new one announced today. You can read more from The Register.

Top news publishers sue AI company Cohere for copyright and trademark violations. The News Media Alliance—representing members including Condé Nast, The Atlantic, Forbes, Vox Media, Business Insider, Politico, Advance Local Media, and more—is arguing that the company engaged in widespread and systematic infringement of publisher content to train its AI systems. The suit alleges that Cohere used content without authorization, accessed content blocked by paywalls, and that its products spit out verbatim regurgitations of the publishers’ news content (and includes 4,000 examples to demonstrate this). It also alleges that Cohere’s products hallucinate damaging information, offering users fake information under the publishers’ names. You can read more from the Wall Street Journal.   

Google DeepMind AlphaFold co-developer launches a new protein design startup. Led by Simon Kohl, Latent Labs is emerging from stealth today with $50 million in funding and four additional staff members from DeepMind. The company is initially focused on developing frontier models for protein design that it will make available to partner organizations on a project basis. In the longer term, the company tells Eye on AI it wants to make biology “programmable,” eventually reducing the reliance on the wet-lab experiments needed to develop drugs and making it possible to do more of that work computationally. You can read the company’s launch press release here and more from the Financial Times here.

Adobe debuts a text-to-video generator as AI video tools go mainstream. Called Generate Video, it rivals OpenAI’s Sora and other recently released AI video tools from companies including Google, Amazon, ByteDance, and Pika Labs. Adobe is marketing its tool as “production ready,” for instance touting the fact that the model was trained only on content the company has the rights to, sidestepping copyright concerns. Adobe has also updated the Firefly web app that hosts many of its AI tools to integrate them with its programs including Photoshop, Premiere Pro, and Express. You can read more from The Verge.

OpenAI’s Sam Altman says GPT-4.5 and GPT-5 are coming in “weeks to months.” In a long post on social media platform X, Altman sought to clarify OpenAI’s product roadmap. He said that the company’s long-rumored “Orion” model would be released under the product name GPT-4.5 soon and would be the company’s last “non-chain of thought” model. That means GPT-4.5, like its GPT-branded predecessors, would have capabilities mostly derived from massive-scale pre-training (the P in GPT stands for “pre-trained”). GPT-5, on the other hand, Altman said, would combine the breadth of abilities and instant responses of the GPT series with the “chain of thought” reasoning abilities of OpenAI’s “o” series of models. These models are designed to provide better answers to questions involving math, coding, and logic by using more computing time when they are sent a prompt. The models use this “test time compute” to generate a sometimes lengthy set of steps toward solving a problem, allowing the model to better mimic human reasoning. Altman said GPT-5 would essentially incorporate the capabilities of o3, OpenAI’s best reasoning model to date, and that OpenAI would cease to offer o3 as a separate model. You can read Altman’s post here.
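The “chain of thought” idea can be illustrated at the prompt level: instead of asking for an answer directly, the prompt instructs the model to emit intermediate steps first, which is what consumes the extra test-time compute. A generic, hypothetical example; no specific OpenAI API or model behavior is assumed.

```python
# Hypothetical contrast between a direct prompt and a chain-of-thought
# prompt. The second asks the model to spend output tokens (test-time
# compute) on intermediate reasoning before answering.
question = "A train travels 120 miles in 2 hours. What is its average speed?"

direct_prompt = f"{question}\nAnswer with a number only."

cot_prompt = (
    f"{question}\n"
    "Think step by step, writing out each intermediate calculation, "
    "then give the final answer on its own line."
)

# A chain-of-thought response might look like:
#   1. Speed = distance / time
#   2. 120 miles / 2 hours = 60 miles per hour
#   Final answer: 60
print(cot_prompt)
```

Reasoning models like OpenAI’s “o” series bake this behavior into training rather than relying on prompt wording, but the underlying trade-off is the same: more tokens generated per query in exchange for better answers on math, coding, and logic.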

FORTUNE ON AI

Exclusive: Legal AI startup Harvey lands fresh $300 million in Sequoia-led round as CEO says on target for $100 million annual recurring revenue —by Sharon Goldman

Exclusive: Fal, generative media platform for developers, raises $49 million Series B —by Allie Garfinkle

Read the full letter of intent Elon Musk’s lawyer sent to OpenAI this week —by Jessica Mathews

Why OpenAI’s battle to dominate tech may hinge on better product design —by Sharon Goldman

OpenAI’s DeepResearch can complete 26% of ‘Humanity’s Last Exam’ — a benchmark for the frontier of human knowledge —by Greg McKenna

Baidu CEO defends heavy AI investments as competition heats up and Apple reportedly looks elsewhere —by Lionel Lim

AI CALENDAR

March 3-6: MWC, Barcelona

March 7-15: SXSW, Austin

March 10-13: Human [X] conference, Las Vegas

March 17-20: Nvidia GTC, San Jose

April 9-11: Google Cloud Next, Las Vegas

May 6-7: Fortune Brainstorm AI London. Apply to attend here.

EYE ON AI NUMBERS

40

That’s at least how many hours of triage work CrowdStrike says its new AI-powered cybersecurity tools could save the average team of security operations analysts every week, based on its own trials.

Like in a hospital emergency department, cybersecurity analysts have to quickly look at every alert coming in—or triage—to see which need their attention most urgently. Called Charlotte AI Detection Triage and announced for general availability today, CrowdStrike’s technology is designed to filter out false positives (alerts incorrectly identified as malicious) and quickly close low-risk alerts so cybersecurity analysts can zero in on higher-risk threats. CrowdStrike says it trained Charlotte on millions of real-world triage decisions and that it can automate triage decisions with over 98% accuracy. Cybersecurity is one of many industries where leaders believe AI can be used to automate workers’ most tedious and repetitive tasks, enabling them to focus on more meaningful work.
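Automated alert triage of this general sort can be sketched as a filter that scores each alert and auto-closes the likely false positives and low-risk items, so analysts only see what remains. This is an illustrative toy with invented fields and thresholds; it is not based on CrowdStrike’s actual implementation.

```python
from dataclasses import dataclass

@dataclass
class Alert:
    id: str
    risk_score: float          # e.g. from a trained classifier, 0.0 to 1.0
    false_positive_prob: float # estimated chance the alert is benign

# Invented thresholds for illustration only.
FP_THRESHOLD = 0.9        # auto-close likely false positives
LOW_RISK_THRESHOLD = 0.2  # auto-close clearly low-risk alerts

def triage(alerts):
    """Split alerts into auto-closed and analyst-review queues."""
    auto_closed, for_review = [], []
    for a in alerts:
        if a.false_positive_prob >= FP_THRESHOLD or a.risk_score <= LOW_RISK_THRESHOLD:
            auto_closed.append(a)
        else:
            for_review.append(a)
    return auto_closed, for_review

alerts = [
    Alert("a1", 0.95, 0.05),  # high risk -> analyst review
    Alert("a2", 0.10, 0.40),  # low risk -> auto-close
    Alert("a3", 0.60, 0.95),  # likely false positive -> auto-close
]
closed, review = triage(alerts)
print([a.id for a in review])  # only the high-risk alert survives triage
```

The claimed time savings come from the auto-close path: in this sketch, two of three alerts never reach an analyst.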

This is the online version of Eye on AI, Fortune's biweekly newsletter on how AI is shaping the future of business. Sign up for free.
About the Author

Sage Lazzaro is a technology writer and editor focused on artificial intelligence, data, cloud, digital culture, and technology’s impact on our society and culture.
