
Is ChatGPT the end of trust? Will the college essay survive?

By Jeremy Kahn, Editor, AI
December 15, 2022, 2:25 PM ET
Photo credit: Ulrich Baumgarten via Getty Images

Hello and welcome to December’s special edition of Eye on A.I.

Is ChatGPT the end of trust? That’s what some people are suggesting after the release of OpenAI’s chatbot ChatGPT, which is shockingly good at mimicking human writing in almost any format, from computer code and poetry to blog posts and polemical essays. Much of what the chatbot spits out is factually accurate. But much of it isn’t. And the problem is that there is no easy way for a user to ensure that ChatGPT’s responses are accurate. ChatGPT expresses both fact and fiction with equal confidence and style.

Never mind that the written word has had trust issues since the very beginning of writing. (Ancient scribes were often propagandists and fabulists after all.) There does seem to be something different about the way ChatGPT can create fluent and confident answers to almost any question in less than a second—and right now since OpenAI isn’t charging for it, it does so at zero cost to the user. Before, creating a convincing fraud would take time and serious effort. But tools like ChatGPT mean that the marginal cost of creating misinformation has essentially dropped to zero. That means we are likely to see an explosion of it.

Some say we have already seen the first victim of this misinformation eruption: Stack Overflow, a site that provides community-sourced answers to programming questions, had to bar users from submitting ChatGPT-generated answers after being overwhelmed by them. The problem, Stack Overflow said, is that the answers seemed very convincing but were actually wrong, and it was taking the site's community moderators too long to vet them all and discover the flaws.

Things are going to get a lot worse if one of the new advocates for open-sourcing A.I. models decides to build a ChatGPT clone and release it for free as an open-source project. (Right now OpenAI still controls the model behind ChatGPT, and users can only query it through an interface that OpenAI could shut down, or start charging for, at any time. Its terms of use also bar people from using the bot to run misinformation campaigns.) Already Emad Mostaque, the former hedge fund manager who runs Stability AI, the company that helped train and open-source the popular text-to-image system Stable Diffusion, has asked his Twitter followers whether Stability should create an open-source version of ChatGPT.

As part of its release of ChatGPT, OpenAI also released an A.I. system that can detect whether text was created using ChatGPT. The open-source A.I. startup Hugging Face hosts an interface to that ChatGPT detector on its website and, in experiments, Casey Fiesler, a professor of information science at the University of Colorado at Boulder, said on Twitter that when she fed the detector five student-written essays and five created using ChatGPT, it flagged all five ChatGPT-made ones with 99.9% confidence. But some researchers say they doubt the detector will work on all future versions of the A.I. system, or will work for any similar, but not identical, large language models that others train. Earlier research on large language models had found that A.I. systems were poor at differentiating between A.I.-created and human-written text.
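Detectors like the one Hugging Face hosts score text with a trained classifier, but researchers also point to simpler surface statistics as weak signals of machine-generated text. One often-cited signal is "burstiness": human prose tends to vary sentence length more than model output does. The sketch below is a purely illustrative toy heuristic, not the OpenAI detector; the example sentences are invented, and no real detector relies on this statistic alone.

```python
import statistics

def burstiness(text: str) -> float:
    """Standard deviation of sentence lengths (in words).
    Higher values loosely suggest the varied rhythm of human prose."""
    # Crude sentence splitting: treat !, ?, and . as terminators.
    normalized = text.replace("!", ".").replace("?", ".")
    sentences = [s.strip() for s in normalized.split(".") if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0  # stdev undefined for a single sentence
    return statistics.stdev(lengths)

# Invented examples: varied human-style prose vs. uniform, machine-like prose.
human = "I ran. Then, after a long pause that felt like an hour, I walked slowly home. Done."
uniform = "The cat sat on the mat. The dog sat on the rug. The bird sat on the perch."

print(burstiness(human) > burstiness(uniform))  # → True
```

A real classifier such as the one Fiesler tested combines many model-based signals (for example, how probable a language model finds each token), which is why researchers doubt it will transfer to future or differently trained models.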

One area where many people think ChatGPT and similar systems will have an immediate and profound effect is education. Many are saying such systems mean the end of the write-at-home essay or report as a form of student assessment. It might mean the end of college application essays and term papers. The Atlantic had a very good piece examining this last week. I asked a friend of mine who is a university professor what he thought, and he answered unequivocally that the term paper was finished. He said he thought professors would have to rely solely on proctored exams in which students hand-write their essays (or type on computers that they can prove are not connected to the Internet).

Kevin Scott, Microsoft’s chief technology officer, said at Fortune’s Brainstorm A.I. conference in San Francisco last week that teachers wringing their hands about ChatGPT were making “a pedagogical mistake”: confusing the essay, which he said was simply “an artifact,” with the learning the teacher is actually trying to verify.

He seemed to say that ChatGPT would no more destroy the teaching of humanities than the calculator had destroyed the teaching of mathematics. “In a sense, nothing really is changing here other than you have this tool, and the student themselves has to become the teacher to the model,” he said, meaning that the student will still have to go over the answer that the large language model produces and ensure that it is not making up information. The student, for now, would still have to provide accurate citations of where the information was coming from. “Your job is: be the editor for this thing, be the teacher, coax it into getting you the output that you really need. That’s the important thing, that’s the special thing about us. The thing is just a tool.”

Scott is not alone in the view that ChatGPT could actually be great for education. But I think Scott and others are missing something here. Teachers use essays for more than just assessing what facts a student has learned. That narrower purpose may hold in elementary school. But in high school, and certainly at the university level, teachers use essays not simply to see what facts students know but whether they can use those facts to make a well-reasoned argument. The facts are simply supporting evidence; they are necessary but not sufficient to earn top marks. Teachers also use essays to assess how well a student can express ideas in writing: how graceful the prose is, whether the student can come up with original and apt metaphors, and so on.

Perhaps most importantly, it is difficult to separate the act of composition from the act of thinking—by writing a person is forced to structure their thoughts, refine their ideas, marshal evidence, and consider counter-arguments. There’s a lot of learning that takes place in the act of composition itself. Most of that disappears when ChatGPT or its successor bots can simply churn out page after page of well-written and well-structured prose and the student is reduced to being a mere fact-checker and annotator. We write not merely to convey information, but to conjure and refine it.

Read on for a few more A.I.-related stories from the past week. Fortune’s newsletters, including Eye on A.I., are going on hiatus for the holidays. The next Eye on A.I. will be in your inboxes on Jan. 10. In the meantime, happy holidays and a happy new year to you all! See you in 2023.

Jeremy Kahn
jeremy.kahn@fortune.com
@jeremyakahn

A.I. IN THE NEWS

Illustrators bemoan the ease with which A.I. tools like ChatGPT and Midjourney are allowing anyone to create children’s books. Time magazine chronicles what happened after Ammaar Reshi created a children’s book without doing any of the writing or illustrating himself and began selling the self-published book on Amazon. But artists protested that such technology was profiting off their work, since A.I. systems like GPT and Midjourney are trained on vast databases of existing human-created images and text. “The main problem to me about A.I. is that it was trained off of artists’ work,” Adriane Tsai, a children’s book illustrator, told Time. “It’s our creations, our distinct styles that we created, that we did not consent to being used.”

DeepMind’s code-writing A.I. AlphaCode called a ‘stunning’ advance. That’s according to Science magazine, which published a peer-reviewed version of DeepMind’s research on a coding bot that can compete successfully against human coders. (AlphaCode was initially announced back in February.) AlphaCode solved 34% of assigned coding problems, far exceeding the performance of Codex, a competing system that OpenAI debuted in 2021. In online coding competitions with at least 5,000 participants, the system outperformed 45.7% of human programmers.

Alphabet employees worry the company is falling behind in the race to commercialize advanced A.I. technology. That’s according to a report from CNBC, which said that at a recent all-hands meeting, employees questioned the company’s decision not to release its own powerful chatbot A.I., called LaMDA, more widely in light of the surging popularity of OpenAI’s ChatGPT. Right now, LaMDA is only available to researchers inside Google and a handpicked group of others, with very limited public access through Google’s A.I. Test Kitchen. According to CNBC, Alphabet CEO Sundar Pichai and Jeff Dean, the long-time head of Google’s A.I. division, responded by saying that the company has capabilities similar to ChatGPT’s, but that the cost if something went wrong would be greater, because people have to be able to trust the answers they get from Google.

About the Author

Jeremy Kahn, Editor, AI

Jeremy Kahn is the AI editor at Fortune, spearheading the publication's coverage of artificial intelligence. He also co-authors Eye on AI, Fortune’s flagship AI newsletter.
