Hello and welcome to Eye on AI.
Generative AI has taken nearly every sector by storm, and one area where it caused an immediate panic is education. Since the moment ChatGPT dropped in late 2022, educators have been scrambling just to figure out whether to ban or embrace the technology in schools, let alone how to structure learning in a world where students can outsource their writing and even much of their thinking to chatbots. As someone who’s been writing about AI since it was the fascination of only a small group of academic researchers and technologists, I knew the technology had truly hit the mainstream the moment last year when I overheard a table of teachers next to me at a restaurant grappling with what to do about it in their classrooms.
The questions around generative AI’s role in education are far from answered, but the technology is now making a big entrance into colleges in the form of textbooks. Leading textbook company Pearson has outfitted its digital textbooks—specifically 50 science titles, such as Intro to Biology and Intro to Chemistry—with generative AI study tools. As of this summer, 70,000 students at more than 1,000 institutions are already using these AI textbooks, according to Pearson. This fall will mark the first full semester in which AI textbooks are the norm at schools and colleges, from Ohio State University to the University of Miami, the University of Colorado Boulder, and many more in between.
The generative AI capabilities within the Pearson textbooks come in two forms. First, there’s a general chatbot students can query with any questions they might have about the subject and the contents of the book. Second, Pearson has incorporated generative AI into the practice questions within the digital textbooks. When students get a question wrong, they’re now taken through a brief set of additional questions—powered by generative AI—meant to lead them to the correct answer without stating it outright. The process was inspired by, and intended to mimic, how a professor would help a student understand a concept in a one-on-one conversation during office hours, explained Chris Hess, a former professor who is now director of AI product management for higher ed at Pearson.
“We know what problem they’re working on. We already know the answer to that problem. Think about it—if Sage came in and was struggling on this problem, I wouldn’t just give you the answer. I would ask you a series of leading questions that would try to diagnose your misconceptions and help you overcome those,” he told me.
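For readers curious what that pattern looks like in practice, here’s a minimal sketch written against the OpenAI Python SDK. Pearson hasn’t published its implementation, so the prompt wording, function names, and model choice below are my own illustrative assumptions, not the company’s code.

```python
# Sketch of the Socratic follow-up pattern Hess describes (illustrative only;
# prompt wording, names, and model choice are assumptions, not Pearson's code).
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

SYSTEM_PROMPT = (
    "You are a patient professor holding office hours. You know the problem "
    "and its correct answer. Never state the answer directly. Instead, ask "
    "one short leading question at a time that diagnoses the student's "
    "misconception and nudges them toward the solution."
)

def socratic_followup(problem: str, correct_answer: str, student_answer: str,
                      history: list[dict]) -> str:
    """Return the next leading question for a student who answered incorrectly."""
    messages = [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": (
            f"Problem: {problem}\n"
            f"Correct answer (do not reveal): {correct_answer}\n"
            f"Student's answer: {student_answer}"
        )},
        *history,  # prior leading questions and the student's replies
    ]
    response = client.chat.completions.create(model="gpt-4o", messages=messages)
    return response.choices[0].message.content

# Example: a student crosses Aa x Aa and answers "100% dominant phenotype."
# The model might reply: "Good start. What alleles can each parent pass on?
# Try listing them in a Punnett square."
```

The key design point is that the system already holds the correct answer, so the model’s job is narrowed from solving the problem to diagnosing where the student went wrong, one question at a time.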
I had the chance to test this with the Intro to Biology textbook, and I have to say, I came away feeling it was effective. I took Intro to Biology (with an old-school hardcover Pearson textbook) years ago, but it’s been quite a while since I thought about the granular concepts of evolution and Punnett squares, let alone answered any test questions on them. But after going through the additional gen-AI-created questions, I was not only able to determine the correct answer to each initial question, but I also felt I had a pretty solid understanding of the concepts. I felt myself understanding more and more as I progressed, as if I were being led to the answer with just enough help, exactly as Hess and the team who created the feature intended.
Pearson has compared student engagement with the eTextbooks that have the AI tools versus those that don’t and found that students with the AI tools are nearly doubling their number of sessions with their textbooks. The company plans to expand the technology to more titles and has decided that every Pearson textbook will eventually be AI-equipped. Hess said he strongly believes in making sure this doesn’t become something “the haves get and the have-nots don’t.”
The technology is built on ChatGPT and takes a largely RAG-based approach. RAG, or retrieval-augmented generation, is the technique that lets an LLM pull information from a specific document or source, in this case the content of the textbooks. As with any LLM, hallucinations, wherein models confidently make up untrue information, are a concern, but Hess emphasized that the team has worked hard to minimize them.
“It isn’t a walled garden around the book completely. It’s a walled garden around the problem, and so it knows the problem,” he said.
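That “walled garden around the problem” maps onto a standard retrieval-augmented generation loop: embed the textbook’s passages ahead of time, retrieve only the ones relevant to the problem at hand, and hand those to the model as context. Below is a bare-bones sketch of that loop, again using the OpenAI SDK and assumed names rather than anything Pearson has disclosed.

```python
# Bare-bones RAG loop (illustrative; not Pearson's actual pipeline).
import numpy as np
from openai import OpenAI

client = OpenAI()

def embed(texts: list[str]) -> np.ndarray:
    """Embed a batch of texts into vectors for similarity search."""
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

# 1. Index: embed textbook passages once, ahead of time (toy examples here).
passages = [
    "A Punnett square predicts offspring genotypes from parental alleles.",
    "Natural selection acts on heritable variation within a population.",
]
index = embed(passages)

def answer_in_context(question: str, k: int = 2) -> str:
    # 2. Retrieve: cosine similarity between the question and each passage.
    q = embed([question])[0]
    sims = index @ q / (np.linalg.norm(index, axis=1) * np.linalg.norm(q))
    top = [passages[i] for i in np.argsort(sims)[-k:]]

    # 3. Generate: the model answers only from the retrieved passages,
    #    which keeps it inside the "walled garden" of the book.
    resp = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": (
                "Answer using only the provided textbook excerpts. If they "
                "don't contain the answer, say so.")},
            {"role": "user", "content":
                "\n\n".join(top) + f"\n\nQuestion: {question}"},
        ],
    )
    return resp.choices[0].message.content
```

Grounding the model in retrieved passages is what reins in hallucinations: the model is instructed to answer from the book’s own text rather than from whatever its training data happens to suggest.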
The rollout is also in its early stages, and Pearson is still experimenting with the best ways to use the technology. While the features are currently based on ChatGPT, Hess said the company is “future-proofing itself.”
“If Claude comes up with a better version, we can switch models,” he said, adding that they’re also exploring ways to fine-tune models. Pearson also doesn’t plan to stop at textbooks and is working on tools for teachers next, Hess said.
While students can easily Google information or ask ChatGPT, and many probably already are, Hess believes the benefit of bringing generative AI right into the textbook is unmatched.
“Sure you can go to ChatGPT, but it’s better for the student to have the benefit in the actual product right there in line and in context. You don’t need to leave, and you’re going to get a certain synergy of language from the book, and that is a really powerful experience,” he said. “We know the context of the student, and in the longer term, we will know more about that student’s learning journey overall.”
And with that, here’s more AI news.
Sage Lazzaro
sage.lazzaro@consultant.fortune.com
sagelazzaro.com
AI IN THE NEWS
Microsoft and Apple won’t hold observer seats on OpenAI’s board. That’s according to the Financial Times. Microsoft, which has deep investments in and a close partnership with OpenAI, has given up its non-voting observer seat on the company’s board. Apple was also expected to get an observer seat on OpenAI’s board after announcing it would integrate ChatGPT into the iPhone, according to previous reports, but that will no longer be the case. The companies’ step back from OpenAI’s board comes amid increased antitrust scrutiny in the EU of Microsoft’s relationship with OpenAI. Fortune’s David Meyer wrote about why it’s for the best that the companies don’t have a role on the board.
Anthropic launches new features to help Claude users engineer better prompts. That’s according to TechCrunch. Users of Claude 3.5 Sonnet will now be able to generate, test, and evaluate their prompts with a new suite of features aimed at helping them improve how they work with the model and generate better outputs. The Evaluate feature, for example, will allow users to compare the effectiveness of various prompts side by side and rate them on a five-point scale. Prompt engineering was starting to look like one of the hottest new jobs in tech, and there have been reports that some models essentially prompt-engineer users’ inputs under the hood. The news from Anthropic shows how essential prompt engineering—however it happens—currently is to the process of working with LLMs.
Bioptimus unveils a supersized open-source AI model to foster disease research and diagnosis. That’s according to Bloomberg. Called H-optimus-0, the model is trained on hundreds of millions of images and is designed to perform complex tasks such as identifying cancerous cells. The Paris-based startup is one of many companies chasing AI for medical research, including Google, startups like Generate:Biomedicines and Insilico Medicine, and various pharmaceutical companies.
FORTUNE ON AI
The European VC wunderkind behind Mistral, Revolut and Slack just raised $2.3 billion and predicts the AI revolution is only in ‘the earliest innings’ —by Prarthana Prakash
Knowledge workers don’t seem to think AI will replace them—but they expect it to save them 4 hours a week in the next year —by Steve Hasker
Exclusive: Intuit is laying off 1,800 employees as AI leads to a strategic shift—by Sheryl Estrada
AI CALENDAR
July 15-17: Fortune Brainstorm Tech in Park City, Utah
July 21-27: International Conference on Machine Learning (ICML), Vienna, Austria
July 23: Google earnings
July 30-31: Fortune Brainstorm AI Singapore
July 31: Meta earnings
Aug. 12-14: Ai4 2024 in Las Vegas
EYE ON AI NUMBERS
48%
That’s how much Google says its greenhouse gas emissions grew over the past five years as it’s ramped up investments in AI. The company blamed the sharp increase on data center energy consumption and supply chain emissions in its 2024 Environmental Report, published last week.
“As we further integrate AI into our products, reducing emissions may be challenging due to increasing energy demands from the greater intensity of AI compute, and the emissions associated with the expected increases in our technical infrastructure investment,” the report reads.
Google is not alone in dramatically increasing its energy usage in pursuit of AI. Microsoft, for example, is also reversing progress on its goal to be carbon-negative by 2030 (or net-zero by 2030, in Google’s case), all thanks to AI. Microsoft recently struck a record deal for carbon credits, agreeing to buy 500,000 credits from oil company Occidental Petroleum for “hundreds of millions of dollars” in order to offset its carbon emissions from AI, the Financial Times reported yesterday. But really, it’s an industry-wide problem.
In my most recent edition of the newsletter, I wrote about how this issue is inherent to the technology as it exists today and why AI won’t be ethical as long as that remains true. The essay caused quite a stir in my comments section on LinkedIn, with some arguing that nuclear energy will provide more than enough power to support AI, or that AI is a good thing because it will motivate us to solve the worsening energy crisis. What’s clear, however, is that a lot of damage has already been done, and AI development is moving far faster than any efforts to mitigate its energy usage.