We’re in the thick of the AI revolution, but we might look back on January 2014 as one of the most pivotal moments in business history. That was the month that Demis Hassabis sold his AI company, DeepMind, to Google. He rebuffed a higher offer from Meta’s Mark Zuckerberg, and the acquisition scared Elon Musk so much that he decided to launch a rival company with Sam Altman: OpenAI.
Fast forward to today, and Hassabis is still the one to beat. He runs all of Google’s AI initiatives, including Gemini, which is quickly eating away at OpenAI’s user base. In his spare time, Hassabis won a Nobel Prize, and he runs a startup called Isomorphic that wants to solve all disease with AI.
In a new episode of Fortune 500: Titans and Disruptors of Industry, Fortune’s Editor-in-Chief Alyson Shontell sat down with Hassabis at the World Economic Forum in Davos to learn where he thinks the future is heading. Listen to the vodcast above.
Here’s some of what they discussed during their 45-minute conversation:
On his early inspirations and the origin of DeepMind
- How childhood fascinations with astronomy, cosmology, and chess pulled him toward AI as a way to understand the mind and the universe
- Why he started DeepMind in 2010 “with the mission of solving intelligence—and then using it to solve everything else”
On selling DeepMind to Google
- Why he sold the company to Google, even after receiving a higher offer from Meta
- Why he told Larry Page, Google’s CEO at the time, that acquiring DeepMind might be the most important acquisition in the company’s history
- How the sale inspired other tech leaders, including Elon Musk and Sam Altman, to join forces
- How the sale set the stage for AlphaFold, an AI-powered platform that predicts protein structures and earned Hassabis a Nobel Prize
On Isomorphic, an AlphaFold spinout
- Why he views traditional drug discovery as “incredibly inefficient,” and how Isomorphic is using AI to make it faster and more precise
- Whether 2026 could see Isomorphic’s first cancer drugs advance into clinical trials
On how Hassabis manages his time across different teams
- Why fostering interdisciplinary collaboration is central to his leadership approach
- How Hassabis structures his day into two work shifts, one that spans from 10 p.m. to 4 a.m.
On Google’s AI surge in 2025
- Why his early priority was ensuring that models like Gemini and the image-generation system Nano Banana were truly “best in class”
- Why he sees DeepMind as the “engine room” powering innovations across Google products like Chrome and YouTube
On ensuring that tech advancements serve both business and society
- Why he believes Google must be willing to disrupt itself—especially in areas like Search—before others do
- Why commercial success is vital for continuing to offer resources like AlphaFold to the public at no cost
- How he hopes DeepMind and Google can “be a role model for all the good things that can come with AI”
On the near future of AI
- Why Hassabis believes that by year’s end, AI will be ready to power autonomous agents that can be delegated whole tasks
- Why Hassabis is hopeful about AI-powered glasses, and when the next breakthrough moment in robotics will happen
- Hassabis’ vision of one “universal assistant” that can live across all of a user’s devices
Read the transcript, which has been lightly edited for length and clarity, below.
A quick message from our sponsor:
Jason Girzadas, CEO of Deloitte US: [Use cases for AI agents are] the topic of every CXO conversation I’m a part of, and I think the thought process has to be looking for high-impact areas that may not necessarily be the most glamorous or high-profile functional areas, but are ripe for automation and for using this technology to create efficiencies as well as innovation. And over time, AI agents will also be in customer-facing and growth-oriented domains. In our case at Deloitte, we’re using it within our financial organization, looking at very mundane processes like expense management and working capital management. We’re seeing other organizations using it in call centers and in software development work that can be automated.
Where Hassabis’ interest in AI came from
You’ve had a huge 2025. It sounds like you’re gearing up for a great 2026, but before we get into both of those things, I want to just take a step back so people can get to know you a little bit better. One of the things you love is chess; you’re a chess master. You also love astronomy. I’m curious how both of those things took you into AI, or shaped how you think about AI?
Hassabis: Yeah, well, I’ve always been interested in things like astronomy, cosmology, physics as a kid, because I’ve always been interested in the big questions. What’s actually happening here in the universe? The nature of consciousness, all of these types of things. So you get sort of drawn to physics if you’re interested in the big questions.
And then for me, for chess, I also love games. I love strategy. I ended up training my own mind by playing chess as a kid, very seriously.
And then that got me thinking about thinking—and how does the brain work? And then I’ve combined all that together. That sort of led me to AI and computers, and AI being a way to understand our own minds, but also a perfect tool for science and understanding the universe out there.
Hassabis’ decision to sell DeepMind to Google
You cofounded DeepMind a number of years ago, and in 2014 you sold it to Google for about $500 million. It was a hot deal. I know Meta wanted it too. And from my perspective, I think we’re going to look back on that moment as one of the most transformative moments in business history.
You’ve given Google the foundation with which to build an incredible AI machine and really take it into the future. When you look back on that, how do you feel about that moment? How did you make that decision? Did you know it was going to be such a big moment at the time?
We did, actually, those of us that were involved in the science. We started DeepMind in 2010, which was 15+ years ago now, and nobody was talking about AI. But we knew, and we set out with the mission of solving intelligence and then using it to solve everything else.
So we wanted to be the first company to build artificial general intelligence. And the main thing we wanted to apply it to was solving scientific problems. So when Google came along in 2014—and it was actually driven by Larry at the time, Larry Page, who was the CEO—we knew that in some ways we were sort of underselling.
But, on the other hand, what mattered to me was not the money, it was the mission, and being able to accelerate our progress towards artificial general intelligence and answering these scientific questions that we were trying to solve. And I felt that teaming up with Google would accelerate that, mostly because they had obviously enormous compute power, and we see today how important that is for developing intelligence.
So at the time, I did mention to Larry, and also to the head of search at the time, who was driving the deal, that it might turn out to be the most important acquisition Google has ever done. Which is saying something, because they’ve acquired YouTube and Android; they’ve got a good history of buying important things.
Now, if you go back and you look at the origins of OpenAI—Elon and Sam got together because they were afraid that Google might now have a monopoly in the AI space with the DeepMind acquisition. So, really, that also kind of created a mega competitor at the time.
Yeah, I guess there are all these sorts of butterfly effects that happen. And I think part of it also was the success of things like AlphaGo, the first program to beat the world champion at the game of Go. Using these kinds of learning systems that we’re familiar with today—reinforcement learning, deep learning, at the heart of it—I think that was a big watershed moment as well. That was in 2016, so this is actually the 10-year anniversary of that breakthrough. And I think that really fired the starting gun for the modern AI era, including things like OpenAI. I know the founders of that watched that match, and wanted a piece of that action.
The development of AlphaFold—and the spin-off of drug-discovery platform Isomorphic
Under Google and Alphabet, you’ve been able to have a lot of moon shots, take risks, try things that haven’t necessarily led to money immediately, but have been profound breakthroughs. And [for] one of them, you won a Nobel Prize. Congratulations, it’s incredible.
I was wondering if you could just tell me a little bit more about AlphaFold and why that’s such a big deal in terms of how we could be looking at solving diseases moving ahead.
I think this is one of the benefits of being a part of Google and Alphabet, having the resources and the time to really go after these sorts of deep scientific problems. And AlphaFold, I think, is the best example of that.
It’s basically a solution to a 50-year-old grand challenge in biology—can you determine the 3D structure of a protein just from its amino acid sequence, basically from its genetic sequence? And this is incredibly important, because proteins basically do everything in your body, from muscles to neurons firing. Everything depends on proteins. And if you know the 3D structure of a protein, what it looks like in your body, then you partially know what function it performs, what it supports.
Obviously, it’s important also for disease, because things can go wrong with proteins. They can fold in the wrong way, like in something like Alzheimer’s, and then that can create a disease. So [it’s] really important for drug discovery, as well as fundamental biology. And AlphaFold was a solution to this problem, posed 50 years ago by another Nobel Prize winner—actually Christian Anfinsen—who argued that it should be possible to go directly from a one-dimensional string of amino acids to this 3D structure. How does it scrunch up into a ball?
And AlphaFold was that solution, and it’s so efficient. Not only is it accurate, we folded all 200 million proteins known to science, and then we put them in a huge database with the European Bioinformatics Institute and released it for free, for everyone in the world to use. So now over 3 million researchers around the world make use of AlphaFold every day.
Wow. You’re using some of it, I believe, for Isomorphic, which is a startup that you have, I want to say, on the side. So you’re doing two huge jobs at once. You’ve raised hundreds of millions of dollars for Isomorphic, with Google, of course, as a backer.
Can you just explain the mission there? And you have some lofty goals, like you say, we’re going to solve all disease. You don’t say cure, you say solve. Also walk me through how hard it is to get a drug to trial, because that’s historically been very difficult.
So that was always the idea behind AlphaFold. Obviously there’s a lot of fundamental science that can be done if you understand the structures of proteins, including designing new proteins that do new things. So you can sort of use AlphaFold in reverse and go, Okay, I want this particular shape. How do I get it from a genetic sequence?
But to do drug discovery, knowing the structure of a protein is only one small part of the whole process. It usually takes, on average, about 10 years to go from understanding a target for a disease all the way to a drug that’s ready for the market. So it’s an enormous amount of time and cost: billions of dollars, a decade or more. And most drugs fail along the way; it’s only about a 10% success rate. So it’s just incredibly inefficient, because biology is so complicated.
So what I’ve always dreamed about doing, and was the first thing I wanted to apply AI to, was human health, improving human health. What could be a more important use of AI? And AlphaFold was the proof point that this could be possible.
And then Isomorphic, we spun that out after AlphaFold was done—so three, four years ago—to develop additional AlphaFold-level breakthroughs, more in the chemistry space. So, if you now know the structure of a protein, you need to know where the chemical compound you’re designing—the drug, basically—is going to bind to the protein and what it is going to do. And so you need to build other AI systems that can predict all of that.
So that’s what we’ve been doing at Isomorphic. It’s going incredibly well. We have great partnerships with Eli Lilly and Novartis, the best pharmas in the world. We have 17 drug programs active already, and we plan to eventually go to hundreds.
And I think this is the way to make real step change progress in human health. You basically do your search and your hypothesis searching in silico, and that’s hundreds, thousands of times more efficient than doing it in a wet lab. And you save the wet lab part just for the validation step.
Of course, eventually you have to test it in trials, human trials, and all those types of things to make sure everything’s safe. But you can do all of your search and design, or almost all of it, in silico. That’s the plan.
You mentioned 2026 is going to be a big year, I imagine for both Google and for Isomorphic. Do you anticipate that early 2026 could be the moment you get the first drug to trial? And might it be in cancer?
Yes, so we’re working on several spaces, actually: cancer, cardiovascular, immunology, and then eventually we’d like to branch out to all therapeutic areas. You can think of it as a general drug discovery engine, a platform, that we’re building. And we are already in preclinical work, very early stage, for some cancer drugs. And then, hopefully by the end of the year, if those are successful, we’ll start going towards clinical trials.
Hassabis’ daily schedule, and how he builds the teams he delegates to
How do you manage yourself and your time and your teams? Because you’re achieving really, really difficult things, whether it’s the launch of Gemini 3—which was very successful and well received—or getting drugs to trial. Those sound like very different things, two different teams to run. You can’t be in two places at once. How are you doing this? How are you running two companies?
You know, one of my skills is bringing together amazing, world class interdisciplinary teams. I’ve loved managing those teams. I love composing those management teams together. And I’ve got incredible teams both at Google DeepMind and Isomorphic.
If we take Isomorphic, for example, we’ve blended top biologists and chemists with top machine learning and engineering talent. And I think there’s a lot of magic that happens when you have these kinds of interdisciplinary groups.
And then if we think about the Google DeepMind side, there we’ve tried to blend together the best of the startup world, like what we were doing at DeepMind originally. And then scale, a kind of multinational scale, with all the advantages of having these amazing product surfaces where we can immediately deploy technologies like Gemini 3, immediately get great feedback from users, and also help in the everyday lives of billions of users. So it’s amazingly exciting and motivating, actually. And in terms of the way I manage my time, you know, I don’t sleep very much.
Like a couple of hours?
Yeah, well, a bit more than that. That would be bad for the brain. So I do try and get six, but I have unusual sleeping habits. I sort of manage during the day, and try and pack my day in the office with as many meetings as possible, back to back, almost no break between. Then I get home, spend a little bit of time with the family, have dinner, and then I sort of start a second day of work at about 10 P.M. and go to 4 A.M., where I do my thinking, more creative and research work. And it’s worked out. I’ve done that for about a decade now, and it works well.
I can’t imagine being creative at four in the morning, but if it works for you.
Yeah, I come alive at about 1 A.M.
How Hassabis helped Google catch up to increased competition in the AI race
You’re clearly good at motivating teams to do hard things. In 2023, a decision was made at Google to put two different AI teams under you. How did you work out management kinks there and get the team shipping again? Because there was this feeling that Google was a little bit asleep at the wheel for AI. And I’m curious if you think that’s true, and how you got them to wake up?
Yeah, well, we had two world-class groups in the original DeepMind and Google Brain. And actually, I think often, as a collective, we don’t get enough credit for the fact that about 90% of the modern AI industry is built on technology or discoveries made by one of those two groups, from Transformers to AlphaGo and deep reinforcement learning. So we have, and we still have, I think, the deepest and broadest research bench. We have incredible talent, better than anywhere else in the world by a long way, I think.
But it was getting complicated having two groups, especially given the amount of compute needed in this scaling era. So that was really why we had to put the two groups together, so we could pool all of the talent together, working on a single project in Gemini. But also, even [a company] like Google didn’t have enough compute to have two frontier projects under one house, so we needed to combine all of our resources together.
You know, I’m a very collaborative person. I’m very open-minded about different ways of working, and I’m always looking to improve as well. One of the watchwords I live by is this Japanese word I love, Kaizen, which is sort of striving for continual self-improvement. And that’s what I always try to do.
I’m always in learning mode. Perhaps that’s why I like building learning machines, because I like learning, and there’s always something you can learn no matter how expert you are at what you do. And bringing the two groups together and trying to combine the best of both cultures has been great, and I think we’re reaping the rewards of that now.
The way we think about Google DeepMind is as the engine room of Google. It’s like the nuclear power plant that’s plugged into the rest of this amazing company. And I think one of the things we did—one of the things I’m very proud of—is getting the shipping culture going and sort of rediscovering, I guess, the golden era of Google from 10, 15 years back: taking calculated risks, shipping things fast, and being innovative.
And I think that’s all working out really well now, whilst at the same time being thoughtful, scientific, and rigorous about what we put out in the world, whether that’s engineering or science. And I think, and I hope, we’re getting that balance right.
You mentioned going back to the golden era of Google, so much so that the founders, or at least Sergey, seem to be involved again. What is it like working with him on AI at Google?
It’s been great. And Larry is too, in different ways. Larry, more strategically. Sergey has been in the weeds, programming away on things like Gemini, and it’s been fantastic seeing them engage.
Are you putting him to work, are you like, Sergey, I need this code right now?
No, it’s more like he chooses what to work on, but it’s great seeing him in the office and pushing things in certain directions. And it’s easier if the founders are heavily involved. And I still act like a cofounder of Google DeepMind as well, in terms of what we’ve got to do and, strategically, what we pick to do. And that’s something I think I’ve learned to do well over the last 10 to 15 years.
When you have some ambitious goal, like solve all disease or build AGI, what are the intermediate goals that are also very ambitious but are waypoints? What are the right ones to pick? And I think we’ve done that historically pretty well with most of the Alpha projects, AlphaGo, AlphaFold, and so on, and now Gemini. And I think that’s really critical for any very ambitious scientific and engineering project: breaking it down into manageable steps so that you can see you’re heading in the right direction.
And I think that we very clearly are with the technology that we’re building. And it’s been an incredible couple of years for us, and I think we’re getting into our groove, I would say. And I think other people, and the external world, are starting to feel that, including things like Wall Street and the share price.
It definitely seems like there must have been some sort of KPI measurement, charge-ahead, unifying moment, because the launch of Gemini 3, among other launches, brought much fanfare and caused OpenAI to go to a code red, which they claim happens all the time. And then you have this huge monster deal with Apple that is monumental, I think, for the industry.
So I’m curious what happened internally, behind the scenes—how did you set those KPIs for the team? And then how are you setting them to keep the momentum in 2026?
Well, look, I think for me, it always starts with the research: having the best models, in this case, and obviously fundamental research feeding into that. And I always believe you then need to reflect that as quickly as possible in your products, and then you’ve got to get your marketing and distribution right. But none of it matters if your models aren’t best in class, aren’t state of the art.
So that’s what we focused on, first with the Gemini models, but also our other models like Nano Banana, our image model, which went super viral and was a big part of our success last year, along with our video model, Veo, and our world model. So there’s more than just large language models, and we’re kind of state of the art on all of those.
And then it was about sorting things out internally, almost rebuilding the infrastructure in some ways at Google, so that you could reflect very quickly the power of the latest models in the lighthouse products, including Search, YouTube, and Chrome. All these amazing surfaces that we have, as well, of course, as the Gemini app. It was new for everyone in the industry, and I think it takes a little while to re-architect things around that.
And very much, again, this idea of Google DeepMind being the engine room, providing the engine for the rest of the organization to use. I think that took a year, 18 months, to get right, but I think we’re seeing the results of that now. I think there’s still more to go, by the way, and we can have even faster velocity.
The other thing is also instilling this culture of intensity and pace and focus, really focusing only on the things that matter and cutting out distractions. Maybe the final thing I would say is, I think there’s a lot to be said, especially in today’s very noisy world, for just consistently delivering good, rational decisions. And over time, minimal drama. I think it’s just amazing how much that compounds over time. I think we’re building a lot of momentum now, and hopefully we’ll see that even more this year.
How Hassabis balances AI safety with rapid development
Sort of like we mentioned before, the decision to sell DeepMind to Google was a monumental moment, a transformational moment in business—and if you’re successful now, I think that will be perhaps the biggest transformation in business history.
How does that weigh on you to make sure that you, as a leader, are driving this in a direction that’s good for society, good for the workforce, good for Google? Because it is a little bit of an innovator’s dilemma where this is the search king, a huge business model based on ads, and if you’re successful…
…well look, it is a classic innovator’s dilemma. I think we’ve navigated it pretty well so far, and Search is more successful than ever. But also there’s this aspect of, if we don’t disrupt ourselves, someone else will. So you’re better off being ahead of that, I think, and doing it on your own terms. And so I think that’s what we’ve found. In terms of responsibility, I’ve felt that way not just at Google, but before that at DeepMind, and even in my academic career.
Because when we started DeepMind, it seemed like a fanciful idea, but myself and especially Shane, our chief scientist, really believed that it would be possible to create artificial general intelligence. And we understood what I think more and more people are understanding now—how transformative to the world that would be. Amazing for things like science and human health and maybe helping with energy and so on.
But also, there are risks. It’s a dual-purpose technology. Bad actors could use it for harmful ends. And eventually, as the technology becomes more autonomous, more agentic, and we get towards AGI, there’s technological risk too. And so I worry a lot about all of those things.
“In today’s very noisy world, it’s important to consistently deliver good, rational decisions with minimal drama.”
Demis Hassabis, cofounder and CEO of Google DeepMind
And we have to make sure that the economic engine works as well, so we have enough money to fund our research and fund things like AlphaFold and give them to the world for free. That’s not easy. It costs a lot of money to hire the researchers and create something like AlphaFold, but we do a lot of things like that, and I want to do more things like that for the world. But that requires us to be successful on the commercial side also. So I think there’s a balance to be had there.
But that’s partly where the responsibility comes in as well, and I feel like we can do this at Google—we have the platform to show how AI can be deployed in a responsible way and a beneficial way for all of society. And all of us frontier labs producing AI have choices about what we should use AI for. Are we going to use it for things like medicine, for alleviating administration, and for helping with things like poverty, or are we going to use it for exploitative things? And I think that we’re going to try and be a role model for all of the good things that can come with AI.
It doesn’t mean we won’t make any mistakes. We will because it’s such a nascent and complex technology, but we’ll try to be as thoughtful as possible, and we’ll try to be as scientific about it as possible, too. The scientific rigor we bring to our work, and always have, I think, is going to matter here a lot. I mean, it’s a scientific endeavor in the end.
And then I hope that the kind of reliability and security and safety that we like to work on will come through in our products. I think the market will reward that. Because if you think about enterprises that use these technologies, as they get more sophisticated, they’re going to want to know—if you’re a big bank or insurance company, health company, medical company—that you have some guarantees about what the AI systems you’re bringing in are going to do.
So I think a good aspect of AI becoming very commercial could be that there’ll be commercial incentives to be robust and reliable and secure, and all the things that you’d want, actually, in preparation for AGI coming into the world.
Where AI will go in the future, from glasses to robotics to medicine
So when you look at the year ahead, what do you think the story of AI will be? What will we achieve?
Well, I mean, every year is pretty pivotal in AI. And it feels like, at least for those working at the coalface, that 10 years of progress almost happens every year. And I think this year will be no different.
It’s very intense, but you’ve also got to, every now and again, look up at the strategic picture. I think that, at least for us, with Gemini 3, we’ve crossed a watershed moment, in my opinion. And hopefully, those of you who’ve used it will feel that it’s very capable now. And I’m certainly using it in my everyday life to help me with my research and summarizing things and doing some coding.
So I think that these systems are now ready to maybe build agents. The whole industry has talked a lot about agents and more autonomous systems and delegating whole tasks to them. But I think maybe by the end of this year, we’ll really start seeing that.
I’m very excited about assistants coming into the real world, maybe with glasses. We have a big project on smart glasses. I think the AI technology is only just about there to make that actually viable, and I think that could be a kind of killer app for glasses. I’m excited about bringing that into the world. Also robotics—I still think there’s more research to be done on robotics, but I think over the next 18 months or so, we’re going to see breakthrough moments in robotics, too.
So all of these areas we’re pushing very hard on, as well as, of course, improving Gemini itself.
Those aren’t the glasses, right? I would buy those if they were.
No, they’re not.
I was going to ask you about the future form of—computers were not built for AI and all the things that AI can do. What do you think is the future for it? It sounds like glasses…
…I think glasses will just be one of the solutions. I have this notion, which we talk about internally, of a universal assistant, and what we mean by that is an assistant that’s super helpful in everyday life: recommending new things, enriching your life, dealing with admin, all of these types of things. But it goes across all of the surfaces. So it exists on your computer, on your browser, on your phone, and then I think there’ll be new devices too, like glasses. And it will be the same assistant that understands your context across the different conversations you’ve had, whether that’s in your car or in your office. And if you want it to, that can all be integrated together and, I think, help you improve your life across all those different aspects.
Maybe for Christmas next year, the holidays next year, we can all get our Google Glasses.
That’s the idea.
You had them just way too early, I think, before when they…
…like a lot of things we’ve done at Google, we maybe pioneered all these spaces, perhaps a little bit too early, in hindsight, with glasses. Partly it was the technology, making them not too chunky and things, but also, I think, it was missing the killer app. And I think an AI digital assistant could be that.
Yeah, amazing. One last question for you. I want to ask your biggest, boldest prediction for how AI will transform the world. When you look ahead—I know you said 10 years is one year now—but when you’re looking ahead, are you [seeing] the abundance world where AI can solve all of our problems? What does it look like?
I think, done right, in 10 to 15 years’ time, we’ll be in a new golden era of discovery. That’s what I hope, a kind of new renaissance. And I think human health will be revolutionized. Medicine won’t look like it does today. I think that personalized medicine, for example, will be a reality. And I think we’ll have used these AI technologies to solve many big problems in science, and things like new materials, maybe help with fusion, or solar, or optimal batteries—some way of solving the energy crisis. And then I think we’ll be in a world of radical abundance, where we can use those energy sources to travel the stars and explore the galaxy. That’s what I think our destiny is going to be.
Amazing. Well, thank you. I hope that that’s what you build. And thank you for all your efforts on it.
Great to talk to you.