Artificial intelligence, or AI, is all around us today—and it's often invisible as we go about our daily lives.
Companies are successfully using AI for credit card fraud detection, speech recognition, Web search rankings, automated customer service, legal discovery, photo search, translation, and even farming. A few weeks ago, Google’s AI program AlphaGo beat Lee Sedol, a human champion of Go, an ancient and complex game long thought to be beyond computers’ ability to master. It won after learning to play from millions of games. After studying millions of patient records, Stanford’s AI Lab and IBM’s Watson can diagnose certain types of cancer more accurately than human physicians. The list of positive uses of AI is growing.
When we first worked on the AI behind self-driving cars, most experts were convinced they would never be safe enough for public roads. But the Google Self-Driving Car team had a crucial insight: AI learns differently than people do. When driving, people mostly learn from their own mistakes; they rarely learn from the mistakes of others. People collectively make the same mistakes over and over again, and as a result, hundreds of thousands of people die worldwide every year in traffic collisions.
AI evolves differently. When one of the self-driving cars makes an error, all of the self-driving cars learn from it. In fact, new self-driving cars are "born" with the complete skill set of their ancestors. So collectively, these cars can learn faster than people. Thanks to this insight, self-driving cars quickly and safely blended onto our roads alongside human drivers, learning from one another's mistakes all the while.
As with other breakthrough technologies, recent progress in AI has spurred public debate. Some voices have fanned fears of AI and called for urgent measures to avoid a hypothetical dystopia.
We take a much more optimistic view. The history of technology shows that new inventions often meet skepticism and fear-mongering before they ultimately improve human life. The original Kodak camera was seen as destroying art. Electricity was believed to be too dangerous when it was first introduced. But once these technologies got into the hands of millions of people, and were developed openly and collaboratively, those fears subsided. Just as the agricultural revolution freed us from spending our waking hours picking crops by hand in the fields, the AI revolution could free us from menial, repetitive, and mindless work. AI will do the things we don't want to do, like driving in bumper-to-bumper traffic.
We believe AI has the potential not only to free us from the negative, but to enhance what's most positive about us as human beings. In playing against AlphaGo, grandmaster Lee Sedol gained a much deeper understanding of the game and has since dramatically improved his level of play. We could all be like him, harnessing AI to improve the things we do every day.
Imagine a world where clever apps and devices could help us recognize every person we've ever met, recall anything we've ever said, and experience any moment we've ever missed. A world where we could in effect speak every language. (We already see glimmers of this today with Google Translate.) Sophisticated AI-powered tools will empower us to better learn from the experiences of others, and to pass more of what we learn on to our children.
Do we worry about the doomsday scenarios? We believe they are worth thoughtful consideration. Today's AI thrives only in narrow, repetitive tasks where it can be trained on many examples. But no researchers or technologists want to be part of some Hollywood science-fiction dystopia. The right course is not to panic; it's to get to work. Google, alongside many other companies, is doing rigorous research on AI safety, such as how to ensure people can interrupt an AI system whenever needed, and how to make such systems robust to cyberattacks.
Let’s get beyond the chatter and build working solutions. The lesson from self-driving cars is that we can learn and do more collectively. Google has, for example, open-sourced the free platform TensorFlow; its code is in the open for all to see and contribute to. TensorFlow lets AI researchers around the world collaborate more easily, sharing actual code rather than just research papers. That way, we can see what computers can learn and how they use data, and apply the wisdom of our smartest minds to control and improve AI.
Indeed, it's already clear that Silicon Valley is not the only place that will make progress in AI; this is truly a global effort, with global potential. We believe AI will serve everyone best if it’s built by a diverse range of people, such as those joining Google’s new machine learning group opening in Zurich, and countless other global hubs.
For us, the hypothetical, long-term concerns are ultimately far outweighed by our excitement about the endless possibilities. AI is already doing a lot of good for all of us. We can't wait to see it free us of mindless, menial work and empower us to unleash our true creative potential.
Eric Schmidt is executive chairman of Alphabet, Google's parent company. Sebastian Thrun is president and chairman of Udacity, an online education company.