James Manyika, a top Google executive tasked with weighing the impact of the firm’s technology on society, has a delicate load to balance with A.I. The Google SVP unveiled a new project Thursday to bring artificial intelligence to sustainable development efforts, as Google pushes forward with ambitious A.I. plans and tries to move past previous controversies, including its firing of a prominent A.I. ethics researcher in 2020.
“I think it’s unfortunate that Timnit Gebru ended up leaving Google under the circumstances, you know, perhaps it could have been handled differently,” he said, referring to the researcher, who was at the time one of the few Black women in the company’s research division.
Manyika was not working at Google when the company fired Gebru. Google hired him in January for a new role as senior vice president of technology and society, reporting directly to Alphabet chief executive officer Sundar Pichai. Originally from Zimbabwe, Manyika is a well-respected computer scientist and roboticist. He spent more than two decades as a top partner at McKinsey & Co. advising Silicon Valley companies and was director of the firm’s in-house think tank, the McKinsey Global Institute. He served on President Barack Obama’s Global Development Council and is currently vice chair of the U.S. National A.I. Advisory Committee, which advises the Biden administration on A.I. policy.
Manyika was speaking exclusively to Fortune ahead of Google’s announcement today of a $25 million commitment aimed at advancing the United Nations’ Sustainable Development Goals by helping nongovernmental groups access A.I. The company also launched an A.I. for the Global Goals website that includes research, open-source software tools, and information on how to apply for grant funding.
Google made the announcements in conjunction with the opening of the UN General Assembly in New York this week. The company said that in addition to money, it would support the organizations it selects for grants by providing engineers and machine learning researchers from its corporate charity arm, Google.org, to work on projects for up to six months.
The company began assisting NGOs working on the UN Sustainable Development Goals in 2018 and says it has since helped more than 50 organizations in almost every region of the world. Those projects include helping groups monitor air quality, develop new antimicrobial substances, and work on ways to improve the mental health of LGBTQ+ youth.
Manyika’s hiring comes as Google has sought to repair its image, among the wider public and its own employees, around the company’s commitment to both technology ethics and racial diversity. Thousands of Google employees signed an open letter protesting Gebru’s firing, and Pichai apologized, saying that the way the company had handled the matter “led some in our community to question their place at Google.” Nonetheless, months later the company also dismissed Gebru’s colleague and cohead of the A.I. ethics group, Margaret Mitchell. At the time, it said it was restructuring its teams working on ethics and responsible A.I. Those teams now report to Marian Croak, a Google vice president of engineering, who in turn reports to Jeff Dean, the head of Google’s research division. Croak and Manyika are both Black.
Since arriving at Google, Manyika says he has been impressed by the seriousness with which Google takes its commitment to responsible A.I. research and deployment and the processes it has in place for debating ethical concerns. “It’s been striking to me to see how much angst and conversations go on about the use of technology, and how to try to get it right,” he said. “I wish the outside world knew more about that.”
Manyika says that while it’s important to be alert to ethical concerns surrounding A.I., there’s a risk in allowing fears about potential negative consequences to blind people to the tremendous benefits, especially for disadvantaged groups, that A.I. could bring. He is, at heart, he made clear, a techno-optimist. “There’s always been this asymmetry: We very quickly get past the amazing mutual benefits and utility of this, except maybe for a few people who keep talking about it, and we focus on all these concerns and downsides and the complications,” he said. “Well, half of them are really complications of society itself, right? And yes, some of them are, in fact, due to the technology not quite working as intended. But we very quickly focus on that side of things without thinking about, are we actually helping people? Are we providing useful systems? I think it’s going to be extraordinary how assistive these systems are going to be to complement and augment what people do.”
He said a good example of this was ultra-large language models, a type of A.I. that has led to stunning advances in natural language processing in recent years. Gebru and a number of other ethics researchers have been critical of these models, which Google has invested billions of dollars in creating and marketing. It was Google’s refusal to allow her and her team to publish a research paper highlighting ethical concerns about these large language models that precipitated the incident leading to her firing.
Ultra-large language models are trained on vast amounts of written material found on the internet. The models can learn racial, ethnic, and gender stereotypes from this material and then perpetuate those biases when they are used. They can fool people into thinking they are interacting with a person instead of a machine, raising risks of deception. They can be used to churn out misinformation. And while some computer scientists see ultra-large language models as a pathway to more humanlike A.I., long regarded as the Holy Grail of the field, many others are skeptical. The models also require a lot of computing power to train, and Gebru and others have been critical of the carbon footprint involved. Given all these concerns, one of Gebru’s collaborators in her research on large language models, Emily Bender, a computational linguist at the University of Washington, has suggested companies should stop building ultra-large language models.
Manyika said he was attuned to all of these risks, and yet he did not agree that work on such technology should cease. He said Google was taking many steps to limit the dangers of using the software. For instance, he said the company had filters that screen the output of large language models for toxic language and factual accuracy. He said that in tests so far, these filters seem to be effective: In interactions with Google’s most advanced chatbot, LaMDA, people have flagged less than 0.01% of the chatbot’s responses for using toxic language. He also said that Google has been very careful not to release its most advanced language models publicly, because the company is concerned about potential misuse. “If you’re going to build things that are powerful, do the research, do the work to try and understand how these systems work, as opposed to kind of throw them out into the world and see what happens,” he said.
But he said that eschewing work on the models altogether would mean depriving people, including those most in need, of vital benefits. For instance, he said such models had enabled automatic translation of “low-resource” languages, those for which relatively little written material exists in digital form, for the first time. (Some of these languages are only spoken, not written; others have a written form, but little material has been digitized.) These include languages such as Luganda, spoken in East Africa, and Quechua, spoken in South America. “These are languages spoken by a lot of people, but they are low-resource languages,” Manyika said. “Before these large language models, and their capabilities, it would have been extraordinarily hard, if not impossible, to translate from these low-resource languages.” Translation allows native speakers to connect and communicate with the rest of the world via the internet in ways they never could before.
Manyika also highlighted many of the other ways in which Google was using A.I. to benefit societies and global development. He pointed to work the company is doing in Ghana to try to more accurately forecast locust outbreaks. In Bangladesh and India, the company is working with governments to better predict flooding and send alerts to people’s mobile phones, advance warnings that have already saved lives. He also pointed to DeepMind, the London-based A.I. research company owned by Alphabet, which recently used A.I. to predict the structure of almost all known proteins and published them in a free-to-access database. He said that such fundamental advances in science would ultimately lead to a better understanding of disease and better medicines and could have a large impact on global health.