Asia is fast embracing AI, but concerns about its language skills and environmental impact abound

By Jeremy Kahn, Editor, AI

Jeremy Kahn is the AI editor at Fortune, spearheading the publication's coverage of artificial intelligence. He also co-authors Eye on AI, Fortune’s flagship AI newsletter.

Josephine Teo, Singapore's minister of digital development and information, at last week's Fortune Brainstorm AI conference in Singapore.
Graham Uden—FORTUNE

Hello and welcome to Eye on AI.

Last week, I was in Singapore at the first Fortune Brainstorm AI conference we’ve held in Asia. The conference attracted top regional executives from some of the largest tech companies and biggest names in AI—Microsoft, OpenAI, and Google—as well as IBM and hardware companies like Qualcomm and HP. There were also leaders from some of Asia’s most successful tech companies, including super-app Grab, Japanese e-commerce giant Rakuten, and Singapore’s DBS Bank, along with some of the region’s largest insurance firms, its most prominent venture capital houses, founders of some of the region’s hottest AI startups, and key officials from Singapore’s tech-savvy government.

I want to share some impressions and highlights from a fascinating two days of panels and conversations.

It was clear that AI, and generative AI in particular, is diffusing rapidly across the world. If anything, Asia may be adopting the technology faster than elsewhere. I was struck by the large number of executives at the conference who raised their hands when asked if they had not only conducted proofs of concept using generative AI but actually had generative AI applications in full deployment. Far more hands went up than when I asked the same question at our Brainstorm AI conferences in San Francisco in December and in London in April. (When I pressed the executives in Singapore about what applications were in deployment, the leading answer involved chatbots or AI coaches that help human call center operators resolve customer questions faster and more accurately.)

Many of the issues preoccupying executives in Asia about how to build AI products successfully—particularly concerns about reliability and cost—are the same ones that are top of mind for executives in the U.S. and Europe. Debanjan Saha, the CEO of AI company DataRobot, said that businesses had to figure out how to close three essential “gaps” with AI before they could realize its full potential: the value gap, the confidence gap, and the expertise gap. It was clear from many sessions that businesses are struggling with AI’s lack of reliability, but also that many companies are finding ways to use AI effectively through techniques such as retrieval-augmented generation (RAG), fine-tuning, and using a mix of AI models of different sizes, with some models moderating the outputs of others.
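
For readers curious what RAG actually looks like in practice, here is a minimal sketch of how a pipeline grounds a chatbot’s answers in a company’s own documents before the model ever generates a word. The documents, model name, and prompt wording are illustrative assumptions, not details from any conference session; a production system would swap in a real vector database and whichever LLM the business uses.

```python
# Minimal retrieval-augmented generation (RAG) sketch.
# Assumes the `sentence-transformers` package and a tiny in-memory document store;
# in production you would use a vector database and your LLM of choice.
import numpy as np
from sentence_transformers import SentenceTransformer

documents = [
    "Refunds are processed within 5 business days of approval.",
    "Premium support is available 24/7 via the in-app chat.",
    "Accounts can be closed from the security settings page.",
]

embedder = SentenceTransformer("all-MiniLM-L6-v2")  # small, general-purpose embedding model
doc_vectors = embedder.encode(documents, normalize_embeddings=True)

def retrieve(question: str, k: int = 2) -> list[str]:
    """Return the k documents most similar to the question (cosine similarity)."""
    q_vec = embedder.encode([question], normalize_embeddings=True)[0]
    scores = doc_vectors @ q_vec
    top = np.argsort(scores)[::-1][:k]
    return [documents[i] for i in top]

def build_prompt(question: str) -> str:
    """Ground the model in retrieved text so it answers from company policy, not guesses."""
    context = "\n".join(retrieve(question))
    return (
        "Answer using only the context below. If the answer is not there, say so.\n"
        f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
    )

if __name__ == "__main__":
    # Pass this prompt to any LLM -- a hosted API or a small local model.
    print(build_prompt("How long do refunds take?"))
```

The point of the retrieval step is exactly the reliability concern executives raised: the model is asked to answer from retrieved company text rather than from whatever it absorbed in training.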

But some issues are of particular salience in the region—ensuring that generative AI solutions work well in a variety of local languages, for one. Singapore has helped train its own LLM for Southeast Asian languages, called SEA-LION, although there was debate at the conference about whether this is a useful national project over the long term. It was also an open question whether open-source models, including small language models, from companies such as Meta or Mistral—or a Chinese company like ModelBest—could provide good coverage of these languages, better overall performance across tasks, and better optimizations over time to run on a variety of devices, including mobile phones.

Another issue that takes on even more prominence in Southeast Asia is the computing and energy demands of generative AI and how they may affect countries’ sustainability plans. Singapore, which is both power- and water-constrained, had for a time barred the construction of any new data centers in the country. But last year it allowed an additional 80 megawatts of data center capacity to be built, and it has approved an additional 300 megawatts to come online soon. Jacqueline Poh, managing director of Singapore’s Economic Development Board, said she was confident the city-state would have enough data center capacity to serve the AI needs of its economy. And Josephine Teo, Singapore’s minister for digital development and information, noted that the country already has some of the densest data center infrastructure on the planet.

In one fascinating session, Tim Rosenfield, from Australian company Sustainable Metal Cloud, explained how specialized immersion cooling (in which the entire server rack is submerged in flowing oil-based coolant that carries heat away far more efficiently than air or water cooling alone) and reengineered server racks could cut AI’s energy demands and carbon footprint by 50%. On the same panel, former IBM chief AI officer Seth Dobrin, now a general partner at venture capital fund 1infinity Ventures, said the idea that we will need ever-bigger models to deliver AI capabilities was essentially a dead end from a sustainability perspective, and that the industry should turn back toward smaller, more specialized models if it wants to deploy AI without giving up on the climate.

I chaired two of the more optimistic sessions at the conference: one on how AI can accelerate our quest to find cures for diseases and perhaps even help us discover ways to slow or reverse natural aging, and the other on AI’s transformational, and overwhelmingly positive, impact on education.

If you weren’t able to make it to Singapore and want to see what you missed, you can catch up on most of the conference sessions on Fortune’s website here. With that, here’s more AI news.

Jeremy Kahn
jeremy.kahn@fortune.com
@jeremyakahn

Before we get to the news… If you want to learn more about AI and its likely impacts on our companies, our jobs, our society, and even our own personal lives, please consider picking up a copy of my new book, Mastering AI: A Survival Guide to Our Superpowered Future. It’s out now in the U.S. from Simon & Schuster and you can order a copy today here. In the U.K. and Commonwealth countries, you can buy the British edition from Bedford Square Publishers here.

AI IN THE NEWS

Two OpenAI cofounders and another top exec leave the company. OpenAI cofounder and president Greg Brockman announced he is taking an extended leave of absence from the company. Separately, John Schulman, who was also a member of OpenAI’s founding team, announced he was jumping over to OpenAI’s rival Anthropic, explaining the decision in a post on social media platform X as motivated by a desire to sharpen his focus on AI alignment—the work of trying to ensure powerful AI systems will act in accordance with human wishes and values. And, at the same time, The Information reported that OpenAI’s vice president of consumer product, Peter Deng, had left the company. You can read more about the three departures here.

Elon Musk renews lawsuit against OpenAI. The tech billionaire filed a federal lawsuit against OpenAI, which he helped found as a nonprofit lab in 2015, alleging that his cofounders Sam Altman and Greg Brockman had engaged in “a long con” to persuade him to fund the lab. He alleges Brockman and Altman misled him about OpenAI’s true purpose only to later transform the organization from a nonprofit dedicated to open research and developing artificial general intelligence for the good of humanity into a company that no longer makes most of its AI models and research available for free, is closely aligned with Microsoft, and churns out AI products for commercial purposes. The suit largely reiterates breach of contract claims Musk made in a state lawsuit he filed in March and subsequently dropped, except the new lawsuit adds federal racketeering claims. Neither Altman, Brockman, nor OpenAI has yet responded to the suit. You can read more in this Washington Post story.

Nvidia warns of production issues. Nvidia and its main manufacturing partner, TSMC, are facing production challenges with Nvidia's next-generation AI chips, potentially delaying shipments planned for this year, the Financial Times reports. TSMC has encountered problems producing Nvidia's new Blackwell graphics processing units, which are designed to be used in large data center clusters. The news helped drive Nvidia's shares down 15% and TSMC's down 10%—although both stocks were also hit by a global market rout this week. Analysts cited by the FT said the issues might delay production but would not significantly dent Nvidia's long-term prospects or AI adoption.

Google hires the cofounders and much of the team from Character AI. The tech giant has hired the cofounders of chatbot maker Character AI, as well as much of its staff, while paying the firm a licensing fee for its technology. The move will see Character AI’s CEO Noam Shazeer, who had been part of the Google team that helped develop the kind of neural network underpinning the entire generative AI boom, rejoin the tech titan. The structure of the deal is very similar to the arrangement Microsoft used to hire Mustafa Suleyman and most of his team from the chatbot startup Inflection, and which Amazon used to hire the founders and much of the team from AI assistant company Adept. While tech companies insist the deal structure helps shield the employees they are hiring from possible lawsuits from jilted venture capital investors over theft of trade secrets or breach of fiduciary duty, others speculate the true purpose of these arrangements is to make it harder for antitrust regulators to bring action over what are, in practice if not in form, acquisitions of the smaller companies. As my Fortune colleague Sharon Goldman reports, the deals may also be a clear sign that the venture capital frenzy for AI startups is beginning to fade, with many AI startups struggling with a combination of sky-high costs of developing and running AI models and uncertain business models for products such as consumer-facing chatbots.

U.K. government axes AI investments. The new Labour-led British government has decided to cut funding for two big AI-related projects that the previous Conservative government had championed: £800 million for a new AI supercomputing cluster that was to have been built at the University of Edinburgh, and £500 million that was to have helped British AI researchers pay for the expensive computing infrastructure needed for AI projects. The Labour government has said it is facing a £28 billion fiscal “black hole” bequeathed to it by its predecessors, a claim the Conservative Party denies. You can read more from the BBC here.

EYE ON AI RESEARCH

Meta open-sources a model that makes it easy to segment images and video. The social media giant, which has become a leading factory of powerful open-source AI models, has released a new AI model that can segment both still images and video into their constituent components and, in the case of video, track those objects through time. The Segment Anything Model 2 (SAM 2) can even do this with objects it has never encountered before. The model will make it much easier for people to do video and photographic segmentation without having to train a model specifically for the class of objects they wish to track. That could have huge applications in everything from video game development and robotics to biology and industrial safety, and, yes, government surveillance. It is yet another example of how quickly fundamental AI capabilities are developing. It also belies the idea that neural network-based models are poor at “compositionality”—determining the parts from the whole in an image—unless they have seen countless examples of particular parts and wholes in training. Meta has a cool-looking blog post on the model, which you can view here, and you can read the research paper on SAM 2 here.
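
To give a feel for what “promptable” segmentation means in practice, here is a hedged sketch of asking SAM 2 for a mask from a single click on an image. The import paths, config file, checkpoint name, and method signatures are assumptions based on Meta’s published SAM 2 examples (github.com/facebookresearch/segment-anything-2) and may differ slightly from the released code, so treat this as illustrative rather than a definitive recipe.

```python
# Hedged sketch: prompt SAM 2 with a single click and get back a mask.
# Import paths, config, and checkpoint names below are assumptions modeled on
# Meta's published SAM 2 examples and may differ in the released package.
import numpy as np
from PIL import Image
from sam2.build_sam import build_sam2
from sam2.sam2_image_predictor import SAM2ImagePredictor

# Build the model from a config + checkpoint (names assumed for illustration).
model = build_sam2("sam2_hiera_l.yaml", "checkpoints/sam2_hiera_large.pt")
predictor = SAM2ImagePredictor(model)

image = np.array(Image.open("street_scene.jpg").convert("RGB"))
predictor.set_image(image)

# One positive click at pixel (x, y) on the object of interest -- no class label
# and no task-specific training required.
masks, scores, _ = predictor.predict(
    point_coords=np.array([[450, 320]]),
    point_labels=np.array([1]),
)
best_mask = masks[scores.argmax()]  # mask for the clicked object
print("mask covers", int(best_mask.sum()), "pixels")
```

That click-to-mask interaction is the whole appeal: the “prompt” is a point, box, or prior mask rather than a labeled training set, which is why developers can segment object classes the model was never explicitly trained on.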

FORTUNE ON AI

Generative AI is getting kicked off its pedestal — it will be painful but it’s not a bad thing —by Sharon Goldman

Markets have overestimated AI-driven productivity gains, says MIT economist —by Daron Acemoglu (Commentary)

‘The Godmother of AI’ says California’s well-intended AI bill will harm the U.S. ecosystem —by Fei-Fei Li (Commentary)

Nvidia challenger Groq just raised $640 million for its AI chips. Its college dropout CEO says a viral moment was ‘a game changer’ —by Sharon Goldman

Google scraps its Olympic Gemini ad after viewers revolt against its dystopian theme —by Seamus Webster

AI CALENDAR

Aug. 12-14: Ai4 2024 in Las Vegas

Dec. 8-12: Neural Information Processing Systems (NeurIPS) 2024 in Vancouver, British Columbia

Dec. 9-10: Fortune Brainstorm AI San Francisco (register here)

BRAIN FOOD

Would one way to develop safer AI be for the U.S. and its allies to offer time on AI supercomputers to other countries? That is the idea behind “Chips for Peace,” a proposal from Cullen O’Keefe, director of the Institute for Law & AI, which bills itself as an independent think tank. The general idea is loosely modeled on “Atoms for Peace,” an Eisenhower-era initiative under which the U.S. shared civilian nuclear technology with other nations in the hopes of dissuading them from developing nuclear weapons of their own. That plan foundered in the rough waters of the Cold War. But Chips for Peace is an intriguing idea. Offering AI computing capacity and the most powerful AI models to less well-off countries could help the U.S. keep control of the technology and enable it to better police how AI systems are being used, potentially preventing catastrophic risks. The thing is—I doubt it will work. The cost of AI capabilities is falling rapidly, with only the very bleeding edge requiring massive amounts of computing power. It is highly likely that in the future much smaller models will be able to do dangerous things. And if dangerous things can be done with smaller models, it will be very hard to use either sticks—or carrots such as those O’Keefe is proposing—to ensure other countries continue to use your infrastructure. That said, you can read more about the idea in the blog post O’Keefe wrote here.

This is the online version of Eye on AI, Fortune's weekly newsletter on how AI is shaping the future of business. Sign up for free.