Governments may take a softer approach to encourage responsible AI: ‘Overregulation will stifle AI innovation’

By Lionel Lim, Asia Reporter

Lionel Lim is a Singapore-based reporter covering the Asia-Pacific region.

Phoram Mehta, APAC chief information security officer for PayPal, speaking on innovation and regulation of AI at a Fortune Brainstorm AI breakout session in Singapore on July 31, 2024.
Graham Uden for Fortune

Governments are trying to navigate a tricky balance with generative AI. Regulate too hard, and you risk stifling innovation. Regulate too lightly, and you open the door to disruptive threats like deepfakes and misinformation. Generative AI can augment the capabilities of both nefarious actors and those trying to defend against them.

During a breakout session on responsible AI innovation last week, speakers at Fortune Brainstorm AI Singapore acknowledged that a global one-size-fits-all set of AI rules would be difficult to achieve. 

Governments already differ in terms of how much they want to regulate. The European Union, for example, has a comprehensive set of rules that govern how companies develop and apply AI applications.

Other governments, like the U.S., are developing what Sheena Jacob, head of intellectual property at CMS Holborn Asia, calls a “framework guidance”: No hard laws, but instead nudges in a preferred direction.

“Overregulation will stifle AI innovation,” Jacob warned.

She cited Singapore as an example of where innovation is happening outside the U.S. and China. While Singapore has a national AI strategy, the city-state does not have laws that directly regulate AI. Instead, its overall framework counts on stakeholders like policymakers and the research community to “collectively do their part” to facilitate innovation with a “systemic and balanced approach.”

Like many others at Brainstorm AI Singapore, speakers at last week’s breakout acknowledged that smaller countries can still compete with larger ones in AI development.

“The whole point of AI is to level the playing field,” said Phoram Mehta, APAC chief information security officer at PayPal. (PayPal was a sponsor of last week’s breakout session.)

But experts also warned against the dangers of neglecting AI’s risks.

“What people really miss out is that AI cyber hacking is a cybersecurity risk at a board level that’s bigger than anything else,” said Ayesha Khanna, cofounder of Addo AI and a cochair of Fortune Brainstorm AI Singapore. “If you were to do a prompt attack and just throw hundreds of prompts that were … poisoning the data on the foundational model, it can completely change the way an AI works.”

Microsoft announced in late June that it had discovered a way to jailbreak a generative AI model, causing it to ignore its guardrails against generating harmful content related to topics like explosives, drugs, and racism.

But when asked how companies can block malicious actors from their systems, Mehta suggested that AI can help the “good guys,” too.

AI is “helping the good guys level the playing field,” Mehta said. “It’s better to be prepared and use AI in those defenses, rather than waiting for it and seeing what types of responses we can get.”