Europe Thinks Ethics Is the Key to Winning the A.I. Race. Not Everyone Is Convinced

Can ethics be a competitive advantage in the booming artificial intelligence industry? Europe’s executive body hopes so, but some industry voices are skeptical.

The European Commission on Monday announced a pilot of ethical A.I. guidelines proposed by a group of independent experts that includes representatives of companies such as Google, IBM and SAP, along with A.I.-focused academics, digital rights activists and trade bodies.

The guidelines say A.I. systems should support people’s rights; people should retain control over their own data; algorithms should be secure and reliable; decisions made by A.I. systems should be transparent, with someone accountable for them; and the systems should “be used to enhance positive social change and enhance sustainability and ecological responsibility.”

There’s no official talk yet about the EU introducing rules for companies using the technology, though the guidelines could feed into such legislation at some point in the future.

Andrus Ansip, the European Commission’s vice president for digital matters, said in a statement that A.I. ethics would help people to trust the technology and therefore benefit from it more fully.

“Ethical A.I. is a win-win proposition that can become a competitive advantage for Europe: being a leader of human-centric A.I. that people can trust,” Ansip said.

The purpose of the pilot is to see whether the proposed A.I. ethics guidelines actually make sense in practice. Companies participating in the program will report back on their experiences trying to match the guidelines to what they need to do in the A.I. field. The pilot starts in June and the high-level expert group will look at the results early next year.

The European Commission’s approach to testing the guidelines was welcomed by the Center for Data Innovation (CDI), an affiliate of a prominent U.S. tech-industry lobbying group called the Information Technology & Innovation Foundation. CDI analyst Eline Chivot said the approach was “a welcome alternative to the EU’s typical ‘regulate first, ask questions later’ approach to new technology.”

However, Chivot also decried the idea that A.I. was inherently untrustworthy, and claimed there was no evidence for the assertion that the public will only trust A.I. if it can be explained.

“The belief that the EU’s path to global A.I. dominance lies in beating the competition on ethics rather than on value and accuracy is a losing strategy. Pessimism about A.I. will only breed more opposition to using the technology, and a hyper focus on ethics will make the perfect the enemy of the good,” Chivot said.

It remains to be seen whether the ethical focus can make a difference to Europe’s relative laggard status in an A.I. scene that is currently dominated by the U.S. and China.

A.I. development relies on data, which is needed to train the underlying systems. The biggest global tech firms are American, and they have a big advantage because their vast numbers of users provide tons of data every day. Chinese companies have a different advantage: they are not burdened by strong consumer privacy protections, leaving them free to exploit user data. The EU, by contrast, has strong data protection laws, which limit how much companies operating there can exploit people’s personal data.

The European Commission will soon—probably in early summer—release recommendations for promoting A.I. investment in the bloc. Recent data from analysts at IDC suggested that European spending on A.I. this year will reach $5.2 billion, which is just 0.2% of GDP, though at least it’s a 49% increase over 2018’s spending.

The subject of A.I. ethics has been in the spotlight recently due to Google’s abortive attempt to set up its own ethics board. Thousands of Googlers voiced their objections to two members of the panel in particular: Kay Coles James, the anti-LGBTQ-rights head of the right-wing Heritage Foundation; and drone company chief Dyan Gibbens. One board member quit, others tried to fend off criticism over their continued participation, and Google ended up cancelling the whole initiative and “going back to the drawing board.”

That said, the EU’s group of independent experts also has harmony issues. Consumer advocate Ursula Pachl, a member of the group, said in a Euractiv interview that it included too many industry representatives, with the result that potentially negative issues with A.I. were sidelined in the group’s output.

There’s also an ongoing debate over the creation of autonomous weapons—A.I.-based systems that could mark people for death without human input. Most of the world is pushing for an outright ban on such “killer robots,” but the U.S., U.K., Russia, Israel and Australia have been pushing back.

The EU’s A.I. ethics-guidelines pilot will involve companies and organizations from outside the EU as well as within, and the ultimate aim is to achieve some sort of coordination with policymakers from countries such as Japan and Canada.

