How tech companies are trying to prevent ethical lapses around A.I.
The tech industry has had no shortage of soul-searching moments in recent months (or years), with reports and accusations of ethical lapses—at Google, Facebook, and countless other companies and startups—piling up in the media and before Congress.
Is there a way for tech companies to more responsibly navigate the gray, unregulated spaces they’re often operating in—especially where artificial intelligence is involved?
Industry experts speaking at the Fortune Most Powerful Women Summit in Washington, D.C., on Monday offered their prescriptions.
Principles and structured governance
“The approach we’ve taken is to be very principled about it,” said Raj Seshadri, president of data and services for Mastercard, a company whose data includes 90 billion financial transactions involving 2.7 billion cards in 210 countries. The company has principles around data security and transparency as well as consumers’ rights, she noted, adding that when it comes to artificial intelligence, her team tries to probe the A.I.’s sources: Are there sampling biases in the data? Are you coding in the right things?
“It’s about having a disciplined approach,” Seshadri said, explaining that Mastercard has tried to create safeguards to ensure the ethics of its technology—from training its data scientists to identify biases in the data that drives A.I., to creating a governance committee that those data scientists can turn to when they need guidance. “Sometimes the problems are quite tricky and difficult,” she added. “It’s not that straightforward—we need a few brains to come together to think about it.”
Lisa Edwards, president and COO of Diligent, a software company that specializes in corporate governance, stressed the importance of having people from diverse backgrounds in those data-science governance positions, to prevent a situation where a group’s bias simply reinforces the technology’s bias.
Checking and rechecking the technology
Edna Conway, vice president and chief security and risk officer for Microsoft Azure, emphasized that the most advanced technology can also help evaluate products and A.I. systems for biased output. She argued that companies should test their technology “relentlessly” to ensure that A.I. is being used ethically. Seshadri agreed, noting that Mastercard reviews its products periodically to make sure they are working as intended and without unanticipated biases or unforeseen consequences.