Why AI Needs to Be Audited Before We Have Self-Driving Cars

July 18, 2018, 12:20 AM UTC

Before cutting-edge technologies like self-driving cars become mainstream, companies and researchers must ensure that they can audit and inspect the underlying artificial intelligence technologies.

That’s one of the takeaways from a panel at Fortune’s annual Brainstorm Tech conference in Aspen, Colo., about AI’s impact on humanity.

One of the panel participants was Mike Capps, former president of Epic Games (the developer of the popular Fortnite video game) and now CEO of the Raleigh, N.C.-based software startup Diveplane. In an interview with Fortune prior to the session, Capps explained that Diveplane is testing a product with corporate customers that lets them audit their data-crunching technology. In that way, they can learn how their computers make decisions in fields like healthcare and financial services. The ability for people to inspect and deduce how these systems make their decisions will likely make the technology more appealing to regulators.

Although excitement is growing about AI technologies potentially helping pharmaceutical companies discover new drugs and businesses better identify sales targets, researchers still have difficulty understanding how these complex technologies make decisions.

Capps said his company’s product is designed to probe various kinds of machine-learning and data-crunching systems that companies are increasingly using, but is not intended to audit sophisticated computer systems powered by the deep-learning technology used by companies like Facebook (FB) and Google (GOOG). He declined to comment on how the company’s technology works, but he said that Diveplane will reveal more details in the future.

On the topic of using AI to help cars drive themselves, General Motors vice president of strategy Mike Ableson said that he expects the government to eventually impose regulations, considering that people’s lives are at stake.

The idea of self-driving cars making their own decisions concerns Capps, who said that people need to be “able to unpack the AI somehow” in order to understand why it drives the way it does. Clearly, if an autonomous car smashes into a pedestrian, researchers would want to know why.

Ableson said that while he has been a passenger in GM’s self-driving cars because he knows “where all the data is going,” he is not using any AI-powered voice-activated digital assistants like Amazon’s Alexa or the Google Assistant because he does not know how his personal data is being used by those products. His concerns coincide with a growing number of people who want more transparency from companies about how they use people’s digital data to improve their products or grow their businesses.

Capps, on the other hand, joked that “at home I have a Google and Alexa in every room because I’ve given up,” citing how common these technologies have become.
