By Alan Murray and David Meyer
November 5, 2018

Good morning.

I had the privilege of spending the weekend at Stanford University debating the social implications of A.I. with some of the most thoughtful people I’ve encountered on the subject. Four have written books on the topic: Pedro Domingos (The Master Algorithm), Ajay Agrawal (Prediction Machines), Deirdre Mulligan (Privacy on the Ground) and Max Tegmark (Life 3.0). I recommend all four, but for different reasons: Domingos provides the best overview; Agrawal gives a readable economist’s take; Mulligan does a lawyer’s deep dive into regulatory issues; and Tegmark is an engaging futurist. Also on hand were chess grandmaster and democracy crusader Garry Kasparov, Intel VP Naveen Rao, and Mark Nehmer of the Department of Defense. The event was hosted by billionaire entrepreneur Tom Siebel, and attended by his growing stable of talented Siebel Scholars. To top it off, Don Henley provided entertainment.

I came away energized, and further convinced that the ability to collect ever more data from everyone and everything, and to convert that data into intelligence with ever-better machine learning algorithms, will transform society in ways we have only begun to imagine—both for good and for bad. Among the big questions that have to be grappled with:

—How do we realize the full positive potential of data-driven intelligence without trampling on individuals’ legitimate rights to control the spread and use of their own data?

—How do we instill our sense of values, ethics and morality into those algorithms, and purge societal biases reflected in the data that trains them?

—How do we defend against the weaponization of A.I. by bad actors?

—How do we deal with job market disruption as the nature of work is transformed? All the participants agreed new jobs would replace the old, but worried about the massive retraining needed to survive the transition.

—How do we address the inequality that stems from the winner-take-most nature of the digital technology revolution?

—How can democratic societies concerned with responsible development of A.I. and use of personal data compete against an authoritarian regime like China, where many of those concerns are shunted aside?

Interestingly, none of the panelists—with the exception of Tegmark—worried much about the Terminator scenario, in which machines develop their own agency, and do things people don’t want or allow them to do. A.I. will be an unprecedentedly powerful tool, but a tool nonetheless.

Some of the answers to the difficult questions above can and should be provided by business. But some certainly require smart intervention by government. And the one prediction that can confidently be made about tomorrow’s U.S. elections is that this year’s divisive election debate will leave Washington less able to grapple intelligently with complex issues. Neither party is likely to have a clear governing majority, and both will have sunk further into the tribalism that makes addressing the defining issues of our time all but impossible. My Election Day advice: vote for candidates who avoid wedge politics—if you can find them.

News below.

Alan Murray

