The combination of huge amounts of personal data on all of us and tools to analyze it can do great good in medical and scientific applications. But the same technologies also threaten the social and political order of our country, critics say.
Technology can be “the best and worst of times at the same time,” said Harvard Law School professor (and former presidential candidate) Lawrence Lessig, speaking at the Cloudflare Internet Summit Thursday in San Francisco.
Lessig, who has long worried about the state of U.S. democracy, thinks that data science poses a new and dangerous threat. He shared the stage with Darren Bolding, chief technology officer of Cambridge Analytica, a data science company, who had a different take on the topic. The Donald Trump campaign used CA’s data services in the 2016 presidential campaign.
In Bolding’s view, what Cambridge Analytica does is just smart business: it aggregates information about people from public sources, data brokers, and its own internal sources, then runs analytics on that data to determine their likes and dislikes.
CA uses that data to come up with lists of people that a client might want to target to sell a product or pitch a candidate. “In the case of politics, we see this person’s propensity to vote and this is the candidate they are most likely to be interested in,” Bolding noted. (Here is a transcript of the panel.)
Lessig’s point is that a campaign can use this highly specialized technology to craft very different versions of a candidate’s views for different groups. So one candidate could push himself as a populist to one sub-group of voters, a fiscal conservative to another, and perhaps a right-wing reactionary to yet another. That strategy can work, in Lessig’s view, because there is no longer a shared understanding of the world as there was 30 or 40 years ago, when most Americans got their news from a handful of TV networks. The factionalization of news sources has also factionalized the electorate.
In the past, candidates built a coalition as they campaigned and they relied on that same coalition to govern if and when they won. “Everything was in plain sight and out in the open,” Lessig noted.
Bolding disagreed, while noting that some shared context is probably helpful:
I would argue that people with different points of view and interests should have their issues addressed. And if you don’t think politicians have been going around saying different things to different people… that’s been going on since the beginning of politics.
The use of data analytics combined with advertising expertise does not make for good or evil messaging; it just amplifies whatever messaging there is, Bolding said.
At the end of the panel, an attendee asked Bolding where CA drew the ethical line in targeting prospective voters: “Are you creating models to target people on the basis of racial messaging?”
Bolding said he did not think CA pushed racially charged messages. Cloudflare CEO Matthew Prince, who moderated the session, pressed further: “Do you have a category called ‘racists?'”
“No. We had 15 models and we could have found a code word to cover that but we never even talked about it,” he said.
Lessig sees microtargeting in itself as a problem, citing a new ProPublica report that Facebook (FB) had an anti-Semitic advertising category created by its own algorithm. “It’s not that [Facebook CEO] Mark Zuckerberg wants to attack Jews. It’s that his technology is interested in finding things that people are interested in,” Lessig said.
While Bolding said he does not love the idea of regulation, he conceded the need for some sort of code of ethics. “Algorithms will find the worst in us if you let them go nuts,” he said, adding that algorithm abuse happens on both sides of the political spectrum.