Google’s advertising platform can be used to create ads targeting racist or bigoted people, according to a report from BuzzFeed News on Friday.
BuzzFeed entered its own keywords, which the Google platform supplemented with additional suggestions, to create a targeted ad. The news comes a day after ProPublica reported that Facebook's algorithms allowed ads targeting anti-Semitic audiences.
Such test cases show that the same technology used to sell legitimate products and services can be turned to more nefarious purposes. Given that Google runs the world’s largest ad platform, which businesses and ad agencies rely on, this is a matter of concern.
Alerted by BuzzFeed, Google (googl) said the ad violated its policies against derogatory speech and disabled it. BuzzFeed said its ad got 17 impressions before being pulled, but some of those could have come from BuzzFeed itself.
Sridhar Ramaswamy, Google’s senior vice president for ads, told Fortune via email that the company’s goal is “to prevent its keyword suggestion tool from making offensive suggestions and to stop any offensive ads from appearing.”
The tool flags offensive ads to advertisers and rejects them, Ramaswamy added: "In this instance, ads didn't run against the vast majority of these keywords, but we didn't catch all these offensive suggestions."
The suggestions submitted in this case have since been turned off. Last year, Google said it disabled 1.7 billion ads.
This kerfuffle underscores that technology has no inherent moral or ethical compass. Microsoft's (msft) ill-fated Tay experiment, in which online commenters taught the bot (software that takes on human conversational traits, powered by artificial intelligence) to say racist and sexist things, illustrates the same point. The lesson there was that tech companies must continuously adjust their own rules and guidelines in response to user input.
On Thursday, Harvard Law Professor Lawrence Lessig, referencing Facebook's (fb) anti-Semitic ad problem, said the company's technology is simply doing what it was built to do: find what users want. "It's not that Mark Zuckerberg wants to attack Jews—it's that his technology is interested in finding out things people are interested in," Lessig told attendees of the CloudFlare Internet Summit.
Lessig's worry is that if technology that leads to questionable results is profitable, it will be used. "It's fun and hopeful to talk about codes of ethics stifling the worst, but if the worst is profitable, the code of ethics will be eaten by the profit," he said.
Darren Bolding, chief technology officer of Cambridge Analytica, the data science and advertising firm used by Donald Trump's presidential campaign, said that even he worries about technology going rogue.
“Algorithms will find the worst in us if you let them go nuts,” he said.