That glowing review of [insert restaurant name here]? It could be automatically generated by AI.
Researchers at the University of Chicago have demonstrated how AI can be used to produce fake reviews that are nearly impossible to detect. In a new paper, “Automated Crowdturfing Attacks and Defenses in Online Review Systems,” first reported on by Business Insider, they describe how a class of deep learning models called recurrent neural networks (RNNs) can generate reviews that are not only believable but also rated as useful.
The AI is trained on real online reviews and generates new ones that, according to the researchers, are “effectively indistinguishable” from those written by real patrons. Not only are the fakes hard for humans to spot; they also reportedly slip past plagiarism-detection software, since each one is newly generated rather than copied. (See if you can distinguish between the real and fake reviews below.)
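To give a sense of how this kind of generator works, here is a toy character-level RNN in plain NumPy. This is only an illustrative sketch, not the researchers' actual model: their system was far larger and trained on millions of Yelp reviews, whereas the tiny corpus, network size, and training length below are placeholder assumptions chosen so the example runs in seconds. The principle is the same, though — the network learns to predict the next character of review text, then new "reviews" are sampled from it one character at a time.

```python
import numpy as np

# Tiny review-flavoured training corpus (placeholder; a real attack
# would train on a large scrape of genuine reviews).
corpus = ("i love this place. the food and service are great. "
          "excellent pizza and some of the best scallops i have had. ")
chars = sorted(set(corpus))
V = len(chars)                                # vocabulary size
c2i = {c: i for i, c in enumerate(chars)}     # char -> index
i2c = {i: c for c, i in c2i.items()}          # index -> char

H, seq_len, lr = 32, 16, 0.1                  # hidden size, BPTT window, step size
rng = np.random.default_rng(0)
Wxh = rng.normal(0, 0.01, (H, V))             # input -> hidden weights
Whh = rng.normal(0, 0.01, (H, H))             # hidden -> hidden weights
Why = rng.normal(0, 0.01, (V, H))             # hidden -> output weights
bh, by = np.zeros(H), np.zeros(V)

def step(inputs, targets, hprev):
    """One forward/backward pass over a short character sequence."""
    xs, hs, ps, loss = {}, {-1: hprev}, {}, 0.0
    for t, (ix, iy) in enumerate(zip(inputs, targets)):
        xs[t] = np.zeros(V); xs[t][ix] = 1.0               # one-hot input
        hs[t] = np.tanh(Wxh @ xs[t] + Whh @ hs[t-1] + bh)  # recurrent state
        y = Why @ hs[t] + by
        ps[t] = np.exp(y - y.max()); ps[t] /= ps[t].sum()  # softmax over next char
        loss -= np.log(ps[t][iy])
    # backpropagation through time
    dWxh, dWhh, dWhy = np.zeros_like(Wxh), np.zeros_like(Whh), np.zeros_like(Why)
    dbh, dby, dhnext = np.zeros_like(bh), np.zeros_like(by), np.zeros(H)
    for t in reversed(range(len(inputs))):
        dy = ps[t].copy(); dy[targets[t]] -= 1.0
        dWhy += np.outer(dy, hs[t]); dby += dy
        draw = (1 - hs[t] ** 2) * (Why.T @ dy + dhnext)
        dbh += draw; dWxh += np.outer(draw, xs[t]); dWhh += np.outer(draw, hs[t-1])
        dhnext = Whh.T @ draw
    for g in (dWxh, dWhh, dWhy, dbh, dby):
        np.clip(g, -5, 5, out=g)                           # clip exploding gradients
    return (dWxh, dWhh, dWhy, dbh, dby), hs[len(inputs) - 1]

def sample(h, seed_ix, n):
    """Generate n characters by repeatedly sampling the next-char distribution."""
    x = np.zeros(V); x[seed_ix] = 1.0
    out = []
    for _ in range(n):
        h = np.tanh(Wxh @ x + Whh @ h + bh)
        y = Why @ h + by
        p = np.exp(y - y.max()); p /= p.sum()
        ix = int(rng.choice(V, p=p))
        x = np.zeros(V); x[ix] = 1.0
        out.append(i2c[ix])
    return "".join(out)

# Brief training loop: slide a window over the corpus and apply SGD.
hprev, pos = np.zeros(H), 0
for _ in range(500):
    if pos + seq_len + 1 >= len(corpus):
        pos, hprev = 0, np.zeros(H)
    inputs = [c2i[c] for c in corpus[pos:pos + seq_len]]
    targets = [c2i[c] for c in corpus[pos + 1:pos + seq_len + 1]]
    grads, hprev = step(inputs, targets, hprev)
    for p, g in zip((Wxh, Whh, Why, bh, by), grads):
        p -= lr * g
    pos += seq_len

fake = sample(np.zeros(H), c2i["i"], 80)
print(fake)   # 80 characters of review-flavoured gibberish-to-prose
```

After this brief training run the output is mostly babble; the point is the mechanism. Scaled up to a large model and a real review corpus, the same sample-the-next-character loop is what yields fluent fakes like the one quoted below.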
The believability of these fake reviews is concerning, as online reviews have become an important (and theoretically reliable) tool for potential customers. Without the ability to distinguish between real and fake reviews, people’s trust will erode, undermining the credibility of review sites.
Fake reviews already exist, of course, but until now they have been written by humans, which limits their scale. AI generation removes that bottleneck, making such attacks far cheaper and easier to mount. And in the age of fake news, this text-synthesizing technology could pose a threat to our trust systems at large.
- I love this place. I have been going here for years and it is a great place to hang out with friends and family. I love the food and service. I have never had a bad experience when I am there.
- Excellent pizza, lasagna and some of the best scallops I’ve had. The dessert was also extensive and fantastic.
(Review 1 is fake, 2 is real.)