
AI tools enable surge in fake reviews


Researchers and watchdog groups say the emergence of generative artificial intelligence (AI) tools that let people efficiently produce detailed, novel online reviews has put merchants, service providers, and consumers in uncharted territory: fake reviews can now be generated faster and in far greater volume. Such reviews have long plagued consumer websites like Amazon and Yelp, often traded between fake-review brokers and businesses willing to pay for positive feedback.

AI-infused text generation tools enable fraudsters to produce reviews swiftly and efficiently, according to tech industry experts. This deceptive practice is prevalent year-round but becomes a bigger problem during peak seasons like the holidays when many people rely on reviews to help them purchase gifts. Fake reviews are found across a wide range of industries, from e-commerce, lodging, and restaurants to services like home repairs and piano lessons.

The Transparency Company, a watchdog group and tech company, reported seeing a significant increase in AI-generated reviews starting in mid-2023. In their recent analysis of 73 million reviews in sectors such as home, legal, and medical services, they found that nearly 14% were likely fake, with an estimated 2.3 million of these being partly or entirely AI-generated. “It’s just a really, really good tool for these review scammers,” said Maury Blackman, an advisor to tech startups who reviewed The Transparency Company’s work.

In August, software company DoubleVerify also observed a significant increase in mobile phone and smart TV apps with AI-crafted reviews aimed at deceiving customers into installing malicious apps. The Federal Trade Commission (FTC) has taken notice, suing companies behind AI writing tools that facilitate the production of fraudulent reviews. Max Spero, CEO of AI detection company Pangram Labs, noted that some AI-generated appraisals posted on Amazon have risen to the top of review search results due to their detailed and well-thought-out appearance.

However, identifying these fake reviews can be challenging, as external parties often lack access to internal data signals indicating patterns of abuse.

AI tools spur increase in fraud

Spero’s company has run detection for prominent online sites and independently evaluated Amazon and Yelp, finding many AI-generated comments posted by individuals seeking to earn a trust badge.

AI-generated reviews are not always malicious. Some consumers use AI tools to articulate their genuine experiences better, especially non-native English speakers who rely on AI to ensure accurate language use. Prominent companies are developing policies to address how AI-generated content fits into their systems for detecting and removing fake reviews.

Amazon and Trustpilot have stated they will allow customers to post AI-assisted reviews as long as they reflect genuine experiences. Yelp has taken a more cautious stance, requiring reviewers to write their own copy. The Coalition for Trusted Reviews, which includes Amazon, Trustpilot, Glassdoor, and travel sites like Tripadvisor and Expedia, has been sharing best practices and raising standards to push back against review fraud.

The FTC’s recently enacted ban on fake reviews allows the agency to fine businesses and individuals engaged in the practice. Tech companies, including Amazon, Yelp, and Google, have taken legal action against fake-review brokers and say their technologies have blocked or removed many suspicious reviews and accounts. Yet experts like Kay Dean of Fake Review Watch argue that these efforts are insufficient.

Consumers can protect themselves by watching for red flags in reviews, such as overly enthusiastic tones, repetitive jargon, and excessively structured writing. Research by Yale professor Balázs Kovács indicates that people often can’t distinguish between AI-generated and human-written reviews, but AI-generated text tends to rely on clichés and empty descriptors, which can act as giveaways. By raising standards and developing advanced AI detection systems, companies and coalitions aim to protect consumers and maintain the integrity of online reviews.
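The red flags above can be illustrated with a minimal heuristic sketch. This is not any company's actual detection method (real systems like Pangram Labs' rely on trained classifiers and internal data signals); the cliché list, thresholds, and scoring are hypothetical examples chosen only to make the article's red flags concrete:

```python
import re

# Hypothetical cliché / empty-descriptor list -- illustrative only,
# not a validated detection vocabulary.
CLICHES = [
    "game changer",
    "exceeded my expectations",
    "highly recommend",
    "top-notch",
    "must-have",
]

def red_flag_score(review: str) -> int:
    """Count the simple red flags described in the article: clichés and
    empty descriptors, an overly enthusiastic tone, and rigidly
    structured writing. Higher scores mean more flags tripped."""
    text = review.lower()
    # One point per cliché phrase present in the review
    score = sum(phrase in text for phrase in CLICHES)
    # Overly enthusiastic tone: three or more exclamation marks
    score += text.count("!") >= 3
    # Excessively structured writing: numbered-list formatting
    score += bool(re.search(r"(?m)^\s*\d+[.)]\s", review))
    return score

sample = "Absolutely a game changer! Exceeded my expectations! Must-have!!!"
print(red_flag_score(sample))  # three clichés plus exclamation flag -> 4
```

A real detector would need far more than keyword matching, which is why, as the article notes, outside parties struggle without access to platforms' internal abuse signals.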
