AI tools drive surge in fake reviews

The internet is rife with fake reviews. Researchers and watchdog groups say the emergence of generative artificial intelligence tools has put merchants, service providers, and consumers in uncharted territory: the tools let people churn out detailed, convincing reviews with almost no effort.

Fake reviews have long plagued many popular consumer websites. They are typically traded on private social media groups between fake review brokers and businesses willing to pay. Sometimes, businesses offer customers incentives such as gift cards for positive feedback.

AI-infused text generation tools enable fraudsters to produce reviews faster and in greater volume, according to tech industry experts. Fake reviews are posted year-round but become a bigger problem for consumers during the holiday season, when many people rely on reviews to choose gifts. The Transparency Company, a tech company and watchdog group, said it started to see AI-generated reviews show up in large numbers in mid-2023, and they have multiplied ever since.

For a report released this month, The Transparency Company analyzed 73 million reviews in three sectors: home, legal, and medical services. Nearly 14% of the reviews were likely fake, and the company expressed a “high degree of confidence” that 2.3 million reviews were partly or entirely AI-generated. “It’s just a really, really good tool for these review scammers,” said Maury Blackman, an investor and advisor to tech startups, who reviewed The Transparency Company’s work and is set to lead the organization starting Jan. 1.

In August, software company DoubleVerify said it was observing a “significant increase” in mobile phone and smart TV apps with reviews crafted by generative AI. The reviews often were used to deceive customers into installing apps that could hijack devices or run ads constantly, the company said.

The following month, the Federal Trade Commission sued the company behind an AI writing tool and content generator called Rytr, accusing it of offering a service that could pollute the marketplace with fraudulent reviews. The FTC said some of Rytr’s subscribers used the tool to produce hundreds and perhaps thousands of reviews for garage door repair companies, sellers of “replica” designer handbags, and other businesses. Max Spero, CEO of AI detection company Pangram Labs, said the software his company uses has detected with almost certainty that some AI-generated appraisals posted on Amazon bubbled up to the top of review search results because they were so detailed and appeared to be well thought-out.

But determining what is fake or not can be challenging. Many of the AI-generated comments on Yelp appeared to be posted by individuals who were trying to publish enough reviews to earn an “Elite” badge, which is intended to let users know they should trust the content, Spero said.

AI-generated reviews complicate consumer trust

The badge provides access to exclusive events with local business owners. Fraudsters also want it so their Yelp profiles can look more realistic, said Kay Dean, a former federal criminal investigator who runs a watchdog group called Fake Review Watch. Prominent companies are developing policies for how AI-generated content fits into their systems for removing phony or abusive reviews.

Some already employ algorithms and investigative teams to detect and take down fake reviews but are giving users some flexibility to use AI. Spokespeople for Amazon and Trustpilot said they would allow customers to post AI-assisted reviews as long as they reflect their genuine experience. Yelp has taken a more cautious approach, saying its guidelines require reviewers to write their own copy.

The Coalition for Trusted Reviews, which Amazon, Trustpilot, Glassdoor, Tripadvisor and Expedia launched last year, said that even though deceivers may put AI to illicit use, the technology also presents “an opportunity to push back against those who seek to use reviews to mislead others.”

The FTC’s rule banning fake reviews, which took effect in October, allows the agency to fine businesses and individuals who engage in the practice. Tech companies hosting such reviews are shielded from the penalty because U.S. law does not hold them legally liable for content that outsiders post on their platforms. Tech companies, including Amazon, Yelp, and Google, have sued fake review brokers they accuse of peddling counterfeit reviews on their sites.

The companies say their technology has blocked or removed a vast number of suspect reviews and suspicious accounts. However, some experts say they could be doing more. “Their efforts thus far are not nearly enough,” said Dean of Fake Review Watch.

“If these tech companies are so committed to eliminating review fraud on their platforms, why is it that I, one individual who works with no automation, can find hundreds or even thousands of fake reviews on any given day?”

Consumers can try to avoid fake reviews by watching out for a few warning signs, according to researchers. Overly enthusiastic or negative reviews are red flags. Jargon that repeats a product’s full name or model number is another potential giveaway.

When it comes to AI, research conducted by Balázs Kovács, a Yale professor of organizational behavior, has shown that people can’t tell the difference between AI-generated and human-written reviews. Some AI detectors may also be fooled by shorter texts, which are common in online reviews, the study said. However, there are some “AI tells” that online shoppers and service seekers should keep in mind.

Pangram Labs says reviews written with AI are typically longer, highly structured, and include “empty descriptors,” such as generic phrases and attributes. The writing also tends to include cliches like “the first thing that struck me” and “game-changer.”
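
To make those warning signs concrete, here is a minimal sketch of a heuristic screen built on them. It assumes the “AI tells” above; the cutoff values, the phrase list, and the name ai_tell_score are illustrative guesses, not figures published by Pangram Labs or any review platform:

    # ai_tell_score.py -- toy heuristic sketch, not a production detector.
    # Flags the "AI tells" named above: unusual length, bullet-heavy
    # structure, and stock phrases. All thresholds are assumptions.
    CLICHES = [
        "the first thing that struck me",  # cliche cited in the article
        "game-changer",                    # cliche cited in the article
    ]

    def ai_tell_score(review: str) -> int:
        """Return a rough 0-3 score; higher means more AI-like signals."""
        score = 0
        if len(review.split()) > 150:      # unusually long (assumed cutoff)
            score += 1
        if review.count("\n- ") + review.count("\n* ") >= 3:
            score += 1                     # heavily structured, list-style text
        if any(phrase in review.lower() for phrase in CLICHES):
            score += 1                     # generic "empty descriptor" phrasing
        return score

    print(ai_tell_score("This gadget is a game-changer."))  # prints 1

Real detectors like the ones described in this story rely on trained language models rather than hand-written rules; the sketch only illustrates why length, structure, and stock phrasing can serve as signals.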
