
Deepfakes fuel sophisticated scams, experts warn


The rapid evolution of artificial intelligence (AI) tools capable of generating convincing text, images, and even live video is enabling increasingly sophisticated and targeted scams, and cybersecurity experts are urging internet users to stay vigilant. Recent high-profile cases, including a fraud in France in which a woman lost €830,000 and fake donation drives for Los Angeles fire victims, illustrate that “absolutely everyone, private individuals or businesses, is a target for cyberattacks,” said Arnaud Lemaire of cybersecurity firm F5.

Phishing, one of the best-known forms of cyberattack, involves sending emails, texts, or other messages under false pretenses. These messages often aim to trick users into clicking harmful links, installing malware, or divulging sensitive information. Phishing and its related tactic, “pretexting,” accounted for over 20 percent of almost 10,000 data breaches worldwide last year, according to a report by US telecoms operator Verizon.

AI chatbots powered by large language models (LLMs) are saving attackers time and enabling the creation of more elaborate fake messages. “If someone is writing a phishing email… he can make the clues completely vanish,” Lemaire explained.

These text generators are just the beginning of what AI can do. AI can also use data from previous breaches to automate the creation of highly personalized scams, said Steve Grobman, chief technology officer at security software maker McAfee. Rather than aiming for a quick payoff, attackers frequently work to gain the trust of selected individuals within target organizations over extended periods.

If an employee falls for the scam, attackers might wait until the person gains influence or until a timely opportunity arises to extort money, explained Martin Kraemer of cybersecurity training firm KnowBe4. In a dramatic example from February 2024, scammers stole $26 million from a multinational firm in Hong Kong.

Sophisticated scams using AI-generated deepfakes

A finance worker was tricked into believing he was videoconferencing with the company’s CEO and other staff members, all of whom were AI-generated deepfakes. “The latest generation of deepfake video has reached a point where almost no consumers can tell the difference between an AI-generated image and a real image,” Grobman said. Internet users need to adopt the same skepticism towards video content that they have developed for still images, he added.

Checking purported news videos against trusted sources is one way to approach this. Faced with a suspicious request from a supposed CEO for a significant bank transfer, Lemaire advised using personal details to verify the identity. Other tricks include asking video callers to pan their cameras—an action current AI technology struggles to replicate accurately.

The online scam industry is highly lucrative, with its own supply chains and an ecosystem of supportive tools, Grobman noted. Malicious programs like ransomware can encrypt data on target computers and threaten to release or delete it unless a payment is made. A suspected developer of such a program was recently arrested in Israel, pending extradition to the United States.

Despite the risks, some experts are optimistic. “I’m not too worried that the defense side will be overwhelmed by AI,” said Kraemer. AI can be used for defense as well as for attack.

However, the final line of defense remains human. Grobman concluded, “When we moved from walking and riding horses to driving automobiles, we needed to change the way we thought about transportation safety. That’s the mindset consumers need to adopt today.”
