Virtual Kidnapping & Deepfakes: AI-weaponized Scams

Viewers familiar with the Netflix series “Black Mirror” may find it intriguing that the topic I’ve selected for discussion resembles an episode straight out of the series, blurring the lines between fiction and reality. Unfortunately, we now face the negative ramifications of emerging technologies landing in the wrong hands. Today’s world of readily available information, in which social media posts expose personal biometrics, AI-driven audio and video technologies enable cloning, and the dark web operates on cryptocurrency, has created an ideal ecosystem for malicious actors seeking innovative ways to profit from scams.
With AI-enabled voice-cloning tools accessible to anyone and abundant video content on social media platforms, malicious actors can now easily create deepfakes by harvesting biometric data and employing cloning techniques. “AI tools such as Voicelab can be used to process a person’s voice biometrics, resulting in a deepfake voice that would sound exactly like that specific person. This is called voice cloning, which happens when voice biometrics are harvested for ransom, extortion, and fraud” (Gibson, Hagen). Earlier this year, a mother in Arizona was the victim of such a scam: a cybercriminal attempted to extort $1 million from her, claiming that her daughter had been kidnapped. The bad actors employed AI-driven voice-cloning technology to create a convincing deepfake, leading the mother to believe it was her daughter on the call, crying and screaming. Fortunately, the extortion attempt failed because, despite her distress, the mother quickly contacted her daughter and confirmed that she had not been kidnapped. Others have contacted the FBI about extortion attempts in which their photos and videos were altered into explicit content. “The photos or videos are then publicly circulated on social media or pornographic websites for the purpose of harassing victims or sextortion schemes. Scams involving deepfakes have added a new twist to so-called imposter scams, which last year cost US consumers a startling $2.6 billion in losses, according to the Federal Trade Commission” (Vijayan).
The dark web is awash with audio and video containing people’s biometrics and personal information, making those people potential targets of deepfake scams. Additionally, with generative AI’s popularity rising and its capabilities advancing, tools like ChatGPT enable threat actors to fuse data such as video, voice, and geolocation to zero in on prime targets for these deepfake scams. “Much like social network analysis and propensities (SNAP) modeling allows marketers to determine the likelihood of customers taking specific actions, attackers can leverage tools like ChatGPT to focus on potential victims. Attacks are enhanced by feeding user data, such as likes, into the prompt for content creation, the Trend Micro researchers say” (Vijayan).
AI technologies are only getting more sophisticated, and cloning tools are readily accessible, so we will continue to see a rise in deepfake cyber scams thanks to vulnerabilities in these tools and technologies, including social media platforms that allow people’s biometrics to be freely replicated. Deepfake tools are available for any malicious actor to use without regulation or rules of conduct, and generative AI tools can gather and fuse videos, personal location information, and biometrics for malicious purposes without any controls. “Virtual kidnapping could be thought of as an AI-weaponized scam, which has elements that share similarities with benign marketing tactics and malicious phishing schemes. It is an emerging tier of AI-enabled and emotionally driven extortion scams that will have phases of evolution that are similar to what we saw and are seeing with ransomware attacks. Virtual kidnapping scams rely on voice and video files to extort victims, which are not normally policed by security software” (Gibson, Hagen). The traditional ransomware attack landscape has evolved into a more sophisticated one in which human emotions are exploited and fears are leveraged, thanks to personalized deepfakes that are hard to distinguish from reality.
One of the solutions discussed in the article “Virtual Kidnapping” is identity-aware anti-fraud techniques, such as a multilayered identity-aware system, for combating these virtual kidnapping scams. “For example, a multilayered identity-aware system might be able to determine if virtual kidnapping subjects (the individuals who are supposedly abducted by kidnappers) are moving their phones (which can be detected by the phones’ onboard accelerometer sensor) and are using them consistently or normally — which they won’t be able to do if they’ve been truly kidnapped” (Gibson, Hagen). To address the vulnerabilities tied to the sophisticated tools in the deepfake technology space, we need a more innovative approach; as the authors state, we need to go beyond what router-level security solutions can handle if we want to mitigate these sophisticated cyber threats.
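To make the identity-aware idea concrete, here is a minimal Python sketch of one such layer, checking whether an alleged kidnapping subject’s phone is being moved and used normally. Everything in it is hypothetical: the telemetry fields, thresholds, and scoring are my own illustrative assumptions, not the API of any real anti-fraud product described in the article.

```python
# Hypothetical sketch of one layer in a multilayered identity-aware check:
# given recent phone telemetry for the alleged kidnapping subject, estimate
# whether the device is being carried and used normally. Field names and
# thresholds are illustrative assumptions, not from any real system.
from dataclasses import dataclass
from statistics import pstdev


@dataclass
class PhoneTelemetry:
    accel_magnitudes: list[float]    # recent accelerometer readings (m/s^2)
    unlocks_last_hour: int           # screen unlocks in the past hour
    typical_unlocks_per_hour: float  # the user's historical baseline


def normal_usage_score(t: PhoneTelemetry) -> float:
    """Return a 0..1 score: higher means the phone looks normally used,
    which weighs against the claim that its owner has been kidnapped."""
    # A phone carried and handled normally shows varied acceleration;
    # a phone lying untouched tends to produce a flat signal.
    motion = pstdev(t.accel_magnitudes) if len(t.accel_magnitudes) > 1 else 0.0
    motion_score = min(motion / 0.5, 1.0)  # 0.5 m/s^2 stdev ~ normal handling (assumed)

    # Compare the current interaction rate to the user's own baseline.
    if t.typical_unlocks_per_hour > 0:
        usage_score = min(t.unlocks_last_hour / t.typical_unlocks_per_hour, 1.0)
    else:
        usage_score = 0.0

    # Simple average of the two layers; a real system would fuse many more signals.
    return (motion_score + usage_score) / 2


if __name__ == "__main__":
    telemetry = PhoneTelemetry(
        accel_magnitudes=[9.7, 10.3, 9.5, 11.0, 9.9, 10.6],
        unlocks_last_hour=4,
        typical_unlocks_per_hour=5.0,
    )
    score = normal_usage_score(telemetry)
    print(f"normal-usage score: {score:.2f}")
    if score > 0.5:
        print("Device activity looks normal; treat the kidnapping claim as likely fraudulent.")
```

The point of the sketch is the design, not the numbers: each independent signal (motion, usage, and in a real deployment location history, voice liveness, and so on) adds a layer that a scammer armed only with a cloned voice cannot spoof.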
References:
Gibson, Craig, and Josiah Hagen. “Virtual Kidnapping.” Trend Micro, 28 June 2023. https://www.trendmicro.com/vinfo/gb/security/news/cybercrime-and-digital-threats/how-cybercriminals-can-perform-virtual-kidnapping-scams-using-ai-voice-cloning-tools-and-chatgpt
Vijayan, Jai. “AI-enabled Voice Cloning Anchors Deepfaked Kidnapping.” Dark Reading, 29 June 2023. https://www.darkreading.com/attacks-breaches/ai-enabled-voice-cloning-deepfaked-kidnapping
Kelly, Samantha Murphy. “Virtual Kidnappings Are Rattling Families Across the US.” CNN Business, 17 May 2019. https://www.cnn.com/2019/05/15/tech/virtual-kidnapping/index.html
By: Katrina Rosseini