Identity verification (IDV) solutions employ biometrics such as facial recognition and iris scanning to verify individuals’ identities and enhance security, making it harder for illicit actors to deceive these systems. Yet despite this advanced technology, IDV solutions can still fail to detect sophisticated, highly realistic AI-generated deepfakes.
Deepfakes manipulate facial features and voice so convincingly that even the most advanced IDV solutions can struggle to distinguish real from fake. To stay ahead of evolving cyber threats, IDV solutions need to incorporate sophisticated AI algorithms that recognize the anomalies in facial expressions, subtle motion, or voice patterns associated with deepfakes.
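To make the idea concrete, here is a minimal sketch of how such a detector might be wired into an IDV pipeline. The `load_detector`, `predict_proba`, and `escalate_to_manual_review` names are illustrative assumptions rather than a real vendor API; only the OpenCV and NumPy calls are standard.

```python
# Minimal sketch: screen a selfie video for deepfake artifacts before
# accepting it as proof of identity. Assumes a vendor-supplied frame
# classifier; `detector.predict_proba` is a hypothetical method.
import cv2
import numpy as np

def mean_fake_score(video_path: str, detector, frame_stride: int = 10) -> float:
    """Average the detector's 'fake' probability over sampled frames."""
    cap = cv2.VideoCapture(video_path)
    scores, idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % frame_stride == 0:                      # sample every Nth frame
            face = cv2.resize(frame, (224, 224))         # assumed detector input size
            face = face.astype(np.float32) / 255.0       # scale pixels to [0, 1]
            scores.append(detector.predict_proba(face))  # hypothetical call
        idx += 1
    cap.release()
    return float(np.mean(scores)) if scores else 0.0

# Illustrative usage: flag borderline sessions for human review rather
# than rejecting them outright.
# detector = load_detector("frame_classifier.onnx")      # hypothetical loader
# if mean_fake_score("selfie_check.mp4", detector) > 0.7:
#     escalate_to_manual_review()                        # hypothetical hook
```

In practice such a frame score would be only one signal among several; voice and motion analysis would feed the same decision.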
How well prepared are IDV solutions, with current technology, to counter the adverse effects of deepfakes? What advancements can enhance their resilience against the threats deepfakes pose? And what measures should IDV solutions implement to safeguard individuals from exploitation?
How Do Deepfakes Pose Potential Threats?
The manipulation of digital media has blurred the boundary between reality and illusion, allowing cybercriminals to easily target and deceive individuals and businesses. In the past, manipulating media such as video, audio, and photos required manual editing skills and advanced software. Today, artificial intelligence (AI) facilitates the creation and alteration of digital content including video, audio, images, and text. While synthetic media has legitimate applications, it is more commonly employed in disinformation campaigns to distort the truth, harm an organization’s reputation, or lend credibility to deceptive payment requests.
One common misuse of AI is the deepfake, in which AI methods are used to create or modify digital content so that it appears real. Deepfakes, a blend of ‘deep learning’ and ‘fake’, are already used in disinformation campaigns and will likely continue to be, as they make it challenging to distinguish real from fake despite technological countermeasures. Malicious actors frequently employ deepfakes in identity theft, a widespread form of cybercrime aimed at stealing personal data and funds, and the technology has made such attacks increasingly accessible.
Is there a need to regulate deepfake technology owing to its misuse? To better understand what is required to address this concern at the national and international levels, and how technology must be upgraded to differentiate real from fake, let’s first examine real-life instances in which famous figures have been exploited through deepfakes.
Deepfake Video of Tom Hanks Promoting a Dental Plan
As AI systems continue to grow in power and sophistication, there is mounting concern over their potential to produce convincing visual representations of real people. Throughout 2023, AI has been a prominent subject in Hollywood, and several celebrities have voiced objections to the use of their likenesses in AI deepfakes.
Actor Tom Hanks, YouTube personality MrBeast, and TV journalist Gayle King have recently fallen prey to deepfake exploitation and have taken steps to stop the fake videos circulating of them. Hanks, a two-time Oscar winner, spotted an AI deepfake of himself and took to Instagram on October 1 to caution his followers, writing,
“There’s a video out there promoting some dental plan with an AI version of me.”
He added,
“I have nothing to do with it.”
Hanks has previously spoken about the artistic challenge AI presents to the industry, a central concern driving the recent strikes led by prominent Hollywood actors and writers. The ongoing discussion highlights how technology’s evolving role influences creative processes.
Deepfake Video of Ripple CEO Promoting a Fraudulent Giveaway
The cryptocurrency sector is experiencing a relentless wave of malicious activity, with illicit actors seeking to exploit investors through frequent scams and hacks. Recently, the CEO of Ripple, a currency exchange and remittance network, encountered a deepfake video of himself circulating on YouTube. The fabricated video of CEO Brad Garlinghouse featured a deceptive giveaway enticing users with an attractive offer of 100 million XRP tokens.
The misleading advertisement, reportedly active since November 25, 2023, prompted investors to engage by scanning a QR code and transferring between 1,000 and 500,000 XRP tokens to the designated address in exchange for a two-fold return. The CEO warned the public about such fraudulent giveaways and schemes.
Users raised concerns on the YouTube help center, particularly about how long the manipulated ad stayed active. A Germany-based Reddit user also complained about the ad, and the support team responded that the video fully adhered to Google’s policies.
In the wake of the threats posed by deepfakes, social media platforms such as Facebook, YouTube, and Instagram need to upgrade their policies on the sharing of digital content to protect themselves against exploitation by malicious actors. They must take responsibility for carefully evaluating digital content and acknowledge the sensitivity of the matter. Such measures would go a long way toward preserving platform integrity and enhancing data privacy.
Robust measures taken in this context may also mitigate the evolving challenge of deepfakes in the online domain. It is only a matter of time before these platforms take the initiative to combat the malicious activities occurring on their services.
Deepfake Hologram of Binance Executive
Most of the time, illicit actors employ fake emails or social media credentials to target companies and their executives. Recent reports, however, reveal that scammers have gone further: they created a deepfake hologram of an executive from Binance, the world’s largest cryptocurrency platform.
Patrick Hillmann, Chief Communications Officer at Binance, reportedly said that a “sophisticated hacking team” had manipulated video footage from his past TV appearances to create an “AI hologram” used to trick people into attending meetings.
After learning of the deepfake hologram, Hillmann cautioned people to beware of such deceptions. He said he discovered the scheme after receiving numerous online messages thanking him for meetings he had never attended.
He explained that the scam aimed to mislead crypto project teams into believing a Binance executive had held a meeting to discuss listing their token on the Binance platform, a prospect that could be financially beneficial for such projects. As evidence, Hillmann posted a screenshot of a chat in which someone asked him to confirm a previous Zoom call.
A Manipulated Video of Ukrainian President Volodymyr Zelenskyy
During the Ukraine-Russia war, a manipulated video of Ukrainian President Volodymyr Zelenskyy circulated on the Internet urging citizens to surrender to Russia. The fabricated video portrayed Zelenskyy addressing the public and advising citizens to “lay down arms”.
The deepfake was generated using AI to superimpose Zelenskyy’s face and voice onto the footage, and one version of the manipulated video was viewed by more than 12,000 Twitter users. Upon learning of the deepfake, the president immediately shared a video on Instagram clarifying that he had made no such statement.
How Is the World Preparing to Address Deepfakes?
The threats posed by deepfakes extend beyond individuals to businesses and organizations across different sectors. In 2021, the Federal Bureau of Investigation warned businesses about the potential for deepfake fraud, predicting that malicious actors would likely use synthetic content for cybercrime and foreign influence operations within the following 12 to 18 months.
While cautioning the public about the risks and threats posed by deepfakes, Bank of America reportedly stated, “While deepfake technology is still fairly new (first developed in 2017), deepfakes rank as one of the most dangerous AI crimes of the future.”
Some reports suggest that the use of AI-generated synthetic data could surge by 90% by 2026. To what extent are individuals’ privacy rights considered and safeguarded as deepfake technology advances? What strategies can be adopted to educate the public about deepfakes, enabling them to differentiate real from fake? These questions become a common struggle for individuals when they encounter real-world instances of deepfakes.
So what should governments do in this context? They need to develop robust regulatory frameworks and collaborate with advanced solution providers to mitigate the challenges and risks posed by deepfakes.
In recent times, several bills concerning deepfakes have been passed by the US Congress and state legislatures. Virginia has amended its revenge porn law to criminalize nonconsensual deepfake pornography. Texas has put legal measures in place making it illegal to generate deepfakes that interfere with elections. Maryland, New York, and Massachusetts are considering their own legislative approaches to regulating deepfakes. Nevertheless, the argument persists that state laws may not be the optimal solution to deepfake challenges, as each law focuses on different aspects and applies only within its own state.
Generating sophisticated deepfakes to target large enterprises and demand substantial payments has traditionally required high-end computing resources, time, and technical expertise. Yet as the technology evolves, creating deepfakes is becoming increasingly accessible to cybercriminals, enabling them to target companies of all sizes.
Analysis of the Europol Innovation Lab Report
The Europol Innovation Lab released its first report, “Facing Reality? Law Enforcement and the Challenges of Deepfakes”, under its observatory function. According to the report, deepfakes can be identified manually by human analysts who look for distinct visual indicators in images and videos, but this approach is time-consuming and does not scale. The report therefore highlights the need for law enforcement agencies to equip officers with the skills and technology to counteract the extensive misuse of deepfakes by illicit actors, employing technical and organizational measures against AI-driven video manipulation and deepfake generation. One indicator that lends itself to automation is sketched below.
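As an illustration of turning a manual indicator into a scalable check: research on generated imagery has repeatedly found abnormal high-frequency energy in the Fourier spectrum of GAN-made faces. The sketch below measures that energy with plain NumPy; note this is a published research heuristic, not a method named in the Europol report.

```python
# Research heuristic (not from the Europol report): GAN-generated images
# often carry unusual energy in the high-frequency band of their spectrum.
import numpy as np

def high_freq_ratio(gray_image: np.ndarray, cutoff: float = 0.75) -> float:
    """Fraction of spectral energy beyond `cutoff` of the Nyquist radius."""
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(gray_image))) ** 2
    h, w = spectrum.shape
    yy, xx = np.mgrid[0:h, 0:w]
    radius = np.hypot(yy - h / 2, xx - w / 2)   # distance from spectrum center
    max_radius = min(h, w) / 2
    high_band = spectrum[radius > cutoff * max_radius].sum()
    return float(high_band / spectrum.sum())

# A ratio far outside the range observed on genuine camera images is one
# weak signal worth escalating to a human analyst, not proof on its own.
```

Heuristics like this are brittle against newer generators, which is exactly why the report pairs technical measures with analyst training.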
In April 2023, the Congressional Research Service, the public policy research arm of the US Congress, published a report, “Deep Fakes and National Security”, highlighting how deepfakes can be exploited and what solutions might address the concern. The report also notes that the Defense Advanced Research Projects Agency (DARPA) has run two deepfake-detection programs: Media Forensics (MediFor) and Semantic Forensics (SemaFor). These programs aim to develop algorithms that can automatically detect, attribute, and characterize diverse types of deepfakes, thereby strengthening the intelligence community.
Despite the integration of advanced identity verification solutions, existing safeguards fall short, leaving certain vulnerabilities unaddressed. This is a call to action for IDV providers to design and develop sophisticated tools that precisely identify authentic individuals and deter fake identities, such as the challenge-response liveness check sketched below.
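Here is a hedged sketch of one such tool: an active liveness check that issues random prompts a pre-rendered deepfake cannot anticipate. The `capture_response` and `verify_action` helpers are hypothetical stand-ins for a vendor SDK.

```python
# Sketch of challenge-response liveness. `capture_response` and
# `verify_action` are hypothetical placeholders for a vendor SDK.
import secrets

CHALLENGES = ["turn head left", "turn head right", "blink twice", "smile"]

def liveness_check(session_id: str, rounds: int = 3) -> bool:
    """Pass only if the user performs every randomly chosen action."""
    for _ in range(rounds):
        prompt = secrets.choice(CHALLENGES)          # unpredictable per round
        clip = capture_response(session_id, prompt)  # hypothetical: record the user
        if not verify_action(clip, prompt):          # hypothetical: confirm the motion
            return False                             # fail closed on any mismatch
    return True
```

Because the prompts are drawn at the moment of verification, an attacker would need to synthesize a matching deepfake in real time, raising the cost of the attack considerably.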