The rise and evolution of “convincing deepfake technology poses a severe threat to traditional authentication systems that rely on visual or auditory cues for verification,” warns a new report from the California-based Institute for Security and Technology (IST).
The report, The Implications of Artificial Intelligence in Cybersecurity, says that “biometric authentication systems that use facial recognition or voice analysis have already been compromised by deepfake technology in several cases.”
However, the report may not draw a clear enough distinction between breaches of authentication systems that use “liveness” or presentation attack detection, the most common defensive tools used in authentication systems, and AI spoofing of those systems.
Liveness detection is a security method that verifies if a person is a live human being or a fake representation. It is a key part of biometric authentication systems and is used to prevent fraudsters from gaining access to systems using stolen or replicated biometric data.
Liveness detection uses algorithms to analyze data collected from biometric sensors, such as a facial image or fingerprint scan, to determine whether the source is live. It is more difficult for fraudsters to bypass security with liveness detection because it uses real-time interactions to verify a user’s identity. Most modern biometric systems include liveness detection mechanisms, such as checking for movement or analyzing subtle facial cues in real time, to differentiate between a real person and a fake representation.
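To make that concrete, here is a minimal sketch of one common liveness cue, blink detection, using OpenCV’s stock Haar cascades: a printed photo held up to a camera never “loses” its eyes, while a live user blinks within a few seconds. It is a toy illustration of the real-time checks described above, not a production presentation attack detection system, and the frame budget is an assumed value.

```python
# Toy blink-based liveness sketch. Assumes OpenCV (pip install
# opencv-python) and a local webcam; max_frames is an illustrative budget.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

def blink_liveness_check(max_frames: int = 150) -> bool:
    """Return True if a blink (eyes visible -> gone -> visible) is seen."""
    cap = cv2.VideoCapture(0)
    eyes_seen = eyes_gone = False
    try:
        for _ in range(max_frames):
            ok, frame = cap.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            faces = face_cascade.detectMultiScale(gray, 1.3, 5)
            if len(faces) == 0:
                continue
            x, y, w, h = faces[0]
            eyes = eye_cascade.detectMultiScale(gray[y:y + h, x:x + w])
            if len(eyes) >= 2:
                if eyes_gone:          # eyes reappeared after a blink
                    return True
                eyes_seen = True
            elif eyes_seen:            # eyes were open, now closed
                eyes_gone = True
    finally:
        cap.release()
    return False

if __name__ == "__main__":
    print("live" if blink_liveness_check() else "no blink detected")
```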
AI spoofing, on the other hand, uses advanced algorithms to create realistic deepfakes that present false biometrics to security systems.
The Information Systems Audit and Control Association (ISACA) said in a July White Paper that AI spoofing “is not limited to creating a false match, but can extend to creating biometric data convincing enough to pass higher levels of security scrutiny. For example, researchers have demonstrated how facial recognition systems can be fooled using deepfake imagery mimicking facial expressions, aging, and other subtle characteristics of previously reliable identity markers.”
The rise of AI has significantly advanced the spoofing techniques used against authentication systems, making it easier for attackers to create realistic biometric identifiers and making large-scale attacks both more likely and more dangerous.
In support of its claim that biometric authentication systems “have already been compromised by deepfake technology in several cases,” the IST report refers to a February breach report by Group-IB, Face Off: Group-IB Identifies First iOS Trojan Stealing Facial Recognition Data.
However, the Group-IB report does not describe an authentication system being spoofed with AI, but rather a breach carried out with facial recognition data stolen by a “mobile Trojan specifically aimed at iOS users” that Group-IB dubbed GoldPickaxe.iOS.
Group-IB explained that “the GoldPickaxe family, which includes versions for iOS and Android, is based on the GoldDigger Android Trojan and features regular updates designed to enhance their capabilities and evade detection,” and “is capable of collecting facial recognition data, identity documents, and intercepting SMS. Its Android sibling has the same functionality but also exhibits other functionalities typical of Android Trojans. To exploit the stolen biometric data, the threat actor utilizes AI-driven face-swapping services to create deepfakes. This data combined with ID documents and the ability to intercept SMS enables cybercriminals to gain unauthorized access to the victim’s banking account – a new technique of monetary theft previously unseen by Group-IB researchers in other fraud schemes.”
Group-IB said “this method could be used by cybercriminals to gain unauthorized access to victims’ bank accounts.”
AI spoofing of biometric-enabled authentication systems is possible: advanced AI algorithms can generate highly realistic fake biometric data, such as fingerprints, facial images, or voice samples, potentially fooling biometric scanners and allowing unauthorized access to systems. However, most modern biometric systems incorporate anti-spoofing mechanisms to detect such attempts, making successful spoofing increasingly difficult, though not impossible.
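As a crude illustration of what one such anti-spoofing signal can look like, the sketch below applies a texture-sharpness heuristic, the variance of the Laplacian, which passive presentation attack detectors sometimes use as one cue among many: screen replays and printed photos often lose fine high-frequency texture. The threshold, and the idea of relying on this check alone, are assumptions for illustration rather than a vetted defense.

```python
# Crude passive anti-spoofing heuristic: skin imaged directly tends to
# retain fine texture; photos-of-photos and screen replays often do not.
# Variance of the Laplacian is one cheap proxy for that texture.
import cv2

SHARPNESS_THRESHOLD = 100.0  # hypothetical cutoff; tune on real data

def looks_like_replay(image_path: str) -> bool:
    """Flag a capture whose high-frequency texture is suspiciously low."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    if gray is None:
        raise FileNotFoundError(image_path)
    sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()
    return sharpness < SHARPNESS_THRESHOLD

if __name__ == "__main__":
    print(looks_like_replay("capture.png"))  # hypothetical sample frame
```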
And while AI spoofing of authentication systems that use fingerprint recognition, iris scanning, and voice recognition, the most widely used biometric methods, may be possible, newer biometrics such as vein pattern recognition and heart rate sensors could prove harder to forge, as they would likely require an AI capable of accurately predicting a person’s vein structure and heart rate in real time.
About four years ago, it was reported that AI was used to make high-resolution images of people appear “alive,” and that the animated images were then used to spoof a Chinese identity verification system in order to fake tax invoices. However, reporting on the alleged hack was based on a single article in the Xinhua Daily Telegraph.
At present, it is difficult to gauge with any certainty how many instances there have been in which AI-generated biometric identifiers were actually used to successfully spoof biometric-enabled authentication systems.
The IST report itself says that “at this time of writing, AI is not yet unlocking novel capabilities or outcomes, but instead represents a significant leap in speed, scale, and completeness.”
The IST report explores the opportunities and challenges of AI in cybersecurity and draws on surveys and interviews with industry experts to provide insights into how organizations are using AI, how AI impacts the threat landscape, and how to realize the benefits of AI.
The report notes that AI can help organizations respond faster to breaches, improve the accuracy and efficiency of cyber analysts, and streamline tasks, but that it also can be used by malicious actors to generate fake emails or websites, or to clone and customize websites to trick users. AI systems can also be compromised by adversarial attacks, where attackers manipulate data or inputs to confuse the system.
The report points out that because AI relies on large amounts of data, which is often personal and sensitive in nature, there are justified concerns that individuals may unknowingly divulge personal information to AI systems, which could then be exploited and misused, as in the incident the Group-IB researchers described.
The report says staying ahead in the arms race of AI in cybersecurity will require continued investment, innovation, and integration.
In recent months, IST “conducted a series of targeted surveys and interviews with industry incumbents, startups, consultancies, and threat researchers to capture current insights into how organizations and practitioners are currently engaging with or integrating AI technologies, the evolving impact of these tools on the threat landscape, and their forecasts for the future.”
According to IST, the study draws on these inputs “to paint a comprehensive picture of the state of play – cutting through vagaries and product marketing hype, providing our outlook for the near future, and most importantly, suggesting ways in which the case for optimism can be realized.”
“Social engineering, already a complex challenge to cybersecurity, is becoming even more formidable with the proliferation of AI-enabled deception,” the IST report says, pointing out that “bad actors have already impersonated executives in phishing schemes, creating false identities for deception, and fabricating evidence in legal and financial fraud, among other dangerous use cases.”
“Unsurprisingly, malicious deepfake[s] are on the rise,” the report says, noting that detecting them and the tactics in which they are used “is only becoming more difficult.”
The report says that “following the release of OpenAI’s Sora, which employs a text-to-video model, a HarrisX survey showed 1,000 American respondents a combination of eight AI-generated videos and videos created with traditional tools to test their ability to identify AI-generated content. The survey results revealed that ‘most US adults incorrectly guessed whether AI or a person had created five out of the eight videos they were shown.’”
The IST report says authentication that relies on “something you know,” like a password, phrase, PIN, or answers to security questions, is vulnerable to the use of AI. The “ease of brute forcing something you know is now even less reliable due to AI’s ability to summarize and recall information from large datasets, to include scrapes of public social media, public records, and pooled data breach takings. Users should assume that answers to their security questions, typically used for password resets and account recovery, can be known by bad actors using AI capabilities. And now generative AI has been demonstrated to fool some biometric authentication methods, putting the reliability of ‘something you are’ at risk as well.”
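The arithmetic behind the “something you know” warning is straightforward. A back-of-the-envelope sketch, with assumed pool sizes, shows how few guesses a mined security answer leaves an attacker compared with even a modest random password:

```python
# Back-of-the-envelope guess-count comparison. The pool sizes are
# illustrative assumptions: an attacker who has scraped a target's
# social media may narrow "first pet" or "mother's maiden name" to a
# handful of candidates, while a random password resists enumeration.
import math

def bits(pool_size: float) -> float:
    """Entropy in bits for a uniform pool of the given size."""
    return math.log2(pool_size)

security_answer_pool = 50        # assumed: candidates mined from public posts
random_password_pool = 62 ** 10  # 10 chars over [a-zA-Z0-9]

print(f"security answer: ~{bits(security_answer_pool):.1f} bits")  # ~5.6
print(f"random password: ~{bits(random_password_pool):.1f} bits")  # ~59.5
```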
“Near-term, the AI in cybersecurity advantage goes to the defender,” the IST report concluded, adding that “the home field advantage – which includes access to proprietary software source code, a full understanding of network architecture and typical user patterns, and an ecosystem of service providers who are making rapid strides to capitalize on the potential of AI – will be difficult for an adversary to overcome.”
However, the report cautions that “sophisticated threat actors are also leveraging AI to enhance their capabilities, making continued investment and innovation in AI-enabled cyber defense crucial.”
The report makes seven recommendations, including protecting sensitive data from malicious AI-enabled content analysis by implementing “robust cybersecurity practices, including data encryption, least privilege access, and multi-factor authentication.”
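As one concrete instance of that last recommendation, here is a minimal sketch of time-based one-time-password (TOTP) multi-factor authentication using the pyotp library; the inline secret handling and the example user name are simplified assumptions for illustration.

```python
# Minimal TOTP (RFC 6238) multi-factor check using pyotp
# (pip install pyotp). In practice the per-user secret would be
# generated at enrollment and stored encrypted server-side; the
# in-memory handling here is purely illustrative.
import pyotp

# Enrollment: generate and persist a per-user base32 secret.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

# Shown to the user once, e.g. as a QR code for their authenticator app.
print(totp.provisioning_uri(name="user@example.com", issuer_name="ExampleApp"))

# Login: verify the 6-digit code the user types in.
code = totp.now()  # stand-in for user input
print("accepted" if totp.verify(code, valid_window=1) else "rejected")
```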