AI Impacting Security Today
- Ralph Labarta
- Feb 29, 2024
- 2 min read
Updated: Feb 29, 2024
A client recently asked about the impact AI would have on their security efforts. I had to first redirect the timeline away from the future and speak about what is happening right now.
AI Augmenting Traditional Attack Methods
Similar to how workers are using AI today to improve their written communication, hackers are using AI to launch more effective email-based phishing campaigns. This means telltale red flags such as poor grammar or awkward word choice appear less frequently, leading to more effective deception and improved evasion of automated detection.
According to an October 2023 report by cybersecurity vendor SlashNext, phishing emails grew by 1,265% in the twelve months following ChatGPT's public release, an indication that generative AI tools are fueling the creation of phishing content.
A new crop of AI tools with fewer guardrails has appeared; these can be used openly to create content with malicious intent, such as phishing emails. Worse, these models can ingest samples of written and contextual content to generate detailed, targeted phishing emails that impersonate company employees more convincingly.
These techniques are being taken a step further with AI-generated voice impersonation. A targeted phishing email can be reinforced by a follow-up voicemail that mimics the purported sender's voice. Recently, a company in Hong Kong was defrauded of $25 million via a deepfake conference call that leveraged an AI-generated voice impersonation of its CFO.
AI Generating Targeted Attacks Faster
Attackers, like workers, face the challenge of information overload. AI enables faster and more efficient attack generation by identifying potential attack vectors and coalescing data points to pinpoint weaknesses, generate content, and create tools. This is of particular concern for public companies that have produced large amounts of publicly available content. Attackers can efficiently combine output from public "good" AI models like ChatGPT with dark web AI models whose data sources are based on exploits, stolen credentials, targeted malware, and the like.
AI Generating Attack Tools
The automation of attack execution has been developing for some time. For example, the first Ransomware-as-a-Service (RaaS), where attackers could outsource ransomware attacks to an automated service, was detected as early as 2012. AI is now making custom attack tools more readily available and effective through dark web AI tools. In a recently published article, XiaoFeng Wang of Indiana University observed that "DarkGPT and EscapeGPT were among those capable of producing 'high-quality malware' that evaded detection and security measures."
At this point, AI’s impact on the hacker ecosystem is not unlike its role in legitimate enterprises. Existing attack vectors are simply being made more effective and generated faster and more efficiently. Vulnerabilities can be found more easily by addressing information overload via large scale data analysis and opportunity correlation.
For our particular client, the advice in light of AI's growing presence is as follows:
- Continue email security best practices and emphasize user awareness and training on both traditional and AI-powered phishing attempts.
- Review publicly available content and understand the exposure associated with voice impersonation.
- Confirm that procedures governing financial transactions are in place to ensure proper authorization.
- Confirm that procedures governing internal functions, such as payroll or technology, require authentication mechanisms that verify a caller's identity.