Emails are now flawless, fast and focused
Multiple AI models work in concert to automate malware development, reconnaissance, social engineering and co-ordinated attacks without external oversight. This is not just an LLM; it's an agentic AI, signifying a logical and concerning shift in the cybercriminal's toolkit.
AI-enhanced phishing breaking brand trust
The impact of these AI developments on brand protection is particularly acute in the realm of phishing attacks. Threat actors are already using prompt injection techniques to manipulate legitimate LLMs into generating compelling phishing content. This means phishing attacks are being launched faster, with greater frequency and with an alarming degree of personalisation.
In 2024, a staggering 67.4% of global phishing incidents involved AI tactics, with the finance industry among the top targets. This isn't just about volume; it's about sophistication. AI enables attackers to craft highly personalised and convincing campaigns, including spear-phishing, deepfakes and advanced social engineering techniques.
One of the most immediate impacts is on phishing emails themselves. Gone are the days when grammatical errors and spelling mistakes were clear indicators of a scam. AI-generated emails are often indistinguishable from legitimate corporate communications, and LLMs allow them to be created, personalised and scaled rapidly. Research from 2021 showed that even with older models, AI-generated spear-phishing emails achieved a 60% click-through rate. A more recent study from 2024 found that fully AI-generated phishing emails achieved a 54% click-through rate in a human subject study, a 350% increase over arbitrary phishing emails.
A chilling real-world example occurred in February 2024, when the Hungarian branch of a European retail company lost €15.5 million in a business email compromise (BEC) attack. The attackers used Generative AI to create emails that perfectly mimicked the tone, style and formatting of prior communications.