Special Reports
A special report is content that is edited and produced by the special reports unit within The Irish Times Content Studio. It is supported by advertisers who may contribute to the report but do not have editorial control.

‘Criminals are now using deepfakes to impersonate senior executives’

In the current environment, appearance is no guarantee of authenticity, and trusting a familiar voice or face is no longer sufficient

Puneet Kukreja: Strong internal controls, staff awareness and a culture of verification form the best defence against cybercrime fuelled by deepfake technology

You may not have heard of CFO fraud, but it is a well-established form of cybercrime and its “success” is being fuelled by the emergence of artificial intelligence (AI) and deepfake technology.

While fraudsters could always attempt to assume the identity of a company’s chief financial officer (CFO) and issue instructions to pay a fake invoice or make a bank transfer, they can now take this to the next level by creating highly convincing computer-generated video of the individual whose identity they are assuming. The EU’s cybersecurity agency, ENISA, identified impersonation attacks as among the most significant threats in its 2024 threat landscape report, noting a sharp rise in the use of deepfake technology to target executives.

According to Puneet Kukreja, EY UK and Ireland cybersecurity leader, this form of financial crime is growing rapidly in scale and sophistication worldwide.

“Where criminals once relied on email spoofing to impersonate senior executives, they are now using AI-generated voice messages and video deepfakes to simulate highly realistic virtual meetings,” he says. “These impersonations are often convincing enough to deceive experienced professionals, and several international firms have recently reported incidents involving fabricated calls requesting urgent payments linked to fictitious transactions.”


This development has been made possible by the vast amount of publicly available material online, including corporate presentations, media interviews and social media content, which can be used to train AI models. When paired with increasingly accessible generative tools, Kukreja explains, the ability to carry out this kind of fraud no longer requires specialist technical skills.

“The shift to hybrid working and reliance on digital communication further increases the risk, as attackers exploit time pressures and gaps in verification protocols,” he adds. “In the current environment, appearance is no guarantee of authenticity, and trusting a familiar voice or face is no longer sufficient.”

Kukreja warns there is no single solution to prevent CFO fraud, but strong internal controls, staff awareness and a culture of verification form the best defence. “The most important principle is simple: no financial transaction should be approved based solely on a single communication, whether by email, phone call or video meeting,” he asserts, adding that even when a request appears to come from a senior executive, it should always be verified through an independent channel.

Businesses should implement clear approval workflows, including dual authorisation for high-value payments and routine verification of new or amended banking details.

“These checks should be performed by different individuals, using contact information already on file, not that provided in the request itself,” says Kukreja. “Ultimately, it is the combination of process, technology and a vigilant workforce that will make the difference.”

Danielle Barron

Danielle Barron is a contributor to The Irish Times