Deepfakes have become a rampant threat, disrupting multiple business functions and causing significant financial losses, compliance failures and reputational damage.
As this threat continues to grow, organizations must adopt advanced artificial intelligence (AI) techniques, alongside robust human and procedural defenses, to detect and counter these risks, according to a new Forrester report.
The report, titled “Detecting and Defending Against Deepfakes,” explores the challenges deepfakes present across digital channels and shares how security and risk professionals can combat this growing threat.
According to the report, deepfakes now pose a serious risk to organizations and have infiltrated every digital medium. Call centers, for example, are increasingly vulnerable to audio-based attacks, especially deepfake voice manipulation. With just 10 to 15 seconds of a person’s speech, fraudsters can generate realistic fake voices using a text-to-speech or speech-to-speech generator.
Mobile apps are also at risk, facing both video and audio deepfake attacks. Deepfake injection methods can intercept data from a device’s camera and microphone and replace it with deepfake audio, video, or images before it reaches the mobile application or operating system.
Social media, videoconferencing, and livestreaming platforms are also lucrative targets. Deepfakes are disrupting virtual meetings to steal sensitive information, while social media and livestreaming channels serve as rich sources for harvesting images, videos, and text that can be used to create deceptive content.
The proliferation of deepfakes
Survey data further illustrates how widespread deepfakes have become, revealing that 43% of people aged 16 and over have seen at least one deepfake online in the last six months.
But deepfakes are not just proliferating; they are also becoming more sophisticated. These attacks affect not only authentication but also the authorization of high-risk, high-value transactions. A notable example occurred in 2024, when a finance worker was tricked into transferring US$25 million to fraudsters. In this sophisticated scam, criminals used deepfake technology to impersonate the company’s CFO on a video conference call, deceiving the unsuspecting employee into authorizing the fraudulent transfer.
According to Forrester, the rise of deepfakes has been fueled by several factors, including their increasing accessibility and affordability. Online deepfake generation services like Gooey.AI, Deepfakesweb.com, Deepgram.com and Wavei.ai allow users to create deepfakes in as little as 10 minutes, with costs ranging from US$10 to US$20.
The impact of deepfakes
Deepfakes pose a serious threat to organizations, undermining online identity verification processes and biometric authentication. They allow fraudsters to steal identities, hijack accounts and execute fraudulent financial transactions. They also facilitate money laundering, exposing organizations to regulatory fines.
According to Signicat’s 2024 report on AI-Driven Identity Fraud, 42.5% of fraud attempts are now AI-driven, with deepfakes representing 6.5% of total fraud attempts. The figure marks a staggering 2,137% increase over the past three years.
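To put that statistic in perspective, a 2,137% increase corresponds to a more than twentyfold rise in volume:

\[ 1 + \frac{2137}{100} \approx 22.4\times \]

In other words, deepfake fraud attempts are now roughly 22 times as common as they were three years earlier.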
The financial sector is particularly at risk, with an alarming 37.4% of all deepfake attacks directed at the industry. According to Deloitte’s Center for Financial Services, fraud enabled by AI-generated content caused more than US$12 billion in losses in the US in 2023. That figure could reach US$40 billion by 2027, representing a compound annual growth rate of 32%.
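As a rough sanity check (Deloitte’s base figure is cited here only as “more than US$12 billion”), the implied compound annual growth rate over the four years from 2023 to 2027 follows the standard formula:

\[ \mathrm{CAGR} = \left(\frac{40}{12}\right)^{1/4} - 1 \approx 0.35 \]

Using the round US$12 billion base yields roughly 35% per year, in the same ballpark as the 32% Deloitte reports; the precise rate depends on the exact 2023 figure.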

Beyond financial losses, deepfakes can cause reputational damage and erode trust among customers and stakeholders. These attacks can also lead to security breaches by granting cybercriminals access to sensitive systems. Finally, deepfake attacks create compliance and regulatory risks, exposing financial firms to regulatory scrutiny, sanctions, and fines.
Addressing the rising threat of deepfakes
To outpace the evolving threat of deepfakes, Forrester advises organizations to employ a multi-layered approach that incorporates advanced AI techniques, education, and training.
Technologies such as spectral artifact analysis, for example, can identify unnatural patterns in audio and video, while liveness detection confirms that a real person is physically present during verification. These tools can be combined with signal path protection, behavioral monitoring, and generative adversarial networks to strengthen defenses against deepfakes, the firm says.
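Forrester does not publish implementation details, but the intuition behind spectral artifact analysis can be sketched in a few lines of Python. The example below is a minimal, hypothetical illustration using NumPy and SciPy: it flags audio clips whose upper-frequency energy varies unnaturally little from frame to frame, a smoothing artifact some neural vocoders leave behind. The function names, the 6 kHz band edge, and the threshold are illustrative assumptions, not values from the report.

```python
# Minimal sketch of spectral artifact analysis for screening audio deepfakes.
# Hypothetical heuristic: some neural vocoders produce unnaturally smooth
# upper-band energy, while genuine speech tends to vary more frame to frame.
# The 6 kHz band edge and 0.5 threshold are illustrative, not calibrated.

import numpy as np
from scipy.signal import spectrogram

def high_band_variation(audio: np.ndarray, sample_rate: int,
                        band_edge_hz: float = 6000.0) -> float:
    """Frame-to-frame spread of log energy above band_edge_hz."""
    freqs, _, spec = spectrogram(audio, fs=sample_rate, nperseg=512)
    high_band = spec[freqs >= band_edge_hz]        # upper-band power bins
    frame_energy = high_band.sum(axis=0) + 1e-12   # energy per time frame
    return float(np.std(np.log(frame_energy)))     # low spread is suspicious

def screen_clip(audio: np.ndarray, sample_rate: int,
                threshold: float = 0.5) -> bool:
    """Flag a clip when its upper-band variation falls below the threshold."""
    return high_band_variation(audio, sample_rate) < threshold

if __name__ == "__main__":
    # Synthetic demo only: white noise stands in for a real recording.
    rng = np.random.default_rng(0)
    clip = rng.standard_normal(16_000 * 3)         # 3 seconds at 16 kHz
    score = high_band_variation(clip, 16_000)
    print(f"variation score: {score:.3f}, suspicious: {screen_clip(clip, 16_000)}")
```

In practice, a heuristic like this would be one signal among many, feeding the layered detection stack Forrester describes rather than delivering a standalone verdict.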
In addition to technical capabilities, organizations should strengthen human oversight. Effective defense strategies include establishing penetration testing processes to learn from and remediate discovered flaws and gaps, helping teams keep pace with the technology, and regularly conducting AI-assisted investigations and reviews to improve deepfake detection and interception models.
Featured image credit: freepik