There has been a profound shift in the world of cybersecurity in recent years. Whereas attacks once focused on system breaches, code vulnerabilities, or ransomware, today the primary target is the human being. Deepfakes, particularly real-time ones, have evolved from an experimental tool into one of the fastest-growing cyber threats, already in widespread, almost industrial use.

Recent events in Israel’s cybersecurity industry illustrate just how real this threat has become. What until recently seemed like an extreme scenario is now an everyday reality for organizations, even the most advanced and security-conscious. This is not an isolated case but a signal of a fundamental change: if such attacks can penetrate environments with high security awareness, the game has clearly changed.

The numbers tell the story clearly. Between 2022 and 2023, deepfake attacks increased tenfold, with overall growth of more than 1000% in recent years. Approximately 49% of companies worldwide have already experienced a deepfake attempt or incident, and nearly half of these attacks, 46%, involve video or real-time calls. Even processes considered relatively safe, such as recruitment, are affected: 17% of interviewers report encountering fake candidates. Hundreds of millions of dollars are being stolen, with isolated cases involving tens of millions in a single transaction. Forecasts point to damages potentially reaching tens of billions in the coming years. This is no longer an IT issue but a strategic business problem.

What makes this phenomenon so effective is not only the technology itself but the understanding of human psychology. Attackers know exactly how people respond to authority, urgency, and pressure. They do not make strange requests; on the contrary, they ask for actions that seem completely reasonable in the organizational context. When a familiar voice and a believable face are added on screen, the likelihood that someone will question the request is extremely low.

Here lies the real problem: we are still operating according to old rules in a new world. For years, we were taught to identify suspicious emails, avoid clicking unknown links, and be wary of attachments. But no one taught us to be skeptical of a video call from our own manager. On the contrary, we learned to treat it as the most reliable channel. Israel is at the heart of this trend. In 2024 alone, there was a 118% increase in the use of fake video and audio, with clear identification of attacks targeting technology and financial companies. Beyond that, deepfake is increasingly infiltrating other domains, including impersonation of executives and recruitment processes.

During the war between Israel and Iran in June 2025, social media platforms were flooded with images and videos of widespread destruction in Tel Aviv. Most were fake. (credit: SHUTTERSTOCK)

The problem takes on even greater weight in the current security reality. Amid rising tensions between Israel and Iran, cyberattacks have become a central tool in national defense, and deepfake serves as a strategic instrument at the state level. Impersonation of senior executives, real-time message forgery, and video call deception can disrupt critical processes, undermine decision-making, and even affect military operations and readiness. When a call appears legitimate but is not, the threat crosses business boundaries and reaches directly into national security.

No simple solution

There is no simple solution. Detection technologies are important, but it is difficult to believe they will suffice on their own. Anyone who thinks the problem can be solved with another system or tool is ignoring the heart of the issue. This is a shift in mindset. Organizations will need to implement procedures based on built-in skepticism. It may sound extreme, but it is reality: no sensitive action should be approved based on a single call, even if it appears completely legitimate. Secondary verification through another channel is essential, and employees must be empowered to pause, question, and check, even when approached by a senior executive.

Organizations must prepare practically by implementing dual verification procedures, training employees to identify suspicious situations, and deploying technological tools to detect fakes. There is a cost involved. More checks mean slower processes, more friction, and possibly some reduction in efficiency. But the alternative is far more expensive. With awareness, a strong organizational culture, and the right tools, it is possible to mitigate the damage and continue operating safely. The world has changed, and dealing with deepfake requires preparedness, not just sophistication.

The writer is the CEO of Cyvore.