Cybersecurity
10:29, 08 April 2026

Megafon’s Virtual Assistant Eva Detects Voice Deepfakes in Real Time

Megafon has upgraded its virtual assistant Eva, adding real-time detection of synthetic speech and voice deepfakes. The system analyzes audio parameters during a call and alerts users if it detects a bot or a potentially spoofed voice. It can also distinguish routine service notifications from suspicious or potentially fraudulent calls.
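The article does not disclose how Eva scores audio internally. Purely as an illustration of the general pattern, the sketch below assumes a streaming pipeline that scores fixed-size audio frames with a classifier and alerts the user once a running average crosses a threshold; the frame size, scoring heuristic, and threshold are all made-up stand-ins, not Megafon's design.

```python
import math

FRAME_SIZE = 160        # samples per frame (10 ms at 16 kHz, an assumption)
ALERT_THRESHOLD = 0.7   # running-average score that would trigger an alert

def frame_score(frame):
    """Toy stand-in for a synthetic-speech classifier.

    Real detectors use learned models over spectral features; this toy
    version just maps a zero-crossing statistic into [0, 1] so the
    streaming logic around it can be shown end to end.
    """
    energy = sum(s * s for s in frame) / len(frame)
    if energy == 0:
        return 0.0
    zero_crossings = sum(
        1 for a, b in zip(frame, frame[1:]) if (a >= 0) != (b >= 0)
    )
    return min(1.0, 2 * zero_crossings / len(frame))

def monitor_call(frames):
    """Yield (frame_index, running_score, alert) for each incoming frame."""
    total = 0.0
    for i, frame in enumerate(frames, start=1):
        total += frame_score(frame)
        running = total / i
        yield i, running, running >= ALERT_THRESHOLD

# Example: feed five identical frames of a pure 1 kHz tone.
tone = [math.sin(2 * math.pi * 1000 * t / 16000) for t in range(FRAME_SIZE)]
results = list(monitor_call([tone] * 5))
```

The point of the sketch is the shape of the problem: scoring must happen frame by frame during the call, and the alert decision must be cheap enough to run in real time on every conversation.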

The new capabilities of Eva mark a shift in consumer-grade antifraud tools. This is no longer a lab prototype but a production feature embedded in a mass-market telecom service. The timing is notable, as phone scams and AI-generated impersonation are on the rise. The project brings together telecom infrastructure, cybersecurity, and applied AI.

For users, the value is immediate. The system can flag attempts to impersonate a trusted contact or an official authority. For the market, this signals a move toward operator-level protection rather than relying solely on banks and financial institutions. In a broader context, the project shows how AI can act as a defensive layer, with Russian telecom deploying a practical model that could scale internationally.

A New Standard for Digital Hygiene

The main impact of the solution is infrastructural. If the feature maintains low false-positive rates and integrates with caller labeling, antifraud systems, and user notifications, it could set a new baseline for telecom security practices. Phone fraud and deepfake-based attacks are growing, an issue the Central Bank of Russia has highlighted, yet large-scale protective tools have remained limited. Eva's new functionality addresses that gap.
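Why the false-positive rate matters so much at telecom scale is a matter of simple arithmetic. The numbers below are illustrative, not Megafon's metrics: even a rate that sounds tiny translates into many wrongly flagged legitimate calls per day.

```python
def false_positive_rate(false_positives: int, true_negatives: int) -> float:
    """FPR = FP / (FP + TN): the share of legitimate calls wrongly flagged."""
    return false_positives / (false_positives + true_negatives)

# Hypothetical day of traffic: 1,000,000 legitimate calls, 500 flagged by mistake.
fpr = false_positive_rate(500, 999_500)   # 0.0005, i.e. 0.05 %

# At that rate, an operator verifying 100 million calls a day would still
# mis-flag tens of thousands of legitimate conversations.
daily_false_alarms = int(fpr * 100_000_000)
```

This is why the article's caveat ("if the feature maintains low false-positive rates") is the operative condition for the feature becoming a baseline rather than a nuisance.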

The next step is deeper integration into multi-layer antifraud systems, combining number analysis, user behavior, conversation patterns, and interaction history. The domestic market is already moving in this direction. The state-run Antifrod system verifies a significant share of calls, while regulations on internet telephony have tightened. Together, these measures support end-to-end protection of the voice channel.

The solution also carries export potential, primarily tied to the maturity of antifraud platforms. Demand is likely to emerge for analytics engines, synthetic speech classification models, and B2B platforms for telecom operators and contact centers, particularly in CIS countries, the Middle East, Asia, and Africa, where voice fraud is increasing.

The Rise of Voice Deepfake Fraud

In 2024, Russia’s Antifrod system reached industrial scale. According to Roskomnadzor, it blocked around 577 million spoofed calls in nine months and was verifying hundreds of millions of calls daily by year-end. The market is now shifting toward validating not just the phone number but the authenticity of the voice and the content of the conversation. At the same time, the regulatory framework tightened in 2024–2025. A government decree dated December 26, 2024, streamlined telecom licensing and restricted the use of internet telephony in fraud schemes, reflecting a dual-track approach that combines regulation with operator-driven technology.

In 2025, Sber announced its own deepfake detection technologies and signaled readiness to share automated detection tools, including the Aleteya service for identifying video deepfakes. Banks and telecom operators are converging on a shared conclusion: the era of AI-enabled fraud requires dedicated systems for detecting fake audio and video content.

Throughout 2025–2026, analysts have observed growing interest among attackers in deepfake-driven schemes. Major industry players expect a rise in voice cloning attacks and highly personalized fraud scenarios. At the same time, the global discussion is increasingly focused on voice deepfake fraud. The European Parliament has pointed to a surge in scam calls in the age of generative AI, linking audio and video deepfakes to measurable financial losses. Against this backdrop, Russian initiatives are emerging as practical countermeasures.

Protecting Users from Emotionally Driven Attacks

This development reflects a new phase in the digital security market. Earlier efforts focused on blocking spoofed numbers and spam calls. The focus is now shifting toward verifying voice authenticity and analyzing conversation content, a class of problems that will expand alongside generative AI.

In the near term, Russian telecom operators, banks, and digital ecosystems are expected to adopt an AI-versus-AI model, where neural networks are used both to carry out attacks and to defend against them. A likely architecture includes three layers: call labeling and verification, synthetic voice detection, and behavioral risk analysis based on conversation patterns. Megafon’s deployment gives users an additional layer of protection against emotionally manipulative scams built around trust in familiar voices.
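The three-layer architecture described above can be sketched as a simple combined risk score. The layer names follow the article; the weights, threshold, and field names are illustrative assumptions, not any operator's actual design.

```python
from dataclasses import dataclass

@dataclass
class CallSignal:
    caller_labeled_suspicious: bool   # layer 1: call labeling and verification
    synthetic_voice_score: float      # layer 2: deepfake detector output, in [0, 1]
    behavioral_risk_score: float      # layer 3: conversation-pattern risk, in [0, 1]

def combined_risk(signal: CallSignal) -> float:
    """Weighted combination of the three layers into one risk score in [0, 1].

    Weights are arbitrary placeholders; a production system would tune
    them (or replace the linear blend with a learned model).
    """
    label_score = 1.0 if signal.caller_labeled_suspicious else 0.0
    return (0.3 * label_score
            + 0.4 * signal.synthetic_voice_score
            + 0.3 * signal.behavioral_risk_score)

def verdict(signal: CallSignal, threshold: float = 0.5) -> str:
    return "alert user" if combined_risk(signal) >= threshold else "allow"

# Example: a labeled-suspicious number with a strong deepfake signal.
call = CallSignal(caller_labeled_suspicious=True,
                  synthetic_voice_score=0.8,
                  behavioral_risk_score=0.6)
decision = verdict(call)   # 0.3 + 0.32 + 0.18 = 0.80 → "alert user"
```

The design choice the sketch highlights is that no single layer decides alone: a spoofed number with a natural voice, or a clean number with a synthetic voice, each contributes evidence rather than a verdict.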

Russian telecom is embedding applied AI security mechanisms into mass-market services, signaling both technological maturity and a broader push toward digital sovereignty in antifraud systems.

"Protection against neural network-generated voice forgeries ranked among the top three most requested features in surveys of Eva users, with subscribers showing strong interest in such a service. We listen closely to our customers, so launching this feature is not only a logical step amid rising fraud and increasingly sophisticated schemes, but also a direct response to user demand and a continuation of our customer-centric strategy."