Deepfake Gang Busted in Ahmedabad After Using AI to Bypass Aadhaar Verification and Execute Loan Fraud
The Cyber Crime Branch of the Ahmedabad Police has arrested four individuals for allegedly orchestrating a sophisticated identity fraud operation in which they used deepfake technology and artificial intelligence tools to bypass Aadhaar-linked biometric verification systems. The gang allegedly altered victims’ registered mobile numbers, opened fraudulent bank accounts, and applied for loans — all without triggering the security alerts designed to prevent such crimes.
The case, which came to light after a city-based businessman noticed he had stopped receiving OTPs from his bank for two days, has exposed alarming vulnerabilities in India’s digital identity infrastructure and raised urgent questions about whether existing safeguards are adequate to counter AI-powered fraud.
How the Fraud Worked
According to the Ahmedabad police, the gang operated through a network of Common Service Centre (CSC) operators across Gujarat. The fraud followed a multi-step process that combined social engineering with cutting-edge AI tools:
- Target identification: The accused identified victims whose Aadhaar details — including photographs and Aadhaar numbers — were accessible through leaked databases or social engineering.
- Deepfake creation: Using AI-based tools, the gang created deepfake facial authentication videos of the victims. These videos were realistic enough to pass the face-verification checks used during Aadhaar-linked mobile number updates.
- Mobile number change: Using the deepfake video and an Aadhaar update kit obtained from a compromised CSC operator, the gang changed the victim’s registered mobile number to one they controlled.
- Account takeover: With the new mobile number linked to the victim’s Aadhaar, the gang could receive OTPs, open bank accounts, and apply for loans in the victim’s name.
- Loan disbursement: The gang applied for and received a Rs 25,000 loan using the victim’s identity, with the funds directed to an account they controlled.
The Four Accused
The arrested individuals have been identified as:
- Kanubhai Bahadursinh Parmar (32) — a CSC operator from Anand district who allegedly supplied Aadhaar update kits to the co-accused in exchange for commission.
- Ashish Rajendrabhai Waland (27) — a CSC operator based in Vadodara who allegedly passed on the Aadhaar kit used in the fraudulent mobile number update. Waland was previously booked by Vadodara Rural Police in a separate case involving the alleged preparation of fake Aadhaar cards.
- Mohammad Kaif Iqbalbhai Patel (26) — associated with a CSC centre in Bharuch district, who allegedly coordinated the use of the complainant’s Aadhaar number, target mobile number, and photograph, and facilitated the creation of the deepfake video.
- Deep Maheshbhai Gupta (29) — a machine operator from Ahmedabad, originally from Uttar Pradesh, who allegedly assisted in arranging and transmitting the victim’s Aadhaar details and photograph for the operation.
All four accused are currently in judicial custody. The Aadhaar update kit used to alter the complainant’s mobile number has been recovered by investigators.
Implications for India’s Digital Identity System
Aadhaar, which covers over 1.4 billion Indians, is the backbone of the country’s digital identity and financial inclusion infrastructure. The system’s biometric verification — which uses fingerprints, iris scans, and facial recognition — has been promoted as a robust safeguard against identity fraud. The Ahmedabad case demonstrates that deepfake technology has advanced to the point where it can defeat facial authentication systems, potentially undermining public trust in the entire framework.
The Unique Identification Authority of India (UIDAI) has not yet issued a public statement on this specific case, but officials have previously acknowledged that AI-powered fraud represents an emerging threat. In March 2026, the UIDAI announced plans to upgrade its biometric verification systems to include liveness detection — technology designed to distinguish between a real human face and a video or photograph. However, the Ahmedabad case suggests that current liveness detection measures may not be sufficient to counter sophisticated deepfakes.
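Liveness detection of the kind UIDAI describes typically comes in two flavours: passive analysis of the video feed, and active challenge-response, where the user is asked to perform a random action that a pre-recorded deepfake cannot anticipate. The sketch below illustrates only the challenge-response protocol logic, with hypothetical challenge names and timeouts; a real system would verify the action via frame-by-frame video analysis, not a string supplied by the client.

```python
import secrets
import time

# Hypothetical challenges an active liveness system might issue.
CHALLENGES = ["blink_twice", "turn_head_left", "turn_head_right", "smile"]

class LivenessSession:
    """Sketch of active (challenge-response) liveness detection.

    A pre-recorded deepfake video cannot know in advance which random
    challenge will be issued, so it fails unless the attacker can
    synthesise a matching response in real time within the timeout.
    """

    def __init__(self, timeout_seconds=5.0):
        self.timeout = timeout_seconds
        self.challenge = None
        self.issued_at = None

    def issue_challenge(self):
        # Cryptographically random choice prevents replaying an old session.
        self.challenge = secrets.choice(CHALLENGES)
        self.issued_at = time.monotonic()
        return self.challenge

    def verify(self, observed_action):
        # In production `observed_action` would come from video analysis,
        # not from anything the client can directly control.
        if self.challenge is None:
            return False
        within_time = (time.monotonic() - self.issued_at) <= self.timeout
        return within_time and observed_action == self.challenge

session = LivenessSession()
asked = session.issue_challenge()
print(session.verify(asked))  # True: live user performed the asked action
```

The security of the scheme rests on the randomness and the short response window, which is precisely what a replayed deepfake video lacks.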
The Growing Threat of AI-Powered Financial Fraud
The Ahmedabad case is not an isolated incident. Across India and globally, law enforcement agencies are reporting a surge in fraud cases involving AI-generated deepfakes. The same generative AI tools that are reshaping legitimate industries are being repurposed for criminal exploitation.
In the fintech sector, the threat is particularly acute. Digital lending platforms, which process millions of loan applications using automated verification, are vulnerable to deepfake-based identity fraud at scale. If criminals can bypass biometric checks using AI-generated videos, the economic damage could be enormous — particularly for small-ticket lending platforms that rely on automated KYC (Know Your Customer) processes.
Industry experts have called for a multi-layered approach to combating AI-powered fraud. This includes upgrading biometric systems with advanced liveness detection, implementing behavioural analytics to flag suspicious patterns, mandating multi-factor authentication for high-risk transactions, and establishing regulatory frameworks specifically targeting the use of AI tools for fraud.
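One of those layers, behavioural analytics, can begin with simple rules. The sketch below (event schema and window are illustrative, not any lender's actual system) flags the exact sequence seen in this case: a registered-mobile update followed shortly by a loan application.

```python
from datetime import datetime, timedelta

def flag_risky_accounts(events, window=timedelta(days=7)):
    """Flag accounts where a loan application follows a registered-mobile
    update within `window` -- the sequence used in the Ahmedabad fraud.

    `events` is a list of dicts with 'account', 'type', and 'time' keys;
    the schema is hypothetical."""
    last_update = {}
    flagged = set()
    for e in sorted(events, key=lambda e: e["time"]):
        if e["type"] == "mobile_number_update":
            last_update[e["account"]] = e["time"]
        elif e["type"] == "loan_application":
            updated_at = last_update.get(e["account"])
            if updated_at is not None and e["time"] - updated_at <= window:
                flagged.add(e["account"])
    return flagged

# Illustrative event log: A1 matches the fraud pattern, A2 does not.
events = [
    {"account": "A1", "type": "mobile_number_update", "time": datetime(2026, 5, 1, 10, 0)},
    {"account": "A1", "type": "loan_application",     "time": datetime(2026, 5, 2, 9, 30)},
    {"account": "A2", "type": "loan_application",     "time": datetime(2026, 5, 3, 14, 0)},
]
print(flag_risky_accounts(events))  # {'A1'}
```

In practice such a flag would trigger step-up verification (for example, a branch visit or video call) rather than an automatic rejection.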
What Needs to Change
The case highlights several systemic vulnerabilities that need to be addressed. The role of CSC operators in the fraud is particularly concerning. Common Service Centres are intended to bring government services to rural and semi-urban India, but the case demonstrates that compromised operators can become entry points for identity fraud. Stricter vetting, regular audits, and enhanced monitoring of CSC activities are essential to prevent future incidents.
Additionally, the ease with which Aadhaar update kits were allegedly obtained and misused points to gaps in the supply chain security of biometric hardware. The UIDAI may need to implement more stringent controls on who can access and operate these kits, including real-time monitoring of all updates performed through CSC channels.
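Real-time monitoring of CSC-channel updates could start with something as crude as a per-operator volume baseline. The toy sketch below (operator names, threshold, and statistic are all hypothetical, not a UIDAI specification) flags operators whose daily update count is far above the median:

```python
from collections import Counter

def unusual_operators(update_log, multiple=3.0):
    """Flag operators whose update count exceeds `multiple` times the
    median operator's count -- a crude volume anomaly check.

    `update_log` is a list of (operator_id, date) tuples; the threshold
    and the use of a median baseline are illustrative choices only."""
    counts = Counter(op for op, _ in update_log)
    if not counts:
        return set()
    ordered = sorted(counts.values())
    median = ordered[len(ordered) // 2]
    return {op for op, n in counts.items() if n > multiple * median}

# Illustrative log: one operator pushes far more updates than its peers.
log = ([("csc_anand", "2026-05-01")] * 2
       + [("csc_vadodara", "2026-05-01")] * 3
       + [("csc_bharuch", "2026-05-01")] * 20)
print(unusual_operators(log))  # {'csc_bharuch'}
```

A deployed system would layer richer signals on top, such as update types, time-of-day patterns, and the geographic spread of the Aadhaar numbers being modified.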
As India’s AI infrastructure continues to grow — with investments like the $2 billion Nvidia-Yotta AI supercluster — the dual-use nature of this technology will become an increasingly pressing policy challenge. The same AI capabilities that enable economic transformation can also enable sophisticated fraud, and India’s regulatory and enforcement institutions will need to evolve rapidly to keep pace.
For the businessman in Ahmedabad who noticed his OTPs had stopped arriving, the experience was a wake-up call. For India’s digital economy, it should be a call to action. The AI revolution brings extraordinary opportunities, but as this case demonstrates, it also brings risks that demand immediate and sustained attention.
- Deepfake Gang Busted in Ahmedabad After Using AI to Bypass Aadhaar Verification and Execute Loan Fraud - May 7, 2026