Deepfakes Are the New Acid Attacks. We're Just Not Ready to Admit It.


Your mother gets a call. It's you. Crying. Desperate. "Mom, I need ₹5 lakh right now!" Your heart stops. You hear your own voice breaking. Except it wasn't you. By the time your real phone buzzes, the money's gone. This isn't science fiction. This is happening in Indian homes right now. Meet deepfakes—the technology that steals your face, your voice, and your identity. 


It doesn't just destroy one person. It destroys the very concept of trust.


The Crisis We're Not Talking About.


A woman in Bengaluru discovers her face in explicit videos circulating on WhatsApp. A CEO in Mumbai transfers ₹2 crore to someone he thinks is his CFO—but it's a deepfake video call. A young man loses his job after a deepfake of his face appears endorsing a fake cryptocurrency scheme. These aren't hypotheticals.


Here's the uncomfortable truth: 47% of Indian adults have been victims of or know someone victimized by deepfake scams—nearly double the global average. Deepfake cases in India surged 550% since 2019, with projected losses reaching ₹70,000 crore in 2024.


That's not just a number. That's your neighbor. That's someone's entire future, erased.


Why This Feels Like an Acid Attack.


An acid attack destroys your face in seconds. Deepfakes destroy something equally precious: your identity.


With acid attacks, at least the crime is visible. Everyone knows something terrible happened. But deepfakes are invisible. They're infinitely reproducible. They spread across WhatsApp, YouTube, Instagram before anyone can stop them. Your face is everywhere, saying anything, doing anything. Who are you anymore?


Consider the toll: Your parents get a video of you begging for money. Your colleagues find explicit videos "featuring" you. You lose credibility. You lose relationships. Some victims contemplate taking their lives. The psychological devastation is real and permanent.


But here's the difference—and what makes it worse: An acid attack gets police attention. Deepfakes exist in a legal gray zone. For many Indian victims, there's literally no one to complain to effectively.


The Numbers Are Staggering.


  • 60% of Indians have encountered deepfake content from influencers.
  • 90% of Indians have been exposed to fake celebrity endorsements (₹34,500 average loss per victim).
  • 83% of voice scam victims lost money, with almost half losing over ₹50,000.
  • 62% of people aged 25-44 fell for fake celebrity ads.
  • India ranks 6th most vulnerable to deepfake pornography; 1 in 3 deepfakes online is non-consensual explicit content.


The scariest part? Women are being targeted. Thousands have had their likenesses weaponized without consent. Journalists. Activists. Ordinary professionals. Non-consensual deepfake pornography has become a tool of retaliation and control.


The Celebrities We Love Are Being Hijacked.


Open Instagram or WhatsApp. You'll see Shah Rukh Khan promoting cryptocurrency. Alia Bhatt selling shopping deals. Elon Musk endorsing investment schemes. None of them are real.


Scammers target celebrities because Indians trust them. We follow them. We're primed to believe them. Your guard drops. You click. You invest. By the time you realize it's fake, your money's gone.


Add this: 900 million Indians on smartphones. 95% on WhatsApp. 94% on YouTube. 84% on Instagram. We're constantly online, constantly exposed, constantly vulnerable.


Our Laws Aren't Ready.


You report it to police. They ask: "Is this defamation? Cheating? Fraud?" Nobody really knows. India doesn't have specific deepfake laws yet. The government proposed amendments to the IT Act for 2025, but "proposed" isn't "passed." Meanwhile, your stolen face is everywhere.


Countries like the UK criminalized non-consensual deepfake pornography years ago. India still relies on outdated laws not designed for this. The police are playing catch-up while technology advances exponentially.


What Makes India Uniquely Vulnerable?


We're hyperconnected. 900 million internet users means massive digital footprints. Every video, every interview, every selfie is training data for someone's deepfake.


We trust our celebrities. Bollywood isn't entertainment—it's part of our identity. When a familiar face tries to sell something, we believe it.


We chase financial security. A country dreaming of quick returns and a better life is the perfect hunting ground for financial deepfake scams. "Your relative needs help. Urgently."


We're not all digitally literate. Older adults especially aren't equipped to spot deepfakes. A grandmother in a tier-2 city won't question a video call from her "grandson." Scammers know this.


We're poorly regulated. Unlike the EU's strict AI regulations, India's framework is still forming. Criminals operate in this vacuum freely.


The Real Cost Goes Beyond Money.


Yes, ₹70,000 crore is mind-boggling. But the real cost is incalculable.


There's the social cost: A woman's reputation destroyed. A man's career ended. Families torn apart by mistrust. A girl's future compromised.


There's the psychological cost: Anxiety about your face being used. Paranoia. Depression. Suicidal ideation among victims.


There's the democratic cost: Deepfake incidents grew 280% in 2024 in a country holding massive elections. Imagine a fabricated video of a political leader dropping before voting day, spreading across WhatsApp before fact-checkers can respond.


And then there's the erosion of truth itself. If seeing is no longer believing, what's left to trust?


What Actually Needs to Happen?


The government must move fast. Deepfake-specific IT Act amendments can't be "coming"—they need to be here.


Tech companies must step up. WhatsApp, Instagram, YouTube need AI-powered deepfake detection. They need to make reporting easy. Right now, most Indians don't even know how to report deepfakes.


We need real digital literacy. Schools should teach kids to spot deepfakes. Parents should learn. Grandparents should learn. Every Indian should know the warning signs: odd eye movements, unnatural blinking, background inconsistencies.


Companies must protect their people. Employees should never authorize large transfers based on a video call alone. Verify separately.


We need detection tools accessible to everyone. Tools exist, but how many Indians know about them?


We must support victims. Counseling. Legal aid. Community support. Right now, they're suffering in silence.


The Uncomfortable Truth.


Deepfakes are the new acid attacks because both are attacks on identity. Both leave permanent scars. Both happen to people who didn't deserve it. Both happen in systems that fail to protect victims.


But deepfakes are worse in one fundamental way: they're contagious. They spread. They replicate. An acid attack destroys one person's face. A deepfake can destroy millions of people's trust in reality itself.


The technology won't go backwards. Deepfake creation tools are becoming cheaper, easier, more realistic every day. Within two years, seeing alone won't be enough to believe.


So we can do nothing and watch deepfakes become normalized—more money stolen, more women silenced, more elections influenced, more truth eroded. Or we can act now: demand better laws, support victims, educate ourselves, hold tech companies accountable.


Because right now, deepfakes are the new acid attacks. And most of us are still pretending we don't see it happening.


FAQ.


Q: How can I tell if a video is a deepfake? A: Look for unnatural eye movements, inconsistent blinking, odd lighting, audio that doesn't match lips, and background inconsistencies. But honestly, the technology is improving so fast that visual detection is becoming unreliable. Your best defense is skepticism—if something seems too convenient, verify it separately before acting.


Q: What should I do if I find a deepfake of myself? A: Document it. Report it immediately to the platform (Instagram, YouTube, WhatsApp, etc.). File a complaint with your local police cybercrime cell. Reach out to digital rights organizations. Persistence helps.


Q: Are deepfakes illegal in India? A: Not yet with a specific law. They can fall under defamation, cheating, fraud, or harassment laws depending on context. The government is working on IT Act amendments, but these haven't been finalized as of 2025.


Q: How do people create deepfakes? A: Using free or paid AI tools available online. Some need just 60 seconds of voice or a few photos of your face. This accessibility is why the problem is so widespread.


Q: Which platforms are most affected? A: WhatsApp, Instagram, YouTube, Facebook, and Telegram are the biggest vectors. Financial investment platforms are heavily exploited for scams.


Q: How can I protect my family? A: Teach them skepticism. Create an authentication protocol—a phrase or code family members use to verify each other's identity. Show them how to verify celebrity endorsements on official accounts. Never authorize transfers based on video calls alone.


Q: Are there deepfake detection tools? A: Yes, but most require technical knowledge. McAfee has a detector, but adoption is low. Your best tool remains human skepticism and verification through known channels.

