The recent arrest of a Mumbai-based trafficker by the Central Bureau of Investigation (CBI) has brought renewed attention to fake job scams in the country. Investigators say the accused lured Indians with promises of lucrative overseas jobs, only to send them to Myanmar, where they were forced into running online scams under coercion.
While the case itself is alarming, it is far from isolated. It is part of a rapidly expanding network that combines human trafficking with cybercrime. Across India, fake job rackets have surged, with thousands of individuals targeted every year. In 2024 alone, over 123,000 digital fraud cases, including job scams, were reported, highlighting the scale of the problem.
What is emerging is not just a law-and-order issue, but a complex, global system where technology, crime, and exploitation intersect.
From Phishing To Platforms, What Is AI-Powered Cybercrime?
Cybercrime has evolved far beyond basic phishing emails and small-time fraud. It is now increasingly powered by artificial intelligence (AI), making scams more convincing, scalable, and difficult to detect.
AI tools are now being used to automate messages, mimic human behaviour, and even generate fake identities. Fraudsters can deploy chatbots that engage victims in real-time conversations, craft personalised messages based on online data, and create deepfake audio or video to build trust.
“AI-powered cybercrime is the systematic application of machine learning and generative AI to automate, scale, and precision-target attacks that previously required significant technical expertise and manual effort. Modern threat actors are using large language models to scan vulnerability databases, correlate breach data across millions of records, and generate phishing content that accurately mimics the tone and style of a known contact. The WormGPT and FraudGPT tools, which surfaced on underground forums in 2023, were early indicators of where this was heading. What makes AI particularly dangerous in criminal hands is not just speed but adaptability. These systems learn from failed attempts and self-optimize, making each subsequent attack iteration more effective than the last,” said Dipesh Ranjan, Senior Vice President, ANZ & Europe, GSI at Cyble — an AI-powered cybersecurity company.
How Is Human Trafficking Linked To These Scams?
Fake job offers are circulated through social media platforms, messaging apps, and recruitment agents. These offers promise high-paying roles in IT support, customer service, or digital marketing abroad. For many young Indians seeking better opportunities, especially those from smaller cities, these offers appear credible.
Once recruited, victims are transported to countries such as Myanmar, Cambodia, or Laos. Upon arrival, their passports are confiscated, and they are confined to heavily guarded compounds.
Inside these facilities, they are forced to carry out online scams targeting users across the world. Refusal to comply can result in physical abuse, confinement, or threats. In effect, victims are turned into operators within a global fraud network.
“Human trafficking is increasingly linked to digital fraud through organised scam enterprises, particularly in Southeast Asia. Victims of trafficking can be made to work in illegal cyber fraud enterprises where they carry out various forms of scams, such as romance or investment fraud (some of which are carried out as a result of phishing). Victims are then often forced to target other individuals through rules of coercion and abuse. This is combined with financial crime to create a business model for human trafficking at a scale that is comparable to a call centre, where scams are executed with the use of coercion to exploit individuals and digital deception,” said Anurag Jain, Founder & CEO, Oriserve – a generative AI platform.
How Do These ‘Scam Centres’ Work?
They operate like organised offices, with structured teams, defined targets, and performance metrics. Workers are assigned specific regions or types of scams, ranging from investment fraud and cryptocurrency schemes to romance scams.
The integration of AI has significantly increased efficiency. Fraudsters can now target thousands of individuals simultaneously, analyse responses in real time, and refine their tactics to maximise success rates.
This level of organisation has led experts to describe these operations as “factories of digital fraud,” where human labour and advanced technology combine to generate large-scale financial crime.
Jain explained that cybercrime has now “evolved” into a structured “industry” with an “ecosystem” where criminals have specialised roles such as malware developers, data brokers, phishing specialists and money mules. “Many of these roles exist within a ‘fraud as a service’ model where the cybercriminal provides the necessary tools, the data used to perpetrate scams, and/or other necessary infrastructure on a subscription or commission basis. This ‘division of labour’ allows threat actors to engage in complex or sophisticated attacks, regardless of their education or skill level. Cybercrime today also shares the same characteristics of a legitimate business in regards to global coordination and tracking of performance, as well as profit sharing.”
Why Is AI Making Scams More Personalised And Harder To Detect?
According to Jain, cybercriminals use AI to dissect large amounts of personal data drawn from social media, data breaches, and online activity to create highly targeted messages that reflect a person’s interests, behaviours, and communication style.
He further said, “Deepfake technology can reproduce both the audio and video of a specific individual, making messages appear more credible, while AI-based chatbots allow for an almost immediate, human-like interaction. These innovations create credibility by removing traditional red flags such as poor grammar, language, and generic messages from the equation, making scams look more real and far less distinguishable by people and security systems alike.”
Ranjan echoed this view, saying the shift from generic to contextual fraud is the defining threat characteristic of this decade. “AI enables criminals to cross-reference a target’s LinkedIn activity, public social media posts, and data from prior breaches to build a behavioural profile that informs a highly tailored attack. A business email compromise attempt, for example, no longer arrives as a poorly formatted wire transfer request. It now references an ongoing project, uses the target’s first name, mirrors the writing style of a known colleague, and arrives at a time the target is likely to be distracted.”
In 2024, a finance employee at a multinational firm was defrauded of approximately $25 million after attending a video call with AI-generated impersonations of senior colleagues, he added. “Detection systems trained on historical attack patterns are structurally ill-equipped to handle this level of contextual manipulation.”
Why Has India Become A Target For AI Scammers?
“India presents a very unique combination — a large talent pool, high job-seeking intensity, and a rapidly digitising recruitment ecosystem. Millions of candidates are applying through online platforms, which creates the perfect environment for scams to blend in. Add to that economic pressure, aspiration for global roles, and sometimes limited awareness of recruitment processes, and you have a highly targetable audience. Scammers are essentially exploiting scale and urgency; they know that even if a small percentage responds, the volumes make it worthwhile,” said Sonica Aron, Founder & Managing Partner, Marching Sheep – an HR advisory firm.
India thus plays a dual role, serving both as a prime target for these scams and as a source of trafficked workers forced to run them. That position places it at the centre of the global cybercrime landscape, amplifying both the risks and the consequences.
Ranjan said India’s profile as a cybercrime target is directly related to its digital growth trajectory. With over 900 million Internet users, a rapidly expanding UPI-based payments ecosystem, and a large population of first-generation digital finance users, the country offers both scale and exploitability. “Threat actors have demonstrably used India as a testing environment for fraud techniques before deploying them in Western markets, partly because the volume of targets allows rapid iteration and partly because cross-border attribution remains difficult. The rise of parcel scams, fake customs duty fraud, and AI-driven investment scheme fraud targeting Indian users on WhatsApp is consistent with this pattern. The underlying vulnerability is not unique to India but is amplified here by the pace at which digital adoption has outrun awareness and regulatory infrastructure,” he explained.
How Should People Spot Fake Job Offers?
“Some red flags include unsolicited messages about jobs, unsustainable salary expectations, general descriptions of jobs, and most importantly, repeated requests for advance money or personal information. We can also pay close attention to how they communicate. For example, communication from unofficial channels (e.g., WhatsApp) or generic email accounts is a red flag. Creating a sense of urgency and requiring fast decisions are also common tactics of scammers. To prevent falling victim to job scams, it would be wise to verify the official website of the potential employer, evaluate recruiters against their professional profiles, and avoid any requests for advance payments while seeking employment,” explained Jain.
Aron said any request for money, whether for processing, training, or a visa, is a “big warning sign”. Aside from communication from unofficial email IDs, unstructured interviews, or “offers that are too good to be true”, candidates should look for “poor spelling, inconsistent company details” to spot a fake job. “My advice is to pause and verify on the company’s official website, and never share personal or financial information without due diligence,” she added.
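Many of these red flags can be checked mechanically. The sketch below is a toy Python heuristic, not a production fraud detector: the keyword lists, weights, and the flagged email domains are illustrative assumptions, chosen only to show how the signals the experts describe (advance-fee requests, manufactured urgency, too-good-to-be-true promises, and generic sender addresses) might be scored.

```python
# Illustrative sketch only: scores a job-offer message against the
# red flags described above. All keyword lists and weights are
# assumptions for demonstration, not a real fraud detector.

# Phrases tied to the red flags experts cite.
ADVANCE_FEE = ["registration fee", "processing fee", "training fee",
               "visa fee", "security deposit", "advance payment"]
URGENCY = ["act now", "limited slots", "respond within", "immediately",
           "last chance", "urgent"]
TOO_GOOD = ["no experience needed", "guaranteed job", "earn up to",
            "work from home and earn"]
# Free webmail domains are a weak signal that the recruiter is not
# writing from an official company address.
GENERIC_DOMAINS = {"gmail.com", "yahoo.com", "outlook.com", "hotmail.com"}

def score_job_offer(text: str, sender_email: str) -> int:
    """Return a crude risk score; higher means more red flags."""
    text_lower = text.lower()
    score = 0
    score += 3 * sum(p in text_lower for p in ADVANCE_FEE)  # money asks weigh most
    score += 2 * sum(p in text_lower for p in URGENCY)
    score += 1 * sum(p in text_lower for p in TOO_GOOD)
    domain = sender_email.rsplit("@", 1)[-1].lower()
    if domain in GENERIC_DOMAINS:
        score += 2
    return score

if __name__ == "__main__":
    msg = ("Congratulations! You are selected for a data entry role abroad. "
           "Pay the processing fee today. Act now, limited slots!")
    print(score_job_offer(msg, "hr.recruiter99@gmail.com"))  # prints 9
```

A scorer like this would miss the AI-personalised scams described earlier, which deliberately avoid crude tells; as the experts note, verifying the employer through official channels remains the only reliable defence.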