
In recent months, Pakistan's social media space has seen a surge of AI-driven disinformation, with dozens of doctored videos and images pushed by accounts linked to the security establishment. Investigations by journalists and analysts reveal that many of these viral posts are AI-generated "deepfakes" designed to inflame tensions and spread false narratives. For example, a fabricated AI-generated clip showing Air Chief Marshal V.S. Singh criticizing India's Tejas fighter program was traced to an X account with ties to the Pakistani military establishment.
Similarly, fact-checkers exposed a digitally altered report about former Indian Army Chief V.P. Malik, in which Pakistani propaganda handles circulated a clip of Malik spouting fake communal rhetoric. These AI manipulations often mimic real news formats, such as TV reports or social media clips, but exhibit uncanny audiovisual glitches: repetitive eye movements, clipped speech, and misaligned lip-sync that betray their synthetic origin. In each case, fact-checkers have debunked the posts and found no credible sources for the claims. Nevertheless, Pakistan's intelligence agencies have become adept at spreading disinformation on social media to generate widespread confusion, maliciously target their opponents, and disrupt peace.
Two of the most prominent victims of this campaign are international journalists Yalda Hakim and Palki Sharma Upadhyay. In early December 2025, a deepfake video of Sky News anchor Yalda Hakim interviewing Imran Khan's sister Aleema Khanum went viral on Pakistan-linked feeds. The manipulated clip had Aleema purportedly calling Pakistan's army chief a "radicalized Islamist" and blaming him for seeking war with India. Sky News and Hakim immediately denounced the post as a "terrifying" fake, noting that the real interview contained no discussion of India or its army. Hakim herself tweeted that "this clip is completely fake" and was never aired by Sky News, and the network clarified that Pakistani politicians had "twisted her words" to fabricate the content. This incident clearly shows that the Pakistani military, through ISPR-handled social media accounts, will target anyone, including international figures, who is seen as sympathetic to Imran Khan.
Similarly, AI-manipulated videos of Firstpost's Palki Sharma Upadhyay (and other Indian journalists) have been circulating in Pakistani networks. These fake clips showed Palki promoting Indian government-backed financial investment platforms or questioning diplomatic protocols for the Indian Prime Minister's visit to Jordan. The viral videos on Pakistani social feeds were entirely AI-generated; no actual broadcast or transcript exists. Several such examples suggest that state-backed Pakistani social media profiles have intensified their disinformation against India. Their only aim is to create civil-military tensions and religious divides in India through such AI-generated fake clips.

These incidents fit a pattern identified by specialists: clusters of Pakistani X/Twitter accounts, often self-presenting as Indian news consumers, synchronously amplify AI-faked stories to sow discord. One analysis found that key accounts all listed "Pakistan" as their location and exhibited coordinated behavior, classic signs of a troll farm operation. The disinformation networks do not confine themselves to the India-Pakistan rivalry. Inside Pakistan, even journalists and civilians have been targeted by AI hoaxes. Dawn columnist Sheraz Khan documented, for example, an account "PakVocals" that posted a deepfake video of Pakistani reporter Benazir Shah supposedly caught partying in a nightclub. The intent was clearly to discredit her with a lurid, false story.
In each case, social media followers encountered shocking "AI slop" that fueled outrage until expert scrutiny revealed it as fabricated propaganda. Even international conflicts have been warped in this Pakistan-led disinformation campaign. For example, during the 2025 Israel-Iran war, several Pakistani news outlets aired an AI-tampered video of an Israeli studio supposedly being invaded, not realizing the footage was entirely fake. This example underlines the breadth of the crisis: Pakistani social media is awash in viral deepfakes and recycled footage, from local targets to foreign conflicts, all spreading rapidly without editorial checks.
What links these incidents is not random chaos but the explicit backing of the Pakistani state. Multiple sources suggest that Pakistan's military intelligence apparatus is intertwined with the disinformation campaign. One report noted that the "PakVocals" account was followed by Pakistan's Information and Broadcasting Minister, Ataullah Tarar, suggesting high-level interest or endorsement from the country's top leadership. Furthermore, the coordination style, including rapid deletions after posts and networks amplifying one another, resembles a managed influence operation more than the work of random amateurs. In media statements and press briefings, Pakistani officials have acknowledged an "organized disinformation" problem even as they publicly accuse others of it.
For example, Interior Minister Mohsin Naqvi repeatedly warned against "fake news" and accused Imran Khan's party leaders of running an anti-state campaign from abroad. He framed independent bloggers and exiled critics as traitors "joining the enemy," promising they would be "brought back" to face charges. Analysts claim this mirrors the disinformation war's logic that the same state machinery creating propaganda abroad also justifies internal repression by branding dissent as "foreign" plotting.
These revelations come amid a broader crackdown on Pakistan's online information space. The government has openly blamed rampant rumors, such as reports of Imran Khan's death in custody, for stoking panic, and has created new cybercrime units to police "fake news." Yet critics worry the timing is telling, as most of the purported fake-news targets are Imran's supporters or independent journalists. Pakistan's draconian media laws give state agencies even greater leeway to silence PTI leaders and critics of the military on social media. These growing AI disinformation campaigns, and the legal muscle that accompanies them, are clear evidence of a systemic military-backed propaganda machine in Pakistan.
Deepfake clips of Yalda Hakim, Palki Sharma, and other known local and international personalities are just the most visible examples. These synthetic videos spread widely on Pakistan-linked X/Twitter and Facebook pages, repeatedly echoing pro-army and partisan narratives. Multiple investigations now warn that Pakistan has become a "hub" for cheaply made AI propaganda, with virality as its chief goal. The result is a new disinformation arms race: quality deepfakes are proliferating daily, underwritten by powerful backers and skewing public perception. The trend is troubling for regional stability and for Pakistan's own information ecosystem, and countermeasures will require international vigilance to stop Pakistan from spreading mass disinformation on social media.
[Disclaimer: This is an authored article by Sarral Sharma, currently a Ph.D. scholar at the Special Centre for National Security Studies (SCNSS), Jawaharlal Nehru University, Delhi. He was a South Asian Voices Visiting Fellow in 2019-2020. He also served as a Consultant at National Security Council Secretariat (NSCS), Government of India and was a Senior Researcher at the Centre for Internal and Regional Security (IReS), Institute of Peace and Conflict Studies (IPCS), a New Delhi-based think tank. Views expressed in this article are author's own.]
[This article was first published in International Business Times, Singapore edition.]




