Krutik Poojara
 

While the world watches athletes compete for gold or diplomats sign treaties, a silent, high-stakes conflict is unfolding in the digital shadows. A single vulnerability in a ticketing platform, transportation system, or power grid can turn a moment of national pride into a geopolitical and economic crisis. What makes today's cyber front line uniquely dangerous is not only its scale but its speed. Artificial intelligence now allows attacks to unfold faster than human defenders can reasonably respond, elevating cybersecurity from a technical concern to an issue of national resilience and market stability.

We sat down with Krutik Poojara, a globally recognized cybersecurity expert and researcher, founder of Nandee.ai and Code Clinic, and a contributor within ISACA. Following his recent work on Cybersecurity on the World's Biggest Stage, Poojara explains why the next major global shock may not begin with a missile strike, but with a line of code failing under pressure.

Q: You have researched the security of global events such as the Olympics and World Cup. Why should business leaders care about the security of a sporting event?
Krutik Poojara:

Because these are no longer just sporting events. They function as massive, temporary smart cities. Preparing for events like the Olympics or the World Cup requires connecting power grids, transportation networks, ticketing platforms, biometric identity systems, and cloud infrastructure, often under intense timelines.

For CEOs and investors, the takeaway is clear. If a state-sponsored actor can disrupt a billion-dollar global event by exploiting a trusted dependency, the same tactics can be applied to corporate supply chains and digital business models. Cybersecurity must be viewed as sovereign and economic resilience, not routine IT maintenance. The stakes include national credibility, investor confidence, and economic continuity.

Q: You often emphasize threat modeling rather than traditional perimeter defense. Is the old approach no longer sufficient?
Krutik Poojara:

Traditional defense is reactive. It assumes attacks are rare and manageable. That assumption no longer holds.

Threat modeling forces organizations to think like adversaries before systems are built. We ask difficult questions early. Who would attack this system? What are they trying to achieve? What would failure actually look like?

In the age of AI-enabled attackers, assuming the attacker will eventually gain access is a more realistic starting point. If threat modeling is not done during the design phase, what we call shifting left, organizations are building systems that look secure but fail under real pressure.
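The adversarial questions Poojara describes can be captured as a simple design-phase enumeration exercise. This is an illustrative Python sketch, not a tool from the interview; the component names are hypothetical, and the threat categories loosely follow the well-known STRIDE taxonomy.

```python
# Illustrative design-phase threat-modeling sketch.
# Component names are hypothetical; categories follow the STRIDE taxonomy.

STRIDE = ["Spoofing", "Tampering", "Repudiation",
          "Information disclosure", "Denial of service",
          "Elevation of privilege"]

def enumerate_threats(components):
    """For each component, record the adversarial questions up front:
    who attacks it, through which category, and across which boundary?"""
    model = {}
    for name, crosses_trust_boundary in components:
        model[name] = [
            {"category": category,
             "crosses_trust_boundary": crosses_trust_boundary,
             "question": f"How could {category.lower()} affect {name}?"}
            for category in STRIDE
        ]
    return model

# Example: a ticketing platform decomposed before it is built.
threats = enumerate_threats([
    ("ticketing-api", True),
    ("payment-gateway", True),
    ("internal-reporting", False),
])
print(len(threats["ticketing-api"]))  # six STRIDE categories per component
```

The point of the exercise is not the data structure but the discipline: every component gets the same hostile questions asked before a line of production code exists.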

Q: Artificial intelligence is often framed as a productivity tool. You have warned about its weaponization. What concerns you most?
Krutik Poojara:

The biggest concern is scale. AI has lowered the barrier to entry for sophisticated attacks. Activities that once required elite expertise can now be automated and repeated continuously.

We are approaching a point where attacks are no longer individual events but coordinated, adaptive campaigns driven by automation. The next major incident may not resemble a traditional breach. It could involve AI-driven attack traffic overwhelming systems faster than human teams can intervene.

This is why we are seeing an arms race. Defenders must use AI responsibly to match the speed and scale of modern threats, not rely solely on human response.

Q: For non-technical executives, how should AI-driven cyber risk be understood?
Krutik Poojara:

Executives should think about AI in terms of leverage. AI allows small teams to operate at enormous scale, whether for innovation or attack.

Cyber risk is no longer linear. It compounds quickly. Organizations that succeed will not be the ones that try to prevent every incident, but those that can detect, decide, and respond rapidly. Speed and resilience matter more than perfect prevention.

Q: Many companies are rapidly integrating AI into products and operations. What security mistakes are you seeing most often?
Krutik Poojara:

The most common mistake is treating AI as a feature instead of an attack surface. Every AI system introduces new risks, including prompt injection, data leakage, model manipulation, and misuse of automated decision-making.

Too often, teams deploy AI without clearly defining trust boundaries. They focus on what the system can do, rather than what it must never be allowed to do. Without those guardrails, AI systems quietly gain excessive influence inside organizations.
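The idea of defining what an AI system must never be allowed to do can be illustrated with a deny-by-default guardrail check. This is a minimal sketch under stated assumptions: the action names are hypothetical, and a real deployment would enforce such boundaries at the policy and infrastructure layers, not in application code alone.

```python
# Illustrative trust-boundary guardrail for automated AI actions.
# Action names are hypothetical examples.

FORBIDDEN_ACTIONS = {"delete_records", "transfer_funds", "change_permissions"}

def within_trust_boundary(action: str, approved_by_human: bool = False) -> bool:
    """Deny-by-default check run before an automated action executes.
    High-impact actions require explicit human approval."""
    if action in FORBIDDEN_ACTIONS and not approved_by_human:
        return False
    return True

# An agent may summarize a report, but never move money on its own.
print(within_trust_boundary("summarize_report"))              # True
print(within_trust_boundary("transfer_funds"))                # False
print(within_trust_boundary("transfer_funds", approved_by_human=True))  # True
```

The design choice here mirrors the interview's framing: the list of forbidden actions is written down before deployment, so the system's influence is bounded by an explicit decision rather than accumulating quietly.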

Q: Your background includes research on identity and behavioral biometrics. Are passwords becoming obsolete?
Krutik Poojara:

Passwords are increasingly inadequate for today's threat environment, even though they remain widely used across the world. Passwordless adoption is growing, but static credentials still dominate most authentication systems.

The fundamental issue is that humans are not well suited to managing complex cryptographic secrets securely at scale. Password reuse, phishing, and credential stuffing continue to succeed because the model relies on something that can be guessed, stolen, or replayed.

The future of identity is adaptive and continuous. Instead of treating authentication as a single moment at login, modern systems assess trust dynamically using behavioral signals, device posture, and contextual risk throughout a session. In an era of AI-enabled attacks, identity must be continuously validated. Static credentials alone are no longer sufficient to defend against increasingly automated and scalable threats.
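A continuous, signal-based trust assessment like the one described can be sketched as follows. The specific signals, weights, and threshold are illustrative assumptions for the example, not a production scoring model.

```python
# Illustrative continuous-authentication sketch: trust is reassessed
# throughout a session, not only at login. Weights and the step-up
# threshold below are assumptions chosen for the example.
from dataclasses import dataclass

@dataclass
class SessionSignals:
    typing_rhythm_match: float  # 0..1 behavioral-biometric similarity
    device_known: bool          # device posture: is this a managed device?
    geo_velocity_ok: bool       # contextual risk: no "impossible travel"

def session_trust(sig: SessionSignals) -> float:
    """Blend behavior, device posture, and context into one score."""
    score = sig.typing_rhythm_match
    if sig.device_known:
        score += 0.2
    if not sig.geo_velocity_ok:
        score -= 0.5
    return max(0.0, min(1.0, score))

def requires_step_up(sig: SessionSignals, threshold: float = 0.6) -> bool:
    """Trigger re-authentication mid-session when trust drops."""
    return session_trust(sig) < threshold
```

In use, `requires_step_up` would run on every sensitive action: a familiar user on a managed device sails through, while a session showing unfamiliar typing rhythm from an unknown device is challenged again, which is exactly the shift from a single login moment to continuous validation.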

Q: Governments are drafting AI regulations worldwide. Do these efforts strengthen security or risk slowing innovation?
Krutik Poojara:

Regulation itself is not the problem. Poorly informed regulation is.

When policymakers work closely with practitioners, regulation can raise the security baseline and reduce systemic risk. The danger comes from checklist-driven governance that prioritizes documentation over engineering reality.

Security outcomes depend on design decisions, not paperwork. Regulation should encourage resilience, accountability, and transparency, not a false sense of compliance.

Q: Supply chain attacks continue to dominate headlines. How does AI change this risk?
Krutik Poojara:

Supply chains are now the most common attack vector. Organizations are rarely compromised directly. They are compromised through trusted dependencies such as cloud services, open-source components, data providers, or third-party AI models.

AI amplifies this risk. A compromised input can influence automated decisions across thousands of systems simultaneously. Digital supply chains must be treated like financial risk, continuously monitored and stress-tested with the assumption that failures will occur.
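The "assume failures will occur" stance toward trusted dependencies can be illustrated with a basic artifact-verification check. A minimal sketch: the pinned hash here is computed inline for the demo, whereas in practice it would come from a signed, independently distributed manifest.

```python
# Illustrative supply-chain check: verify an artifact against a pinned
# hash before trusting it. The pin is computed inline only for the demo.
import hashlib

def sha256_hex(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def verify_artifact(data: bytes, pinned_hash: str) -> bool:
    """Deny by default: reject anything that does not match the pin."""
    return sha256_hex(data) == pinned_hash

artifact = b"model-weights-v1"          # e.g. a third-party model download
pin = sha256_hex(artifact)              # in practice, from a signed manifest
assert verify_artifact(artifact, pin)
assert not verify_artifact(b"tampered", pin)
```

The same pattern generalizes to the monitoring Poojara describes: every trusted input, whether an open-source package or a third-party model, is checked on arrival rather than trusted by reputation.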

Q: Is cyber resilience something only large enterprises can afford?
Krutik Poojara:

Resilience is actually more critical for smaller organizations because they have less margin for error. Large enterprises may survive breaches. Startups often do not.

Resilience does not require massive budgets. It requires clarity. Organizations must understand their most valuable assets, their most likely adversaries, and their worst-case scenarios. A smaller organization with disciplined threat modeling can be more secure than a larger one overwhelmed by complexity.

Q: If you were securing a global event or a Fortune 500 enterprise today, what would you prioritize first?
Krutik Poojara:

I would start with failure planning. Instead of asking how to prevent every breach, I would ask what must never fail even if everything else does.

For global events, that usually means public safety systems and transportation. For enterprises, it is revenue continuity and customer trust. Once those priorities are defined, security becomes a business strategy rather than a technical afterthought.

Q: What does effective cyber leadership look like over the next five years?
Krutik Poojara:

Cyber leaders will increasingly act as systems thinkers rather than incident responders. They will speak the language of risk, economics, and resilience.

The most effective leaders will not promise zero breaches. They will focus on transparency, recovery, and continuity. In an AI-accelerated threat environment, trust will be the most valuable asset an organization can maintain.

Conclusion:
As global markets become more interconnected, cyber risk has evolved into a macroeconomic variable. From AI-driven trading platforms to digitally orchestrated global events, economic stability now depends on systems that operate largely out of public view.

Poojara's perspective reframes cybersecurity as a market signal. Breaches increasingly influence stock prices, regulatory action, sovereign credibility, and investor confidence with the same force as geopolitical conflict or supply chain disruption. In an era where AI accelerates both innovation and attack velocity, resilience rather than perfection has become the defining measure of institutional strength.

For business leaders, policymakers, and investors alike, the message is clear. The next global crisis may not originate on a battlefield, but inside a system that failed to anticipate how quickly code can become a weapon.