From reactive to proactive: Neha Rungta of AWS on the future of security, AI agents, and the cloud
Organizations are going through another technology inflection point. After the PC, the internet, and the cloud, AI and intelligent agents are now changing how software is built, how operations run, and how decisions are made. Code is being generated faster, systems are more interconnected, and the risk surface area keeps expanding.
Neha Rungta, Director of Applied Science at AWS Identity, works on the security implications of that shift every day. Her focus is on turning security from reactive, manual processes into proactive, embedded development, powered by automated reasoning, agents, and a common authorization fabric that works across clouds and systems.
In this exclusive roundtable, recorded during AWS re:Invent 2025 with MENA TECH in attendance, Rungta explains why she wants to flip the traditional 80–20 split between reactive and proactive security, how automated reasoning is shaping AWS’s policy stack, and why she believes we are still at the “pets.com phase” of the AI revolution.
How are organizations approaching security and access today, and what is AWS’s strategy to support them?
We’re at a real inflection point. Technology is changing jobs, processes, and the way we even think about problems. To understand where this is going, I like to look backwards.
Every time there was a significant shift, such as the industrial revolution, the printing press, and the arrival of computers, people worried that “the machines” would take all the jobs. What actually happens is that work changes and entirely new industries appear.
I believe we’re at a similar point now with AI and agents. We’re already seeing it in software development and even in manufacturing, where agents are being used to automate and optimize workflows.
The challenge is: how does security keep up? Security was already stretched in the IT era. Now, with coding assistants generating more code, the volume has exploded. Humans alone cannot keep pace.
Today, roughly 80% of security spend goes into reactive security, which is basically responding to incidents, chasing threats after something already happened. I think that is too late. That treadmill is not sustainable.
Our strategy is to flip that 80–20 split toward proactive security. We want to bake security into the entire development lifecycle, from the moment someone has an idea through design, implementation, testing, and deployment. When you do that, security becomes part of how you build, not just something you scramble to add at the end.
Has automated reasoning really moved the needle on zero trust?
On the authorization side, we launched Cedar, a common policy language that works across different contexts, containers, agents, and, soon, the new AgentCore policy engine. The idea is to provide a deterministic, machine-checkable way to define authorization in a multimodal, multisystem world. Underneath that sits our authorization services, which we design to be provably correct.
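For readers unfamiliar with Cedar, a policy is a short, declarative statement that an engine can evaluate deterministically for any request. The sketch below is illustrative; the user, action, and resource names are hypothetical, not from a real application:

```cedar
// Allow one named user to view one named photo.
// Anything not explicitly permitted is denied by default.
permit (
    principal == User::"alice",
    action == Action::"viewPhoto",
    resource == Photo::"vacation.jpg"
);
```

Because the policy is data rather than application code, the same statement can be evaluated, audited, and formally analyzed wherever the authorization decision happens.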
So, in that sense, we’ve answered part of the question. We’ve defined what “good” looks like. We’ve provided patterns and building blocks you can integrate to ensure your systems authorize correctly and interoperate securely at scale. The gap is that these tools only help to the extent that people actually use them. Nothing prevents someone from building a system with non-deterministic or ad hoc authorization.
The next step is behavior. End users and organizations need to say, “I will only connect to systems that follow this model.” Just like you don’t visit websites without valid certificates anymore, even though that was controversial years ago, we need a similar shift for agents and AI services.
We have to define soft but deterministic boundaries: clear rules about what is acceptable, but flexible enough to adapt as we learn. That’s where automated reasoning really shines; it lets you express those rules precisely and verify them across components and their interactions.
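Automated reasoning tools check rules like these exhaustively rather than by testing a handful of cases. As a rough illustration of the idea (a toy sketch, not AWS tooling), a tiny finite policy model can be verified by brute force in Python; real provers handle unbounded state spaces symbolically:

```python
from itertools import product

# Hypothetical toy model: a request is (principal, action, resource).
PRINCIPALS = ["alice", "anonymous", "service-a"]
ACTIONS = ["read", "write"]
RESOURCES = ["public-doc", "audit-log"]

def policy_allows(principal: str, action: str, resource: str) -> bool:
    """A deterministic toy policy: anyone may read public docs;
    only named (non-anonymous) principals may write."""
    if action == "read" and resource == "public-doc":
        return True
    if action == "write" and principal != "anonymous":
        return True
    return False

def verify(rule) -> list:
    """Check a safety rule over every possible request.
    Returns the counterexamples; an empty list means the rule holds."""
    return [req for req in product(PRINCIPALS, ACTIONS, RESOURCES)
            if not rule(*req)]

# Safety rule: anonymous principals must never be allowed to write.
def no_anon_writes(p, a, r):
    return not (p == "anonymous" and a == "write" and policy_allows(p, a, r))

print(verify(no_anon_writes))  # prints [] because the rule holds everywhere
```

The point of the sketch is the shape of the guarantee: instead of sampling behavior, you state a boundary precisely and check it against every reachable case, which is what automated reasoning does at scale.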
What are the most interesting security projects you’re working on right now?
One I’m especially excited about is the AWS security agent. It’s directly tied to the idea of flipping security from reactive to proactive. We’ve been using this agent internally for months. Our teams use it during design reviews and code reviews, and we’re seeing that it helps scale security knowledge across large parts of the organization, right where development is actually happening.
The long-term vision is for security to be so embedded in the development process that you no longer think of it as a separate function. Today, development, security, and operations are often separate lanes. As more repetitive security checks are handled by the agent, human security work can move to a higher level.
We’re starting at the application layer, but the vision is much broader: everything that touches your application code, infrastructure, and deployment strategy, across clouds, hybrid environments, and on-prem. That’s the kind of proactive security surface we’re aiming for.
With varying regulations, how do you design AWS products to meet customers’ needs in each market?
This is one of the hardest challenges. Compliance remains highly manual in many organizations. Every region, and often every country, can have its own rules, especially around data sovereignty and isolation.
Today, companies, including ours, often have to work with third-party experts in certain jurisdictions because it’s impossible to keep every engineer fully trained on every regulation.
This is another domain where agents can really help. Imagine you’re building a financial services app for Greece. You should be able to ask an agent: “What requirements apply to this type of application in this country?” The agent could retrieve the relevant regulations, map them to concrete technical controls, and continuously monitor your system as it evolves.
The goal is to move compliance from a painful end-of-process validation to an integrated approach built into design and development. We’ve seen this pattern before. When autopilots were introduced in aviation, certification processes had to change, and they differed between the US and Europe. AI will drive similar processes and cultural changes across software and infrastructure.
I don’t think humans alone can keep up with the complexity that’s coming. Some current manual compliance tasks will inevitably be automated, but that will also create new roles focused on defining, overseeing, and validating those agent-driven controls.
At what stage of the AI investment wave are we currently? Is current spending justified?
From a technology perspective, I believe we’re still in a very early stage, something like the early internet era. Think about pets.com, dancing robots, and the interviews where people asked, “Why would I listen to a baseball game on the internet when I have a radio?”
Streaming, online shopping, and cloud computing all felt unnecessary or strange at first. We’ve lived through that transition in our own lifetime. I think AI is in that same “this seems cute, but why do we need it?” phase for many people.
What we’re seeing now (code generation, chat interfaces, basic content creation) is the training ground. These are not the hardest problems in the world, but they are tangible and accessible. If the technology reaches its full potential, I expect significant, deeper transformations across areas such as physics, chemistry, synthetic biology, and advanced manufacturing.
Imagine AI helping design rocket systems, solve hard materials problems, or optimize complex manufacturing lines in ways we simply cannot today. That’s where the real efficiency gains and breakthroughs will come from.
Like the dot-com era, not every company or investment will survive. Some will disappear, but the underlying shift will remain. The acceleration is faster this time.
Is today’s infrastructure mature enough for AI’s future, or does it need a fundamental shift?
Right now, much of the conversation is about power. People are asking whether tech companies are becoming energy companies, because power is already a real constraint on what we can do with AI.
If AI’s most significant impact is in manufacturing and hard sciences, it might also help us create new forms of power: cleaner energy, more efficient infrastructure, and new materials. We have not yet gone through a full cycle where AI improves infrastructure, and that stronger infrastructure then feeds back into better AI, so it is hard to predict the full effect.
No, our infrastructure is not yet fully ready for that future. But the real question is not “AI or no AI.” It is how we choose to use it. We need to move step by step, solve hard problems, and treat security, safety, and access control as core design principles so AI becomes a force for positive change, not just another source of risk.