Securing the GenAI era: AWS’s CJ Moses on cloud trust and supply chains
Generative AI is accelerating change in cybersecurity faster than most previous technology waves. It is expanding threat actors’ capabilities, lowering the barrier to entry for malicious activity, and, at the same time, giving defenders new ways to automate, scale, and respond. This constant escalation is landing at the heart of cloud security, where trust, resilience, and simplicity are becoming equally important requirements.
At AWS re:Invent 2025, these themes have become harder to ignore as both enterprises and consumers push more of their personal and mission-critical data onto the cloud. CJ Moses, chief information security officer (CISO) and VP of Security Engineering at Amazon, said the priority remains unchanged: customers own their data, and security must evolve faster than adversaries while remaining practical for real-world builders.
In this special interview, Moses discusses how GenAI is reshaping both sides of the cyber battlefield, why supply chain and open-source risks demand deeper discipline, how Amazon is addressing AI-enabled hiring fraud, and what “verify then trust” means in the future of security teams.
Is generative AI reshaping cybersecurity for both attackers and defenders?
Yes, it is changing the game on both sides. From the attacker’s perspective, GenAI broadens capability. Someone who couldn’t code before can now ask a model to generate the tools they need, which increases both the volume and variety of potential threat actors.
But defenders can turn the same dynamic around. GenAI helps security engineers and broader technical teams prepare faster and respond more effectively. Like earlier step changes in technology (the computer, the internet, the cloud), GenAI is another phase in a long cat-and-mouse cycle. The priority is always to stay ahead of adversaries by using the technology more effectively and more responsibly.
How is cloud security evolving for businesses today in protection, resilience, simplicity, and accessibility?
If I look back to where we were just a few years ago in the cybersecurity space, much of the industry was focused on tactical threats and near-term issues. Whatever was right in front of you was where attention went.
I think that, with advances in technology, we’re better able to protect against many common threats now. What we’re seeing on both sides, threat actors and defenders, is everyone upping their game. That elevates security to a higher-level strategic effort, which is a good thing.
I often say the industry is starting to eat its vegetables. Do the basic things well. Don’t be the low-hanging fruit. And we have to keep evolving quicker than the adversaries.
How does AWS protect customer data in the AI era?
The number one principle at AWS has not changed from day one. Customers own the data. That means we are not going in and manually reviewing that data or training AI on it. The customer controls it. We’ve always focused on ensuring customers not only control their data but also can secure and use it appropriately.
In AI specifically, we’ve enabled deployment in environments the customer manages and controls. They control the AI, the models used, how those models learn from their own data, and the level of access they’re granted. The models are not being trained or used outside the scope defined by the customer. That principle of data ownership extends directly to the AI domain.
How seriously does AWS view supply chain attacks across hardware and software?
We take supply chain security very seriously, and you need to consider it from both hardware and software perspectives.
From a hardware standpoint, we follow fundamental principles in our data centers. For example, when a new rack server is rolled in, we rewrite the software and firmware on those devices before they are powered on, so we know exactly what’s running on them. That limits the attack surface.
In high-security areas such as identity and access, we also use proprietary hardware, software, and chipsets we’ve built ourselves, so we’re not relying on outside supply chains as much. There are always external parts, but we have a deliberate program to mitigate those risks.
Software, though, is probably the bigger space now. If you can secure the hardware base, the next primary attack vector becomes limited to software and firmware.
How does AWS manage open-source risk securely?
Open source can be dangerous, yes. But it also has a strength. You don’t just have committers. You have the eyes of the world. It’s like shining sunlight into a dark space. Many problems come to light because so many inquisitive people are looking.
On our side, we’re not going to just take a package off the internet and run it. We review the code in depth to understand what we’re using. We find issues not only in open source but also in broader software environments.
We also have the engineering depth to reduce and maintain what we need, and when appropriate, we contribute back to the open-source community.
How are you dealing with AI-assisted identity fraud in hiring and “fake developer” interview attempts?
This is a real problem. There are now toolsets that help interviewees interpret questions and generate answers in real time.
The number one deterrent is in-person interviewing when needed. Even in virtual interviews, we’re wise to the tooling. When we get generic answers, that’s an indicator to go deeper. Once you move into the next layer, it becomes clear whether someone’s claimed expertise actually matches reality.
Even if someone is hired, they’ll be found out quickly because the coding environments we provide don’t allow access to external capabilities someone might rely on to pass interviews artificially.
We’ve also added requirements for specific senior roles in which the final stages involve on-site human verification. This is part of raising the bar with every hire. We’ve seen real-world cases in the broader industry involving deceptive hiring pipelines tied to threat ecosystems. These are not theoretical risks anymore.
What security model does AWS rely on for remote work, and why not traditional VPNs?
We don’t use legacy VPN-style approaches as the default. In some cases, depending on roles, we rely on workstations designed for specific tasks, but that’s a small subset.
For most of our employees, the capability is based on broad techniques applied across laptops and identities, whether you’re remote or not. The reality is that most employees are remote at some point. We’re all in Vegas now. I still need to be able to fire up my laptop and work safely from here.
We invested long ago in a zero-trust environment with strong authentication and hardware-backed protections. The aim is to make credentials much harder to reuse or extract, and to detect abnormal behavior quickly.
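The distinction between a legacy VPN model and the per-request approach Moses describes can be sketched in a few lines. This is an illustrative assumption, not AWS’s implementation: every name, field, and threshold below is hypothetical, and real zero-trust systems involve far richer signals (device attestation chains, session context, continuous evaluation).

```python
# Hypothetical sketch of per-request zero-trust authorization, in contrast
# to a VPN model that trusts anything inside the network perimeter.
# All field names and the anomaly threshold are illustrative assumptions.

from dataclasses import dataclass


@dataclass
class Request:
    user: str
    device_attested: bool   # e.g. backed by a hardware security key or TPM
    mfa_passed: bool        # strong, phishing-resistant authentication
    behavior_score: float   # anomaly score from monitoring; 0.0 = normal


def authorize(req: Request, max_anomaly: float = 0.5) -> bool:
    # Every request is verified on its own merits; network location
    # (office vs. hotel Wi-Fi in Vegas) is deliberately not a factor.
    return (
        req.device_attested
        and req.mfa_passed
        and req.behavior_score <= max_anomaly
    )


granted = authorize(Request("builder", True, True, 0.1))
blocked = authorize(Request("builder", True, True, 0.9))  # abnormal behavior
```

The point of the sketch is that credentials alone are insufficient: hardware-backed attestation makes them harder to extract, and the behavioral check catches reuse even when they leak.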
How will human judgment and autonomous systems share the workload in security teams?
The future is both. You will see more automated and AI-enabled systems, and the more we can safely automate, the better. But AI today is not great at judgment. It’s still non-deterministic. Humans are also non-deterministic, but they’re the only ones we can rely on for high-stakes judgment right now.
We’ll see a progression. It goes back to verify and then trust. As AI builds a track record, it earns more responsibility.
In agentic systems, one of the best security principles is to deny any single agent broad access to everything. You give small pieces of scoped access, aggregate results, and often pyramid that up to a human decision. How deep that pyramid goes before it reaches a human is part of the evolution we’re living through.
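The scoped-access pattern above can be sketched as follows. This is a minimal illustration of the principle (narrow per-agent permissions, aggregated results, human decision at the top of the pyramid), not any actual AWS system; every class, name, and resource string is a hypothetical assumption.

```python
# Hypothetical sketch of the least-privilege agent pattern described above:
# each agent gets one narrowly scoped permission, results are aggregated,
# and the final decision is escalated to a human. All names are illustrative.

from dataclasses import dataclass


@dataclass
class ScopedAgent:
    name: str
    allowed_resource: str  # the single resource this agent may touch

    def run(self, resource: str, task: str) -> str:
        # Deny anything outside the agent's scope up front.
        if resource != self.allowed_resource:
            raise PermissionError(f"{self.name} is not scoped to {resource}")
        return f"{self.name} completed '{task}' on {resource}"


def aggregate_for_human(results: list[str]) -> dict:
    # Pyramid the scoped results into one summary a human signs off on.
    return {"findings": results, "decision_required_by": "human reviewer"}


log_agent = ScopedAgent("log-reader", "audit-logs")
net_agent = ScopedAgent("net-scanner", "vpc-flow-data")

results = [
    log_agent.run("audit-logs", "summarize anomalies"),
    net_agent.run("vpc-flow-data", "flag unusual egress"),
]
summary = aggregate_for_human(results)
```

The design choice is that no single agent can reach everything: compromising one agent yields only its slice, and the aggregation step is where a human (or, as trust is earned, a higher-level agent) exercises judgment.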
How can regulators and the private sector collaborate better to avoid impractical cybersecurity rules?
The more communication and collaboration we have, the better. Creating regulatory guidance that can’t realistically be followed doesn’t help anyone. It puts all of us at more risk.
There also needs to be clarity about shared responsibility. Some controls are owned by providers, some by customers. You can’t hold one party accountable for what another party must implement.
This is a flywheel. We continue to refine what’s practical, what’s aspirational, and what needs to be built into the technology itself rather than left as a theoretical requirement.