Responsible AI is everybody’s job: AWS’s Diya Wynn on building guardrails that unlock innovation
Across cloud and AI, the conversation is shifting from what we can build to what we should build. Powerful models are arriving faster than many organizations can adapt to them, and the real challenge now is ensuring these systems are safe, fair, and trustworthy for the people who use them every day.
In this exclusive interview during the AWS re:Invent 2025 event in Las Vegas, Diya Wynn, responsible AI lead at AWS, explains why responsible AI and innovation are not enemies, where the real risks are today, and how companies of any size can start building AI systems that are both powerful and safe.
How do you define responsible AI, and who should care about it inside an organization?
I treat responsible AI as a holistic approach, not a purely technical feature, and not something only data scientists or engineers should worry about. It is the way you design, develop, deploy, and use AI to minimize risk and maximize benefit.
When I look at it within an organization, I consider four dimensions at once: culture, people, processes, and technology. You need a culture of responsibility, clear roles and skills, processes that embed checks into the lifecycle, and technical tools that support those choices rather than fight them.
That means the responsibility is shared. Procurement teams play a role when buying AI systems or large language models from vendors and need to know which questions to ask.
What are the biggest risks you are seeing in organizations’ use of AI, especially regarding deepfakes?
One of the biggest risks I see is the belief that responsible AI and AI innovation are opposites. When people think they cannot do both, they tend to ignore responsibility and just run fast. That is where they introduce risk for their customers, erode trust in their products, and potentially cause real harm.
There is also an opportunity cost. If you do not consider responsible AI from the start, you may miss entire groups of stakeholders your product should serve. The system might look successful in one narrow context, but quietly disadvantage others.
For startups, SMBs, and large enterprises, what is the smartest starting point for an AI journey?
I always start with the same three foundations, regardless of company size.
First, define your core principles or dimensions. Within AWS, we discuss dimensions such as fairness, robustness, veracity, security, and privacy. Veracity is about how truthful and reliable the system’s responses are. Each organization needs to be explicit about which dimensions matter and how they will be tested and measured.
Second, secure leadership alignment. As with any digital transformation, responsible AI will not succeed without leadership support. You need leaders who are willing to prioritize it, fund it, and hold teams accountable, not just sponsor experiments.
Third, invest in education. Every team that touches AI needs to understand its responsibility. That starts with a shared definition of what responsible AI means for your organization. Additionally, engineers and data scientists need concrete practices to follow and tools that make those practices feasible day-to-day.
After that, the specifics can be scaled. You assess how heavily you use AI, how mature your organization is, and how many applications you are deploying, then decide where you need more structure, tooling, or governance. A small startup should not try to replicate everything a global enterprise does, as it can feel overwhelming and unnecessary.
Who should oversee responsible AI within a company?
There is no single perfect answer, because structures differ. In some organizations, a security assurance or risk function is already responsible for compliance and audits, making it the natural owner of responsible AI governance. In other cases, it might sit within legal or in a new center of excellence that partners with product teams.
At AWS, several teams share this work. Some teams work directly with our product and service groups to evaluate and test the systems they build. My role is in public policy, where we focus on how responsible AI intersects with laws and regulations. We also have governance functions that assess our compliance with emerging rules and requirements.
Additionally, we try to champion responsible AI across the organization, not just within a single team. The goal is to weave it into how people think about building products and services, rather than treat it as a separate checklist at the end.
Are there extra costs for companies that use AI irresponsibly, especially as regulation gets tougher?
There can absolutely be financial costs, and they are increasing. Some jurisdictions already have AI-specific laws. For example, some U.S. states have rules governing automated decision-making in hiring, and the EU AI Act includes potential monetary penalties for non-compliance. Those frameworks are still evolving, but they signal that regulators are prepared to impose real costs on misuse.
There are also reputational and operational costs that are just as important. If you deploy a system that customers perceive as unfair, misleading, or simply wrong, you damage your brand and erode trust.
Given how quickly AI is evolving compared to earlier web eras, how do you imagine regulation can realistically keep up?
I do not think regulation will ever keep pace with technology. The way we make law, build consensus, and vote simply moves more slowly than modern software and AI development.
What I find encouraging is that many regulatory efforts focus on harms. Instead of trying to name and regulate every specific model or technique, they assess the impact on people and organizations and then design rules around that. That approach is more durable.
Inside AWS, we created our responsible AI strategy before many of the current laws were on the table, and we use it to guide how we develop and deploy technology. We continue to educate teams and customers and build services that help them apply those ideas in their work.
Where do you personally draw the line and say a use case is too irresponsible?
One phrase I often return to is that just because AI can do something does not mean it should. There are use cases we can technically support that I would not consider appropriate or safe.
A clear example is imposing a moratorium on certain AI uses in law enforcement, such as certain applications of computer vision. That was a case where we decided that the risk and potential for misuse outweighed the benefits, so we declined.
For me, responsible AI gives you a path to make those decisions deliberately. It forces you to ask who might be harmed, who might be excluded, how it will perform in practice, and whether it is acceptable. And it gives you permission to say no when the honest answer is no.







































