From pilots to production: Taimur Rashid on AWS’s gen-AI innovation
Across industries, generative AI is shifting from attention-grabbing demos to a focus on measurable results. Leaders no longer want experiments that exist only on slides. They want working systems that change how software is built, how operations run, and how decisions are made, without losing control over risk, security, and compliance.
This is precisely the goal of AWS’s Generative AI Innovation Center, which now encompasses 500 specialists across strategy, applied science, and engineering. It embeds with customers from ideation to production and has completed more than 1,200 engagements, with around two-thirds reaching live deployment.
At AWS re:Invent 2025, we spoke with Taimur Rashid, Managing Director of the Generative AI Innovation Center at AWS. He explained how customers should pick their first use cases, why Nova Forge and model customization matter, how memory and agentic architectures are evolving, and why he thinks AI will reshape not just applications but the structure of entire organizations.
Where should enterprises and SMBs start with generative AI?
We work with large enterprises, startups, and everything in between, and my honest answer is that there are many valid starting points, as long as you begin deliberately.
For organizations new to generative AI, we typically begin with education. Then we lean on patterns from the 1,200 engagements we have already run. The same early use cases recur, such as AI-assisted chat experiences, content summarization and generation, and internal assistants that support supply chain, procurement, or customer service.
What has changed in the last couple of years is how we frame that first step. Two years ago, I might have said, “Go experiment.” I still want people to experiment, but now we start with a business question. What outcome are you trying to move, and how will you measure it? That could be revenue, cost, risk, or productivity. Once you know that, you can decide whether generative AI is the right tool or whether a more classic machine learning approach is better.
From there, the entry path depends on skills and urgency. Builder-heavy teams often go straight into Amazon Bedrock or SageMaker and build something custom. Teams that need quick wins can use higher-level products that connect to their data and provide AI assistance for search, research, and reporting. If they lack in-house expertise, they can leverage the Generative AI Innovation Center or trained partners to co-build the first production system, rather than running isolated pilots that never ship.
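For the builder-heavy path described above, a first Amazon Bedrock integration can be quite small. The sketch below uses boto3's `bedrock-runtime` Converse API; the Nova model ID, prompt, and inference settings are placeholder assumptions, and the live call requires AWS credentials plus model access in your account.

```python
# A minimal sketch of the "builder-heavy" entry path: calling a model on
# Amazon Bedrock via boto3's Converse API. Model ID and settings are
# illustrative placeholders, not recommendations.

def build_request(prompt: str, model_id: str = "amazon.nova-lite-v1:0") -> dict:
    """Assemble keyword arguments for bedrock-runtime's converse() call."""
    return {
        "modelId": model_id,
        "messages": [{"role": "user", "content": [{"text": prompt}]}],
        "inferenceConfig": {"maxTokens": 512, "temperature": 0.2},
    }


def summarize(document: str) -> str:
    """Send a summarization prompt and return the model's text reply.

    Requires AWS credentials and Bedrock model access to actually run.
    """
    import boto3  # AWS SDK for Python; not needed just to build the request

    client = boto3.client("bedrock-runtime")
    response = client.converse(**build_request(f"Summarize briefly:\n{document}"))
    return response["output"]["message"]["content"][0]["text"]
```

Keeping request construction separate from the network call makes the integration testable before any model access is wired up, which suits the "first production system, not an isolated pilot" framing.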
How powerful can AI models with persistent memory be in real use?
People often use the term “memory” to refer to two distinct concepts. One is a training problem: catastrophic forgetting. That happens when you keep training a model, and it overwrites things it already learned. To address this correctly, you need access to both the model weights and the original training dataset.
The other is memory at inference time, which is really about context. We are still very early here, and this area is much bigger than it looks today, especially once you consider agentic architectures. In Amazon Bedrock, we have already introduced episodic memory as a primitive, but that is only one piece. You can imagine long-term memory, short-term session memory, and shared memory among agents, with clear rules on what can be shared and what must remain private.
Memory and context will become core building blocks of agentic systems. As we design multi-agent architectures and more complex workflows, how we represent, store, and govern that memory will be a key area of innovation for the industry.
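The distinctions above can be made concrete with a small, hypothetical sketch: per-agent memory with session, long-term, and shared scopes, where shared entries are gated by an allow-list. The class and method names are invented for illustration and are not a Bedrock API.

```python
from dataclasses import dataclass, field

@dataclass
class AgentMemory:
    """Illustrative agent memory with three scopes: short-term (per session),
    long-term (persistent), and shared (readable only by allow-listed agents)."""
    agent: str
    short_term: dict = field(default_factory=dict)  # cleared when a session ends
    long_term: dict = field(default_factory=dict)   # persists across sessions
    shared: dict = field(default_factory=dict)      # key -> (value, allowed agents)

    def remember(self, key, value, scope="session", share_with=()):
        if scope == "session":
            self.short_term[key] = value
        elif scope == "long":
            self.long_term[key] = value
        elif scope == "shared":
            self.shared[key] = (value, set(share_with))
        else:
            raise ValueError(f"unknown scope: {scope}")

    def read_shared(self, key, requester):
        """Return a shared value only if the requester is on the allow-list."""
        value, allowed = self.shared[key]
        if requester not in allowed:
            raise PermissionError(f"{requester} may not read {key}")
        return value

    def end_session(self):
        self.short_term.clear()
```

A planner agent could, for example, keep a draft in session memory, persist a customer attribute long-term, and expose an order ID only to a billing agent; the governance question in multi-agent systems is exactly who sits on those allow-lists.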
Do you expect agentic systems to replace traditional application architectures, or mostly sit on top of them?
I see three levels of adoption. The first is retrofitting existing software-as-a-service products. You look at a current SaaS architecture and ask where agents naturally fit, whether that is orchestration, user interaction, or background work. We explored this in a paper on rethinking SaaS with agentic architectures that maps those insertion points.
The second level is workflow-centric. You start from a process such as underwriting, claims handling, or order-to-cash, then ask where agents can modularize human judgment and plug into that workflow. In that case, you might not rewrite applications at all, but you would still add a layer of intelligence to coordinate tools, data, and actions.
The third level is AI-native or agentic-native applications. These are built from day one with agents, tools, and memory as primary building blocks. Many startups are already doing this. They are not trying to force agents into a legacy stack. They are designing the product around agentic concepts from the start.
In practice, enterprises will use all three patterns at once, depending on system age, risk, and strategic importance.
For smaller or simpler applications, can AI replace human developers?
We already see AI delivering a significant productivity boost to software developers. An experienced engineer can use an assistant to move faster. A junior engineer can suddenly explore patterns, libraries, and solutions that would have taken much longer before. It is like giving early-career developers a strong multiplier on their skills.
Over time, the value will shift from simple code suggestions to a deeper understanding of architecture and systems. That is why we are investing in frontier agents that embody expertise in domains such as security and DevOps and bring it into the development loop. You can imagine agents that help design system structure, write tests, generate documentation, and reason about integration, not just autocomplete a function.
At the same time, I do not see classic engineering disappearing. Humans remain responsible for the code that ships. Organizations need observability and evaluation around AI-generated changes because models can drift and fail in ways that are not always obvious. The pattern I believe in is human judgment combined with AI assistance and strong guardrails, which together deliver better, faster results than either would alone.
What separates successful generative AI deployments from the ones that struggle?
First, successful teams start with a clear business outcome. They know which metric they want to move, whether that is cost, revenue, risk, or customer satisfaction, and they measure against it.
Second, they have strong executive sponsorship. That matters both for prioritization and for budget. Without that support, experiments often stall between a proof of concept and a real production system.
Third, they are precise about the type of application they are building. An internal assistant for developers has a very different risk profile than a consumer-facing financial adviser. That shapes decisions on guardrails, responsible use, and monitoring.
Fourth, they have at least a minimal set of foundations in place. That means a workable data platform, a clear security posture, and a path from development to deployment. It does not need to be perfect, but there has to be solid ground for AI systems to land on.
Finally, the teams themselves are willing to learn. The best engagements feel like a partnership. Our role is not just to build. It is also to teach what we have seen across many industries. When customer teams are curious and engaged, production conversion and long-term value are much higher than when AI is treated as a one-off experiment.
How do you expect the hierarchy and structure of IT departments in small and medium businesses to evolve?
This goes well beyond IT. The structure of entire organizations will change.
The primary heuristic I use is that smaller teams can create a much greater impact. We already see this inside AWS. Matt Garman recently described how six people rewrote a significant piece of software in 76 days, work that might previously have required a much larger team and a much longer timeline. When you combine AI tools, agents, and skilled engineers, cycle times compress, and the coordination overhead of large teams becomes unnecessary.
As companies adopt AI across software development, operations, sales, marketing, finance, and legal, they will realize they can move faster with more nimble teams that are empowered by agents rather than surrounded by layers of manual processes. That will naturally push organizations to rethink traditional hierarchies.
This transformation will not always feel comfortable. Change never does. It will require conviction that the future will look different and a willingness to go through some adjustments. I want organizations to use AI to empower people, so agents take on more of the repetitive work in the background and humans focus on judgment, creativity, and relationships.