From call centers to AI teammates: AWS’s Colleen Aubrey shares her perspective
Across the cloud industry, AI is changing what customers expect from service and support. It is no longer only about answering calls quickly. Companies are under pressure to understand context across channels, read sentiment in the moment, and blend automation with human judgment in a way that feels natural, not mechanical.
This year at AWS re:Invent, that shift was evident in the Amazon Connect roadmap. AWS is pushing Connect from a classic contact center tool toward a full customer experience platform powered by agents. New releases span self-service flows with Nova Sonic voice, AI teammates for human agents, richer personalization based on clickstream and profile data, and deeper observability so enterprises can see exactly how AI behaves before and after it reaches production.
In this interview, Colleen Aubrey, Senior Vice President of Applied AI Solutions at AWS, explains to MENA TECH how she sees agentic AI reshaping customer contact, why multilingual support and model “personality” still need work, and why the most complicated problem remains the final stretch of applying AI in real businesses.
What are the key AI updates for Amazon Connect at re:Invent 2025?
At re:Invent 2025, we released 29 new features for Amazon Connect, which I group into four agent-focused areas.
First, self-service. We use Nova Sonic to support natural-language calls, where customers speak in their own words. In the background, AI agents can complete tasks such as updating appointments or schedules while the conversation continues.
Second, AI beside human agents. I really believe in the idea of an AI teammate, or several, sitting with a person. Connect now lets AI listen to the call, suggest responses, trigger back-office actions, fill out forms, and create summaries so people can stay focused on the customer.
Third, personalization. We use clickstream data and rich customer profiles to provide real-time recommendations to human agents, so every conversation is more tailored and less generic.
Fourth, observability and control. Enterprises want to see what AI is doing. We let them trace an interaction across channels, see the reasoning behind each step and the tools it used, and test flows before they go live. Alongside Nova Sonic for AI voice, we have added third-party options, including Deepgram, and models such as Llama.
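To make the tracing idea concrete: AWS has not published a schema for these traces, so the Python sketch below is only an illustration of the shape such a record might take. Every class, field, and value in it is hypothetical.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ToolCall:
    """One back-office action the AI agent invoked during a step (hypothetical)."""
    name: str          # e.g. "update_appointment"
    arguments: dict
    result: str

@dataclass
class TraceStep:
    """A single reasoning step in a traced interaction (hypothetical)."""
    channel: str                 # "voice", "chat", ...
    reasoning: str               # why the agent chose this action
    tool_calls: List[ToolCall] = field(default_factory=list)

@dataclass
class InteractionTrace:
    """End-to-end record of one customer interaction across channels."""
    interaction_id: str
    steps: List[TraceStep] = field(default_factory=list)

# A traced self-service call: the AI reschedules an appointment mid-conversation.
trace = InteractionTrace(
    interaction_id="demo-123",
    steps=[
        TraceStep(
            channel="voice",
            reasoning="Customer asked to move Tuesday's appointment to Friday.",
            tool_calls=[ToolCall(
                name="update_appointment",
                arguments={"appointment_id": "A-42", "new_date": "2025-12-12"},
                result="confirmed",
            )],
        )
    ],
)
```

Whatever the real schema looks like, the point is the pairing: each step records both the agent's reasoning and the concrete tool calls it made, which is what lets an enterprise audit behavior before and after production.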
How do you control the tone of AI voices to ensure they remain appropriate and on-brand?
With Nova Sonic, we can tune the voice along several dimensions. We can adjust tone, pacing, accent, and language, and we already support multiple languages.
We also consider personality. Internally, we use a mental model with a few key sliders: how lighthearted you want it to be, how empathetic it should sound, and where it should sit between creativity and strict accuracy.
Over time, we want to expose more of these controls directly within Connect so business owners can dial in their brand voice. The idea is that they can decide how warm, serious, or playful the AI should be, and the system then reflects those choices in day-to-day conversations.
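Connect does not expose such controls today, so the following is only a minimal sketch of what a slider-based brand-voice profile might look like. Every name and value here is invented for illustration, not an AWS API.

```python
from dataclasses import dataclass

@dataclass
class BrandVoice:
    """Hypothetical brand-voice profile; each slider runs from 0.0 to 1.0."""
    lightheartedness: float = 0.5   # playful vs. serious
    empathy: float = 0.5            # warmth of responses
    creativity: float = 0.5         # 0.0 = strict accuracy, 1.0 = free-form

    def __post_init__(self) -> None:
        # Clamp-check each slider so a misconfigured profile fails loudly.
        for name in ("lightheartedness", "empathy", "creativity"):
            value = getattr(self, name)
            if not 0.0 <= value <= 1.0:
                raise ValueError(f"{name} must be between 0.0 and 1.0, got {value}")

# A bank might want a warm but strictly accurate assistant.
bank_voice = BrandVoice(lightheartedness=0.2, empathy=0.8, creativity=0.1)
```

Bounding each slider to a fixed range mirrors the "dial" framing Aubrey describes: a brand owner adjusts a handful of constrained values rather than writing free-form prompts.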
How is agentic AI changing customer relationships, and what does that mean for marketplaces and advertising?
There is a common belief in the market that you can replace customer service teams and let AI handle everything. AI can handle a lot, but I do not believe most companies want to be completely separated from their customers.
It can handle routine and transactional work and, increasingly, should address issues proactively before customers need to reach out. People should focus more on complex, sensitive situations where empathy, judgment, and longer-term relationships matter.
At Amazon, we have spent many years building a direct relationship with shoppers, and we will continue to prioritize it. We are in open discussions with chat providers, including OpenAI, but we want to design our own agent-based shopping experience.
Why do so many AI projects struggle in the cloud, and what is your top priority to fix that final stretch?
The hardest part is the final stretch, where AI must operate within the constraints of a real business. That is why my group is called Applied AI.
Prototypes and demos are relatively easy. Friction arises when you add regulation, security, compliance, brand guidelines, and customer expectations. All of that makes production deployments difficult, which is why you see such a high failure rate in AI projects.
Our mission is to take that final stretch seriously for specific functions. In customer service, that means AI that handles confirmation numbers, addresses, and sensitive financial details with the accuracy you need. In financial services, for example, there is little room for error.
Some organizations have the technical depth to build this themselves on top of primitives like Amazon Bedrock and Bedrock AgentCore. Many do not, so my group builds higher-level solutions. We focus on customer experience in Connect, on planning and decisioning for supply and demand, where Amazon has deep experience, and on healthcare, where we are removing administrative friction so clinicians can focus on patient care.
Is today’s workforce ready to work with AI agents, and how should interfaces evolve so people do not have to “learn AI”?
In general, no, not yet. In my own organization, I see a spectrum. Some people are enthusiastic early adopters who push every new tool and show how they can turn weeks of work into an hour. Others use AI occasionally for quick questions but have not changed how they build. Some barely touch it.
I encourage companies to set strong boundaries around security and compliance, then allow significant experimentation within those guardrails. Early adopters create momentum that pulls the middle group forward, and over time, even holdouts will need AI to keep up. At the same time, the future should not require everyone to become an AI expert. This year, we formed a dedicated user experience team to define what an agent-based interface should look like.
In my view, AI should adapt to the user. It should move through systems on your behalf and bring the right information to you. Work should come to the person. Two people in the same role might have different interfaces, as there is no longer any reason for everyone to share a single fixed user experience.
Beyond customer service, what problems are you trying to solve with AI?
We launched a focused life sciences effort at the beginning of the year, and I expect you will see results in the first quarter of 2026. We are working with providers of biological foundation models and using AWS compute to put those capabilities in the hands of biologists. Our first focus is antibody discovery.
Today, there is a bottleneck between bench scientists, who understand the biology, and computational biologists, who work on models and infrastructure. We want to expand the computational reach of bench scientists so they can explore many antibody candidates in silico, evaluate the properties that matter, and then send a smaller, higher-quality set into the wet lab.
Wet lab results are then fed back into the system, so you can iterate faster and increase the hit rate when you do physical testing.
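In schematic terms, the loop Aubrey describes, ranking a large candidate library in silico, sending only the strongest slice to the wet lab, and folding the results back in, might look like the sketch below. The scoring model and assay are stubbed with random placeholders; a real pipeline would call a biological foundation model and record actual assay outcomes.

```python
import random

def score_in_silico(candidate: str) -> float:
    """Stand-in for a biological foundation model scoring an antibody sequence."""
    return random.random()  # placeholder: real models predict binding, stability, etc.

def wet_lab_assay(candidate: str) -> bool:
    """Stand-in for physical testing; returns True on a hit."""
    return random.random() > 0.7

candidates = [f"antibody-{i}" for i in range(10_000)]   # large in silico library
hits: list[str] = []

for round_number in range(3):
    # Rank the full library computationally, then send only the top slice to the lab.
    ranked = sorted(candidates, key=score_in_silico, reverse=True)
    shortlist = ranked[:20]
    results = {c: wet_lab_assay(c) for c in shortlist}
    hits.extend(c for c, ok in results.items() if ok)
    # Feed lab outcomes back: here we simply drop tested candidates;
    # a real system would retrain or recalibrate the scoring model on the results.
    candidates = [c for c in candidates if c not in results]

print(f"{len(hits)} hits after 3 design-test cycles")
```

The economics come from the shortlist step: the wet lab tests twenty candidates per cycle instead of ten thousand, which is what raises the hit rate per physical experiment.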




























