Synthflow’s Voice AI With Memory: The Contact Center Breakthrough
- Jörn Menninger
- Dec 6, 2025
- 48 min read

What Is This About?
Synthflow CEO Hakob Astabatsyan explains how memory-enabled voice AI is transforming contact centers. Unlike stateless chatbots, Synthflow's agents remember previous conversations — delivering a contact center breakthrough that combines the efficiency of AI with the continuity customers expect from human agents.
Introduction
Synthflow CEO Hakob Astabatsyan explains how memory-enabled voice AI is transforming contact centers by dramatically lowering average handle time and improving first-call resolution rates. Unlike earlier voice automation that frustrated callers with rigid scripts, Synthflow's system remembers context across interactions and adapts in real time. This interview covers the technical architecture, enterprise reliability requirements, and the business case for replacing legacy IVR systems with conversational AI that actually works.
Executive Summary
Synthflow's memory-enabled voice AI reduces average handle time and improves first-call resolution in enterprise contact centers by maintaining context across customer interactions. Unlike legacy IVR systems that force callers through rigid menus, Synthflow's system conducts natural conversations and remembers previous interactions. The technology achieves enterprise-grade reliability required for deployment in regulated industries. CEO Hakob Astabatsyan describes the architecture decisions that enable real-time voice processing at scale.
Synthflow CEO Hakob Astabatsyan explains how memory-enabled voice AI transforms contact centers with lower AHT, higher FCR, and enterprise-grade reliability. Startuprad.io brings you independent coverage of the key developments shaping the startup and venture capital landscape across Germany, Austria, and Switzerland.
This founder interview is part of our ongoing coverage of Scaleup Founder Interviews from Germany, Austria, and Switzerland.
Management Summary
Voice AI has matured faster than most of us expected. Not in the flashy “AGI is here” sense, but in the quiet, structural ways that matter to contact centers: latency, memory, workflow execution, and reliability at scale.
So when I sat down with Hakob Astabatsyan, Co-Founder and CEO of Synthflow AI, the Berlin-based voice AI platform now powering human-like phone conversations for more than a thousand customers, we didn’t talk about hype.
We talked about the unglamorous foundations that finally make voice AI work: sub-second latency, deterministic guardrails, memory across calls, HIPAA/GDPR compliance, and infrastructure that doesn’t cry for help at a million concurrent sessions.
This episode — and this article — decode the moment voice AI became enterprise infrastructure rather than a demo with good lighting.
Our Sponsor
Quick break for something every founder should hear. One leak on the dark web can mean account takeovers, impersonation, or a board-level crisis. That’s why we partnered with NordStellar — a business-grade threat-exposure platform from the team behind NordVPN. It gives you early signals before attackers escalate — with data-breach and dark-web monitoring, attack-surface discovery, and cybersquatting detection. You’ll spot exposed credentials, shadow IT, and fake domains fast.
Startuprad listeners get an exclusive 20% Black Friday discount — go to nordstellar.com/startupradio and use code blackfriday20 before December 10, 2025. Don’t wait until your data shows up for sale — visit nordstellar.com/startupradio, code blackfriday20.
Table of Contents
The End of IVR Trees
Why Memory Became the Missing Layer
The Reality of Contact Centers Before AI
The Infrastructure Behind Modern Voice AI
Synthflow’s Memory Framework (VAL + BELL)
Deployment Blueprint: Pilot → Production in 30 Days
KPIs That Actually Matter
Why This Technology Finally Works in Regulated Industries
What Breaks at a Million Calls — and How to Avoid It
The Future of Voice-to-Voice LLMs
Advice for Founders and CX Leaders
The End of IVR Trees
If you ever called an insurance hotline in Germany in the early 2010s, you know the drill:
“Press 1 for service. Press 2 for something unclear. Press 3 for an existential crisis.”
And if you dared to say “YES” to stay on the line, the system replied: “I understood no. Goodbye.”
IVR wasn’t built for humans. It was built to cut cost until humans stopped complaining.
Voice AI changes that — not because it sounds pretty, but because it can finally follow context, understand your intent, and respond in real time without freezing for two seconds like a junior employee staring at a spreadsheet for the first time.
This is the shift Synthflow is designed for: from menus → to memory.
Why Memory Became the Missing Layer
Early LLM voice wrappers failed for one simple reason: no memory.
A caller could say: “My name is Anna from Berlin, we spoke last week about my claim.”
The AI forgot everything the moment the call ended.
Synthflow’s new memory layer — built into its VAL framework (Voice Agent Layer) — fixes this by letting agents:
Remember previous sessions
Retain customer preferences
Recall unresolved cases
Pass shared state across multiple agents
Suddenly, voice AI isn’t a parrot with good diction. It’s infrastructure.
The Reality of Contact Centers Before AI
Before AI, contact centers battled:
Repetition (customers explaining the same story)
Rising AHT (average handling time)
Low FCR (first call resolution)
Dropped calls
Poor agent experience
And let’s be honest: unreliable systems that looked like a 1997 Windows NT server caught in a rainstorm.
Hakob explains a truth operators know well: humans tolerate bad chatbots, but they do not tolerate broken calls. Voice failures trigger panic inside organizations.
This is why Synthflow invested heavily in:
Infrastructure teams
Sub-second latency pipelines
Reliable streaming STT → LLM → TTS
Telephony-grade uptime
Deterministic guardrails
Simulation environments
And it’s why memory matters: without context continuity, even the best AI voice falls apart.
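The streaming STT → LLM → TTS loop mentioned above can be sketched in a few lines. This is a toy illustration, not Synthflow's pipeline: the `stt`, `llm`, and `tts` functions are stand-ins for vendor calls, and a real system streams partial results rather than processing whole turns at once.

```python
import time

def stt(audio: bytes) -> str:
    """Speech-to-text placeholder (a real system streams from a vendor API)."""
    return audio.decode()

def llm(text: str, context: list) -> str:
    """LLM placeholder: produce a reply given the transcript and prior turns."""
    return f"echo: {text}"

def tts(text: str) -> bytes:
    """Text-to-speech placeholder."""
    return text.encode()

def handle_turn(audio: bytes, context: list, budget_ms: int = 800) -> bytes:
    """One conversational turn, with a sub-second latency budget check."""
    start = time.monotonic()
    text = stt(audio)
    reply = llm(text, context)
    context.append((text, reply))            # keep context for the next turn
    audio_out = tts(reply)
    elapsed_ms = (time.monotonic() - start) * 1000
    if elapsed_ms > budget_ms:
        # In production this is where a filler phrase or fallback path fires.
        pass
    return audio_out
```

The point of the sketch is the shape of the problem: three sequential model calls per turn, each adding latency, which is why the total budget has to be enforced end to end rather than per component.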
The Infrastructure Behind Modern Voice AI
Hakob describes building voice AI as operating an airport:
“If you do your job well, nobody talks about it. If you fail once, everything stops.”
To reach sub-second latency across millions of calls, Synthflow relies on:
Highly optimized orchestration pipelines
Vendor-tuned STT/TTS layers
Route-aware telephony
Low-latency architecture
Real-time interruption handling
International failover strategies
Massive hosting capacity expansion
A dedicated reliability engineering team
This is the part nobody sees when playing with a Vonage demo on a Tuesday afternoon.
At scale, it becomes engineering mountaineering.
Synthflow’s Memory Framework (VAL + BELL)
Two internal frameworks run the show:
VAL — The Voice Agent Layer
Memory + actions + orchestration + integrations.
BELL — Build · Evaluate · Launch · Learn
This is where enterprise risk meets AI determinism:
Build: visual graph builder for deterministic flows
Evaluate: simulate thousands of calls before going live
Launch: versioning + A/B testing
Learn: analytics + auto-QA + improvement loops
The BELL framework replaces the old method of “launch and pray.”
Hakob puts it elegantly:
“You don’t want an intern freestyling on your most sensitive processes.”
So you give AI the same guardrails.
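To make the “deterministic flows” idea concrete, here is a toy flow graph in the spirit of a visual flow builder: every node has a fixed utterance and fixed transitions, and anything off-script falls back to a human handoff instead of improvising. Node and intent names are invented for illustration.

```python
# Deterministic conversation graph: the agent can only follow edges
# that were authored ahead of time, which is what keeps it from freestyling.
FLOW = {
    "greet":    {"say": "Hello, how can I help?",
                 "next": {"claim": "claim_id", "other": "handoff"}},
    "claim_id": {"say": "What is your claim number?",
                 "next": {"provided": "confirm"}},
    "confirm":  {"say": "Thanks, I've found your claim.", "next": {}},
    "handoff":  {"say": "Let me connect you to a colleague.", "next": {}},
}

def step(node: str, intent: str) -> str:
    """Follow an authored edge if it exists; otherwise hand off to a human."""
    return FLOW[node]["next"].get(intent, "handoff")
```

An unrecognized intent at any node routes to `handoff`, which is the guardrail: the graph bounds what the agent can do on sensitive processes.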
Deployment Blueprint: Pilot → Production in 30 Days
Synthflow’s rollout model is surprisingly structured:
Week 1 – Use Case Definition + Telephony Mapping
Agree on KPIs
Integrate SIP trunks
Map workflows to CRM actions
Define memory policies (read/write/consent)
Week 2 – Build + Simulate
Use Flow Builder for deterministic paths
Add memory
Add CRM/Webhooks
Run 100–1,000 synthetic test calls
Week 3 – Limited Live Rollout
Blue/green traffic split
Validate containment
Measure AHT/FCR shifts
Tune interruption handling
Week 4 – Full Deployment
Internal training
Monitoring dashboards
Weekly refinement loops
Done right, this becomes the fastest ROI upgrade in BPO/CX operations today.
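Week 1’s “memory policies (read/write/consent)” step can be pictured as a small configuration like the following. The field names are assumptions for illustration, not Synthflow’s actual policy format.

```python
# Illustrative memory policy: what agents may recall, what they may store,
# and whether the caller must consent before anything is persisted.
MEMORY_POLICY = {
    "read": ["preferences", "open_cases"],   # fields agents may recall
    "write": ["open_cases", "history"],      # fields agents may store
    "consent_required": True,                # ask before persisting memory
    "retention_days": 90,                    # GDPR-friendly retention window
}

def may_write(field_name: str, consent_given: bool) -> bool:
    """Gate every memory write on both consent and the policy's allow-list."""
    if MEMORY_POLICY["consent_required"] and not consent_given:
        return False
    return field_name in MEMORY_POLICY["write"]
```

Deciding these rules before the build week matters because they constrain the flow design: any node that persists caller data needs a consent step upstream of it.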
KPIs That Actually Matter
Contact centers love dashboards. The problem is: most KPIs lie.
The ones that matter:
1. AHT (Average Handling Time)
Memory cuts time because callers stop repeating information.
2. FCR (First Call Resolution)
When AI has context, tickets close faster.
3. Containment Rate
The percentage of calls handled without human intervention.
4. CSAT
Even small improvements shift retention and cost-to-serve.
5. ROI (Return on Investment)
As Hakob puts it:
“If AI does even 2–3 percentage points better than humans, it’s already a win.”
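For teams instrumenting these metrics themselves, the first three KPIs can be computed from a plain call log. The record fields below are illustrative, not a standard schema.

```python
def kpis(calls: list) -> dict:
    """Compute AHT, FCR, and containment rate from a list of call records."""
    n = len(calls)
    return {
        "aht_sec": sum(c["duration_sec"] for c in calls) / n,          # avg handle time
        "fcr": sum(c["resolved_first_call"] for c in calls) / n,       # first-call resolution
        "containment": sum(not c["escalated"] for c in calls) / n,     # no human needed
    }

calls = [
    {"duration_sec": 180, "resolved_first_call": True,  "escalated": False},
    {"duration_sec": 300, "resolved_first_call": False, "escalated": True},
]
# kpis(calls) -> {"aht_sec": 240.0, "fcr": 0.5, "containment": 0.5}
```

Tracking these before and after a pilot, on the same call mix, is what turns the “2–3 percentage points better” claim into something you can verify.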
Why This Technology Finally Works in Regulated Industries
Synthflow is compliant with:
HIPAA
GDPR
SOC 2 Type II
This includes:
Data locality
Non-recording modes
BAAs
Encrypted memory storage
Consent flows
Access logging
This is why healthcare and finance are now large adopters.
What used to be “impossible due to compliance” is now “mandatory due to cost.”
What Breaks at a Million Calls — and How to Avoid It
Hakob is brutally honest:
Latency spikes
Telephony instability
Rate limits
Vendor failures
Bad agent configurations
Memory conflicts
Logging gaps
Infrastructure overload
Human panic
The antidote?
1. Reliability engineering culture
“Print the reliability goal on the wall.”
2. Architecture built for black swan load
Plan for peak season on day one.
3. Deterministic flows for sensitive tasks
No hallucinations in banking or healthcare.
4. Multi-region everything
If one region fails, another instance takes over.
This is how voice AI moves from “nice experiment” to “trusted system.”
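“Multi-region everything” boils down to trying regions in a fixed order and failing over on error, as in this sketch. The region names and the `connect` callback are illustrative.

```python
REGIONS = ["eu-central", "eu-west", "us-east"]   # illustrative failover order

def route_call(connect, regions=REGIONS):
    """Return the first region that accepts the call; raise only if all fail."""
    last_err = None
    for region in regions:
        try:
            return connect(region)
        except ConnectionError as err:
            last_err = err          # fall through to the next region
    raise RuntimeError("all regions unavailable") from last_err
```

The design choice worth noting: the caller never sees a single-region outage, only the rare case where every region is down, which is what telephony-grade uptime targets require.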
The Future of Voice-to-Voice LLMs
Hakob is cautious but optimistic:
Near-term
Better reliability
More deterministic guardrails
Richer memory
Voice-to-voice inference
Longer context windows
On-device processing for enterprises
Mid-term
One-hour conversations that stay coherent
Multi-agent voice teams
End-to-end workflow automation
Domain-specific long-context LLMs
What he rejects
“AI replacing humans entirely.” The future is hybrid — with AI taking the repetitive load.
Advice for Founders and CX Leaders
Hakob’s advice is surprisingly human:
In the end, AI doesn’t reduce complexity — it raises the stakes.
Relationship Map
Hakob Astabatsyan → Co-Founder and CEO → Synthflow AI
Jörn "Joe" Menninger → Host of → Startuprad.io
Automated Transcript
If you run a contact center or business process outsourcing, you know the pain: customers repeat themselves, average handling time (AHT) creeps up, and your AI can't remember anything. Today's guest, Hakob Astabatsyan, CEO of Synthflow, the Berlin voice AI platform backed by Accel, just launched memory for voice agents, so AI can keep context across calls, at sub-second latency, inside your existing CRM and telephony stack. In the next few minutes we'll unpack how memory turns conversational AI into enterprise infrastructure, so you cut costs, lift FCR, and stop making customers repeat their story. Welcome to Startuprad.io, your podcast and YouTube blog covering the German startup scene with news, interviews and live events. Our guest today is Hakob Astabatsyan, co-founder and CEO of Synthflow, the enterprise voice AI platform out of Berlin that's scaling like few others. The company powers human-like phone conversations for 1,000-plus customers, serves 100 enterprises, integrates with 200 tools from telephony to CRM, and runs at sub-second latency with 99.9% availability. In 2025, Synthflow raised a US$20 million Series A led by Accel, cementing its position among the world's top AI agent platforms, recognized by G2 for customer satisfaction, fast implementation and best ROI (return on investment). Today we dive into Synthflow's newest capability: memory. The VAL framework lets voice agents remember details, preferences and unresolved cases across sessions, support seamless human handoff, and even share state via memory groups across multiple agents. If you've tried AI in your call center and hit the context wall, this episode is for you. Hakob will take us from
pilots to global rollout: how to deploy, measure and govern memory-enabled voice AI without adding risk. Hakob, welcome to Startuprad.io. Hey Jörn, thanks for having me here. Glad to be here. Before we get started, for the little bit older people, I was wondering: you remember Space Odyssey, where the AI, HAL, goes crazy and tries to kill all the people? Did you at any moment have the intention to also have a HAL in your voice agents? Yeah, it's a good question actually. When we started building this in early 2023, everyone was talking about AGI, right? The first version of GPT came and everyone was saying, oh, now you're just going to click one button, AI is going to do everything. And then, you know, there were scandals everywhere, people were warning against AI. But I think in the last two years everyone came to this realization: oh, actually the LLMs are not as smart as we thought, right? I mean, it's incredible progress what we had, right? But I think everyone realizes there is also a limit to how much this technology can do, and we are actually very far away from that AGI. So I think the expectations are way lower right now, and we're entering into this phase of soberness. After this excitement and euphoria, I think people are becoming sober in terms of being realistic: what can this AI actually do? And the same applies to voice AI, right? I also realized that in fiction there have always been the bad, the mean AIs, think about Skynet, but I do believe that's simply for the reason they make way better stories. Different topic. So take us to the start. Not necessarily when you founded the company, but the original spark. What was the moment, you may remember, when you conceived the original spark for Synthflow, and when did it click that voice AI agents needed to be infrastructure, not just smart stuff? Yeah, very good question actually. We started experimenting with LLMs around March 2023. This is when the first, I would say, proper version of GPT was released with an API that you could start working with. And everyone was building text bots on this LLM, basically this wrapper around the LLM, right? And then you would have a text bot. And for us the first idea was that we wanted to build a no-code layer so that business folks could interact with it, right? Because this was a lot of developers, and in developer forums everyone was trying to work with the API, but we also wanted to have this no-code layer. And we built the first version for a chatbot. And I think it was around summer of 2023, a few months after the release of GPT, that the first ideas of this voice orchestration started resurfacing on the Internet, right? And the orchestration idea is actually very simple: you take the speech of a human, you convert it to text, you do some work there, and then you send it to the LLM. The LLM gives you an answer back in text form. Then you take this text, do a couple of things, and convert it back to speech. You do
it, and you have to do it very fast so that you can play back the answer to the human. That's where latency comes into the game. This was the first orchestration. Not everyone, but I would say a few teams actually built this, because it's a very hard thing to do. Text is easy, but voice was very hard. And we started building this orchestration. And I remember, around the end of the summer of 2023, we had the first version that you could talk to on the phone, and it was terrible. If I look back, it was terrible, the first version, and it would wait for two seconds to answer. For example, right now when we talk, we have 400 milliseconds of latency. Humans, we are perfect, because the moment you interrupt me, I immediately process it and stop, and then vice versa. That's why we can have such a fluent conversation together. But it was not the case for AI, right? It would pause for a long time, and then when you would start speaking as a human, it would speak over you. Right. At some point you just hang up as a human. Right. This was the very first version, but it was an interesting revelation for us, because we realized this is a very hard problem to solve and it's really worth investing our time, because the demand was very high for this. And then we started working on this, and we released in early 2024. So we worked six months on this. We released the first version, which is way worse than what it is right now. Right. But it was already pretty good. And this is where we actually started commercializing. We had our first customers and we started iterating with them. And since then, basically, the technology has been progressing very fast. Right. New LLMs have been coming out, and a lot of infrastructure around this orchestration has developed, like Deepgram, you know, speech-to-text and all these kinds of solutions. And at some point, I think in early 2025, the entire voice industry crossed the chasm. Because if you had asked me in the early days, I was still not sure if this is actually going to be a technology that humans will be adopting en masse, at scale. But this year we passed that chasm, fortunately, and it's now being deployed in production all around the world. Yeah, I also experienced that from a very different perspective. For example, I'm a member of a few platforms where you can book voiceover talent. And I've realized that the number of requests is going way down, and a few still pop up where you basically hand over your voice until eternity to an AI, which is not necessarily attractive for people who make a living from that. Can you paint the reality of contact centers and business process outsourcers pre-memory? What do AHT, the average handling time, or the first contact resolution, also called FCR, or the CSAT customer satisfaction score, and call deflection typically look like without persistent context? How did it work before your memory introduction? And to all of our audience: yes, we have TLAs, three-letter acronyms. Yeah, exactly. Yeah, it's a specific industry with specific acronyms. Look, so I think before LLMs, the
way all these conversations were handled is through IVR trees, right? So you would call, like, I don't know, I used to experience that a lot in Germany. I would call an insurance company or whatever, and they would say: if you want to ask a question about your financial thing, press one. If you want to talk to customer support, press two. Right? So you would have to go through, and then you would go into a wait line and wait forever. So it was actually a terrible experience for humans, but sometimes you just didn't have a choice, right? And I vividly remember, for example, when you talk to an insurance hotline, in the beginning they gave you options where you had totally no clue where you fit in, and you had to try through all the menus. Yeah, it was a really bad experience. And then it asked you, would you still like to stay on the line? And you yelled, yes. And it said: I understood no, I'll hang up now. That was the level of experience, right? Yes, exactly. And there was a lot of NLP and NLU technology involved there, sometimes capturing information. I mean, it was a very basic technology, right? And the human experience: it was not built for human experience. It was built as if your experience doesn't matter, but at least we will get something solved rather than never solved. So the mantra was: better late than never, sort of. But that was the state of the technology. And now, with this conversational AI, the thing is that it's all about experience as well, right? Results, but also experience, right? It's not like, oh, I just reduced my deflection rate or I just increased my resolution time, but what you just mentioned, CSAT, right? The satisfaction score, this is very, very important, right? So when we started building Synthflow, for us in the very beginning it was all about customer experience. This was the only thing we cared about, right? Like, how is your experience with the AI, right? Are you having a good conversation? And that's why, you know, the entire industry tries to build this empathy into AI, right? Better voices. So the early feedback was that many voices are robotic, right? And then you try to build a more empathetic voice, right? And all these things are about human experience. Ultimately it's all about humans here, right? And then the second stage of the technology, when we started working with contact centers, of course they have to have, as you mentioned earlier, clear ROI, return on investment. And this is the second part of the conversation, right? When you have a conversation with voice AI, the first part is the experience, the voice experience. The second part is the RPA, right? This business process automation: what is the work that the AI does, right? Does it answer your question? Does it route you to a human? Does it send you an SMS with some information? Whatever the job of the AI is, it has to be done right, because only that way can the contact centers measure the success. And this is what we measure at Synthflow. For us, the most important KPI for our customers is that they get their
results, ROI, but also the end customer, right? The person on the phone has to have an amazing, great experience, because the entire purpose of this technology is in vain if people don't have a great experience with this. Totally. Yeah, exactly. I was wondering what feedback, what signals, or any other input did you receive from your customers that made you prioritize this multi-turn memory over adding yet another feature? Because looking back now, it does make sense. But if you do have to make the decision, let's do A, B, C, D, E: how did you decide, and based on what? So the way we build our product is always based on our customer feedback, right? We talk to our customers and we try to understand what is important for them. That's one. And second, we always try to understand: is it feasible, right? Because as I mentioned earlier, when this technology came, everyone was expecting AGI, right? Like general intelligence, and you just click one button and everything gets done. And then I think suddenly everyone woke up to the reality that the context is everything. What is the context of the agent task, like, where is the data? What does it have to do? And this is for our customers, right? Being in a B2B context, basically you can think of Synthflow like an AI contact center, right? And the whole purpose is doing the work end to end, right? The AI has to do the work end to end. What do I mean by that? Think about if someone calls and wants to get some number sent, right? Or some information sent per SMS, right? If the AI has a good conversation but doesn't send that SMS to the customer, then the job is not done, right? The action is not executed. So this is where we started working, right? And you need memory and you need context, right? For the AI to be able to do the job. And for large companies, enterprises, very often it's needed that Synthflow agents are integrated into their existing tech stack ecosystem, right? Maybe they use one telephony system, right? They have their own CRM, and they have whatever else the Synthflow agents have to interact with, right? So that they can actually resolve the customer's problem end to end. That was the key, right? Otherwise you don't have that ROI we just mentioned. And this was the reason. They're just an interaction layer. They take the data from your customer, meaning their tech stack, their problems, their interactions; they make it accessible. They're just a layer of interacting, instead of sitting there trying to navigate through the systems. Exactly. But it also navigates, right? So you can think of it like, basically, it's like a knowledge base, right? An existing knowledge base: you connect it to the agents, and when the agent interacts with the customer, it has the information that it needs to answer the customer's questions or send information or even capture it. Sometimes it's the other way around, right? The agent captures information from the customer and sends it to the CRM, right? To the existing CRM, and updates, let's say, the database
with the data, right? So it can go both ways. Either it shares information with the customer, or it gets the information from the customer and saves it in the existing database, right? And without the context, right, all this ecosystem, the agent is basically a demo, right? It will not be able to do the end-to-end work that humans do nowadays. Yeah. We talked about the latency, that it's below a second, sub-second latency, and having really persistent sessions, not that it's interrupted and it's not useful anymore. So what was the hardest challenge there, from an engineering standpoint, to get this workable? Yeah, it's a very good question. So actually, latency was the number one problem when folks started building voice AI, because following that orchestration, right, like the speech-to-text and text-to-speech, the first version that was built was very rudimentary, very basic, right? And also, everyone was building it almost for demos, so you were just trying to optimize it for one call, et cetera. But the problem became really complicated when we started growing very fast. And imagine processing millions of calls every month, or hundreds of thousands, like tens of thousands per day, and all these calls together. You have to make sure that your entire infrastructure can support that. And there are also so many factors that can affect latency, right? It could be related to telephony, it could be related to connectivity, it could be related even to how you set up the agent, right? But we realized that this is the number one challenge to solve; without this, nothing really matters in this business. And what we did is we hired engineers who have been specialized in this area, and we have a team at Synthflow, basically our infrastructure team, that only works on these topics, right? They work on reducing the latency, they work on improving the interruptions. And at some point, it's basically two components that affect this. One is empirical work, right? You constantly collect data, you constantly improve your system and infrastructure. And the second is how you build your architecture, right? Which vendors do you use, which LLM do you use, right? How do you send the data between A and B? So there are a lot of architectural decisions you have to make so that when you start scaling this, it doesn't break. Because everyone that works in voice AI knows how hard it is to ensure reliability when you process millions of calls. I always bring this example; an analogical, anecdotal example could be folks who work at the airport and have to ensure that the planes land safely, right? The coordinators. And of course, if you do this job well, no one talks about it, but if you make one mistake, it's a huge disaster, right? And the stakes are very high in our business as well, right? You can process a million calls and it's expected. But let's say 100 calls break, right? Something happens, and suddenly people or contact
centers panic, right? Because, I don't know, there's something about calls that people get really angry when these things don't go through, right? It's not the case with text bots. Sometimes people get frustrated and close it and go. But there's something about calls that makes people raise a huge buzz around this. They close down operations and this and that. That's why reliability is basically, we printed it out and put it on the wall at the office for our engineering team in the early days. This is the most important thing we have to solve in the very beginning. Yeah. Did you ever talk to psychologists about why it is that voice and calling is so special here? I haven't talked to them, but I have read about a similar kind of concept. It's basically, psychologists call it anthropomorphism, which means that humans have this tendency to attribute human qualities to machines or non-living things, right? You also see it in children's books, et cetera, when animals talk, robots talk, robots become humans' friends, and there is an empathy, right? So there is definitely an anthropomorphism, which means humans in the beginning have this tendency to be creeped out when the technology, the machine, behaves like a human. It scares humans, right? So there is an adoption window for humans to actually get used to it, right? It has been so with many technologies. For us, everything we use is normal, right? The phones and laptops and everything. But for the folks that saw it for the first time in their lives, there is always some, I would say, adoption time needed to get used to this technology. It was the same thing with voice AI, right? People thought, this is dangerous. I don't know, it's weird that the AI speaks like a human, right? But that's why I was saying this year we crossed the chasm, because right now there's a large number of humans that are actually used to it, or already had their first experience talking to AI. And there were so many, like, I always bring this example of this movie called Her, right? When Joaquin Phoenix is talking to the AI, right? All the time, basically, it's like a human. And I think humans have also been prepared for this a little bit through popular media, et cetera. But still, in the early days, I remember it was like, it was very hard to make certain types of calls. Yeah, those people who warn about something. I vividly remember that I once read there were doctors recommending not to travel by train because they thought it would be harmful to the human body to move faster than a horse ride. Yes, yes, exactly. Yeah, so we've seen that. Let's be a tiny bit geeky here. What did you try that didn't work when building memory? Prompt-only hacks, external key-value stores, native retrieval-augmented generation? And what replaced it? So, look, the very first approach, I would say, was: put in everything. So if we go back to the very beginning, to chatbots, right? This was the RAG, right? Like, you connect a knowledge database, right? But
394 in, in, in some situations, you know, in some context, I 395 would say it's for, for, for. Especially in the Enterprise 396 situations, each situation was different, 397 right? Each situation was different, each document was different that you 398 had to reference, right? So there was some 399 complexity involved to make it work for each, especially for 400 the Enterprise. And then I think there was a 401 moment where we solved this by having the prompt, right? You have the 402 prompt field and everything that agent needs to know is 403 within the prompt, the open prompt way, right? And 404 you just paste it there and the information is there. But of course, there is 405 also a limit there, how much you can put. Because if you put too much, 406 right, if the text is huge, that also increases the 407 probability of hallucination, right? Basically, think about it.
408 The more the AI, the more information you give to AI, 409 there are more paths that the AI can go in the conversation, 410 right? So it increases basically the probability 411 that it can deviate or it can get confused from the main thing 412 that you set the AI to do, right? 413 And I would say honestly, the answer is it depends 414 because right now we have that like the open prompt thing, but we 415 also just released the Bell framework and one of that 416 Bell framework, the first one called Build, is the Flow designer. 417 So the Flow designer allows you like, it's a visual graph that 418 you can build the conversational paths, right? And 419 with that we want to address the cases where you really 420 know, you really know what the AI needs to do, right? Imagine you just
421 hired an intern, right? You don't want that intern to go 422 freestyle, right? You know your business, you just sit down and explain one 423 hour and say to the intern, look, this is the script, 424 this is what you say in the call, right? And this is what you don't 425 say in the call, right? And I want you exactly to follow this, 426 right? And this is like. And you can create it. The benefit of 427 this, it's very deterministic, it reduces the 428 hallucinations, it puts guardrails in the place. And 429 basically AI does the job that it has to do 430 if. For the open prompt 431 version, right? When you have, you basically just paste the 432 same way you prompt GPT, right? You don't build the graphs, but you just prompt 433 it for. In that this is better if you want to keep it more
434 open ended the conversation. If you want to keep the conversation 435 more open ended, right? You can give it some instructions. 436 But you also hope that AI would freestyle a little 437 bit, right? The same way you spend some time with GPT and you just wanted 438 to give also creative answers, right? So I think 439 having that, that's why we've built both, right? So that 440 our customers can kind of have more deterministic for more 441 specific cases, especially in B2B context. Determinism is 442 actually very much appreciated nowadays. And of course you 443 can also leave it a bit more open ended if your use case is such. 444 You don't to want it to do exactly that way. And, and this 445 is like the spectrum that we leave actually our customers to choose how 446 they Kind of create that, that instructions and memory.
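The deterministic-versus-open-prompt spectrum Hakob describes can be sketched in a few lines. This is a hypothetical illustration, not Synthflow's actual Flow Designer: a conversation graph where every utterance and transition is fixed in advance, so the agent cannot improvise outside it.

```python
# Minimal sketch of a deterministic conversation graph (hypothetical,
# not Synthflow's Flow Designer). Each node fixes what the agent says;
# edges fix which caller intents may move the call forward.

FLOW = {
    "greet":         {"say": "Thanks for calling. Is this about an order or a return?",
                      "next": {"order": "order_lookup", "return": "return_policy"}},
    "order_lookup":  {"say": "Please give me your order number.",
                      "next": {"provided": "confirm", "unknown": "handoff"}},
    "return_policy": {"say": "Returns are accepted within 30 days.",  # example policy
                      "next": {"done": "goodbye"}},
    "confirm":       {"say": "Got it, I've found your order.",
                      "next": {"done": "goodbye"}},
    "handoff":       {"say": "Let me transfer you to a colleague.", "next": {}},
    "goodbye":       {"say": "Happy to help. Goodbye!", "next": {}},
}

def run_turn(state: str, intent: str) -> str:
    """Advance the call one turn. Intents outside the graph fall back to a
    human handoff instead of letting the model improvise -- the guardrail."""
    return FLOW[state]["next"].get(intent, "handoff")

state = run_turn("greet", "order")           # moves to the order_lookup node
state = run_turn(state, "something_else")    # unrecognized intent -> handoff
```

Because every utterance comes from the graph, this kind of agent cannot "hallucinate" a Bentley when the caller asked about a Golf; the trade-off is that anything open-ended funnels into the handoff node, which is exactly the open-prompt side of the spectrum.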
Yeah. You just talked about leaving your customers choices. Where did they get burned by voice AI — broken promises, guardrails, evaluation, handoffs, or security? And how do you de-risk them?

Yes, that's an excellent question. For enterprises and contact centers in a B2B context, where ROI is everything, risk has been the biggest challenge. Think about the early days of agentic AI: you build the agent and you hope it will work, you hope it will do what you want it to do. It's hope. You launch it, then you come back and see it didn't do what you wanted, and you iterate. It's very similar to chatting with GPT: you ask a question, it doesn't answer properly, and you go back and say, dear ChatGPT, my question was this and this, please be precise. You push back a couple of times until you get exactly what you need. We said, look, we have to solve this — we can't leave these things to chance. That's where we started working on the BELL framework: Build, Evaluate, Launch, and Learn. The entire idea of the framework is to de-risk enterprise deployments.

Build means you can create more predictable conversational paths with the Flow Designer. For the Evaluate part we built simulations: you don't have to call real people to see whether the AI does what you want — you can test with synthetic data, simulated in the same environment, before you actually release it. It's much faster and much safer. Then there is versioning — maybe you want to test a couple of versions — and finally auto-QA and analytics. So we created an environment within Synthflow where enterprises can do all of this work safely, make sure the agent does what they actually intended, and only then release it to the public. We've been working on this for many, many months and we're releasing it right now. The entire purpose is de-risking and accelerating the deployment of these agents.

Yeah. We're also going to record a Founders Vault episode after this one, where you'll tell us about your toughest enterprise deployment: where you failed initially, what broke in production, who escalated, and how you salvaged it. For now, let's talk about your current playbook and how to move from a pilot to production in weeks — team setup, no-code versus API, and the first three workflows you always target.

Yes. Look, the older technology required a lot of human touch. Especially with NLP/NLU technology, people had to understand and map out the conversation, then go and build it. With Synthflow's approach, we built a no-code layer that lets customers follow three or four steps and actually have their agent. Integrations are a bit different — that's where our forward-deployed engineers work with the customer, especially on the telephony side and the rest of the tech stack, CRMs and so on. We integrate via an API or via SIP trunking. Once that work is done — and it can be done within a few days or weeks, depending on the case — the agent-creation part is very straightforward. We've reduced it to an absolute minimum: you prompt, or you build the conversation with the visual graph builder, then you choose an action the AI has to perform — you can build a custom action if we don't have it — and you can always use the BELL framework to test everything before you deploy and go live. So we have simplified it.
To the limit, I'd say. It's very easy to do, and we are continuously building new things around it to take the complex part out of agents, so that it becomes almost deterministic: you go, you build it, and it works. That has been the whole idea.

In terms of use cases and workflows, the most common ones we see across the board: a lot of transfer cases, where AI is the first line of defense — it weeds out the irrelevant calls and forwards the important ones, which reduces the workload on humans. We see a lot of customer support, or, as I like to call it, FAQ: AI answering the many very basic customer questions so humans don't have to waste time on them — it's very solid at that. And there is a lot of qualification happening: AI reaching out, asking a couple of things, qualifying, and booking next steps. Those are the most widespread cases we see in the B2B AI contact center space, mainly on the phone.

Yeah. You talked about those KPIs. Can you share some specific numbers — typical containment, the reduction in average handle time, the lift in first-call resolution (meaning customers only need to call once), or human-hour savings you're comfortable stating by vertical?

Look, there are no single numbers; it very much depends on the customer, the use case, and how complicated the case is. For example, we have one customer in healthcare who benchmarks the AI against the work humans do today in terms of deflection rate. I don't remember the exact number — let's say 17 or 18 percent — but that customer says if the AI is even one, two, three percentage points better than that, it's already a big win, because human time is more valuable than AI time. AI runs 24/7, it's cheap, you can just put it there and it does the work. The only reason not to deploy AI is if it's significantly worse than humans in ROI terms; if it's even slightly better, it's justified. In that particular case they were targeting a 20-percent-plus rate, they went with the AI, and the humans moved one step back to take the important calls — the ones that are a bit more complicated, maybe more emotional in nature, or that require complicated follow-up work.

In other cases — say an outbound campaign of a thousand calls — even five or six answered calls can be a big win, depending on your case: each call can become a lead that turns into a contract worth tens of thousands afterwards. Or maybe you call a thousand times and anything under 500 answers is already bad. So in some cases AI works, and in some it doesn't make sense. The simplest things to measure for ROI are the cases I mentioned, like transfers — those are very easy to measure. Sometimes you see a marginal improvement — AI several percentage points better — and it's already a good case because of the cost savings. But we also have customers who never had anyone on the phones at all, who didn't have the resources, and just putting AI in produces, say, an 80 percent reduction in handling time — some crazy numbers — because it does something that was never being done.
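The benchmarking logic described here — deploy the AI only if its deflection rate at least matches the human baseline, because AI minutes cost far less than human minutes — reduces to simple arithmetic. The numbers below are illustrative, loosely echoing the healthcare example in the conversation; they are not real customer data.

```python
# Back-of-envelope deflection benchmark (illustrative numbers only).
def monthly_saving(calls: int, deflection: float,
                   human_cost: float, ai_cost: float) -> float:
    """Cost saved when deflected calls are handled by AI instead of a human.
    Costs are per handled call; deflection is the share of calls resolved."""
    deflected = calls * deflection
    return deflected * (human_cost - ai_cost)

human_baseline = 0.18   # humans deflect ~18% of calls today (from the anecdote)
ai_target      = 0.20   # AI must match or beat the baseline to be a clear win

saving = monthly_saving(calls=10_000, deflection=ai_target,
                        human_cost=4.00, ai_cost=0.50)
print(f"AI beats the human baseline by {ai_target - human_baseline:.0%} points, "
      f"saving ${saving:,.0f}/month")
```

Even a small edge over the human baseline compounds: the win comes less from the percentage-point gap than from the cost asymmetry between human and AI handling.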
In those cases you see very large numbers as an improvement.

For our audience: if you run a CX team or a BPO, what's the one place you'd want voice AI with memory tomorrow — bookings, claims, returns, tech support? Drop it in the comments. If there's anything interesting, Hakob, of course we'll share it with you. Let's talk a little about the framework. What's your memory read/write policy — what to store, for how long, with what consent, and with what controls?

This is a very important question. We touched anecdotally before on why phone calls have such a dramatic impact on humans — and it's the same in terms of compliance. There is HIPAA compliance in the US, GDPR in Europe, and Synthflow invested significant resources very early in our journey to undergo all of these compliances: SOC 2 Type 2, GDPR, HIPAA. With HIPAA, for example, there is a toggle in our tool that customers can use to disable call recording, because certain types of calls in certain regions are not allowed to be recorded. Under GDPR, data hosting is a very important topic. Calls are very sensitive, and we always work closely with our customers — we sign BAAs, business associate agreements, with them. Right now there is a trust vault on the Synthflow website where you can access all our sub-processing agreements and BAAs. We take this topic very seriously, because in many industries data storage and recordings are extremely sensitive and have to be 100 percent compliant.

That's very important in our market. I was wondering: for callers who get interrupted or switch topics, how do you maintain context coherence and avoid hallucinated answers? For example, you call about your VW Golf and they tell you, yes, your Bentley is ready.

That's a very good question, and it comes back to the idea of the BELL framework: we want customers to be able to build more deterministic agents when the topic is sensitive. You can leave the agent a bit open if it only has to answer low-stakes questions — hypothetically, say, what the weather is. But if it's about an order, or the Golf-versus-Bentley case, you want to use the visual graph builder we built, because that way you eliminate the hallucinations: it's a very deterministic environment that doesn't allow the AI to deviate. If, as Volkswagen, you know exactly what the customer is going to ask and exactly what the AI should answer, this is the right way to go. And of course you simulate it before actually releasing it — that's very important. We have invested a lot of resources in the BELL framework so that our enterprise customers can actually serve these use cases. I can go one step deeper: it's even more important in the financial industry, where you cannot quote a wrong percentage — you have to adhere to the rules. So the short answer: you reduce hallucinations through the graph builder, plus simulations, versioning, and QA.

That is great, guys. Right after the ad break, Hakob will unpack the deployment blueprint Synthflow uses to get memory-enabled agents live in under 30 days — plus the one mistake that kills ROI.

Guys, welcome back after our short ad break. Hakob, can you walk us through the 30-day deployment blueprint, from data mapping and integration selection to evaluation metrics and
blue-green rollout?

Definitely. The way we work with our customers: in the very first step we try to understand the use case and the value the customer seeks to get out of this technology. There is a lot of noise around AI, and we want to make sure our customers have the right expectations and that we can actually deliver within the promised time frame. Once that part is done, we get into a working period with the customer where we help with integrations and advise them on a lot of topics. Our forward-deployed engineers stay very close to the customer: on the agentic side we demo the BELL framework so they can deploy in a de-risked way, and we also help integrate into their existing ecosystem. Many customers have their own telephony system, which we integrate with via SIP trunking, and there are sometimes a lot of nitty-gritty details our forward-deployed engineers work through with them. After that period the customer goes live, starts testing, and gets the KPIs they want — that's a success. With the BELL framework, going live isn't even needed for that first validation; they can do it in a more controlled environment, and we expect to shorten this further. But that's roughly how we work with customers to make sure they get what they want out of this technology.

You've been talking about nitty-gritty details, and that's usually where you win or lose in the startup game. Walk us through the growth methodology — the BELL methodology, the BELL framework. How do you scale from one use case to five? What's the sequence that compounds return on investment?

Yes. The first challenge we faced once we scaled was volume — the infrastructure. We were a startup of a couple of folks, and within a few months we were processing millions of calls on the platform. It was very unexpected, so we had to make a lot of infrastructure investments and expansions to be able to process it all. And the way we scale is that we are generally very selective about whom we work with. That's why we do the step one I mentioned earlier: we always try to understand the customer's expectations of this technology and align on them, so we can actually help the customer get there. The work on our side is more consultative — being a sparring partner for the customer — so it doesn't require a large headcount to serve our customers. We have built the product in a very scalable way and we continue doing that: we released BELL, and we're cooking up a couple of very advanced features that I won't talk about today but will release soon. Basically, we'll scale with the product. There is no inherent limit to why our platform, at this moment, couldn't process hundreds of millions of calls instead of millions. Of course we have to grow the team — that's natural as revenue and usage grow; we have to build out departments and offer better service to our customers — but there is no fundamental blocker to scaling this business right now.

I've seen some logic sprawl in such projects. How do you recommend splitting ownership — no-code for ops versus the API for engineering? And what governance prevents that logic sprawl?

Yeah. I would say, of course, the API gives more flexibility.
There are maybe two types of customers: one type prefers no-code, the other prefers the API. For example, one of our customers, a very large contact center BPO, uses the API because they integrate Synthflow into their operations — into their entire tech stack — so they can offer voice AI to their own customers. It's essentially a white-label integration, and the API is the way to go there because they want to expand on it massively. For smaller customers, if you have one particular use case in mind and want to build it quickly, it's very easy with the no-code layer — it's just click, click, click and you have the case. We've also built some sophisticated features there, like a multi-tenant portal where you become the owner, invite your team members, and run sub-accounts for different use cases. Many customers chose that path because it's simpler, it requires fewer engineering resources, and it's more than enough for their particular case.

Yeah. You do have some case studies on your website — Med Bell, SmartCat. What ingredients made those outcomes work?

Yeah, so one important thing is that initial period where we stay close to our customers. We set their expectations, and we help them define the outcomes they want. Very often customers know it themselves; if they don't and are more explorative, we help them understand it. Once that's done, the rest is all about the right integrations and the right agent setup. The agent setup we have simplified to a minimum, though there can still be questions — do I go with the visual graph builder or do I prompt it myself? — and our customer support is always there to answer them. On the integrations side, we often have to share things like our documentation so customers can integrate into their telephony system — or, if they don't have one, they use ours. Or maybe they have another tool in the stack, a CRM like Salesforce, that needs to be integrated. Those are the main components; if they're done well, the system should start delivering value and you can go to production.

I was wondering: what breaks at a thousand parallel calls — speech quality, latency spikes, rate limits, integration quotas? And how do you harden for peak season? I mean, especially if you're in a business with a rush toward the end of the year — Black Friday, e-commerce, Christmas is what I had in mind. How do you prepare for something like that?

That's a very, very good question, and honestly, at the very beginning this was our main problem, when we started scaling. A thousand calls is fine — it's nothing. We're talking about millions.

That's totally fine with me. Let's talk about millions.

That's where the real challenges start happening. Honestly, one of the first challenges was capacity: we had to get more and more hosting capacity so we could digest all that processing. So that's one thing — you have to make sure your infrastructure, the cloud and the hosting, already has the capacity and is set up for scale. Because we grew so fast, we were not very prepared at the beginning; we never thought we would go, in a few months, from hundreds of calls to hundreds of thousands, to millions. That was the very first challenge — an architectural challenge.
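The capacity problem has a standard back-of-envelope form: by Little's law, the number of calls in flight at once equals the arrival rate times the average call duration. A quick sketch with assumed numbers (not Synthflow's real traffic figures):

```python
# Little's law: concurrent_calls = arrival_rate * avg_duration.
def concurrent_calls(calls_per_day: float, avg_call_seconds: float,
                     peak_factor: float = 1.0) -> float:
    """Average number of calls in flight. peak_factor models rush periods
    (e.g. Black Friday) where traffic bunches up instead of spreading evenly."""
    arrival_rate = calls_per_day / 86_400          # calls per second
    return arrival_rate * avg_call_seconds * peak_factor

steady = concurrent_calls(1_000_000, 180)                  # ~2,083 calls in flight
black_friday = concurrent_calls(1_000_000, 180, peak_factor=3)
print(f"steady: {steady:.0f} concurrent calls, peak: {black_friday:.0f}")
```

The takeaway matches the anecdote: a million calls a day at a three-minute average is only a couple of thousand simultaneous media streams on average, but a 3× seasonal peak triples the capacity you must provision for.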
The number of calls per se didn't directly affect latency in our case. But if something else affected latency, a million calls would be affected by it — so the stakes become very, very high. You cannot operate as a startup anymore, pushing out a version one where you don't care if anything is broken. Suddenly you have to graduate very quickly into a high-performing, high-reliability company — three nines, four nines. To be very frank, that's a huge challenge, because you have an engineering team of eight people — now we're at thirty-plus engineers — and you have to professionalize almost overnight. That was a very big challenge for us, and it's very hard.
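For context on the "three nines, four nines" jump: each extra nine cuts the allowed downtime tenfold, which is what makes the graduation so abrupt. A quick calculation:

```python
# Allowed downtime per year for a given availability target.
def downtime_minutes_per_year(availability: float) -> float:
    """Minutes of downtime per year permitted at the given availability level."""
    return (1 - availability) * 365 * 24 * 60

for target in (0.999, 0.9999):   # three nines vs. four nines
    print(f"{target:.4%} availability -> "
          f"{downtime_minutes_per_year(target):.0f} min/year of downtime")
```

Three nines allows roughly 526 minutes of downtime a year; four nines allows about 53 — under an hour, which is why ad-hoc "push and see" deployment stops being an option at that level.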
This is something that even the era of AI hasn't changed: it takes time to hire your team, it takes time to build your culture, it takes time to get quality people. That was a big part of the challenge. So, to summarize: you have to make sure the architecture is there, but you also have to build a world-class team, because your business and engineering processes determine what happens when you push something to deployment. If a change affects latency even minimally, that means millions of calls are affected at the same time, and then you have too many fires to extinguish. You have to build a very strong system to ensure that reliability.

Getting professional and growing a big team very fast is always a challenge — and as you said, you can't change that simply by applying AI. In the Founders Vault, you'll share some of the internal evaluation dashboards used for memory KPIs, guardrails, and so on and so forth. Find the links down here in the show notes to subscribe to our premium offering, the Founders Vault — just five bucks a month. Let's go from what's working now to the future outlook. What's next — voice-to-voice LLMs, reasoning, on-device and edge? And where does memory evolve — agent teams, shared state, auto-summaries?

I would say the near future, in terms of roadmap, is clear. Longer than that, it's always hard in AI. I think humans have been historically terrible at predicting the future.

I usually say in such cases in our interviews: predictions are hard, especially concerning the future. But you're here, so let's give it a shot.

Yes, no, I'm happy to share — I'm one of the guys who always has an opinion on these things. But humans are generally bad at this: in 2023 everyone was saying AGI, et cetera, and now it looks very, very different. So: the near future, at least for our market and our product, is all about reliability, quality, and ensuring deterministic outcomes — reducing chance in the agentic space so that the ROI is evident. There are many pieces to that: you build architectural work around your infrastructure, and you build features like the BELL framework, which we'll announce very soon, and many similar ones.

Further down the road, I'm really curious what the next step beyond LLMs is, because products like Synthflow very much depend on that — what's next there, what's next in speech-to-text. There is also the voice-to-voice model from OpenAI. The reason we're not using it is that it's not commercially viable yet: it's very expensive, and it has more limitations when it comes to the agent actually doing tasks — the RPA part. But I think it will get there soon. That's one of my predictions: voice-to-voice is an amazing experience, and many of the problems you have with the current orchestration simply don't exist there, so I'm pretty sure it will be state of the art very soon.

I also think we'll see huge progress on long-format conversations. Right now AI is very good at shorter conversations — finishing something, doing a job, say a one-minute customer-support call. But what about one-hour conversations? The longer a conversation goes, the harder it is for the AI to stay on track. So that's an interesting area to watch, and of course the complexity of tasks: the tasks AI does nowadays are relatively simple and straightforward — talk, capture something, share something, update something somewhere. Going forward, maybe it can do things end to end: go and book, follow up, really complicated tasks. That's how I'd expect this to develop. But it's not going to be an overnight transition; it's going to take a few years at least.

And that's assuming people don't shy away from interacting when somebody calls a restaurant and says, hey, this is a booking service. In 24 months, what percentage of customer calls will AI handle end to end in regulated industries? And what will still require a human?

Look, I don't think AI is going to completely replace humans. I keep saying this everywhere: AI is going to augment humans. I don't think humans will be out of the loop in the near future — that's not going to happen, because I don't see humans trusting it fully, and AI just doesn't have the capacity yet for very complicated things and very long automations, the RPA side. But for some cases, to answer your question, it's going to be 100 percent of the calls — for example transfers, routing, FAQ, the first line of defense. I think 100 percent of that is going to be done by AI.
For more complicated cases — processing, et cetera, maybe even on the governmental side — I think it's going to be way slower. Say taxes or those kinds of areas; I don't see them adopting very quickly. That said, in regulated spaces generally we already see crazy adoption in healthcare and financial institutions. You just need to be compliant: you have to have HIPAA compliance, pen tests, and you sign SLAs and that kind of thing with the customers. And if the state of the technology lets you guarantee the behavior — for example with visual graph builders — then if the AI works for 10 percent of the calls, there is no reason not to deploy it for 100 percent of them. So it's going to be very different case by case, use case by use case: for some it will be 100 percent, for some probably very low — not even one percent.

Yeah. A little bit of a contrarian view as we get close to the end — by the way, you're holding up pretty well, and we've been recording for no more than an hour now. I was wondering: what's an overrated metric or myth in AI contact-center deployments that teams should ignore, or that still misses the return on investment?

Yeah, it's a very good question. Look, I wouldn't dismiss the classical metrics — CSAT, deflection rate, average resolution time, time to answer. Those are actually very important, and they do matter. The one thing I always look at but, in a sense, don't care much about, is the subjective feedback we get very often. For example, sometimes people say, oh, it was too robotic. And of course, at this stage, there can still be elements of that — it is an AI. I also tell all my customers: you have to disclose that it's an AI. It's not built to lie to people that it's human — that's bad practice. When humans talk with AI, they should know they are talking to AI and have the right expectations. So if someone tells me it felt a bit robotic, I think that's fine — as long as you know it's an AI, as long as it does the job for you and you have a good experience, better than the IVR trees of the past. We're making huge progress as humanity in that area. So I wouldn't weigh that subjective feedback too heavily: it doesn't have to be exactly like a human. As long as your experience is good, you know you're talking to AI, and your problem is solved on the phone — what's the problem? That's what I look at right now, and I'm very, very tolerant of the subjectivity toward AI. I just think we need more time as humans to actually get there and say, okay, this is where we are.

Interesting. So the bottom line: the slower-developing part is not the AI, it's the humans.

Yeah. For our audience, I was wondering: should AI remember you across calls if it improves your service — yes or no? Tell us why in the comments. Now, advice for founders and CX leaders — what's your three-step guide here? Can you share some advice for our listeners, who are mostly founders and executives?

Yes. Look, I'll share some advice at a bit of a higher level. One thing that's very different now, in the AI era, compared to before AI, is that it's very stressful. I don't think founders have ever been under such pressure and stress
1030 ever in the history of startups because 1031 things are moving basically on weekly basis. 1032 Like every week, every day something new is happening. Like I wake 1033 up in the morning, I look into my phone. Some, someone, 1034 something, someone new model is out there. Someone released that, someone 1035 released this. This is changing. OpenAI is doing that and does 1036 so I think like these skills, navigating skills. 1037 Right. Filtering out through the noise is a 1038 superpower at this stage and founders need 1039 to learn actually not to get too distracted 1040 and too stressed down by the noise. It's going to be 1041 there for a while. It's a new technology. We're experiencing 1042 an immense progress as a humanity. 1043 I think this is already way bigger than Internet at. Right. What's 1044 happening right now. This is, we have never seen this, this
1045 type of progress. Right. And, and on a weekly basis. 1046 So really understanding and focusing on value building business 1047 is, is, is harder right now because you. There are many shiny things, 1048 right. You always confront these decisions. 1049 Oh, I know. This new model came. Shall I use this and that or 1050 do I go this way? This way? What about this? Right? And, and I think 1051 being very close to the customers, very, very close to the 1052 customers and focusing on building long term 1053 value for them for your customers is the key. 1054 This is my view, this is what I will highly recommend 1055 based on my own experience in the last Three years, I would 1056 say. Yeah. Three more questions for you. 1057 Closing Bold Insights Finish this sentence. In 1058 three years, voice AI with memory will be
1059 the default interface for for. 1060 It will be default interface for contact 1061 centers, both external and internal. It means 1062 external BPOs, right? Contact center as a service and 1063 internal contact centers for insurance, 1064 financial institutions, companies. Because we already see 1065 the ROE and the case is very clear. It's just 1066 that it's just a matter of time until enterprises 1067 start deploying and adopting this in their companies. 1068 And two closing questions. 1069 Are you open to talk to new investors? Yes, definitely. 1070 Always happy to. I think the best format 1071 is always I say it Coffee Chat. Yes, 1072 we link your LinkedIn profile down here in the show notes and I do believe 1073 every investor that has been sticking around for over an hour, they'll 1074 be interested to talk to you. And of course you're looking
1075 for hiring talented employees. Yes, 1076 always. Right. And we're hiring on a rolling basis 1077 so we are not a company that just 1078 fills in gaps. We always look for exceptional talent 1079 whether it's AI, engineering, voice AI 1080 or also commercial roles, sales, solutions, 1081 engineering, forward deployed engineers. We're always open 1082 on a rolling basis for exceptional talent 1083 and we guarantee you and promise you also an 1084 exceptional environment to work on cool things and 1085 grow as a professional. Awesome. 1086 Hey Cobb, with such a pleasure having you here for 1087 our premium subscribers. We'll be back shortly with 1088 the Founders Vault. Thank you, thank you. 1089 Thanks for having me. 1090 That's all folks. Find more news, streams, 1091 events and 1092 interviews@www.startuprat.IO. 1093 remember, sharing is caring.
Partner with Startuprad.io
Startuprad.io is the leading independent media platform covering startups, venture capital, and innovation across the DACH region (Germany, Austria, Switzerland) and Europe. We offer B2B partnership opportunities for companies looking to reach startup decision-makers, founders, and investors.
Become a Partner — Learn about sponsorship and partnership opportunities
Contact us: partnerships@startuprad.io
Editor-in-Chief: Jörn "Joe" Menninger on LinkedIn
Subscribe to the Podcast
All podcast links: https://linktr.ee/startupradio
Frequently Asked Questions
1. What is memory-enabled voice AI?
It’s voice AI that can remember customer context across calls and use it to resolve issues faster.
2. How does Synthflow’s memory work?
Through persistent storage with consent, strict governance, and deterministic workflows.
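The consent-gated memory described above can be illustrated with a minimal sketch. Everything here is an assumption for illustration only: the `CallerMemory` class, its fields, and `build_call_context` are invented names, not Synthflow's actual API.

```python
# Hypothetical sketch: prior caller context is loaded only when the
# caller has opted in; otherwise the agent starts the call stateless.
from dataclasses import dataclass, field


@dataclass
class CallerMemory:
    caller_id: str
    consented: bool
    facts: dict = field(default_factory=dict)  # e.g. {"open_ticket": "T-1042"}


def build_call_context(memory: CallerMemory) -> dict:
    """Return remembered facts only if the caller gave consent."""
    if not memory.consented:
        return {}  # no consent: no memory enters the conversation
    return dict(memory.facts)


mem = CallerMemory("caller-7", consented=True, facts={"open_ticket": "T-1042"})
print(build_call_context(mem))  # {'open_ticket': 'T-1042'}
```

The governance point is the early return: memory never reaches the agent unless the consent flag is set.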
3. Does memory improve FCR?
Yes — significantly. Customers stop repeating themselves.
4. Can voice AI replace IVR?
In many cases, yes. It offers natural, fluid routing.
5. Is Synthflow secure?
Fully compliant: GDPR, SOC2 Type II, HIPAA.
6. What industries use this?
Healthcare, finance, retail, telecom, BPOs.
7. How fast is the AI response time?
Typically sub-second (≈400 ms).
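To make the ~400 ms figure concrete, here is a back-of-envelope latency budget for one voice turn. The component names and millisecond values are illustrative assumptions, not measured Synthflow numbers; they only show how a sub-second round trip could decompose.

```python
# Hypothetical per-turn latency budget (speech end to first audio byte back).
BUDGET_MS = {
    "speech_end_detection": 100,   # detecting the caller stopped talking
    "asr_final_transcript": 80,    # speech-to-text finalization
    "llm_first_token": 150,        # model starts generating a reply
    "tts_first_audio_byte": 70,    # text-to-speech starts streaming
}

total = sum(BUDGET_MS.values())
print(total)  # 400
```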
8. Does it integrate with CRMs?
Yes — Salesforce, HubSpot, custom systems.
9. Can it handle peak season?
Yes, with multi-region scaling.
10. What about hallucinations?
Deterministic flows eliminate them.
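A deterministic flow of the kind mentioned above can be sketched as an explicit transition table: the agent can only move along pre-approved paths, and anything unrecognized falls back to a human. The states and intents below are invented for illustration and do not reflect Synthflow's actual flow builder.

```python
# Hypothetical deterministic call flow: every transition is a table
# lookup, so the model phrases responses but never invents next steps.
FLOW = {
    ("greeting", "billing"): "billing_menu",
    ("greeting", "support"): "support_menu",
    ("billing_menu", "refund"): "refund_form",
}


def next_state(state: str, intent: str) -> str:
    # Unknown (state, intent) pairs escalate instead of guessing.
    return FLOW.get((state, intent), "human_handoff")


print(next_state("greeting", "billing"))  # billing_menu
print(next_state("greeting", "weather"))  # human_handoff
```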
11. How long is deployment?
Pilot → Production in 30 days.
12. Does it reduce cost?
Yes — through containment and FCR improvements.
13. Can humans escalate calls?
Instantly — with full context.
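"Full context" on escalation can be pictured as a warm-transfer payload handed to the human agent. This is a sketch only; the field names are assumptions, not Synthflow's actual handoff schema.

```python
# Hypothetical warm-transfer payload: the AI packages what it already
# knows so the caller never has to repeat themselves to the human agent.
import json


def escalation_payload(transcript: list[str], memory: dict, reason: str) -> str:
    return json.dumps({
        "reason": reason,            # why the AI escalated
        "summary_so_far": transcript,
        "known_facts": memory,       # consented memory, e.g. open tickets
    })


payload = escalation_payload(
    ["Caller asks about refund for ticket T-1042"],
    {"open_ticket": "T-1042"},
    "refund_above_agent_limit",
)
```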
14. How does it handle compliance?
Selective recording + encrypted memory + data locality.
15. What’s next for this tech?
Voice-to-voice LLMs with long-context reasoning.
Internal & External Linking
This Month in DACH Startups - November 2025 | Deep Dive
This Month in German, Swiss and Austrian Startups - November 2025 (Top News)
Camel Startups in DACH: The STS Ventures Playbook with Stephan Schubert
Authority Sources
https://synthflow.ai/
https://synthflow.ai/resource-library
The video is available up to 24 hours early to our channel members in what we call the Entrepreneur's Vault.
The Host & Guest
The host in this interview is Jörn "Joe" Menninger, startup scout, founder, and host of Startuprad.io. His guest is Hakob Astabatsyan, CEO of Synthflow.
Joe on LinkedIn
Hakob on LinkedIn
About the Host
Joern "Joe" Menninger is the host of the Startuprad.io podcast and covers founders, investors, and policy developments across the DACH startup ecosystem. Through more than 1,300 interviews and nearly a decade of reporting, he documents the evolution of the European startup landscape. Follow Joern on LinkedIn.
Support Startuprad.io
Voice AI with memory is redefining how enterprises handle customer interactions at scale. Companies building in AI and applied machine intelligence use Startuprad.io to reach founders, operators, and decision-makers across the DACH ecosystem. If that fits your goals, explore partnerships here: Partner with Startuprad.io