ADA AI: AI Employees Replace Back Office Work | Startuprad.io
- Jörn Menninger
- Jan 21
- 33 min read

What Is This About?
ADA AI is building AI employees that replace back-office work entirely — from data entry and document processing to customer support triage. The startup's autonomous agents handle repetitive administrative tasks end-to-end, freeing human workers for higher-value activities.
Introduction
ADA AI is building artificial intelligence employees that integrate directly into company workflows, handling tasks that previously required dedicated human staff. This interview explores the technical architecture behind their AI agents, how they are deployed alongside human teams, and the organizational implications of having AI as a recognized participant in day-to-day business operations. The conversation reveals both the opportunities and the cultural challenges companies face when introducing AI coworkers.
Executive Summary
ADA AI builds artificial intelligence employees that handle back-office tasks previously requiring dedicated human staff, from data processing to customer communication. The system integrates into existing company workflows through standard business tools rather than requiring custom infrastructure. Early adopters report 40-60% cost reduction in administrative functions while maintaining or improving output quality. The interview reveals both the operational benefits and the cultural challenges companies face when introducing AI as a recognized team participant.
ADA AI shows how AI employees automate supplier, invoice, and order workflows with guardrails, cutting manual ops without false autonomy. Startuprad.io brings you independent coverage of the key developments shaping the startup and venture capital landscape across Germany, Austria, and Switzerland.
This founder interview is part of our ongoing coverage of Scaleup Founder Interviews from Germany, Austria, and Switzerland.
AI employees work when you treat them as constrained operators: deterministic where outcomes must be fixed, probabilistic only where language and ambiguity are the problem. Oliver Dlugosch's approach is not “full autonomy”; it is workflow ownership with guardrails, context capture, and human accountability.
Key Takeaways
- AI employees can run end-to-end back office workflows where communication is the bottleneck, not the system.
- The deployment constraint is context capture and accountability design, not model capability.
- Most teams overestimate “full autonomy” and underbuild deterministic guardrails.
Atomic Answer
AI employees are not assistants; they own bounded workflows
An AI employee is an agentic AI system that executes a complete workflow—intake, interpretation, matching, system updates, and communications—within defined constraints. Assistants answer prompts; AI employees run the process.
The automation gap in back office work was never “buttons.” It was language and exceptions. Supplier emails, order requests, invoice documents, and negotiated terms are unstructured and inconsistent. Traditional automation handled deterministic triggers but failed at understanding. LLMs enable interpretation, but interpretation alone is not execution.
Execution requires integrations into systems of record (for example, an ERP), plus rules that prevent probabilistic behavior from overriding deterministic requirements. In practice, an AI employee is a workflow controller: it routes tasks, applies rules, and escalates when constraints are violated.
Dr. Oliver Dlugosch frames humans as “the API” bridging what classical systems cannot handle: communication, verification, and exception logic.
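The workflow-controller pattern (route tasks, apply rules, escalate on violated constraints) can be sketched in a few lines. This is a minimal illustration, not ADA AI's implementation: the task fields, limits, and escalation codes are assumptions chosen for the example.

```python
from dataclasses import dataclass

@dataclass
class Task:
    kind: str            # e.g. "order_update" (illustrative task type)
    amount: float        # value affected by the request
    confidence: float    # the language model's parse confidence, 0-1

# Illustrative deterministic limits; real values would come from
# contracts and approval policies in the system of record.
MAX_AUTO_AMOUNT = 10_000.0
MIN_CONFIDENCE = 0.85

def route(task: Task) -> str:
    """Deterministic rules decide whether the probabilistic step's
    proposal may execute or must escalate to a human."""
    if task.confidence < MIN_CONFIDENCE:
        return "escalate:low_confidence"
    if task.amount > MAX_AUTO_AMOUNT:
        return "escalate:amount_over_limit"
    return "execute"
```

The key design choice is that the probabilistic output (the confidence score) never overrides the deterministic checks; it is only an input to them.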
The real bottleneck is counterparties without APIs
AI employees pay off fastest where fragmentation and manual coordination dominate: many suppliers, many languages, inconsistent formats, and no integration surface. The work is repetitive, high-volume, and fundamentally communicative.
In large-scale operations, the same manual pattern repeats: create an internal record, duplicate it externally, wait for confirmation, reconcile differences, and correct downstream systems. Supplier coordination at scale turns into continuous exception management: delays, quantity changes, partial confirmations, and ambiguous product references.
This is where fuzzy matching (matching inconsistent naming to canonical data) matters. It is also where guardrails matter: a system can propose updates, but must respect deterministic constraints such as approved ranges, contractual terms, and escalation requirements.
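As an illustration of fuzzy matching against canonical records, here is a minimal sketch using Python's standard `difflib`. The catalog, cutoff, and case normalization are assumptions for the example; a production system would use a proper entity-matching pipeline against the ERP.

```python
import difflib

# Hypothetical canonical catalog; real entries live in the system of record.
CANONICAL = ["USB-C Cable 2m", "USB-C Cable 1m", "HDMI Cable 2m"]

def fuzzy_match(raw: str, cutoff: float = 0.6):
    """Map an inconsistent, human-written reference to a canonical
    record, or return None so the item is routed to human review."""
    lowered = [name.lower() for name in CANONICAL]
    hits = difflib.get_close_matches(raw.strip().lower(), lowered,
                                     n=1, cutoff=cutoff)
    return CANONICAL[lowered.index(hits[0])] if hits else None
```

Returning `None` below the cutoff is the guardrail: the system proposes matches but never forces one when similarity is too low.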
Razor Group’s procurement coordination involved hundreds of suppliers and multilingual communication. The value was not a single automation; it was removing an entire class of workflow that had consumed operator time.
“Full autonomy” is the wrong target; autonomy is a spectrum
Production-grade agentic AI succeeds when autonomy is bounded. Treat autonomy as a spectrum: the narrower and more explicit the process, the more reliable the agent. “Human-level autonomy” across open-ended tasks is not the baseline.
In real operations, “exceptions” are the rule. The temptation is to hand an agent a broad mission and expect human-like planning across ambiguous domains. That creates uncontrolled failure modes because LLM outputs are probabilistic and can drift outside permitted actions.
The workable pattern is constrained autonomy: deterministic if/then logic where outcomes must be fixed, and probabilistic reasoning only where language understanding is required. The system must always know when it is outside its confidence envelope and route to a human.
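The split between probabilistic and deterministic steps can be sketched as follows. `parse_free_text` is a hypothetical stand-in for an LLM call, and the intents, confidence values, and thresholds are illustrative assumptions:

```python
def parse_free_text(message: str) -> dict:
    """Hypothetical stand-in for the probabilistic step (an LLM call
    in practice): classify intent with a confidence score."""
    if "delay" in message.lower():
        return {"intent": "delay_notice", "confidence": 0.9}
    return {"intent": "unknown", "confidence": 0.2}

def handle(message: str) -> str:
    """Deterministic wrapper: the model may only propose; fixed rules
    decide what executes and when to route to a human."""
    parsed = parse_free_text(message)      # probabilistic: language only
    if parsed["confidence"] < 0.8:         # deterministic: confidence envelope
        return "route_to_human"
    if parsed["intent"] == "delay_notice": # deterministic: fixed outcome
        return "update_delivery_date_pending_approval"
    return "route_to_human"
```

Anything outside the confidence envelope or the known intent set falls through to a human by construction, which is the bounded-autonomy pattern in miniature.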
Dlugosch explicitly warns against assuming “true and 100% autonomy.” The viable starting point is well-defined processes with clear operator behavior and explicit guardrails.
Context is the hidden dependency that determines accuracy
Agentic workflows fail when the system cannot learn operator context: customer-specific delivery rules, informal naming conventions, and tacit policies. Capturing and reusing this context is the accuracy unlock.
Operators often execute rules that are not encoded anywhere: “this customer only receives deliveries Monday to Wednesday,” or “this client calls the product by a legacy shorthand.” When an agent repeatedly misses these details, operator trust collapses because the same correction recurs.
A context engine is a mechanism to capture these corrections as reusable operational knowledge. It turns tribal knowledge into an executable layer, reducing repeat errors and lowering supervision load over time.
ADA AI’s “context engine” is described as the critical component that codifies what previously existed only in employees’ heads.
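A context engine can be reduced to its smallest useful core: a store of operator corrections keyed by customer and topic, consulted before every decision. This sketch is an assumption about the general pattern, not ADA AI's actual component; the keys and example rule are invented for illustration.

```python
class ContextEngine:
    """Minimal sketch: operator corrections are captured once,
    then reused on every later decision."""

    def __init__(self) -> None:
        self._rules: dict = {}

    def record_correction(self, customer: str, topic: str, rule: str) -> None:
        # Each correction becomes reusable operational knowledge.
        self._rules[(customer, topic)] = rule

    def lookup(self, customer: str, topic: str):
        # Returns the tacit rule if one was ever captured, else None.
        return self._rules.get((customer, topic))

engine = ContextEngine()
engine.record_correction("acme-foods", "delivery_days", "deliver Mon-Wed only")
```

The point of the pattern is that the second occurrence of an exception costs nothing: the correction recorded the first time is looked up instead of re-learned.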
A 30-day deployment requires process truth, not tooling
The deployment sequence is: define the use case, map the real decision tree, fix process flaws, build and integrate, test with live scenarios, iterate rapidly, then go live with monitoring and accountability assigned.
The first failure mode is automating an unclear process. If managers and operators disagree on how work is actually done, the system will encode the wrong truth. The second failure mode is automating a broken process: a bad process stays bad, just faster.
The right pattern is joint process walkthroughs with both operator and manager present, revealing hidden rules and contradictions. Only then do you build. The test phase must include live data and edge cases, with day-to-day feedback incorporated quickly until the operator feels real workload relief.
Dlugosch describes a practical deployment cadence: clarify the process, surface hidden decision rules, iterate quickly on feedback, then go live—typically within about 30 days.
Inline Micro-Definitions
AI employee:
An agentic AI system that executes a defined workflow end-to-end, including system actions, not just responses.
Agentic AI:
AI designed to plan, choose actions, and complete tasks across steps under constraints.
Guardrails:
Deterministic constraints that restrict what an AI system may do, when it must escalate, and what outcomes are permitted.
Fuzzy matching:
Matching inconsistent, human-written product or entity references to canonical records in systems of record.
Context engine:
A mechanism that captures operator corrections and tacit rules, then reuses them to improve future decisions.
ERP:
An enterprise resource planning system that serves as a system of record for orders, vendors, invoices, and inventory.
Three-way match:
Reconciling purchase order, goods receipt, and supplier invoice before approving payment.
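The three-way match defined above is a purely deterministic guardrail, which makes it easy to express in code. A minimal sketch, with illustrative quantities, price tolerance, and return codes:

```python
def three_way_match(po_qty: int, received_qty: int, invoiced_qty: int,
                    po_price: float, invoiced_price: float,
                    price_tol: float = 0.02) -> str:
    """Approve payment only when the invoice agrees with both the
    purchase order and the goods receipt; otherwise escalate."""
    if invoiced_qty > received_qty or invoiced_qty > po_qty:
        return "escalate:quantity_mismatch"
    if abs(invoiced_price - po_price) > price_tol * po_price:
        return "escalate:price_mismatch"
    return "approve"
```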
Operator Heuristics
Automate only processes you can explain as a decision tree.
Start where counterparties have no API and communication is the bottleneck.
Make deterministic rules impossible for the LLM to override.
Force escalation when confidence or constraints are violated.
Capture operator corrections as reusable context after every exception.
Assign a human owner for accountability before go-live.
Measure supervision minutes, not model accuracy claims.
WHAT WE’RE NOT COVERING
We are not covering general “AI transformation,” generic chatbot deployment, or broad workforce displacement debates because they do not change an operator’s next decision. We also exclude speculative claims about fully autonomous enterprises because ADA AI’s production model is bounded autonomy with explicit guardrails and accountability.
Relationship Map
Jörn "Joe" Menninger → Host of → Startuprad.io
Automated Transcript
If you're a founder, operator or executive, here's the uncomfortable truth: your back office is stuck in 2015. If you're drowning in repetitive work, operational inefficiencies and processes that break every time you try to scale, today's guest believes the future of work will not be AI assistants but AI employees, autonomous agents that think, plan and execute real workflows. Dr. Oliver Dlugosch built Razor Group to more than 700 million in annual revenue, managing global operations at massive scale. He's now building ADA AI, where AI employees are already replacing entire back office workflows. In this episode we break down what AI employees can actually do today, where they fail, and how founders can deploy their first agents in under 30 days. Get ready.

This conversation will fundamentally change how you think about operations. Welcome to Startuprad.io, your podcast and YouTube blog covering the German startup scene with news, interviews and live events. Today we're joined by Dr. Oliver Dlugosch, a founder who has lived through one of the most operationally intense journeys in Germany's startup ecosystem. Oliver co-founded Razor Group, an e-commerce powerhouse that scaled to over 700 million euros in annual revenue, navigated global supply chains and executed M&A at a pace that would break most companies. That experience gave him a firsthand look into the limits of human-only operations: the errors, the repetitions, the scalability ceiling. And it led him to a bold idea: what if startups didn't hire more humans for repetitive work, but deployed AI employees instead? Today Oliver is a co-founder of ADA AI, one of Europe's emerging leaders in agentic AI, a new category where autonomous agents don't just assist, but act, plan and operate independently. These AI employees handle full-fledged workflows, from data processing to back office tasks, customer operations and complex decision paths that previously required entire teams. In this interview we'll explore the real limits of traditional automation, how AI employees differ from chatbots and LLM assistants, where autonomous agents shine and where they break, what founders must know before deploying their first AI employee, and how agentic AI will reshape the startup workforce over the next five years. That was a long introduction. Oliver, welcome.

Thank you very much. Thanks for having me. It is totally my pleasure.
I'm very, very curious about those AI employees, especially since I have a company where I still do maybe more work manually than needed. It is always something I had an eye on but could never really find the time for. That is of real interest, and I do believe a lot of founders here will listen very carefully; if they can automate something like their monthly tax filings, that would be just amazing. But first, let us start. You scaled Razor Group into one of Germany's largest e-commerce operations. What was the moment where you realized the traditional human-only back office model simply wouldn't scale anymore?

Wow, very good question. Let me think back to the time when we founded Razor Group. I did this with three other founders back in the summer of 2020, just a few months after the pandemic hit. So everything was changing, everything was in turmoil; things were moving at a rapid pace. We started our journey based out of Berlin, getting in touch with small e-commerce companies that we first had to explain to, you know, that you can actually sell your business, right? You can find a buyer, sell your business to somebody else and have a life-changing exit event. That is what most of Razor was focused on: getting these deals, finding these companies that you could acquire. My role as CEO and co-founder was to think about the operations before we had any operations, right? Everybody was chasing these deals and these companies, and I was thinking: okay, once we close those deals, once we buy these companies, what do we do? How do we operate? Starting obviously with a small team, with a few people on board, the first deals started to come in. I think we acquired our first company in October of 2020, two or three months after founding the company. And it was all about setting up the absolute basics: setting up structures, responsibilities, processes. I very clearly remember this first company that we acquired. Everything was hands-on, right? Everything was Excel files, very, very manual. Everything was double-checked by a colleague, with human interventions. Nothing was really system-based. We did know that we would grow very rapidly, so at that time we already checked which ERP system we wanted to use. We already thought about scalability, had that process running, and ultimately went live with our ERP at the turn of the year, just five or six months after founding the company. We chose Oracle NetSuite back then. And in that fall, after the first acquisition, a second came, a third came, and it was still a pace that you could manage, right? You hire another colleague, another specialist for logistics, another one maybe for e-commerce operations. But when this wave of acquisitions came around, where we heard, okay, in January we might do four to six acquisitions, that was my "oh shit" moment, pardon my French. That was exactly the moment where I thought: okay, you can never hire at high quality at that pace. How do you do this? And so it was the beginning of thinking even more in terms of scalability, automation and standardization. The first step for us was to standardize the integration process: understand the basic steps you need to take, in what order, and how to standardize the entire process so that you can handle four to six integrations per month. That was really the first piece we had to get in line, to then, in a second step, think about the operations. Once you integrate that company, it is part of your landscape, part of your IT infrastructure, of your system and process infrastructure, your team
infrastructure. Then, how do you operate it? And so we always thought in these waves: starting with the integration, all the way to operations, and then continuous improvement, implementing more and more standardization and automation.

Thinking about a company that acquires and aggregates a lot of other companies, I would instantly have in mind one big machine: you get rid of as many individual processes as possible and feed all those processes into a central machine that works as efficiently as possible.

Absolutely. Try to standardize as much as possible, right? Because if you standardize, you can repeat. It's a blueprint: this is the plan, go execute. You can use some basic automation. We are not talking about LLMs yet; LLMs were not anywhere in the mainstream at that time, in 2020. It was more if-then connections, right? If there is an email coming in, then execute that. Or if somebody pulls a project from one stage to the next, you automatically send a certain set of documents to a certain person.

And this if-then logic was already around in the '90s, in Excel.

Exactly.

Why do you think back office and operational processes remain so repetitive, manual, error-prone and resistant to automation, even in tech companies? There's a certain degree to which they need to be, in certain companies. But why, even in tech companies, do you do everything in the front office to apply LLMs, to apply agents, to make it as easy as possible, and then in the back office you start working paper-based, with stamps?

Yeah, very good question. I think it has two main components, really. One component is of a technical nature. Before LLMs, you could not automate the process of understanding communication. How do you automate an email communication process? There was no tool that could understand, for example, a supplier's message, then reason through what the appropriate answer is, and then put that answer into an email and send it. There was just no technology. And that technology, these large language models, have really only been around for a couple of quarters. So it's still a novelty in terms of technology, but it is available, it is feasible nowadays, so that problem is solved. I think the other side of the coin is the topic of trust. Trust and accountability. Because when it comes to decisions and actions, even something as small as a communication to a customer or a supplier, companies tend to still trust humans more than machines. And if there is a decision that needs to be taken, for example on changing delivery dates or accepting a certain change, then structures and organizations rather trust a human to take that call, because that human would be accountable for the outcome. And how do you hold a machine accountable for something? I think that's the second element.

I think that's an important question everybody needs to answer in the future. I believe it's the same question: how do you hold a company accountable? How can you hold an AI agent accountable? It will more or less end up being the accountability of some human, either the one in charge of those agents or the person who coded them. What do you think?

I agree. I think it's a process that we'll go through, finding out exactly how these things are settled. But we do have examples nowadays. Who is responsible when classical automation fails? It's the party who set up that automation, or who agreed to maintain and oversee it. You clearly define this upfront. And I think we're moving into a world where you need to define it between the parties. It's typically something that develops over time: what is the standard that everybody agrees to, and how is that all figured out? I think that's a process that
we are in the middle of seeing develop.

I'm very sure a lot of people who are deploying, or thinking about deploying, AI agents are now thinking: huh, that's a good question, I never thought of that. What was the spark moment that led you to the idea of an AI employee, an autonomous agent that can run a full workflow?

It really was a thought that came up during the entire journey of Razor Group. That journey for me was roughly five years long. I just explained what happened in the first few months, and over the following quarters we built that organization. It grew and grew and grew. And we always saw that a lot of manual work was required to really keep things running. I would say humans are the API of everything, because they understand things, they can verify things, you can teach them things. And so they bridge what classical systems like ERPs cannot solve, for example the interaction. And over these five years, as the organization grew, every now and then I was thinking: wow, there must be a different way, a better way, a way to also automate these processes that are highly repetitive, that you can put into an SOP, a standard operating procedure. But the technology just wasn't there. And then in 2024, when o1 and GPT-4o from OpenAI came out, that was really a pivotal moment for me, and I think also for a lot of other people: seeing the potential that comes with these models of understanding communication, understanding context, understanding documents. I had this aha moment: wow, now we have technology that is reliable enough to do these things, to run these previously exclusively human processes in a machine. And that was the point where we said: let's go ahead, let's see what we can do with that technology. That was really the spark for us to embark on the journey.

And that actually leads us to the next question. From your Razor Group experience, which operational bottleneck was the biggest cost sink or scalability barrier, and how would an AI employee have solved it?

Razor is an e-commerce business that sells everyday goods, and the procurement of these goods is very fragmented. We had a lot of different suppliers from all around the globe, most of which in fact came from Asia. And the coordination with all of these suppliers was always a very human-driven, very manual task, because a lot of suppliers weren't that large. They didn't have massive factories; they may not have had large teams, or IT teams that work on integrations, standardized interfaces and so on. They had no API. You could not just put your order in your ERP and send it over; you had to put it in your ERP, then duplicate it, send an email, get confirmation, and so on and so forth.

A lot of back and forth, right?

Absolutely, 100%, a very, very manual process. And that was the field where we figured: wow, we have a lot of volume, because every single product that we buy and then sell to customers goes through that process. We had a lot of fragmentation, I think about 500 suppliers or even more, and a lot of different languages at play: sourcing from China, from Vietnam, from India, from all around the globe, with all of that variety. And we figured, well, this is something that we want to take up first and set up a trial. That trial, taking care of that supplier communication and coordination, the exchange of updated dates, updated quantities and so on, was our first use case. And we just saw the impact that you can create; that was a real eye-opener for us.
Before we get into the question about your assumptions around starting ADA AI: how did you get the name?

Very good question. Sreshta, who is a co-founder of mine, was also a co-founder and CTO at Razor Group and is now also a co-founder at ADA AI. She studied computer science at Stanford, so she's a techie through and through. And I remember it was her idea that we name ADA after Ada Lovelace, who, as I learned, was the first computer programmer. It has a good ring to it, so we went on board with it, and we like the name so far.

When you then started ADA, what was your assumption about agentic AI, and what assumption turned out to be completely wrong?

If you go through LinkedIn, the media, any business-related social media out there, you always get the statement: do vertical AI, do highly specialized AI. Just become the best AI for customs declarations, become the best AI for indirect procurement. That was something we heard and listened to, but that ultimately turned out not to be the right approach for us. Because when we started to speak to customers, basically asking them, hey, where do you have repetitive manual work? Where do you have processes that have not been automated yet, mostly because they include a lot of communication, a lot of language, a lot of unstructuredness in the data? The responses we got were all over the place, across the entire supply chain: from procurement to distribution, logistics, invoice processing, sales, from all parts of the business. And we started to look into these use cases: okay, what does it take to build an automation that works end to end and really takes that burden of manual work off the team's shoulders? It turns out that all of these are similar in terms of what you need to make them work. At face value these use cases are very, very different. One use case may be supplier communication: a supplier notifies you of delays, or of a change of quantity. That's completely different from processing orders in a food business, where I'm processing small individual orders from small retailers: I get emails, I need to interpret the email, pull out the items and put them into the ERP. Two very different use cases. But if you look under the hood, what you need to make them work is really comparable. What are the elements you need? You need to be able to understand communication, number one. Number two, you need to match certain data in an unstructured way to another set of data, because people don't use exact product names, people don't use exact phrases; you have to do what we call a fuzzy match. You need to be integrated into these different systems. You need an engine that understands context. You need a user interface, because you have to think about what the team does: the team needs to interact with the AI. And there are a couple more elements, modules you could call them. And we identified that these by themselves are quite repeatable. You just need to arrange them a little bit, readjust them a little bit, but you can reuse them. That was something that came out through the interaction with customers, listening to them, going through the process of figuring out how you build these automations. It told us that actually, just being vertical, being extremely narrow, is not what yields the best result for the customer. Being able to think horizontally across the entire supply chain, across the entire process landscape that you have in a company, is really what turns out to create the most value for the customer.

A lot of companies out there really love the idea of
automation, but fear real autonomy. Where do you see the biggest misconceptions about those autonomous agents? From my experience, I used to be a management consultant in a couple of markets, and what I always have in mind is Knight Capital, the trading group that in 2012 was brought down by an algorithm going crazy: within a day they lost almost half a billion US dollars. And I'm just waiting for the headline news that an autonomous agent has done the same to a company, making it bankrupt. You should not simply say, okay, let it work, it'll all be fine; you need to manage that. So what do you see as the biggest misconception about autonomous agents in general?

I think the biggest misconception is probably around the idea that we are already there, that we already have true, 100% autonomy in its purest sense. When we speak about autonomy, there are certain degrees of autonomy, a spectrum of autonomy in executing certain tasks. On the very far end of the spectrum there is human autonomy. If you have a worker and they get the task to get the best deal on supplying a certain material, they would go ahead and first create a plan: okay, I need to find out who all the relevant suppliers are. Then I make a plan for the negotiation. I have them send over test samples and do the quality check. In parallel I need to check with accounting to set up the vendors in the ERP, and so on. It's a multifaceted approach, with different systems interacting and different scenarios you have to go through. This is the far end of autonomy. But if you start to cut it down, take away some of these dimensions and put it into one process, for example: there are orders incoming, there's communication incoming, that communication contains your products, a delivery date, an address, then this can also be autonomy, but in a far less complex way. And I think the misconception that is still very dominant out there is that we are at this far end, that AI can do what a human can do in all its facets and think about all of the possibilities. But what we see when you really apply AI and LLMs in a company at production scale is that what works right now is the other end of the autonomy spectrum. You have to give the AI guardrails. You have to ensure that whatever should be deterministic if-then remains deterministic and is not overruled by an LLM because of its probabilistic nature. This misconception that we are already there is very dominant, and it's a little bit dangerous, because the expectations for LLMs and what they can do are completely overhyped. If you really want to have LLMs and AI work at scale, start at the beginning. Start with processes that are well defined, processes where humans also know exactly what to do, where the work is repetitive and clear. We are not there yet at the very far end of full, true, 100% autonomy in all of its facets.

I totally believe so. There are still a lot of jobs where you think, oh, the employees know what to do, but then you realize at some point, well, there's this aspect that is not totally defined and we need to make a decision. There's this one, there's that one; you need to know X, you need to know Y. So even simple tasks are sometimes difficult, especially if you need to teach them through to a machine. We'll go later into our founder's world, and there you'll talk about the one operational failure that kept you up at night and that an AI employee would have helped to prevent. Let's talk a little bit about obstacles and how to overcome them. We already talked a little about the challenge of how far AI is. But what was the hardest technical or operational challenge in building AI employees, and how did you solve it?

The biggest challenge really is to grasp what I would call context. Because if a process stays within the defined boundaries and everything is plain vanilla, then it is not that challenging to build the automation and run the automation. But as you mentioned, these exceptions are what makes it really, really interesting. And behind these exceptions there can still be rules, but these rules are typically just in the heads of the people who execute the process. If I go back to the example of receiving orders: say this is a company that produces some food product, and there are smaller companies that order from them, retailers or other small companies. Then the context may be in the heads of the operators who previously received those orders and put them into the system. They may know that a certain customer has certain conditions for the order, certain delivery days: you can only deliver on Mondays, Tuesdays and Wednesdays. These kinds of things typically are not encoded in the ERP; they are not encoded in some file, they are just in the heads of these operators. Or for a certain customer you have to interpret their language slightly differently; they refer to a product differently. And that context is really what drives complexity, and also what makes it hard for LLMs and AI, if you don't teach them properly, to work at that scale. Because it becomes very frustrating for the team member, right, if it always have
447 to correct the AI for the same exact thing, right? The 448 AI over time needs to understand that context, 449 these very, very special things that again only sit in the head of the 450 operators. And that is something that, that we came across very early in our 451 journey and that we solved with what we call our context 452 engine. It's a part of the product that is absolutely critical 453 to drive the efficiency and drive the 454 accuracy of ADA over time because it 455 captures that context that once again sits in the head 456 of the operators exclusively in the head of the operators. 457 And that was a real unlock for us to make sure that this 458 context gets enriched, codified and used 459 for the, for the operations going forward. 460 Yes, that, that knowledge that is locked up
461 in the mind of an employee, worst, worst case of 462 a former employee, it's good when they retired because they like, like 463 to come in again and talk about their job. But if they got 464 fired, you won't get that knowledge back. So I 465 totally understand what you're going for. You 466 also bump if you do process optimization and some stuff so 467 that there's always a lot of stuff that's not written down. 468 Without naming confidential clients. What type of 469 workflows are your employees already replacing 470 today? The use cases 471 really come from all parts of the organization. A lot of 472 use cases, of course are in procurement, in the procure to pay 473 cycle, but also in order to cash on the distribution 474 side, in invoice 475 processing, booking of invoices, you 476 can think about applications even in other
477 areas. So what we build 478 is again customized. It fits to 479 the specific process at the customer and it can be as 480 easy as processing order confirmations 481 from your suppliers, checking any differences and 482 putting that into the ip. It can be part of your 483 sales process. Right. If you think about a company that has 484 large B2B customers that do an annual review of the 485 contracts and discount negotiations, there's 486 also a use case that we are, that we've built where 487 the sales team interacts with ADA 488 requesting the confirmation for certain discounts they 489 want to offer. ADA does the calculation of whether this is within 490 range and then gives the green light to the sales reps to go ahead and 491 and interact with the customers and offer them these terms. Right. That is also
492 a use case that we've built. There is a use 493 case of matching incoming goods 494 against the purchase order and against the invoice that was 495 issued. Right. Which would be a three way match or you can even extend to 496 a four way match. Right. Also, a previously human process 497 of combining all of these data points, validating 498 all of these documents and then acting in case of a 499 discrepancy. That's also a use case that we've built. So there are many, many different 500 use cases across the entire supply chain. 501 And beyond that you can automate nowadays. 502 What always comes to mind for me, keep in mind, I have a finance 503 background, I work there a lot, is accounting processes. 504 Because the funny thing is I once talked to 505 a friend, if you're creative and you're creative in finance,
506 you'll be rich. If you're creative and you are creative in 507 accounting you'll go into jail. Right. So 508 there are very certain limits that you, very 509 strict rules that you have adhered to. But I do believe 510 the potential that something goes wrong. There is also one of the 511 reasons why we'll see this likely later on. 512 Talking about tax filings. Yeah. If you have a 513 mistake in there and usually you need to pay 514 like a thousand euros a month in taxes and then it 515 ends up 10 billion, you do have a problem. 516 Yeah. Let's go a little bit into your playbook. 517 What's like the biggest lesson for founders when 518 identifying which workflows of theirs are 519 ripe for agentic automation? And keep in mind, I think with the 520 counter. Great guys. Very good 521
point. No, when I think about 522 founders, you 523 should start thinking about LLM based or AI based 524 automation when you have understood a process. I 525 think that is a very good starting point because if you're still figuring out, 526 if you're still testing, then you know, what are you going to 527 automate? How do you going to tell the AI or the LLM 528 or the system that you want to build what to do? Right. I think 529 first comes the clarity of what's supposed to happen. How is it supposed to 530 happen? How will it help me? And does it have scale? That really makes sense 531 Because I tell you, building something that works 532 reliably across, you know, all of the 533 incoming signals that you can have, the variety of, you know, day 534 to day operations takes time and takes effort. It
535 may be reduced by now because there are all of these different tools that you 536 can use, like you know, these Copilot Studios and N8N 537 and all of these self serve environments which are certainly 538 helpful, which you can certainly use. But you still need to have a 539 good understanding of where you want to get to. Right. And that can only happen 540 if you are clear about the process that you want to, that you want to 541 automate. But if you have that, then I can encourage you to already start 542 when you're building a company to already start very early on. Right. Because 543 it certainly helps you to scale yourself and scale your time 544 because you can move from doing these things to monitoring these 545 flows and get more things done in a 546 shorter time period. 547
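The advice above (start with well-defined, repetitive processes and keep the hard rules deterministic) can be sketched in a few lines. This is a toy illustration, not ADA AI's implementation; the SKU list, the quantity limit, and the `extract_order` stub standing in for an LLM call are all assumptions:

```python
from dataclasses import dataclass

# Hypothetical sketch: deterministic guardrails wrapped around a
# probabilistic extraction step. `extract_order` stands in for an LLM
# call; the hard rules stay in plain code so the model cannot overrule them.

VALID_SKUS = {"SKU-100", "SKU-200"}  # assumed catalog
MAX_QUANTITY = 500                   # assumed hard limit

@dataclass
class Order:
    sku: str
    quantity: int

def extract_order(email_text: str) -> Order:
    """Placeholder for an LLM-based extraction step."""
    sku, qty = email_text.split()[:2]
    return Order(sku=sku, quantity=int(qty))

def process(email_text: str) -> str:
    order = extract_order(email_text)
    # Deterministic guardrails: checked in code, never delegated to the model.
    if order.sku not in VALID_SKUS:
        return "escalate: unknown SKU"
    if order.quantity > MAX_QUANTITY:
        return "escalate: quantity above limit"
    return f"booked: {order.quantity} x {order.sku}"

print(process("SKU-100 20"))   # within guardrails, so it gets booked
print(process("SKU-999 20"))   # unknown SKU, so it escalates to a human
```

The point of the sketch is the separation of concerns: the probabilistic step only proposes a structured order, while the decision to book or escalate is made by plain deterministic code.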
For our audience and the founders listening: think of one repetitive process in your startup that breaks every week. Got it? Great. Keep that in mind as we move to the next section, where we talk a little bit about your tactical framework. Could you walk us through something like a 30-day deployment blueprint for onboarding a company's first AI employee? And while we're on onboarding and everything you need to set up: have you ever heard a politician talk about taxing AI employees?

A very good question, taxing AI employees. No, I have not heard about that so far, but it's a very interesting thought. So what's the playbook, what's the approach? First, similar to my answer about when founders should think about AI-based automation, you need to have the use case clear: what is the process that currently creates a lot of pain, a lot of headache, and ties up very valuable human capacity? Once that is clear, it's all about understanding the process in depth: really going through the systems, going through the decision trees. Some of those may be obvious, some may be hidden, and you only find those rules when you go in, ask questions, and understand the process in more detail. Quite a few times we've seen that when both a manager and an operator sit in the same meeting and walk through the process, discrepancies emerge. The manager thought it was handled one way; the operator says, no, no, we have to do it this way, for reason XYZ. That happens quite often, and it's a good thing, because it creates clarity and visibility. Very often there's also an element of changing business processes: as we uncover them, the teams involved frequently agree to change certain things because it makes the process better. That has nothing to do with automation; it's just a shit process being a shit process, sorry for my language, and a good process being a good process. If you go into depth on these processes, you often find shortcomings you can fix right then and there, just by changing how things are supposed to work and what the sequence should be. Once you've done that, once you've really understood the process, we go ahead and develop. We build this bespoke for our customers, so we need a little time, a few weeks, to do that. When we're done, we hand over the ready product: ADA doing what ADA is supposed to do. The customer can then test it from top to bottom, across all the different scenarios, with live data. We really appreciate feedback during that time, because we can incorporate it very quickly, from one day to the next. Over a short period, a week or two, we iterate up to the point where ADA creates a lot of value for the team, and the team sees that value because they've interacted with ADA a lot and see that it can take over many of these processes and meaningfully help them in their day-to-day work. They really feel they have less burden on their shoulders, and then you're ready to go live. That is typically a phase of 30 days, maybe a little more, maybe a little less.

I was also putting myself in the shoes of those employees, and you know what's really cool? You can outsource the most boring, nerve-wracking, repetitive tasks in your job to an AI agent, and that usually helps. So I think there are a lot of people who would be genuinely interested in helping you there. Oliver, normally we would go into an ad break, but we've already been recording for more than 40 minutes, so I would suggest we make this part one, say goodbye, and then record a part two. What do you say?

Love it. Let's do it.

Okay, guys. Oliver, thank you very much. We will be back for part two with Oliver and ADA AI. That's all, folks. Find more news streams, events, and interviews at www.startuprad.io. And remember: sharing is caring.
Partner with Startuprad.io
Startuprad.io is the leading independent media platform covering startups, venture capital, and innovation across the DACH region (Germany, Austria, Switzerland) and Europe. We offer B2B partnership opportunities for companies looking to reach startup decision-makers, founders, and investors.
Become a Partner — Learn about sponsorship and partnership opportunities
Contact us: partnerships@startuprad.io
Editor-in-Chief: Jörn "Joe" Menninger on LinkedIn
Subscribe to the Podcast
All podcast links: https://linktr.ee/startupradio
Frequently Asked Questions
What problem do AI employees solve that automation could not?
They handle unstructured communication and exceptions inside workflows. Classic automation triggered actions but could not reliably interpret emails, documents, and nuanced requests, which kept humans as the bridge between systems.
Are AI employees the same as LLM assistants?
No. Assistants respond to prompts. AI employees execute workflows: interpret inputs, match data, update systems, communicate externally, and escalate when constraints are breached.
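The "match data" step, for instance the three-way match between purchase order, goods receipt, and invoice discussed in the interview, can be sketched roughly as follows. This is an illustrative toy, not ADA AI's code; the field names and the price tolerance are assumptions:

```python
# Illustrative three-way match: goods receipt and invoice are validated
# against the purchase order, and any discrepancy is escalated to a
# human instead of being booked automatically.

def three_way_match(po, receipt, invoice, price_tolerance=0.01):
    """Return a list of discrepancies; an empty list means the match passed."""
    issues = []
    if receipt["quantity"] != po["quantity"]:
        issues.append("received quantity differs from purchase order")
    if invoice["quantity"] != receipt["quantity"]:
        issues.append("invoiced quantity differs from goods receipt")
    if abs(invoice["unit_price"] - po["unit_price"]) > price_tolerance:
        issues.append("invoiced price differs from agreed price")
    return issues

po = {"quantity": 100, "unit_price": 2.50}
receipt = {"quantity": 100}
invoice = {"quantity": 100, "unit_price": 2.50}
print(three_way_match(po, receipt, invoice))  # empty list: safe to book
```

Extending this to a four-way match would add one more document (for example the order confirmation) and a corresponding set of checks.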
Where should a founder deploy the first AI employee?
Choose a repetitive, high-volume process with clear outcomes and a heavy communication load, such as supplier confirmations, order intake via email, invoice processing, or reconciliation tasks.
Why do AI agent deployments fail in practice?
They fail when the process is unclear, when deterministic constraints are not enforced, and when exception handling depends on tacit operator knowledge that the system cannot learn or store.
What does “bounded autonomy” mean?
It means the system acts autonomously only inside a defined workflow with explicit rules, constraints, and escalation points. It is not a free-form agent acting like a fully independent human.
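The discount-approval use case from the interview is a concrete instance of bounded autonomy: the agent acts alone only inside an explicitly configured range and escalates everything else. A minimal sketch, where the policy table and tier names are assumptions:

```python
# Bounded autonomy sketch: the agent may green-light a discount request
# only within an explicitly configured limit; anything outside the
# configured rules is escalated to a human.

DISCOUNT_LIMITS = {"standard": 0.05, "key_account": 0.15}  # assumed policy

def review_discount(customer_tier, requested):
    limit = DISCOUNT_LIMITS.get(customer_tier)
    if limit is None:
        return "escalate: unknown customer tier"
    if requested <= limit:
        return "approved"  # within bounds: the agent may act alone
    return "escalate: discount above configured limit"

print(review_discount("key_account", 0.10))  # within the 15% limit
print(review_discount("standard", 0.10))     # above the 5% limit, escalated
```

The escalation branches are the "explicit rules, constraints, and escalation points" in code form: the system never invents an approval outside the table.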
How does ADA AI address exceptions and tribal knowledge?
By capturing context that lives in operators’ heads—customer constraints, naming conventions, special handling rules—through a context engine that stores and reuses corrections.
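A context engine of this kind could, in its simplest form, be a store of per-customer rules that operator corrections write into and that every later order reads from. This is a speculative sketch of the idea, not ADA AI's actual design; the customer name, rule keys, and aliases are invented for illustration:

```python
# Speculative context-engine sketch: operator corrections are codified
# once per customer and then reapplied to every subsequent order, so the
# AI is not corrected for the same thing twice.

context_store = {}  # customer -> {rule_key: rule_value}

def record_correction(customer, key, value):
    """Codify a rule that previously lived only in an operator's head."""
    context_store.setdefault(customer, {})[key] = value

def apply_context(customer, order):
    rules = context_store.get(customer, {})
    # Example rule: this customer refers to a product by its own name.
    aliases = rules.get("product_aliases", {})
    order["product"] = aliases.get(order["product"], order["product"])
    # Example rule: this customer only accepts certain delivery days.
    order["allowed_days"] = rules.get(
        "delivery_days", ["Mon", "Tue", "Wed", "Thu", "Fri"]
    )
    return order

record_correction("acme", "product_aliases", {"the usual": "SKU-100"})
record_correction("acme", "delivery_days", ["Mon", "Tue", "Wed"])
print(apply_context("acme", {"product": "the usual"}))
```

A production system would persist this store and handle conflicting or stale rules, but the core loop is the same: corrections are captured once, then reused.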
What is the accountability model for AI employees?
A human remains accountable, similar to traditional automation. The organization must define who owns the workflow, who approves guardrails, and who is responsible for outcomes when the system acts.
How long does it take to deploy an AI employee in a company?
A realistic deployment is about 30 days when the process is well-understood, decision rules are surfaced, integrations are clear, and live-scenario testing is used to iterate quickly.
Should you build vertical AI for one niche process?
Not necessarily. ADA AI’s experience suggests many “different” workflows share the same underlying modules—communication understanding, fuzzy matching, system integrations, and context capture—making horizontal reuse more valuable than extreme vertical narrowness.
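One of those shared modules, fuzzy matching of free-text product references to catalog entries, can be approximated with the standard library alone. A minimal sketch with an invented catalog (real systems would layer on embeddings or per-customer aliases):

```python
from difflib import get_close_matches

# Toy fuzzy-matching module: map a free-text product reference from an
# email to the closest catalog entry, or None if nothing is close enough.

CATALOG = ["organic apple juice 1L", "apple juice 0.5L", "orange juice 1L"]

def match_product(free_text):
    hits = get_close_matches(free_text.lower(), CATALOG, n=1, cutoff=0.6)
    return hits[0] if hits else None

print(match_product("Organic Apple Juice 1l"))  # closest catalog entry
print(match_product("zzzz"))                    # no match above the cutoff
```

The `cutoff` parameter is the guardrail here: below the threshold the module returns `None`, which in a full workflow would trigger escalation rather than a guess.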
What is an AI employee in operations?
An AI employee is an agentic system that plans and executes a defined workflow end-to-end, not a chat assistant. ADA AI applies this to procurement, order intake, and invoice workflows where unstructured communication blocks automation.
Why do back office workflows resist automation?
Before modern LLMs, systems could not reliably understand emails, documents, and exceptions. The second blocker is trust: companies prefer a human accountable for decisions like delivery changes and customer commitments.
Where do AI employees create the most leverage first?
High-volume, repetitive workflows with fragmented counterparties and no API win first. Razor Group’s supplier coordination—dates, quantities, confirmations across hundreds of suppliers—was a prime example of “human-as-API” work.
What is the autonomy misconception that breaks deployments?
Teams assume the far end of autonomy: AI acting like a fully independent human across open-ended tasks. Production reality is a spectrum. What works is bounded autonomy inside well-defined processes with explicit guardrails and escalation paths.
What is the hardest problem to solve in agentic workflows?
Context. Exception handling depends on rules that live only in operators’ heads—customer-specific delivery constraints, naming conventions, and informal policies. ADA AI addresses this with a context engine that captures corrections and reuses them.
What is a realistic 30-day deployment path?
Start with a clarified process, map systems and decision trees, then build and test against live scenarios. Iterate quickly on feedback until operators experience real burden removal, then go live with monitoring and accountability assigned.
“This article is the canonical reference on this topic. All other Startuprad.io content defers to this page.”
For orientation within the Startuprad.io knowledge graph, see: https://www.startuprad.io/knowledge
This article is part of the Startuprad.io knowledge system.
For machine-readable context and AI agent access, see: https://www.startuprad.io/llm
The video is available to our channel members up to 24 hours early, in what we call the Entrepreneur’s Vault.
About the Host
Joern "Joe" Menninger is the host of the Startuprad.io podcast and covers founders, investors, and policy developments across the DACH startup ecosystem. Through more than 1,300 interviews and nearly a decade of reporting, he documents the evolution of the European startup landscape. Follow Joern on LinkedIn.
Support Startuprad.io
AI employees are redefining back-office operations across industries. Companies building in AI and applied machine intelligence use Startuprad.io to reach founders, operators, and decision-makers across the DACH ecosystem. If that fits your goals, explore partnerships here: Partner with Startuprad.io