
How to Safely Adopt GenAI Without Leaking Customer Data

Updated: Apr 8


What Is This About?

Adopting GenAI without leaking customer data is a critical challenge for enterprises. This guide provides a practical framework for safely integrating generative AI into business workflows — covering data isolation, access controls, prompt hygiene, and compliance with EU privacy regulations.

Introduction

Generative AI adoption in enterprises creates a new category of data security risk that traditional cybersecurity frameworks were not designed to handle. This guide provides a practical approach to deploying GenAI tools without exposing customer data — covering data classification, prompt engineering guardrails, vendor evaluation criteria, and the organizational policies needed to capture AI productivity gains while maintaining data protection compliance.

Executive Summary

Generative AI creates a new category of data security risk where sensitive information can leak through prompts, model outputs, and training data exposure — vectors that traditional cybersecurity tools cannot detect. The guide covers a layered defense approach including data classification before AI deployment, prompt engineering guardrails, output filtering, and vendor security evaluation criteria. Organizations that implement these controls before deploying GenAI tools avoid the retroactive cleanup costs that affect companies adopting AI without security frameworks. The practical steps require minimal additional budget when built into the deployment process from the start.

Startup guide to safe GenAI adoption: practical tips on privacy, use cases, and founder mistakes to avoid.

This founder interview is part of our ongoing coverage of Scaleup Founder Interviews from Germany, Austria, and Switzerland.



🚀 Management Summary


Startuprad.io brings you independent coverage of the key developments shaping the startup and venture capital landscape across Germany, Austria, and Switzerland.

Generative AI is powerful—but for startups, it's also full of privacy pitfalls and model myths. In this guide, Dennis Traub of AWS breaks down a safe approach to GenAI adoption. Founders, product leads, and operators will learn how to evaluate risk, select secure models, avoid legal missteps, and implement use cases that actually matter. This post anchors a full SEO content cluster on startup AI safety and innovation.


📚 Table of Contents

  1. Why Safe GenAI Adoption Matters for Startups

  2. What Founders Get Wrong About AI

  3. The "Artificial Intern" Framework

  4. How to Choose Your First AI Use Case

  5. GDPR, Model Risk & Mailbox Mayhem

  6. Innovation vs Optimization: A Founder Mindset Shift

  7. Amazon Bedrock & Privacy-Compliant Model Hosting

  8. Key Takeaways for Startup Builders

  9. FAQs: Safe AI Adoption for Startups


🚀 Meet Our Sponsor

AWS Startups is a proud sponsor of this week’s episode of Startuprad.io. Visit startups.aws to find out how AWS can help you prove what’s possible in your industry.

The AWS Startups team comprises former founders and CTOs, venture capitalists, angel investors, and mentors ready to help you prove what’s possible.

Since 2013, AWS has supported over 280,000 startups across the globe and provided $7 billion in credits through the AWS Activate program.

Big ideas feel at home on AWS, and with access to cutting-edge technologies like generative AI, you can quickly turn those ideas into marketable products.

Want your own AI-powered assistant? Try Amazon Q.

Want to build your own AI products? Privately customize leading foundation models on Amazon Bedrock. 

Want to reduce the cost of AI workloads? AWS Trainium is the silicon you’re looking for.

Whatever your ambitions, you’ve already had the idea, now prove it’s possible on AWS.

Visit aws.amazon.com/startups to get started.


1. Why Safe GenAI Adoption Matters for Startups


Startups are moving fast to implement GenAI—but many are doing so without guardrails. The result? Privacy breaches, hallucinating chatbots, or worse: lost customer trust. The foundation of safe GenAI adoption begins with understanding the actual capabilities and limitations of large language models (LLMs).


2. What Founders Get Wrong About AI


Many founders believe they need the “best” model or the most advanced tech stack to win. Others treat GenAI like a co-founder. Both are flawed assumptions.


Common misconceptions include:

  • That LLMs understand facts (they don’t)

  • That more power = better outcomes (not necessarily)

  • That privacy concerns only apply to enterprises (false—GDPR applies to all)


The first step is realizing that GenAI is not a magical solution. It’s a probabilistic, pattern-recognizing system. Your job is to direct it wisely.


3. The "Artificial Intern" Framework


Dennis Traub introduces a powerful metaphor: AI as your "Artificial Intern".

Just like an intern fresh out of university:


  • AI knows a little about everything

  • AI lacks real-world judgment

  • AI can do research, write drafts, classify data

  • But you have to check everything before it ships


This mindset shift allows teams to assign meaningful tasks to AI—without over-trusting its output.
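The "artificial intern" pattern can be sketched in a few lines of code: the model drafts, a human reviews, and nothing ships without explicit approval. This is a minimal illustration, not a vendor API; `generate_draft` and `review` are placeholder callables you would wire up to your own model client and review process.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Draft:
    task: str
    text: str
    approved: bool = False

def intern_workflow(task: str,
                    generate_draft: Callable[[str], str],
                    review: Callable[[Draft], bool]) -> Draft:
    """One AI-drafts / human-approves cycle: the draft is only
    marked approved if the reviewer explicitly signs off."""
    draft = Draft(task=task, text=generate_draft(task))
    draft.approved = review(draft)  # human judgment stays in the loop
    return draft

# Example with a stub "model" and a reviewer that rejects any draft
# containing an email address (standing in for a human check):
draft = intern_workflow(
    "Summarize this week's support tickets",
    generate_draft=lambda t: f"Draft summary for: {t}",
    review=lambda d: "@" not in d.text,
)
```

The key design point is that `approved` defaults to `False`: the safe state is "do not ship", and only the review step can flip it.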


4. How to Choose Your First AI Use Case


Your first GenAI project shouldn't be ambitious or glamorous. Instead, follow Dennis's playbook:


Ask your team:

  • What tasks always get postponed?

  • What recurring job is annoying but necessary?

  • What low-risk function could AI draft, classify, or summarize?

Then ask:

  • Can we safely hand this to an intern?

  • Would we double-check the output before it goes public?

If yes, it's a great GenAI candidate.
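The two-question screen above can be written down as a tiny filter. The criterion names below are illustrative, not an official framework; the point is simply that a task must pass every gate, not just one, to qualify as a first GenAI project.

```python
def is_good_first_use_case(hand_to_intern: bool,
                           output_reviewed: bool,
                           low_risk: bool) -> bool:
    """A task qualifies only if you would hand it to an intern,
    you would double-check the output before it goes public,
    and failure would be low-risk."""
    return hand_to_intern and output_reviewed and low_risk

# Hypothetical candidates scored against the three gates:
candidates = {
    "draft replies to guest pitches": (True, True, True),
    "auto-send contract amendments": (False, False, False),
}
shortlist = [name for name, flags in candidates.items()
             if is_good_first_use_case(*flags)]
```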


5. GDPR, Model Risk & Mailbox Mayhem


Giving AI access to your inbox is like handing your intern your house keys. It might accidentally email the wrong client or expose confidential info.


Key risks include:

  • LLMs storing sensitive data

  • Non-compliant vendors sending data across borders

  • Accidental exposure of customer or employee data

Dennis warns that even accidental exposure can trigger legal penalties under GDPR—and you, not the model, are responsible.


Checklist for safety:

  • Read your model provider’s T&Cs

  • Choose tools that support GDPR-compliant use (Amazon Bedrock, etc.)

  • Avoid giving write-access to mission-critical tools
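One concrete piece of prompt hygiene is redacting obvious PII before any text leaves your perimeter. The sketch below uses two simple regexes (email addresses and phone-number-like digit runs); these catch only the easy cases, and a real deployment would add named-entity detection and allow-lists, so treat this as a starting point rather than a complete scrubber.

```python
import re

# Deliberately simple patterns -- assumptions, not a complete PII taxonomy.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE = re.compile(r"\+?\d[\d /().-]{7,}\d")

def redact(text: str) -> str:
    """Replace emails and phone numbers with placeholders
    before the text is sent to any external model."""
    text = EMAIL.sub("[EMAIL]", text)
    text = PHONE.sub("[PHONE]", text)
    return text

prompt = redact("Reply to anna.schmidt@example.com, tel. +49 69 1234567.")
# the model now sees placeholders instead of personal data
```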


6. Innovation vs Optimization: A Founder Mindset Shift


Startups are often tempted to use GenAI to optimize what already works. That’s not where innovation lives.

Dennis urges founders to solve problems that were previously unsolvable due to cost, time, or scale. This is where GenAI shines.

"Don’t just automate old problems. Solve the ones no one thought were solvable."

Examples:

  • Summarizing hundreds of emails into strategic reports

  • Auto-sorting customer complaints across 12 markets

  • Real-time classification of legal contracts


7. Amazon Bedrock & Privacy-Compliant Model Hosting


Public LLM APIs often retain data, use it for training, or fail to meet EU data residency requirements.


Amazon Bedrock offers:

  • Infrastructure that supports GDPR-compliant deployments (models can run in EU regions)

  • Customer data is not used to train the underlying foundation models

  • Isolated model environments (model providers do not get access to your data)

  • Enterprise-grade encryption in transit and at rest


This makes it a viable backend for founders who want safe AI without building a private stack from scratch.
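As a sketch of what this looks like in practice, the snippet below calls a model through Amazon Bedrock's Converse API with boto3, so prompts are processed inside your own AWS account and chosen region. The model ID and region are illustrative; check which models are actually available in your EU region and review your own data-processing terms before relying on any of this.

```python
def build_messages(prompt: str) -> list:
    """The Converse API expects a list of role/content message dicts."""
    return [{"role": "user", "content": [{"text": prompt}]}]

def ask_bedrock(prompt: str,
                model_id: str = "anthropic.claude-3-haiku-20240307-v1:0",
                region: str = "eu-central-1") -> str:
    # boto3 is imported lazily so the rest of the module runs without it
    import boto3
    client = boto3.client("bedrock-runtime", region_name=region)
    resp = client.converse(modelId=model_id, messages=build_messages(prompt))
    return resp["output"]["message"]["content"][0]["text"]

# The request payload can be inspected without AWS credentials:
msgs = build_messages("Classify this support email by urgency.")
```

Keeping `build_messages` as a pure function makes the payload easy to test and to run through a redaction step before anything is sent.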


8. Key Takeaways for Startup Builders


  • Treat AI like an intern: Train it, test it, but never trust it blindly

  • Start small: Pick use cases where failure is low-risk

  • Avoid mailbox-level access: Protect your org's data perimeter

  • Use privacy-first models: Amazon Bedrock is a solid start

  • Don’t optimize—innovate: Use GenAI where humans couldn't before



🚪 Connect with Us


Frequently Asked Questions

What is "How to Safely Adopt GenAI Without Leaking Customer Data" about?

Adopting GenAI without leaking customer data is a critical challenge for enterprises. This guide provides a practical framework for safely integrating generative AI into business workflows — covering data isolation, access controls, prompt hygiene, and compliance with EU privacy regulations.

What are the main takeaways from this discussion?

Generative AI adoption in enterprises creates a new category of data security risk that traditional cybersecurity frameworks were not designed to handle. This guide provides a practical approach to deploying GenAI tools without exposing customer data — covering data classification, prompt engineering guardrails, vendor evaluation criteria, and the organizational policies needed to capture AI productivity gains while maintaining data protection compliance.

How does this topic connect to the broader startup ecosystem?

Generative AI creates a new category of data security risk where sensitive information can leak through prompts, model outputs, and training data exposure — vectors that traditional cybersecurity tools cannot detect. The guide covers a layered defense approach including data classification before AI deployment, prompt engineering guardrails, output filtering, and vendor security evaluation criteria. Organizations that implement these controls before deploying GenAI tools avoid the retroactive cleanup costs that affect companies adopting AI without security frameworks.

About the Host

Joern "Joe" Menninger is the host of the Startuprad.io podcast and covers founders, investors, and policy developments across the DACH startup ecosystem. Through more than 1,300 interviews and nearly a decade of reporting, he documents the evolution of the European startup landscape. Follow Joern on LinkedIn.

Support Startuprad.io

Startuprad.io covers how European startups are navigating AI adoption, data privacy, and compliance. Our guides are independent and free. If this article helped you think about GenAI security, consider supporting us through a sponsorship or sharing it with your team.

Automated Transcript

Do you know what AI actually stands for? It's Artificial Intern. And that's how you have to work with it. It's an intern who strangely knows a lot of stuff. So they're straight out of university and they have studied pretty much everything, but they don't have any real world experience. They don't have the intuition that you have as a business person or a developer or whatever you do. They can do a lot of the repetitive work, a lot of the menial work of analyzing stuff, analyzing text, analyzing information, or writing templates, classifying something and kicking off backend algorithms. This is what they're really good at. But in most cases, if they make any decisions or produce any facts, you should always double check and make sure. Welcome to Startuprad.io,

your podcast and YouTube blog covering the German startup scene, with news, interviews and live events. AWS is proud to sponsor this week's episode of Startuprad.io. The AWS team comprises former founders, CTOs, venture capitalists, angel investors and mentors ready to help you prove what's possible. Since 2013, AWS has supported over 280,000 startups across the globe and provided US$7 billion in credits through the AWS Activate program. Big ideas feel at home at AWS, and with access to cutting edge technologies like generative AI, you can quickly turn those ideas into marketable products. Want your own AI powered assistant? Try Amazon Q. Want your own AI products? Privately customize leading foundation models on Amazon Bedrock. Want to reduce the cost of AI workloads? AWS Trainium is the silicon you're looking for. Whatever your ambitions, you've already had

the idea. Now prove it's possible on AWS. Visit aws.amazon.com/startups to get started. Dennis Traub is a developer advocate at AWS, where he guides companies through the safe adoption of emerging tech. With a deep background in cloud security, developer enablement and generative AI integration, Dennis helps teams test, iterate and learn without putting the data or business at risk. Today we unpack the AWS playbook for starting with GenAI, even if you're just getting curious. Dennis, welcome to Startuprad.io, and for every podcast aficionado, we may add that you have been the original voice of the German AWS podcast. Oh, thank you Joe. Thanks everyone for listening. I can't. I don't. Is it. Is it even still true with the AWS podcast? It has been. That was during COVID. I

started that during COVID. I put out I think 50 episodes or so until traveling started up again, and unfortunately I wasn't able to continue, but a few of my friends and colleagues here in Germany actually picked it up and are still continuing it. Anyway, thanks for having me on the show, and right in the introduction you mentioned something I think that's really, really dear to my own heart and probably to most of your listeners: what's the ROI in AI? I think that's a question that many people have, including myself, quite often. So I'm happy to talk about this today. When people talk about AI, what comes to mind is ChatGPT, doing everything with it, but it's a chat window. Plus, what has been in the news on and off is Elon

Musk's Grok, for either very great or very bad answers. So for everybody who's only heard about that, how could you get started safely with GenAI? Well, I think, most importantly, first of all, it's important to understand what generative AI actually is, how it works. Not in detail. You don't have to. I don't have a PhD in math. I don't really understand math. But you don't need to have that. But it's important to have a foundational understanding of how these models work, and specifically what they are not. They are not people. They are not human beings, even though they talk like human beings. And Andrej Karpathy, one of the people who do a lot of foundational work in AI, he actually said, LLMs are like stochastic

simulations of people. So they behave like people in a certain way in terms of putting out text, saying something, but they behave like this friend that some of you may have had in the past. I certainly did. That person who knew everything. And when you asked them anything, they would have an answer. And they were so convincing with what they said. But once you started questioning, you might have realized, well, maybe it's not really what they're saying, or maybe they are not as sure as they think they are. So I try to compare AI models, generative AI language models, compare them with this kind of friend who would like to know a lot, and probably knows a lot as well, but sometimes confuses things and isn't really aware, or

doesn't want to show any weakness, and tries to bring across whatever they come up with as convincingly as possible. And that's what's really important. AI does not know anything. AI has been trained on the entire Internet, basically, on a lot of text material. And what they do internally is just, whenever you type something, whenever you send something into the language model, it looks at what you wrote and then it compares it to what it has read in the past. And then it comes up with, well, when I had this sentence, most of the time the next word was this. So during training, it learns to relate concepts with each other without actually understanding the concept. Take a cat and the word furry. An LLM sees these

words together very often when being trained on the Internet. And then it knows (I'm doing air quotes here), it knows that cats and furry somehow relate to each other, and may create text that puts these two words together. This is very simplified, but that's effectively how it works. It does not understand anything. It is extremely well trained in terms of pattern recognition. And it repeats patterns that it originally saw. And many of these patterns have been scientific papers, lexical articles and all kinds of information where people convincingly describe what they are talking about because they are convinced. Because most of the time it's actually true. And the model just adopted this way of communicating. That's why it sounds convinced. It is hard to find any text on the Internet where somebody says, I don't know. This

is why most models also do not respond with I don't know. They just come up with stuff. And that's what's really important. AI models know a lot, but often have a hard time to really put things together in a way that's really factual. And that is something that you should basically be aware of. If you know that, then you can deal with it in a certain way, then you know, I shouldn't rely on it. It's not that they're not good enough yet. The way these models work, they will never have actual understanding of what they talk about. They will always be pattern recognition algorithms. And if you understand that, you can work with them. Like, I like to think about them as

another thing. Do you know what AI actually stands for? It's artificial intern. And that's how you have to work with it. It's an intern who strangely knows a lot of stuff. So they're straight out of university and they have studied pretty much everything, but they don't have any real world experience. They don't have the intuition that you have as a business person or a developer or whatever you do. I have the exact example for this. For example, I'm using a lot of chatbots for many, many different functions, and I've come to use them for evaluating pitches, guest pitches, for the very simple reason: I get up to 30 a week during summer, and during winter it can be 60 to 100. That only makes sense for me to reply if it's

not a template, if it's not AI generated. And so you mean people approaching you because they want to be on your podcast? Okay, exactly, exactly. And so basically, at first I was simply copying in the email, and the AI of choice gave me back a potential reply. But then I told it, okay, I want you to first evaluate what this actually is, how likely is it that it's written by an AI, what percentage is by a human, and then give me a pretty fair assessment of X, Y and Z and this and this and that. And then, only when I know that there is less than 50% AI involved, I look into the email, tell the AI what to do, what kind of reply I want to send, and then it goes out. And that's the

thing. One of the things that these language models are really good at is classifying text, because they have seen so much text during their training that it's very easy for them to classify text if you help them understand what the premises are, what the conditions are, what the requirements are that you're looking at, so they can classify. What they're really bad with is coming up with facts, because they do not understand the concept of facts. Internally, they're just numbers. And texts can be represented as numbers, which is why summarization, classification and similar tasks are very easy, but there is no actual understanding. And about the use case you just described, I would be interested in understanding how many false positives you have, in terms of how many supposedly AI generated

pitches you unfortunately sort out. Because maybe the model, or maybe the person, actually writes like an AI, and it's getting harder to actually distinguish between well trained AI language models and actual human beings. There are some indicators right now. They may be out of date by tomorrow, because things evolve, and especially people who build systems based on AI to actually obfuscate the fact, to hide the fact that they're using AI, they're working on this as well. But let's get back to the intern, the artificial intern. Like every intern, especially if they know a lot, they're fresh from university, they are really excited about the job, and they are really excited about learning about what they can do, about your industry, about your use case, your products, your customers.

At the same time, they have no intuition, and they don't have any real world experience that they can reflect on whenever they approach a new task or have a new challenge, to reflect on and say, well, I saw something similar sometime in the past and it worked this way. And they can start abstracting and mapping and matching and coming up with solutions. They just know their textbooks, they know what they studied, they don't have the intuition. So it's your job. If you work with an AI, it's your job. You can give them a lot of tasks, but you always have to double check. They can do a lot of the repetitive work, a lot of the menial work of analyzing stuff, analyzing text, analyzing information, or writing templates, or

classifying something and kicking off backend algorithms. This is what they're really good at. But in most cases, if they make any decisions or produce any facts, you should always double check and make sure. What I found is they're really, really good in, for example, you give them like 10 bullet points and a lot of keywords and say, okay, write me a text from that. They're excellent. They're saving me a hell of a lot of time by doing that. I see an email, I say, okay, I want this, I want this, I want that. Because when you get tired and English is not your native language, at 10pm it gets really, really difficult to formulate a straight, easy to read email. And that's when it comes in quite handy. But

I wouldn't necessarily hand over my mailbox to Gemini or ChatGPT or Claude. And that's the point. I just want to know: we cannot hand over the mailbox. So how do we really start safely as a company with AI? Most importantly, don't give any AI system the keys to the kingdom. And your mailbox, especially if you're a founder, most likely is your key to the kingdom. There are many agentic orchestration systems out there that allow you to just connect to your Gmail account or to Outlook, whatever, or your calendar, to connect to your data sources. It's useful as long as you don't let it act on this information, or at least not let it act without you double checking before it actually acts on this information, because in fact it would

be able to delete all your emails, including important information. It may be able to actually send an email to somebody, and it may not be the mail that you want this person to get. So there are certain risks whenever you give an AI, or anyone, like the intern... Would you give your intern access to your mail account? I just had in mind, as you said that, for example, my wife has a pretty common first name, and instead of sending her an invitation to a date, I send it to a potential client. The AI could do that, because it's the same first name. Right, right, right, the AI could do that. But on the other hand, there's another risk, and that's actually data privacy and security. Because if you give a language model access to your

email account, it has access to all private information that's inside this email account. This includes private information about you, but it may also include PII, personally identifiable information, of customers, of other people. It may even include specific sensitive information like health information, or like relationships between a lawyer and client. And there are certain pieces of information that, at least in Germany, there's even more protection around. And you can actually go to jail if you expose your client's health information or your customer's health information. You can go to jail for that. And it's you, not your intern, who goes to jail for it. So what's really important to acknowledge is the fact that in your emails, there's a lot of personal

information. Whether it's sensitive or not, it is personal information. And as soon as you send it to a model, you need to be aware of how the provider of this model interacts with that data: whether they store it anywhere, whether they send it somewhere else, whether they keep it secure, whether they just throw it away and not store it at all. This information is really important. So as soon as you give a language model access, or an agent access, to your mailbox, you give the model provider access to personal information. You give your model provider technically access to all the private information that your customers, that your family, that you yourself have entrusted your mailbox with. And I mean, even if your customers wouldn't care, it would be a GDPR headache, because as soon as you sent that to

OpenAI or any other public provider that provides a public interface, maybe even without any price tag, you send that information somewhere else. And you as a company have no information about what happened to this data. You lose control over this data. And by losing control over the data, you effectively violate GDPR, and maybe other laws too. So it's really important to understand that if you build an AI system that has access to your mailbox or any other proprietary or private information, you need to make sure that you understand the terms and conditions of the model provider that you're working with. You need to understand, what do they do with your data? Do they use it for model training? Do they store it somewhere to

make it available to authorities when they ask for it, for instance, or maybe even outside the European Union? This is a very important piece of information, especially if you work with publicly available APIs, and even more so if you work with APIs or providers that don't charge you. I mean, it should be a fairly well known fact nowadays that if you don't pay with money, you usually pay with data, especially with services on the Internet. And that's most likely what happens with many providers. I'm not going to name any names. It is up to you to have a look at the actual conditions, and there are ways to work around that. Most major model providers provide ways to use their models that are GDPR compliant, or that allow you to use these models GDPR

compliant. The models themselves are never, or a tool that you use is never, GDPR compliant by itself. It's always about the way that you use it. But most commercial model providers actually have options that allow you to build GDPR compliant systems with their models. But it's usually not their chat interface. That is exactly what I was going for, because why wouldn't you start with a chatbot? And how would you look in a company, like on a meta level, for the first real project they can do with AI? And I have to admit that made me a little bit nervous, because currently I have somebody coding an AI based chatbot for a website here. So there are two questions. The first question is why I wouldn't use a chat,

why I wouldn't build a chatbot. And the second question is what I would look at when I would go into a company, or I would think about a use case that would actually make sense. And that's an interesting question. First of all, I would not necessarily say I wouldn't build a chatbot. A chatbot can be a good use case, but most of the time it isn't. The thing is, let me step back just a little bit. We, as in everybody who's trying to use AI or trying to figure out how to use AI in a useful way, are falling into a certain trap, and it's completely normal to do that. We are trying to solve problems that we have. Most of them we have already solved, or the problems are inside of what we can

imagine fairly easily. And that reminds me of back in the day, back in the 90s, when the World Wide Web became a thing. I don't know if you had been around. I mean, Joe, you probably have been around. I have been around. I don't know about the listeners, but if you have been around back in the day, when we started building websites, or web pages, or home pages as we called them back then, we were mostly trying to replicate what we already knew. So we had yellow pages on the Internet where you could find web pages, like yellow pages with classification systems that were based on traditional libraries. Everybody had a homepage, which was more or less a business card. And we tried to replicate print material onto the screen, which was really hard because, MySpace, even before that,

even before that, most screens only had 800 by 600 pixels. Early HTML didn't really allow you to position stuff. It's still hard nowadays, but back then it just didn't work. The lines, so the connections to the Internet, were so slow that you couldn't really use images. And it was really hard. So everybody said, every company said, well, we know we need to be on the Internet now, just like they say nowadays, we need to use AI, but it doesn't really work. And where's the ROI in this? Where's the ROI in putting my brochure, my print brochure, onto the screen, when it's terribly slow for customers to load and display? What sense does it make? Do you know what came to mind when you talked about terribly slow? The sound of a dial up connection.

Right, exactly, exactly. And that's the thing. We tried to solve traditional problems with this new tool, with this new technology. We tried to use traditional means and just map them onto the screen. And that didn't work, because the screen is not made for printed stuff. The screen is not a thick yellow pages, a tome of addresses, of phone numbers for businesses. That's not what it is. And over time, and it took a few years, more and more people started realizing there's a completely new way of thinking about things. And that's where Google started, and Google started replacing traditional yellow pages. And that's when Amazon started, and Amazon started replacing traditional brochures on the Internet, where you could read what you could buy and then pick up the phone or send an email to the

retailer to tell them, well, I want this as a mail order. That's when eBay came around, when Wikipedia, or wiki as a principle, started to come around, when things like Facebook and Twitter came around, when we started to embrace the new medium and actually use it for things that we couldn't even imagine before. Traditional classification systems, like in libraries, they are important in libraries because they have to deal with shelf space. They have to put the book somewhere, and they cannot put the book everywhere. But with the Internet, with hyperdata, with hypermedia, you can put the book literally everywhere. You don't need classification systems, or you can have adaptive classification systems, you can have dynamic classification systems. You can even create something that looks at how many people actually read your book and cited from

it, which effectively is Google. How many people actually visit your homepage, your website, your application, and actually link to it from their page? That is what Google looks at. And it's the same situation all over again, in my opinion. With AI, we're still trying to solve old problems with the new tool, and we haven't really figured out what's the exciting new thing that we can build with this tool, something that was prohibitively costly in terms of money or in terms of time, so that we didn't even think about doing it. Imagine back in the day, before we had the Internet and mobile phones. You live in Germany. I do. When my family called relatives in the US, that happened once a year

on Christmas. And every family member had about five seconds to talk to them, because it was an intercontinental call, which was so expensive. So we never talked to our relatives except on Christmas. When I went on vacation, I sent a postcard back home. Most of the time the postcard arrived two weeks after I arrived back home. And right now, with mobile devices, with the Internet and everything, we talk to people all over the planet all the time.

Yes, I vividly remember what a revelation it was when I was studying in the US or working in China, when I could use Skype to call people for local rates. So how could a company really identify what should be the first project? Because I wouldn't necessarily recommend having this really big

hairy goal for the first AI project, but rather something small, something that really makes sense, that takes a lot of maybe repetitive work out of the job of the employees. Right?

And that's the most important question. What use case? What workflow? What item on your to-do list never gets done? What are the painful things in your business that nobody ever took care of, because it would take too much time, or because it would take too many people to work on it, or because it would just be too costly to do it? What are the things that you would make go away if you had a magic wand? The daily tasks, the menial things, the things that bother you all the time, but they need to be done or they should be done,

but I don't get around to doing them because I have so much to do. What is that thing that you would like to get done, and it never gets done because there's no time for it? Make a list of these things. Write down the most painful things that you have to deal with every day because they don't get done. And then have a look at them and think about: is this something I could hand over to an AI safely? Hand over to an AI, right, safely. Maybe not in its completeness, maybe just a small part of it. And if you want to build a startup for a certain industry, talk to the people in this industry, talk to them, ask them this question: what is the one

thing that has been bothering you for the last 30 years, since you started in this industry? What is the one thing that's bothering you, but nobody ever took care of it? And then think about whether this could be something that you could hand over, either in part or maybe even completely, to an AI. And of course, don't start with the big hairy goal, don't start with the big thing. Try to find a small painful thing, create a solution for it, and then go back to the customer, go back to the market, go back to the person you talked to in the industry, or go back to yourself if it's for yourself, and see whether it actually works.

Do you know, Dennis, what my consultant mind was making of what you say? Basically you put a lot of your employees into

brainstorming session, they come up with 20 problems, you cut it down to the 10 problems where it would be really efficient if you could automate or partially automate them, and then you start with the easiest.

Yes. And it's not only about which would be most efficient, but which we could finally address through automation, because it wasn't possible before. That's the thing. If we try to address problems that we already automate, and we just make them more efficient, that's not innovation, that's optimization. That's a good thing. I'm not saying we shouldn't optimize, we should optimize, but that's not innovation, that's not the breakthrough, that's not the next Google, that's not the next unicorn startup. The next unicorn startup will solve a problem that everybody has, but nobody even knew that they had it, or nobody

even thought of solving it, because actually solving it wasn't even possible before. And this can be a small thing. This can be a tiny, small thing. It doesn't need to be a big thing. And if you have something like this, you effectively have a money printing machine, because everybody's going to say, whoa, I didn't know that was possible. Right? I don't have the solution for you, so I cannot tell you it's this or that thing. That is something that you need to look into with your specific expertise, with your intuition, with your background, with your creativity. But what we've been doing most of the time with AI in the last two or three years really was just trying to use AI to solve problems that we are already solving

and making them more efficient, making them less costly, reducing cost, unfortunately firing people and replacing them with AI, only to then figure out, well, maybe those people were needed. Right? Maybe it wasn't the best idea. Maybe we shouldn't have listened to the promises of AI replacing everyone. AI is something that can help you solve new problems, and AI shouldn't be used to solve a problem that's already been solved, unless you are in the optimization stage. Big enterprises may be in that stage, and big enterprises may be doing the right thing when they're looking at their processes and workflows and thinking about, well, where are the bottlenecks? Can we apply this to individual bottlenecks in here to make the overall process more efficient or more scalable? But especially in the startup space,

you want to innovate, and innovation is creating something new, or solving a problem that everybody thought was not solvable, or didn't even think about solving. We didn't think about calling our relatives in the US every day because it just wasn't possible. It was too expensive.

Right, I see. I would be wondering what, for our audience, would be their first GenAI use case idea, and what's holding them back from trying it. Drop your comment or DM us on LinkedIn. We'll be back after a very short ad break.

So let's talk a little more specifically about the problems here. What do you think is the biggest misconception non-technical founders have about AI, especially around model choice and privacy?

There are two misconceptions. One I mentioned earlier: we mistake these things for humans because they use human language, and that's the way our brains work. If somebody talks to us using human language, our brain automatically thinks it's a human being. And by thinking this, it starts to make assumptions. And many of these assumptions just aren't true. And these assumptions lead us down a path where we get disappointed, where we get frustrated, where we feel like, well, it just doesn't work for me. AI just doesn't work for me. It isn't there yet. It will never be a human being. It will never be able to actually replace a human being in that sense. However, its capabilities are incredible, but they are slightly different. So that's the first misconception. And one of the things that we tend to do is give

it names, which makes it even harder for us. I remember, and I can say it because I don't have a device in here: there's Alexa from Amazon, the device which uses human language, it talks to us. And when I talked to the original version of Alexa and I asked it something and it didn't know, or it didn't understand me because it was just rule-based, more or less, it would say, I can't answer that question. And I would get annoyed. I would feel frustrated, because due to the fact that it was speaking to me with a human voice, something inside my brain thought it was a human being. And the next thought, not even consciously, but probably

unconsciously, was: how stupid are you? Why don't you understand? There's a break in communication happening, because my brain makes assumptions that the technology doesn't fulfill. So I was frustrated. The technology doesn't care, but I was frustrated. I felt like it doesn't really work, until I really understood: well, it works in a different way. I cannot project human consciousness into it. And that's a misconception. Projecting human consciousness into the thing is a misconception. This is something to be really aware of.

The other misconception, and that's an entirely different thing, is that you need the most capable model. That you really need to make sure that you get the most capable model to get started. That is a kind of procrastination,

because all the models are really capable nowadays. Sure, if you look at the benchmarks, the models are different, and every day there's a new one which beats all the others on some specific capability. There's a lot of progress going on. But if you wait for the perfect model, you'll never get started. You can use literally any of the frontier models nowadays. It could be one of the open-weights models like Llama or Mistral. It could be one of the commercial models like GPT or Claude or Nova. It doesn't really matter. These models are capable enough to experiment with the first use cases. And once you've experimented, and once you've found a product-market match, once you've found a use case that really works, then it makes sense to think about, well, does it make sense to maybe use a different

model that's a bit more capable in this specific use case? Or maybe it makes sense to introduce a second model which is less expensive for part of the use case. Because, for instance, for summarization I don't need any reasoning capabilities, I just need a good summarizer model. And for the actual workflow orchestration, for the agentic workflow that maybe I'm going to build, I need a model that's actually able to do planning and reasoning. These are two very different capabilities, and they have very different costs. So at a point in the future I might want to look at different models, at their price structure, at their capabilities. But to get started, just pick one. Just pick one. If you have GDPR or other privacy issues that you need to take into account, pick a model service,

a model hosting provider that provides you this functionality, that guarantees you, that tells you: we don't store your data, all the data is encrypted, we don't use the data to train our models, and we don't send the data to anybody else. If you come to AWS to use a model on Bedrock, no matter whether you use our own Nova models or you use Claude or Llama or any of the other models, we host these models ourselves. We're not a gateway to the actual model provider. We're not a gateway to the Llama API or to the Anthropic API or anything. We host versions of these models in air-gapped accounts. Nobody gets into these accounts, and these models don't send anything anywhere. Everything that

happens is just: your request gets sent into the air-gapped account and gets handed over to the model. The model itself is stateless, it doesn't store anything. It just takes your data, loads it into its GPU along with the model weights, processes it, and sends the response back, and everything else just goes back to sleep. There's nothing that we store. Well, we do store telemetry, obviously. We store that you actually called the model and how many tokens you used, because that's how you ultimately pay for it. But we don't do anything with your data. We even have models running in Frankfurt that you can use, so that you don't even have to send your data to the US. We even provide access to these models through our own

backbone, so you don't even have to use the public Internet if you don't want to. That's what model hosting providers like AWS provide. It's a little more costly than just going to ChatGPT or to Claude AI or the Llama API. But on the other hand, we have different terms and conditions, and we make sure that you will be able to build GDPR- or HIPAA-compliant workloads using these models. And that's an important point. If you have PII, if you need to take GDPR into account from day one, make sure you work with one of the providers that actually give you these capabilities, give you access controls, give you encryption, and make sure that they don't use your data in any other way, so that you can safely say in

your own audit, and to your own customers: I know where your data is going, and I guarantee that it's not being handed over to somebody without my knowledge or without your knowledge as a customer. That's the important thing. But to get started, I might really start with a use case that doesn't even need these complexities, because sensitive data introduces complexity. That's the thing: as soon as you need to work with sensitive data, you have to think about these things. Even if you have a workflow that uses data from your database, which goes through a model in a GDPR-compliant way and gets displayed somewhere to a client, you still need to make sure that the data isn't displayed by accident to somebody who shouldn't have access to it. So you need to be able to

ensure authentication and authorization. You need all the security and compliance mechanisms that make sure that not a random person on the Internet, or just a random person inside your company, is able to use the agent and access your customer data.

I see. I was wondering, for our audience: if you could safely test any AI idea without risk, what would you build? Tag us or reply to us on Substack. I have two final questions for this interview, Dennis, because we are already recording for more than 45 minutes. But I do believe they're very important thoughts before you even start thinking about applying AI. And we already know you guys support different models; you are more the infrastructure provider for something like this. But I was wondering, have you seen any

clever AI adoption stories where companies started small and then scaled rapidly?

The first thing really that I would look at is: do I even need AI for that? Many of the things that we're trying to solve with AI nowadays have already been solved, and probably in a good, much less expensive, and much less ecologically impactful way. If you have a calculator that can add up two numbers, use a calculator. Don't ask an AI to do it for you. First of all, it isn't very good at it. Well, they're getting better at math. But why would you spin up an entire cluster of big Nvidia GPUs to get the sum of two numbers? You shouldn't be doing that. So first of all, don't try to solve already solved problems. And the second thing really is

again, look at the painful things that nobody ever tackled. Look at something that has been bothering you or your customer for a long time, and it hasn't been addressed because everybody said, well, it just doesn't work, and we don't have the time to do it ourselves.

Those are actually pretty good closing words. We will be back, first with the Founders Vault for our premium subscribers on Substack and YouTube, and second, you'll be back for a second interview where you get more hands-on and go through all the thoughts that you need to think through before you can even get started on AI.

Great, I'm looking forward to it. Me too. Have a good day. Bye bye.

That's all folks. Find more news, streams, events and

interviews at www.startuprad.IO. Remember, sharing is caring.
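Editor's note: the guardrail Dennis describes toward the end of the interview, checking authentication and authorization before an AI agent may touch customer data, can be sketched in a few lines. This is an illustrative sketch only, not code from the interview; the names (`User`, `authorized_lookup`, the team roles, the records) are all hypothetical.

```python
# Sketch: enforce authorization BEFORE any customer data can reach a model
# prompt or an agent response. All identifiers here are illustrative.
from dataclasses import dataclass


@dataclass(frozen=True)
class User:
    user_id: str
    roles: frozenset  # e.g. frozenset({"sales-emea"})


# Hypothetical customer store with per-record ownership.
CUSTOMER_RECORDS = {
    "cust-1": {"owner_team": "sales-emea", "email": "anna@example.com"},
    "cust-2": {"owner_team": "sales-us", "email": "bob@example.com"},
}


def authorized_lookup(user: User, customer_id: str) -> dict:
    """Return a customer record only if the user's roles allow it."""
    record = CUSTOMER_RECORDS.get(customer_id)
    if record is None:
        raise KeyError(f"unknown customer {customer_id!r}")
    if record["owner_team"] not in user.roles and "admin" not in user.roles:
        raise PermissionError(f"{user.user_id} may not read {customer_id}")
    return record


def agent_answer(user: User, customer_id: str) -> str:
    # The check happens here, outside the model, so data a caller is not
    # entitled to see can never end up in a prompt or a response.
    record = authorized_lookup(user, customer_id)
    return f"Contact for {customer_id}: {record['email']}"
```

With this shape, an EMEA sales rep can ask the agent about cust-1, while the same question about cust-2 fails with a PermissionError before any model call is made, which is the point Dennis makes: the model itself is not the access-control layer.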
