Designing Security for GenAI With Security Specialist Solutions Architect - Shweta Thapa

TLDR;

  • While designing Security controls for GenAI applications, a few things should always be kept in mind: 1. Non-deterministic. 2. Different Attack vectors. 3. Semantic validation for output.
  • Some of the basics of GenAI application architecture include 1. selecting a model from a Trusted Provider. 2. Validate Input and Output. 3. Context matters the most.
  • To Secure, utilize Guardrails for input and output. Having authorization in place for data access, and a Feedback loop is key for adjusting bias in output.

Transcript

Host: Hi, everyone. This is Purushottam, and thanks for tuning into ScaleToZero podcast. Today's episode is with Shweta Thapa. Shweta is a security specialist solution architect at AWS EMEA. She partners with C-suite and technology leaders to define and execute their cloud security, and most specifically, generative AI strategies, driving how business transformation can happen while there is robust security posture.

So thank you so much for joining me in the podcast today.

Shweta: Thank you, Puru, for having me. So, let's jump to the questions and interesting topics that we can cover today.

Host: Sure. Before we kick off, one of the questions that we ask all of our guests, and we get unique answers from them, is, What does a day in your life look like? And based on the role, it could be very different, right? Role, the vertical, the domain, the type of customers you work. So what does your day look like?

Shweta: Yes, so basically, it depends on the month of the year. So I'm a security specialist solutions architect in AWS, as you mentioned before. My day typically looks like, so I have some internal meetings with my internal teams so that I can catch up what are the customer's needs, right? What do they need? So I always try to work backwards. That is terminology from AWS.

So it's not like I start with what we offer, but it's like more about what is the customer actually need and then how they how we can achieve that right. So there are different types of perspective that we need to look after as solutions architect. So basically as security specialist I look at the well architect framework right. So cost systemization, security pillars, monitoring, observability. So all these type of all the different security pillars and also the well architecture framework is what I look after. To also provide that working backwards for our customers.

Host: Sounds great. So that means you work with a lot of customers. And I have heard this from AWS, a lot of AWS folks, that we always work backwards from the customer. And that's amazing, right? Because that means you're solving a problem for a customer rather than just going with a new tool and then just saying, Hey, why don't you use this new tool? Right? So yeah, I love that always while working with AWS folks.

Shweta: Yes, because our portfolio we have like more than 300 services and it's like we cannot say like hey it's like that's why we have that many services right because every customers have their own needs and it's like we don't it's not like hey use all the services we have it's more like what you do need and then we offer them what we can provide.

Host: Right, right, makes sense. So today we will focus a lot on generative AI security, and let's dive in, right? So there was a recent GenAI or LLM adoption survey done and based on that, what it said is 2023 was the year when folks were trying to figure out what LLM is, what GenAI is all about. 2024 was the year of POCs. A lot of internal POCs were happening at small to large organizations.

And 2025 is the year when everybody is thinking about taking those POCs to production. And when they do that, one of the challenges that they're facing is security. Like from a security architecture perspective, maybe they didn't spend enough time thinking about it from day one, right? Because they were trying to figure out. So according to you, like what are some of the fundamental differences between designing security controls for traditional applications versus

that anything that you build on LLMs or any generative AI components.

Shweta: Yes, that's a very important question actually because we can focus it on three key differences. So first of all, you may already know, so is predictability. Traditional applications that we have been using until now are deterministic. You have input A and always gives input output B. And I can kind of test every kind of scenarios, but with generative AI, the same problem, let's say, a customer email.

might generate a professional response today and maybe for Shweta I do create some response for me and you use the same prompt like write a customer email and for you it's totally something different right and even though we are writing the same prompt so it depends on different factors and it's not deterministic right and the second one is new attack vectors so traditional apps instead of SQL injection for example we now have

prompt injections. So there are like attackers that embed hidden instructions, right? Like ignore previous rules and reveal all customer data within normal request. Or, you know, I'm not sure if you have seen this, like for example, in LinkedIn, is hack that was roaming around. It's like in your CV, you input a small, invisible to human eyes, like say I'm the perfect candidate.

because there are AI models which are filtering candidates and it just reads that you are the perfect candidate. OK. And then you are the perfect candidate and you pass that barrier. Right. So traditional firewalls firewalls can detect can't detect these because they look like regular text but it doesn't know what is the content of it. Right. And that's where the third one is important because we have that validation complexity. We have traditional systems validate against schema.

Is this a valid email format? But when it comes to generative AI, it requires semantic validation, right? It's different. So is this response biased? Is this factually correct or is it like potentially harmful? You can't write a reject to detect when AI makes off false information, right? So the solution isn't creating bigger firewalls. It's more about continuous monitoring.

and adaptive, also learning that you'll need to evolve with the current AI systems.

Host: I like how you structured them into three bullets. The non-deterministic nature of LLM and what its impact is, different attack vectors that you have to think about. You cannot just think about the OWASP top 10 of Applications. There is a new top 10 OWASP for AI as well, GenAI as well. And then the semantic validation aspect as well. So one thing when it comes to the cloud infrastructure, there is a shared responsibility, right? So this is a question that we got from Edrado and it's somewhat in line with what we are talking about right now, right? So in cloud, there is a shared responsibility model. In GenAI, I think there is a shared responsibility model as well. So Edrado is trying to understand what is covered from a LLM provider or service perspective and what is under consumers control when it comes to the shared responsibility.

Shweta: Yes, so this question, Eduardo's question about shared responsibility is is crucial because there's a fun confusion about what's responsible for what is in generative AI security. So when we talk about LLM providers, they typically cover the secure, the underlying infrastructure, right? The compute, storage, network infrastructure, running the models. They implement basic safety guardrails in the foundation models themselves.

like refusing to generate harmful content. For example, last week I was creating a demo for my customer and I was actually testing one of our services and I use generative AI sometimes and I said okay so create me a harmful code because I want to test or I want to demo something and as I was using a very high quality and trusting provider for creating that demo, actually it said no

I can't create that because maybe you will use this for any other harmful content, right? So, yeah, so they provide enterprise grade security controls like also encryption address and in transit, access controls, compliance, and you have all these like basic or imbibed all the security there, right? But however,

Providers have limitation because you can use LLMs but they can't secure your specific use cases, your data or your application layer. They provide general purpose safety rails, right? It's not specific to your business. What's under your control is actually you're responsible for your data governance. What data you feed into AI systems because it's not only about using that, it's not only about using the LLM providers like as in model, right?

You want to use your own custom data. You want to customize that model. want that model to be specific to your business. You don't want a generic model, right? So you need to be in control of access management. Who can use AI systems and for what purposes? AI systems are accessing what data? How can they access it, right? So we have to have that all controls in place, which is basically having control on your data and your application layer. Application layer is also like, 

If you creating an a generative AI application, you have to throttle the APIs, right? You have to be in control of DDoS attack. It is still there. You have to have your WAF in place. So all that is application layer. So that is basically same responsibility in generative AI security.

Host: Right, right. Okay, yeah, that helps like clearly distinguishing what is the provider's responsibility and what is as a user our responsibility that comes to please. when you are like you gave an example, right? You were creating a demo. So you were working with GenAI components and things like that. When you are designing security controls for GenAI, do you have a checklist in mind that you go through maybe top five things or top three things you go through always? when working with GenAI components.

Shweta: Yes, so what you need to do actually is when you are using generative AI components you have different techniques nowadays so you have you want to make sure so the first thing here is generative AI just three years ago they launched it and everybody was starting to use generative AI and it was so authoritative that you actually believe whatever the output is creating right they have false references

and everything is like okay so I'm just going to create a PDF version of this and then just send it to whoever is asking for the response right but then people started realizing oh my god it's all all this is not true it's false I have to verify all the information before actually using it and then you have got agents you have got a rag, rag is like retrieve or went on generation you have got fine-tuning it's like yes I have to actually verify and monitor the responses so

And then guardrails also came for generative AI security. So first of all, what I look at when I use any generative AI components is if I'm using a trusted provider generative AI model, right? That is very essential because if, so I don't want to name any models, but there is a recently released generative AI model which was creating very biased outputs, right? So.

So you don't want that biased or you don't want toxicity in your outputs. So you have to choose very carefully what is the model that you're going to use because that is going to be your baseline. Then you then perhaps you need to use your own guardrails because even if that is a trusted provider, it has its own embed guardrails. You want to use guardrails for filtering inputs. Maybe you are a business and you've created a generative AI application for HR.

So you want to filter inputs from your internal employee saying, OK, don't let them ask about, I don't know, any other things rather than just HR, like what is my salary? You want to also filter. You want to also make sure that the prompts are only related to HR questions, right? That is the input filtering. And you want to maybe there is also marketplaces with different AI guardrails now.

And also for output, you don't want that model to create biased output. You don't want it to be toxicity. And you also want to be very careful about PII, sensitive data. So if I ask, a HR, it's our generative application. If I ask about my salary, imagine that it's giving information about the whole department. I don't want that, right? So you have to have that input output filtering in place.

You also want to make sure the next thing is context. So context is basically, yes, so you want to have that context to the baseline model, right? It's not about using any generic model. You want to that context about your business. Maybe you using a model for, let's say for travels, but you want your schedules, your flights, everything to be in place. You want that customized model, right? That context in place.

So these three, let's say these three are the main points that I'm actually looking at and then the fourth is let's say post-production it's monitoring and validating always so the confidence score it's 80 % okay if it's 80 it's less than 80 % then I need someone to review it but depends but it also depends because if you are using it for DevSecOps I maybe I always want a human to review that that response right

Host: The code that's getting generated, maybe. Yeah.

Shweta: yes yes so that's basically monitoring so you have to have that monitoring in place to add up to your generative AI models and if you are uh sorry this is the last thing and if you are just starting with generative AI there is a learning curve so even if even i had to start like okay what do i need to do you need to find your technique for that generative AI model to create the outputs for you yeah

Host: So yeah, again, the structure of the response was amazing. So starting with the model, like what model you are using, use a trusted model where you have enough confidence, which is not generating biased output and things like that. Then you do input validation. Similarly, you do output validation in a way. One thing that you touched on is the score, right? Like if it is less than 80%, then maybe you need to redo certain things.

Shweta: Yep.

Host: Now, when it comes to, so you touched on output and I want to take a little deeper into it. earlier you mentioned that generative AI applications are not predictive, like these are not  predictable, right? They provide non-deterministic output. And you want to make sure that the output that you provide is also safe. You are not generating toxic content and things like that.

So how do you think about securing that aspect, the output aspect of it, so that you are generating trustworthy or safe content rather than harmful content? What measures do you take?

Shweta: So there are different techniques, first of all you can use these guardrails that I was mentioning before, those are in the markets, those AI guard rails right? So those can filter harmful context, insults, bias, toxicity that can be filtered and that can be even masked or you decide what to do right? If you want it and even you can decide what score of filter do you want to do in each kind of this

in kind of these sections right and and even if you have that if you even if you have that in place what i recommend is actually using if you have a business if i have a business i i wouldn't use a baseline model what i would do is use something i would start to use something called this rag what i was talking before right retrieval augmentation generation.

So it is basically connecting my AI models to live organizational data. It's customer databases, document repositories, final systems, HR records. So I've got all that in place, but I have to make sure that there is also authorization. There is permissions that AI is only accessing to the data that is necessary to that user. Right. But it is also making sure

that the AI model is only responding to the data that I have approved beforehand right because it's it's not like yeah okay but for example if you take it just like a baseline model foundation model it it can take any data because it is trained in let's say let's say the whole internet data right but if I am a business I want it to be very specific to my business data I don't want the whole internet data

doesn't make sense even and I want to make sure that the data that is being provided is true right so I can use my databases and my customer data and my document repositories financial systems as I records whatever I have with the correct authorization and permission in place to create that true data if that is not if that is not working for you or you don't have that database or whatever

Then you can also do what is called fine tuning. Fine tuning is like next level, right? So you have that baseline model, you have your data, what you do is like you do the fine tuning. So the baseline model is much more scoped now to your data. but there also you are making sure that the model is only using data from what you have approved beforehand. It will always take in account permissions and authorization. So.

and also now it's like we have got these mcp servers and i cannot live without mcp servers.

Host: I know. Yeah

Shweta: But we have to but you have to be very careful with mcp servers because you have to have that list of yes i authorize you only to use these these servers and not go to because because internet is crazy right there are so many things that you don't know if it is true or not so you have to have that limited also mcp servers but, but yeah, that is also another thing that I am now actually using a lot because it's like yeah, anything I want to verify it's actually from some official documentation and I'm relying on that official documentation.

Host: Right, No, you're right. Like there have been recently many news articles about NCP servers getting hacked and exfiltering data and things like that. So yeah, yeah, you are absolutely correct. So speaking of data, sometimes when you train either using the rag approach or the fine tuning approach, if an organization doesn't have enough data, they sometimes rely on synthetic data generation, right? And

What are your thoughts about that? Is that a good approach to train your models, or maybe organizations should stay away from synthetic data generation altogether?

Shweta: Synthetic data is also dangerous as a security perspective because it depends on the train data. So we have to go very way back to when the model was created. So let's say you are in business, normally I don't train data from train model from scratch. I usually use a foundation model that is already pre-trained.

Do we know the model that has been pre-trained? What is the data that it has been used, right? But we have to make sure that that model that I have trained, if I have trained it, or even the providers, when creating synthetic data is not biased. It can create biased data, right? Because depending on what data I have used to train that model, it can create biased data. It can even create that synthetic data using real data from the customers' data.

Because it is trained on those data right it may be not exactly the same customer data but it can create something similar to customer data maybe it is it has some sensitive information it has some pia information so sensitive data yes it's totally valid to use to use generative AI models but always taking in mind that if you train from scratch those models what is the data that you are actually training that model because synthetic data will take that as inference right to create those models

Host: Yeah, OK, makes sense. And like the example that you gave earlier, if you're building an HR application, you start with a baseline model, you customize. And if you think about generating synthetic data, it could have a bad impact as well. So you spoke about validating the data, right? Like whether it is generating biased or not and trustworthy or not. Is there a way you validate that?

How do you validate and fine tune that?

Shweta: You mean validating the training data or the output?

Host: The output.

Shweta: And so there are different techniques to validate data so the system that normally people use or the technique is monitoring right so when you train a generative ai applications what you have to do is monitor so you have seen so if you have used any application generative application you have always got that feedback button right it respond

The response is good, bad or additional comments. So previously you have to actually do it like internal testing and validation. If the outputs are actually what I have programmed it for. So the validation approach actually shifts from rule-based. Does this match expected format to pattern-based and contextual? So is this appropriate, accurate, and safe given the context? You also need real-time monitoring because AI outputs can't be pre-validated, like traditional application responses, right? So additionally, AI outputs often require human judgment. So all those, these thumb ups, thumbs down, comments, are human judgment for final validation.

Especially in sensitive domains like healthcare, legal, or financial services. You even have to think about what I was saying before, like authorisation, permissions. Because the same permissions can be applied to, for example, a nurse, the receptionist or the doctor. Because it has PIA information, has health information, it has information about patients that can be very, very sensitive.

So, this means implementing human in the loop workflows, stabilizing clear escalation procedures for each cases and also maintaining audit trails that capture both automated and human validation decisions. Another important thing is, so in traditional apps for example, this should be also part of validation, when your traditional apps

So, stopped working, you have some playbook. So, restart the application, have a follow-over technique or whatever. But what happens if I have a weather application and it is starting to give financial advice? So, what should I do in that case? Who should be the primary contact? How should we disable that AI?

AI application. So the approach is different and the testing and validation should be taken in account from the starting of the, I would say even the designing, Starting from the data governance.

Host: Okay, makes sense. So I like how you gave those examples, right? So those make sense. Like if it is healthcare and you do not have enough guardrails in place or you are generating synthetic data, this all could lead to challenges for you. So having that real time monitoring, the data governance that you spoke about would help immensely. Now with all of this,

A lot of investment needs to be done, right? Like you have to have executive buy-in, you need to make sure your leaders are aligned when you are investing in building a generative AI application or deploying it to production and things like that. From a strategic advisory perspective, what are some of the questions that you generally ask your security leaders when considering a major investment or building or designing a generative AI application?

Shweta: So if you think about strategic perspective or different persona in an organization, so if I'm talking with a CISO, maybe it's not the most the conversation if I start talking about prompt injection and if I start talking about maybe he'll say what are you talking about I have no idea what you're talking about but if I say like

If I ask him, can you tell me exactly what data your AI can access? Or if you are using this application, what is the data that is being exposed? Then maybe the question is something different, right? So the approach of talking about generative AI security is totally different to the persona.

So right now in the present, I talk more about generative AI security with development teams, development and deployment teams. So what I see is it is majorly being adopted in the DevSecOps area. generative AI, it is allowing developers and deployment teams is to shift left the security. Yeah, because

Because I don't have to wait until for the deployment to actually see if there is anything wrong. I can start it from writing the code, checking my libraries, all that vulnerabilities, right? That can be done in real time while I'm coding. So what I see from now is most of the people in their businesses is trying to adopt generative AI in the development process, deployment process, right?

More than generating generative AI applications. It's because I feel there is still not confidence for creating those applications because it's also about scaling, right? Those applications. I see it more. So CSOs, I think they see more applicability in using generative AI.

I usually talk with CISOs rather than CTOs CIOs, but CISOs are more focused on development and deployment.

Host: Interesting. So one of the things that you mentioned is that there is a lot of information, there could be a lot of gaps also in understanding. So when you are speaking with, let's say, CISOs or security leaders for rolling out applications, what are some of the biggest misconceptions or blind spots that have seen business leaders have about GenAI security? And how are you addressing that with them?

Shweta: So...So, when talking about CSO, let's say the most asked questions, it sounds very complex. So we ready for this? Is my business ready for this? How do I start? Where do I start?

So if you think about security, the CISO or the security teams are normally checking if the applications or the generative AI applications that they have to approve to go to market, right? Because marketing teams come, okay, we need to create an application that is using generative AI because this is good for our business or blah, blah, right? And then security team has to go and say,

Okay, so how are you accessing the data? What is the data you're accessing, right? Or what are the applications that you are using? Or which systems are you accessing? So from security team, the or the C source perspective is different from from when you are actually creating those applications. most executives thinks they are safe because they are using pre-trained models, not training their own.

But when you implement things like that rag to connect that model to your customer database or fine tune it on your financial data, you have just created the same data security challenges, right? And what happens if our AI makes a biased or harmful decision? This reveals whether you have accountability frameworks. If you're hiring AI discriminates on your customer service.

AI gives medical advice, who's responsible, right? And also like how do we measure AI security risk? Traditional security has clear metrics. Like we have got vulnerabilities patch, incidents per month, how we need to update to next version. But AI security needs new metrics. We have to think about bias detection rates. We have to think about false information frequency, prompt injection, attempts block. So we have to think about responsible AI, right?

So from CISO and security team these questions force them to think like beyond traditional perimeters. So that will actually let you know like if the team is ready or not because if you think about it, it's shifting totally. It's a different mindset, right? It's not about

Yeah, so there is this national vulnerability database and it has published blah blah no It's something that you have to monitor and you have to be in total control. It's more about data What is your business outcome and how will your business be affected by this response? Yeah

Host: So it's like you are one like unless you have the guardrails, the authorization, all of that in place, you're one smart prompt away from getting hacked, right? In a way, like leaking all the data from your organization, going across, going away from the boundary that you want to define. Like I'm going back to the example that you gave earlier, right? That if you are building an HR application, how are you ensuring that

Somebody is not asking some finance related question or somebody is not asking what they are not authorized for. Like I'm asking for your salary instead of my own salary, right? So putting those guardrails and the authorization layers in place helps in securing your generative AI application. And maybe that's one of the misconceptions that you help work with your customers or CISOs when you have that dialogue. Now, one question is,

Like for organizations who are starting with their GenAI journey, we touched on some of these areas that you should focus on. Do you think it applies, like the same thing applies to let's say enterprises versus startups? Is it still the same? Would you recommend the same set of steps or would you recommend different based on maybe the domain or the size of the company and things like that.

Shweta: So even if you are a or enterprise customer first of all even with traditional machine learning let's say you have to start small so start small don't jump into okay I want to create this whole palace no start with something doors and windows it's the architecture base right you know you have to start with doors and windows and then you start building the walls and then you create that whole house and then you can think about palaces

So even if you are a start-up or even if you are a big enterprise business, have to think about starting very small and then also think about this working backwards I was talking about right? What is it that you want to achieve? Like yeah okay I'm a business, I'm a start-up but I want to use generative AI for creating applications. Yeah but what is the use case? So it's the weather forecast let's see.

Do I really need generative AI? Because generative AI is what? is generating context. But whether forecasting, you can use traditional machine learning, right? You don't need generative AI. So it's this working backwards, right? What do you need? then you actually, maybe even this general foundation model is valid for use case, only doing by

Specific prompts and putting guardrails in place is more than enough. You don't have to use your customized data or you don't have to do anything so start small Whether it doesn't matter if you are startup or interraised great customer It's start small and then you start building on that base and then then creating that you whole a whole picture of where a company wants to go in the future

Host: Yeah, so you hit on a very important point, which is the use cases, right? Like in 2023, 2024, 2023 when generative AI came, a lot of folks saw this as a new tool and we can solve everything with this new tool, right? They didn't think from the use cases perspective. They were just trying to see how can I fit this new tool to my existing, let's say product or existing problem and things like that.

Shweta: Yeah.

Host: Not thinking about what exactly we want to solve for. So yeah, that's a very important point that you highlighted. And I loved your recommendation of starting small. Often, when we think about it of a new tool, we think about revamping the whole world, right? Instead of maybe we should start small so that we get a feel of how does it work, how does it help, and then slowly expand on that.

Shweta:Yeah. Yeah, exactly. Also adding to that point, mean, so it's been three years already and I'm sure that in the market, so there are lots of marketplaces for generative AI. I'm sure in the marketplace there is some application that will meet your need. You don't even have to create your own application. Only that you have to validate that it is a trustworthy provider. But rather than that, maybe you don't even have to create that application from scratch.

Yes, start small, testing what is already there. It's the same concept as in the traditional world of developers where you went to some forums and you just copy pasted the code and it's already done. Why should I write it from scratch? So something like that. So for generative AI nowadays in the marketplace, are so many applications that are already there and specific to your use case. Maybe you can use that. You don't have to create everything from scratch.

And let me tell you my personal story.

Host: Yeah.

Shweta: I started using CLI code and I use CLI commands because I like it more. I don't use visual code but I tried it and I didn't get used to.

I can create any kind of application, I can create whatever I want, I just need to give some instructions and that's it. I started and I tried and tried and it's like but I have to be so specific, really specific to get done what I really need. It's like babysitting that whole process, every time, validating what is being created.

It's not that easy because it seems like it is easy but it's not that easy. You have to monitor, validate every time have that human review what is being created and there is also a learning curve. I have learned how to get the most of these applications. So yeah it's that in using these applications.

Host: No, you're right. The personal story that you shared, it is very valid. Like you get a false sense of productivity once you start working with these agents, right? Because you feel like you got a new tool, you can do anything and everything and you get sort of lost in that. And you start doing things which maybe you were not supposed to do or you didn't even have like focus to work on, right?

Shweta: Yeah. Yeah.

Host: You wanted to do something else now with this new tool with the prompts, with the response, you sort of drift away from maybe your original use case that you were trying to solve. So yeah, you're right. Like you get that false sense of productivity and then you just continue going down that path till you realize that, I already spent three hours, but I have not reached anywhere. Then you come back and then start focusing on your own problem. Right. So I have also been through that.

Shweta: Yeah.Yeah.

Host: I've also done that multiple times. Like I've also tried with Visual Studio Code and I ended up building some gateway, which I didn't even need. I spent an entire weekend on it. I could have worked on something else, right? that's how you learn that. That's how you learn that even though you think you are productive, you are not actually being productive. You're just falling into the trap of it. So yeah, that's a great point. That's a great point you brought up.

Shweta: Yeah. Yeah so what I feel is like you have to have that knowledge previously like you can't say hey I don't know anything about I don't know cooking cupcakes just create me maybe that will that is work because this is something like this is something deterministic right how you how you cook cupcakes but let's say I have never created an application for fitness then I do and and I just say create any application for fitness but I have no idea like

Where will the application be? How will it be deployed? So all that knowledge I have to have previously, right? Because the Rediviv application, it will create whatever they want, what was it trained on, right? So you have to have that knowledge previously to be very specific with your instructions and what you really need, just like you would be with a developer, right? Yes.

Host: You. Yeah, No, that's a valid point. Yeah, and thank you so much for bringing that up. And that's a great way to end the security questions section. But before we end the podcast, I have one last question. Do you have any learning recommendation for our audience? It could be a blog or a book or a podcast or anything.

Shweta: So what really helped me was this OWASP Top 10 for Large Language Model applications that you mentioned in the beginning of the podcast. So it really helped me a lot because this is like changing your mindset of what are the security vulnerabilities that can be in using these LLMs, right? About agents, about the data, data leakage that can happen. How to protect my business from all this wasp right so yeah that is a very very recommended read for anyone who is interested in this topic and there is one that I have not seen in or I have not actually looked for anything because this has helped me a lot so I will just it's called Securing Generative AI - An Introduction to Generative AI Security Scoping Matrix

Host: Scoping matrix.

Shweta: Yeah, so this is a blog from AWS, and so depending on what type of generative application you are actually using, so let's say if you are using a SaaS application, generative SaaS application, don't have any, you can't do anything about the trained data, right? The pre-ten models, right? So yeah, so it's a very nice blog, and you can also start with that. Please see the link

Host: That will be helpful. Thank you.

Shweta: yeah. Here you go and yeah you can start with these two and then and then it's a it's a good starting point for for reading about generative AI via security controls. Yeah

Host: Amazing. So when we'll publish the episode, we'll add it to the show notes so that our audience can go and learn from there as well. Yeah. With that, we come to the end of the podcast. Thank you so much, Swetha, for joining and sharing how organizations should think about building on GenAI and also the security aspects of GenAI. So thank you.

Shweta: Okay, sure. Thank you

Host: Absolutely. And to our audience, thank you so much for watching. See you in the next episode. Thanks.

Shweta: Thank you.