AWS vs. GCP IAM Architecture & The Future of Security in 2026 with Sneha Malshetti CISSP
TLDR;
- IAM in the cloud is underrated. Always start with a clear understanding of the goal and start small, instead of diving head-first into best practices.
- Leverage tools & techniques like role-based access control, periodic access reviews, and reviews of unusual and unused access. Monitor continuously to optimize them.
- Striking the balance between least privilege and velocity is hard. The same challenge exists in both the non-AI and AI worlds. So implement the basics really well, utilize the above tools & techniques, and optimize over time for a better IAM posture.
Transcript
Host: Hi, everyone. This is Purushottam, and thanks for tuning into the ScaleToZero podcast. Today's episode is with Sneha, Sneha Malshetti. She's a senior security engineer at Ethos, and prior to that, she was working at Credit Karma. She has over a decade of experience in the field of security engineering and has worked with some of the biggest enterprises in the industry. She's also a board member of the ISC2 Silicon Valley chapter. Sneha, thank you so much for taking the time and joining me today on the podcast.
Sneha: Thank you, Purushottam. Thank you for having me. You guys are doing a great job here. I think you're building a community of your own. So yeah, I'm happy about that.
Host: Thank you so much for the kind words. Before we kick off, is there anything you want to add about your journey? I know I just gave two or three bullets, but if you want to highlight anything in particular, maybe how you got into security or what still keeps you motivated, please do.
Sneha: Sure. So I've been in IT for around 11-plus years now. I started with data, then I did my masters, and then I entered the field of security. I don't think I would have done anything differently if I look back now, because once I got into security, it was mostly cloud security. I knew what I wanted after my master's: I wanted Python, I wanted cloud, I wanted security.
So it was the best combination of all those three things, and I couldn't have asked for anything better. Ever since, I've been working on different aspects of cloud security itself, be it automation or building things from scratch. I built a CIEM tool for which I was awarded last year. And I also remember when I joined one of the companies, I helped them build their entire infrastructure from scratch in Terraform. That was probably the biggest learning; if I had to do it now, it would be a little different. But yeah, that was something that I really cherish.
Host: Yeah, the pre-GenAI era versus the post-GenAI era, right? There is a huge difference for sure. And you touched on CIEM, right? We met in the summer, and one of the things you highlighted was that you had built a lot of policies and the identity stack at the organization you were in at that time.
And I'm excited to talk about that: IAM in cloud and applications for today's episode. So let's dive in. You have worked at multiple organizations that use multiple clouds, like AWS and GCP, to take an example. And that brings a unique cross-cloud perspective.
And one of the things that we have seen across many studies is that more than 70% of cloud attacks originate either directly or indirectly from identity. And when you bring in multi-cloud, that increases the complexity even more. So when it comes to IAM, what are some of the philosophical or architectural differences that you have seen between, let's say, AWS and GCP?
Sneha: Sure. So firstly, I feel like IAM is very underrated. It's the last thought that people have: once things scale, then you think about the IAM side of things. But if you blend it in from the beginning, that would really help both the organization and the security teams. So that's one thing.
But apart from that, when I look at both GCP and AWS IAM, a few things really stand out. One is the kind of approach and structure they have around IAM. One thing I've noticed is that AWS is more identity-centric,
while GCP is more resource-centric. Even the APIs, because I've dealt with those parts as well. With AWS, whatever you see as a button in that particular console, that is an API in the backend.
But with GCP, they have a consolidated set of APIs, which do not come under the purview of that particular resource. All those small nuances are what really keep them different.
Another thing I would point out is the kind of customization you can do in AWS versus GCP. With AWS, you are creating those custom policies and then using them.
GCP has a certain set of predefined policies. Of course, you can create custom policies as well, but it's much easier to use the predefined ones, and they are a bit overprivileged if you look closely.
For example, suppose you have S3; over there it's called a bucket. But let's take another example. Say Lambda functions, right? In AWS, they're called Lambda, and in GCP, they're called functions. For a function, there's user, there's owner, and there is editor. So based on that, you get overprivileged permissions to do things.
So they differ around those things. The third thing I would like to bring up is the hierarchy. Within IAM in AWS, you do not have a hierarchy. But in GCP, if you give permissions at the root level or anywhere at a parent level, you will get those permissions in the child projects as well.
So those small nuances actually make them different. GCP looks at the whole organization as a whole, but within AWS, you look at each account as a business unit of its own. So I feel like those three things are what really differ between them, along with service accounts and how they're used in the two; in AWS you have users. When you're actually transferring your code base or anything, you have to take care of those things.
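The inheritance difference Sneha describes can be sketched in a few lines of Python. This is a simplified model, not the real GCP API; the node names, members, and role names below are all hypothetical. The point it illustrates is that a grant at an organization or folder node flows down to every project beneath it, with no per-project way to subtract it.

```python
# Simplified model of GCP's hierarchical IAM: grants made at a parent
# node (org/folder) flow down to every child project.
hierarchy = {
    "org": ["folder-prod", "folder-dev"],
    "folder-prod": ["proj-payments"],
    "folder-dev": ["proj-sandbox"],
}
grants = {
    "org": {("alice", "roles/viewer")},          # org-level grant
    "proj-payments": {("bob", "roles/editor")},  # project-level grant
}

def effective_grants(node, inherited=frozenset()):
    """Collect grants on `node` plus everything inherited from ancestors."""
    own = inherited | grants.get(node, set())
    result = {node: own}
    for child in hierarchy.get(node, []):
        result.update(effective_grants(child, own))
    return result

eff = effective_grants("org")
# alice's org-level viewer grant reaches every project; there is no
# per-project "deny" to carve her back out.
assert ("alice", "roles/viewer") in eff["proj-payments"]
assert ("alice", "roles/viewer") in eff["proj-sandbox"]
# bob's grant stays scoped to the project where it was made.
assert ("bob", "roles/editor") not in eff["proj-sandbox"]
```

In the AWS model, by contrast, each account would be its own isolated tree, which is exactly the "each account as a business unit" difference.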
Host: Yeah, that's a great start. Another one I have also noticed is, as you highlighted, let's say for functions, there are predefined roles that you need to assign, or the other alternative is that you create a custom role.
And one of the differences I have seen between AWS and GCP is that in GCP, you cannot create a custom role using another role. You have to copy all the individual permissions and add them if you want to create a custom role. Versus in AWS, you can create a policy that has other policies in it, even embedded, and things like that.
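As a rough sketch of this difference (the role and permission strings below follow GCP's naming style but are used purely for illustration): an AWS role can simply attach whole managed policies, while a GCP custom role is a flat list of permissions, so "composing" one from predefined roles means flattening the permissions yourself.

```python
# AWS-style: a role references whole policies by name.
aws_role = {"attached_policies": ["ReadOnlyAccess", "LambdaInvokePolicy"]}

# GCP-style: predefined roles are bags of individual permissions...
gcp_predefined = {
    "roles/cloudfunctions.viewer": ["cloudfunctions.functions.get",
                                    "cloudfunctions.functions.list"],
    "roles/logging.viewer": ["logging.logEntries.list"],
}

def flatten_into_custom_role(role_names):
    """...so a custom role must copy every individual permission over."""
    perms = set()
    for name in role_names:
        perms.update(gcp_predefined[name])
    return sorted(perms)

custom = flatten_into_custom_role(["roles/cloudfunctions.viewer",
                                   "roles/logging.viewer"])
assert "logging.logEntries.list" in custom
assert len(custom) == 3
```

The practical consequence is that GCP custom roles drift when the predefined roles they were copied from gain new permissions, which is one more reason the periodic reviews discussed later matter.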
The first point you highlighted, identity-centric versus resource-centric: I started with AWS and I'm very familiar with it. When I started using GCP, I was like, why am I doing all the bindings at a project level and not at a user-role level? So there was that confusion. So you're right.
There is for sure a difference in how they principally manage resources and access to those resources in the workload, in the cloud environment.
So, to work around this, do you have any best practices that you follow or recommend to your team?
Sneha: I think one thing that we could use, and it's not going to be an apples-to-apples translation, is to look at what the principal is. All those concepts are going to remain the same in both of them, right? There's a principal, there is authorization as in what they can do, and then the resource level: resource-level IAM bindings within GCP and resource-level IAM policies within AWS.
So you can look at what you want to achieve, whether it's perimeter security or just resource-level security that you want, and then push it at that level.
And for me, within GCP, I think everything should be at the project level wherever we can do that, because when someone has over-permissive roles at the root level or at a parent level, it's not easy to deny them at the child level.
So avoid having root-level permissions for things; for very generic roles, maybe you can have that.
For example, if someone is a data scientist and you have to give data scientist roles only, you probably have a set of roles, and you can give those.
But if you are looking at, say, production-level access, I would recommend you don't even put it under one of those places where there is excessive permissioning at the parent level. You probably want to move it around within the hierarchy as well.
Host: Like what you highlighted, right? We see that mistake with our customers also, where they say, I'll just give viewer access. I'm particularly talking about GCP: they will give viewer access at an organizational level so that it trickles down to all the accounts and they don't have to individually assign it at a project level.
But that also brings the challenge that you cannot restrict it, as you highlighted. You cannot say that at a particular project level, we want to override the viewer permission that is given at the org level.
AWS does a slightly better job in this aspect with permission sets, where you have to explicitly assign the permission set at an account level so that you have that control. You are not doing it at the org level and things like that. So yeah, you're spot on.
Continuing with that, enterprises today use multi-cloud, whether it's for AI, for specific capabilities or regions, data residency, whatever it could be. And because of these identity-philosophy differences, that could lead to identity silos. How you manage AWS is different from GCP, because AWS has Identity Center, which works differently than Google Cloud Identity.
How do you do centralized identity federation in such a scenario? When it comes to developer experience, your users being developers or DevOps or SREs, how do you keep their experience consistent, while also being able to audit across the clouds and know what's going on in your environment?
Sneha: So one good practice, and we all know about this, is role-based access control. What you're going to do is jot down the kinds of roles you have within the organization and what permissions should be given in both of these places. Federation, of course, helps a lot, so you can map those roles to each of those groups. And always assign to groups, not to users directly.
Because I've seen this happen where, even in production, people have owner access within GCP. And I was like, this is not a good idea. But yeah, that was set up maybe some years back. And always, always, always have access reviews, every six months if you can, at least once a year. Involve the owners of that particular data or project or account, whatever you have set up within your environment, as those reviews really help with seeing if someone is over-privileged.
And when people move from one role to another. Like the other day, I saw someone who was a recruiter before and now is in data operations. For those role changes, you have to make sure you're managing that as well, because after the transition, he should not have his previous roles. So how are you managing that complete lifecycle, basically IAM lifecycle handling? Yeah, those are a few things that I would look at closely.
And of course, security is everyone's responsibility, so involve the different stakeholders as well. Make sure they're also accountable for the same.
Host: So in that case, like the example you gave, someone was a recruiter and now they are a data scientist or they are in DevOps. How would you manage that transition? Would you say, maybe we have a third-party identity provider, let's say Okta or JumpCloud, and we manage the assignment there so that it trickles down to, let's say, AWS or GCP? Or would you manage it at the AWS or GCP level? What would you recommend? Where should someone manage such scenarios?
Sneha: So I have been part of Google Workspace environments, and I've seen how that works. Usually you have groups. So you just remove them from the other group, put them in this group, and that group has access to whatever you require access to. There's nothing you have to do except move one person from one group to another. But make sure he's not present in the previous group after his transition is over.
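The group-move pattern Sneha describes can be sketched as follows. The group names, role names, and user are hypothetical; the point is that access hangs off groups, so a role change is just "remove from old group, add to new group", followed by a check for residual membership.

```python
# Group-based RBAC: roles attach to groups, users attach to groups.
group_roles = {
    "recruiting-team": {"roles/ats.viewer"},
    "data-ops-team": {"roles/bigquery.dataEditor"},
}
memberships = {"recruiting-team": {"ravi"}, "data-ops-team": set()}

def transfer(user, old_group, new_group):
    """A role change is a single group move, nothing touched on the cloud side."""
    memberships[old_group].discard(user)
    memberships[new_group].add(user)

def user_roles(user):
    """Effective roles are the union over the user's current groups."""
    return {r for g, members in memberships.items()
            if user in members for r in group_roles[g]}

transfer("ravi", "recruiting-team", "data-ops-team")
assert "ravi" not in memberships["recruiting-team"]   # no residual membership
assert user_roles("ravi") == {"roles/bigquery.dataEditor"}
```

The final two assertions are exactly the check Sneha recommends after a transition: the user carries only the new group's roles and none of the old ones.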
Host: Mm-hmm. Yeah, makes sense. One of the things you touched on earlier is that we should do role-based access control, which makes sense. And one of the things it often falls under is IAM governance, right?
And CISSP also focuses a lot on governance. Now, for a smaller organization, when there are like 10 or 20 or 50 people, it's much easier to manage. You would have a limited set of roles, you give access to groups, you put people in groups, and things like that.
Now, as the organization grows, let's say an organization the size of Credit Karma, a 10,000-person company, how do you manage your RBAC so that it scales, while at the same time we do not end up creating like 10,000 roles? It becomes a roles problem. So how do you strike that balance?
Sneha: I think this is going to be more of a question of what your business use case is. Once you have that question answered... for example, my previous company was a fintech company, so you need stricter rules. Another company might be an entertainment company, probably not as strict as a financial company, but you still need to make sure you have those RBAC principles in place. So it's about seeing where you can strike that balance.
Secondly, if it's a 10,000-person company, you basically want to delegate the permissioning down as well. And something like dual approval would also help in that case. For example, someone is asking for administrative access, say S3 administrative access. I am the manager, and I know what he's doing. So it depends on me as well: I should be responsible for whatever access he has, and I should be able to tell whether he needs it or not. So an approval should go from my side, and also from the data owner's side, because they know whether this is a person who should be accessing this.
For example, HR would know that no one else within the company needs to access their data. So those two approvals would be good, like a dual-approval system. Then there's another aspect of trickling it down: the manager should have visibility into whatever his or her subordinates have access to; they should know and be able to provide approval. So that would be one aspect of it.
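The dual-approval rule described above can be expressed very compactly. This is an illustrative sketch, not a real workflow engine; the field names and people are made up. A sensitive grant proceeds only when both the requester's manager and the data owner have signed off.

```python
def approve_grant(request, approvals):
    """Grant proceeds only if both required approvers are in the approval set."""
    required = {request["manager"], request["data_owner"]}
    return required.issubset(approvals)

request = {"user": "dev1", "access": "s3-admin",
           "manager": "mgr-a", "data_owner": "hr-lead"}

assert not approve_grant(request, {"mgr-a"})         # one approval is not enough
assert approve_grant(request, {"mgr-a", "hr-lead"})  # both sides approved
```

Modeling it as a required set, rather than a count of approvals, captures the point that the two sign-offs must come from specific accountable parties, not from any two people.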
Host: Yeah, makes a lot of sense. We had a guest from booking.com, and they have around, I think, 200-odd accounts, primarily AWS. They followed a very similar approach to what you mentioned, where they have different OUs and they have SCPs defined at the AWS organization level to set the boundary, and in each OU there are different owners who own the permissioning: giving people access, getting rid of access, things like that.
Because as you said, you cannot scale if a small central team is managing 200 accounts; there is no way you can manage access, govern it, and so on.
The next question related to this is, how do I audit it? What tools and techniques do you use to audit that the access being provided follows the RBAC-based approach, and that there is no anomaly in it? An anomaly being, say, somebody assigning a user a permission directly versus a group-based permission.
What type of automated mechanism do you recommend for auditing what's going on and finding anomalies in the patterns used for IAM?
Sneha: Sure. Nowadays we are logging everything from all the tools that we have. I feel like logging is one of those big cost-center things right now, but it also helps us find anomalies. And of course there are many tools for this as well. For the simplest case, go to your Access Analyzer, and you can actually see how much of your permissioning you have used. You can put thresholds on that.
Within Access Analyzer, we had done this in our CIEM tool: if more than 60% of a role's permissions have been unused for over 90 days, then we're going to flag that particular IAM role itself,
and the IAM binding, actually. This was one of those things where you're like, okay, people are not using all these roles. They haven't been used for 90 days; they're not going to be used further either. You could have something like this. This is the simplest thing that you can do.
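The flagging rule Sneha describes can be sketched in plain Python. The 60% and 90-day thresholds are the ones from the conversation; the permission names, timestamps, and data shape are made up for illustration (in practice this data would come from something like Access Analyzer or audit logs).

```python
from datetime import datetime, timedelta

NOW = datetime(2026, 1, 1)
STALE_AFTER = timedelta(days=90)   # "unused for over 90 days"
UNUSED_RATIO = 0.60                # "more than 60% of your permissions"

def is_stale(last_used):
    """A permission is stale if never used or unused past the window."""
    return last_used is None or NOW - last_used > STALE_AFTER

def flag_role(permissions):
    """permissions: mapping of permission -> last-used timestamp (or None)."""
    stale = sum(1 for ts in permissions.values() if is_stale(ts))
    return stale / len(permissions) > UNUSED_RATIO

role = {
    "s3:GetObject": NOW - timedelta(days=5),     # actively used
    "s3:PutObject": NOW - timedelta(days=200),   # stale
    "s3:DeleteBucket": None,                     # never used
    "iam:PassRole": NOW - timedelta(days=120),   # stale
}
assert flag_role(role)  # 3 of 4 permissions stale: 75% > 60%, so flag it
```

A role flagged this way becomes a candidate for rightsizing rather than automatic revocation, which matches the later point about keeping remediation human-driven.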
But otherwise, you could have tools that do it for you. There are many anomaly detection tools in the market that you can go for. You could even build your own machine learning instance that could do this, but that's a bit of work. Another thing you can work with is, say, GuardDuty. It does a lot of that built in, but it comes with a price. So these are a few things you can look at to understand what is going on within your system and catch those anomalies. And I know tools do a good job as well: they set a baseline, taking 30 days as the baseline, and then find the anomalies within your system. So yeah, pretty much.
You could go for whichever you want. Again, striking the balance is very important based on your budget, the scale of company you are, and the kind of business you do.
Host: Yeah, and I think it's great that you highlighted that some of these are free out of the box from the cloud providers, and some of them you have to pay for. As you highlighted, Access Analyzer provides that information for free. And GCP also does something similar, where if you go to the IAM section, you can see that as well.
Sneha: So for GCP, they kind of put it behind a paywall, which is one of the challenges we had. We built this tool and then it got paywalled, so we had to go through the logs, and that was another thing altogether. Yeah.
Host: But if you go to the console, it clearly shows that there are, say, 1,000 permissions unused out of 5,000 permissions or something like that, right?
Sneha: But they have rate limiters and stuff.
Host: Rate limiting, yeah, you're right. The other option you shared is that Google has SCC, or AWS has GuardDuty; maybe you can leverage those if you have the budget to buy them. The follow-up question is, how do you do this at scale, let's say with a limited budget? When you have one or two accounts and 50 people, maybe you can go to Access Analyzer, see who has access to what, and optimize. But when you have 1,000 people or 10,000 people, in that scenario, how do you do it a better way?
Sneha: I think automation is where this entire thing works out very well, because that's what we have done. We gathered all this data, because going into each and every account doesn't really make sense if you are a lean team.
You do not have the time to go into each and every account; there are more than 200 or 400 accounts in some of these organizations. So we have to pull data, and probably phase it out, as in you do the important accounts first, and then you go to development, testing, and everything else.
And then get that automation in place. Make sure you're gathering all the required data and hitting all those important APIs that give you that data. Because I remember within GCP, there are two sets of APIs.
You need to know which one to hit to get that data. The Recommender API also gives you a lot of information about excess access as well. And then you need to look at the paywall part of it too, as in how many pulls you can do within the day, just as a best practice, and to save money, of course.
So yeah, those are a few things that you can use. Automation really helps, though. That's how we have done it, and I wouldn't do it any differently.
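The phased, quota-aware pulling described above can be sketched simply. The quota number, call cost, and account names are invented; the idea is to process priority accounts first and stop before exceeding the provider's per-day call budget, deferring the rest to the next run.

```python
DAILY_QUOTA = 100  # hypothetical per-day API call budget

def plan_pulls(accounts, calls_per_account, quota=DAILY_QUOTA):
    """Return the prefix of (priority-ordered) accounts that fits today's quota."""
    planned, used = [], 0
    for acct in accounts:
        if used + calls_per_account > quota:
            break  # defer the remaining accounts to tomorrow's run
        planned.append(acct)
        used += calls_per_account
    return planned

# Priority order: production first, then staging and development.
accounts = ["prod-1", "prod-2", "staging", "dev-1", "dev-2"]
today = plan_pulls(accounts, calls_per_account=30)
assert today == ["prod-1", "prod-2", "staging"]  # 90 calls used; dev accounts wait
```

Ordering the input list by account importance is what turns a plain rate limit into the "important accounts first, then dev and testing" phasing Sneha mentions.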
Host: Automation using the cloud providers' APIs wherever possible, of course keeping rate limiting and the paywall in mind, so that you have centralized visibility and can act on it. As in the example you gave, if somebody has not used a set of permissions for the last 90 days, maybe it's not needed. And when I say somebody, it can be a user, a role, or even a service.
Now, let's say I created the roles, and somebody said, hey, I need to look at storage accounts because I'm trying to debug something. So you created a policy, you gave them access, and you may not have fine-grained that access. That often can lead to either privilege escalation or data exfiltration, in both worlds, AWS or GCP. How do you monitor and detect such scenarios?
Given that you are trying to move fast and you created the roles, how do you keep tabs on whether there is privilege escalation or data exfiltration happening because of the roles we created and the access we gave to folks?
Sneha: I think roles should come with an expiry, because most of the time they just remain there, like some kind of residue just sitting there, right? So have that expiry defined based on, again, the business use case. If it is an important role, or say it's a production system, then you want to give those roles for just a short time. So basically just-in-time access, right? If you want to give someone a role within the development environment, then they could have it for longer.
And also audit: production systems must be audited beforehand. We also need to automate gathering the different external accesses within our environment, especially the production environment, and see if we have knowledge of all of them. Because at any point, you should know what external accesses are required by your environment.
So make sure you inventory those and then ask about each one: who is this person who has access externally? We had done this within our environment. And if there are some unknown ones, then you probably want to ask for a justification: what is this account doing, or what is this particular access for? So yeah, those things are good to have.
Basically, at any point in time, you should have these kinds of automations running, so you know what is coming in, and if there's anything new, it should be caught as an anomaly.
And that is one more aspect of it, because these are the guardrails we were trying to set as well. We were trying to have those external connections looked at, and the threshold approach, of course, is helpful. But in the kind of scenario you explained, where access was given for some genuine work and then left, there's no one to pull it back. So yeah, that's where just-in-time access comes into place.
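The expiring, just-in-time grants Sneha argues for can be sketched like this. The environment caps follow the durations discussed in this conversation (short for production, longer for development); the user, role, and cap values are otherwise illustrative.

```python
from datetime import datetime, timedelta

# Per-environment ceilings on how long a grant may live.
MAX_DURATION = {"production": timedelta(hours=4), "development": timedelta(hours=8)}

def grant(user, role, env, requested, now):
    """Issue a grant capped at the environment's maximum duration."""
    ttl = min(requested, MAX_DURATION[env])
    return {"user": user, "role": role, "expires_at": now + ttl}

def is_active(g, now):
    """Expired grants simply stop matching; no one has to remember to revoke."""
    return now < g["expires_at"]

now = datetime(2026, 1, 1, 9, 0)
g = grant("dev1", "s3-admin", "production", timedelta(hours=8), now)
assert g["expires_at"] == now + timedelta(hours=4)  # request capped to the prod limit
assert is_active(g, now + timedelta(hours=3))       # still within the window
assert not is_active(g, now + timedelta(hours=5))   # auto-expired, no residue left
```

The last assertion is the whole point: the "residue" problem disappears because the grant removes itself when the clock runs out.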
Host: I think I have seen in GCP that when you do a role assignment, you can define an end date for when that access goes away. I don't think I have seen that with AWS or Azure; I could be wrong. But GCP does let you define that.
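The GCP end-dated binding the host mentions is done with an IAM Condition on the role binding. The binding shape and the `request.time < timestamp(...)` expression follow GCP's documented condition syntax; the project member, role, and date are placeholders. The tiny evaluator below handles only this one expression shape, purely to illustrate the behavior.

```python
from datetime import datetime, timezone

# A role binding that simply stops matching after the given timestamp.
binding = {
    "role": "roles/storage.objectViewer",
    "members": ["group:triage-team@example.com"],
    "condition": {
        "title": "expires-feb-2026",
        "expression": 'request.time < timestamp("2026-02-01T00:00:00Z")',
    },
}

def binding_active(b, now):
    """Minimal evaluator for just this expiry-expression shape (illustration only)."""
    expr = b["condition"]["expression"]
    ts = expr.split('timestamp("')[1].rstrip('")')
    deadline = datetime.fromisoformat(ts.replace("Z", "+00:00"))
    return now < deadline

assert binding_active(binding, datetime(2026, 1, 15, tzinfo=timezone.utc))
assert not binding_active(binding, datetime(2026, 3, 1, tzinfo=timezone.utc))
```

Because the condition lives on the binding itself, expiry needs no cleanup job: the grant is simply never evaluated as matching once the date passes.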
Now you touched on a key point, right? Just-in-time. A lot of IAM architects are advocating for it. With just-in-time, the challenge is that sometimes I may not know whether I need access for four hours or eight hours, or whether I just need S3, or logs access, or EC2 access, and things like that. In that case, how do you strike the balance between giving folks the limited access they should have versus not blocking them when they are, let's say, triaging an incident they're working on? How do you strike that balance between velocity for the developers or users of the cloud, and security, which is trying to make sure that limited access is given for a limited time?
Sneha: You know, with all my experience working in different places and different levels of companies, I have not seen anyone who has perfectly balanced it. There's no correct answer for this, to be honest, because sometimes your workload will be four hours long, sometimes more. So it is all about, again, which environment you're in. In production, more than four hours of privileged access is a complete no. That's just a given, right?
So that is one thing. But if you go into somewhere like a development environment, of course you have access for eight hours, and that should not bother you, unless someone says, no, our policy says six hours, so we're only giving six hours. It depends on what the company wants to do in that case, but mostly what I've seen as best practice is eight hours.
And usually, even the process to get that access should not be very complicated. And sometimes if you need break-glass access, that process can also be automated, because sometimes these things happen in the middle of the night and no one's available to approve your access. So with automation on that side: if there's a JIRA ticket that's active for this issue and in a particular status, and this person is on call, when those two things are there together, it's a safer bet to say that this person will need this access, and he or she can have that privileged access during that time.
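The automated break-glass rule Sneha outlines combines two signals: an active incident ticket and the requester being the current on-call. A minimal sketch, with made-up ticket fields and names:

```python
def auto_approve(user, ticket, oncall_roster):
    """Auto-approve elevated access only when both signals are present."""
    has_active_incident = ticket is not None and ticket["status"] == "In Progress"
    is_on_call = user in oncall_roster
    return has_active_incident and is_on_call

ticket = {"key": "OPS-123", "status": "In Progress"}
oncall = {"dev1"}

assert auto_approve("dev1", ticket, oncall)        # active incident + on call: grant
assert not auto_approve("dev2", ticket, oncall)    # not on call: deny
assert not auto_approve("dev1", None, oncall)      # no active ticket: deny
```

Requiring both signals is what makes this safer than either alone: a stale ticket cannot be reused by a random employee, and the on-call cannot self-elevate without an incident on record.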
Host: Hmm. Makes sense. I think you hit on a very important point when it comes to just-in-time, because just-in-time means there is an approval process, right? You cannot just get access. And if you are working on a weekend, trying to deploy something, and all of a sudden your deployment fails and you need some elevated access, and you are waiting for someone to be available to give you that access, that means you cannot move forward, right?
So finding those avenues to maybe do auto-approvals in such scenarios, where you take context from different areas and then elevate the person's access so they are not blocked, that would be amazing, actually.
So one last question on this area of just-in-time and all. Kailash, who is one of our common friends, has asked: what are data perimeter boundaries, and how do they play a role in IAM? And how can we use them along with, let's say, RBAC when we are deciding the RBAC?
Sneha: I think perimeter security is important. I'm not saying it's a little older versus zero trust and everything, but it does play an important role, because we define defense in depth as different layers of defense you put in place. And within perimeter security, there are a few aspects that I look at.
On the IAM side, having permission boundaries, because no one should be able to create something, or even create some IAM access, that has more access than what they themselves have. Those are some things that I would look at. And having an external ID in the trust policy when you're accessing external AWS accounts, or when external accounts are accessing your account.
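The cross-account trust Sneha mentions is typically enforced with an external ID condition on the role's trust policy. The statement below follows the standard AWS IAM JSON shape; the account ID and external ID are placeholders, and the evaluator is a toy that handles only this single-statement form, to show what the condition buys you.

```python
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::111122223333:root"},
        "Action": "sts:AssumeRole",
        "Condition": {"StringEquals": {"sts:ExternalId": "partner-42"}},
    }],
}

def may_assume(policy, caller_arn, external_id):
    """Tiny evaluator for this one statement shape (illustration only)."""
    stmt = policy["Statement"][0]
    return (stmt["Principal"]["AWS"] == caller_arn and
            stmt["Condition"]["StringEquals"]["sts:ExternalId"] == external_id)

# The right account with the right external ID gets in; the same account
# without the shared secret does not (the "confused deputy" protection).
assert may_assume(trust_policy, "arn:aws:iam::111122223333:root", "partner-42")
assert not may_assume(trust_policy, "arn:aws:iam::111122223333:root", "wrong-id")
```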
And then, of course, another thing that I have seen only in one place, but I think it's very useful, at least for us it was: restricting your AWS, or any cloud, access to particular regions, because you're not going to use the other regions, and it's a hassle to maintain them.
Secondly, have a whitelist of the different resources you use within the company. The reason being, there are a hundred resources within AWS that someone can just go and click on, and it's going to cost you.
So you want to have that whitelist of things, and you want to put these two things in at the SCP level. You could also have something that says people can access AWS only if they are on the VPN; that is also something you can do within SCPs. But then again, SCPs are limited; there are only five of them, right?
So you want to use them wisely, and then have deny permissions for a few things if they're not required. For example, no one should be able to go and delete a Kubernetes cluster. Things like that, you want to put in place. And then you start looking into accounts, having those permission boundaries and things like that. Also federation: how are they doing it? Probably have reviews on that one.
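The guardrails just described (region restriction, a service whitelist, and hard denies for destructive actions) could be combined into a single SCP roughly like the one below. The statements follow the standard SCP JSON shape, but the region list, whitelisted services, and denied action are examples, not a recommendation for any particular environment.

```python
scp = {
    "Version": "2012-10-17",
    "Statement": [
        {   # Deny anything outside the approved regions.
            "Effect": "Deny",
            "Action": "*",
            "Resource": "*",
            "Condition": {"StringNotEquals": {
                "aws:RequestedRegion": ["us-east-1", "us-west-2"]}},
        },
        {   # Deny every service outside the company whitelist.
            "Effect": "Deny",
            "NotAction": ["s3:*", "lambda:*", "eks:*", "iam:*", "sts:*"],
            "Resource": "*",
        },
        {   # No one deletes a Kubernetes cluster, period.
            "Effect": "Deny",
            "Action": "eks:DeleteCluster",
            "Resource": "*",
        },
    ],
}
assert len(scp["Statement"]) == 3
```

Packing several guardrails into one policy like this is also how teams cope with the small number of SCPs allowed per account, which the host picks up on next.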
And then, who has access to everything? Of course, those things are taken care of by federation. But also, at a GCP level, you do not want anyone's personal account to have access within production, just like I mentioned. Use groups; they are usually the best practice. And yeah, I think those are the few things I would like to highlight.
Host: I was at an AWS event earlier this year. You mentioned SCPs, right, that you can only define five of them. One of the participants asked about that, and there was a product manager from the IAM Identity team who said, yeah, we know that's a problem, and we are trying to increase it; hopefully this year you will hear that we are increasing it, I cannot commit to anything, but we are working on it. So yeah, that's a real problem, right? When you have a large organization and you are limited to only five SCPs, what do you limit and what do you allow? It becomes a challenge.
Sneha: Yeah, it's a hard limit. If it was a soft limit, you could increase it somewhere, but yeah.
Host: Through a quota or something like that. Yeah, that is also not a possibility. So now I'm shifting gears to AI. We are in this world, so we need to talk about it.
So often it is said that we can solve many problems with AI, right? Folks even say that we can solve security challenges at scale as well. In your experience, what specific functions of, say, a security engineer's role have you seen solved through AI today?
Sneha: I think one of the biggest ones is just log analysis and remediations. Like say a new person comes into the company and he's in shock and he doesn't know what to do with this particular issue. But the tools that we have right now, helps anyone, like any person in the organization can go and remediate a security issue. So it actually gives you that power to… understand what's going on and push yourself to remediate that as well.
Like especially the tools nowadays, take any security tool, they have those LLMs where they give you the consolidation on how to remediate the step by step. So those small things I think have helped most of the security people, maybe you're not in vulnerability management and you work in say some other field.
But you have the power now to understand what is happening and resolve it as well. So I think that gives a lot of power in the hands of security engineers to discover different fields. And that is one major breakthrough, I think, for us especially. And of course, anomaly detection, can't have like, we are all lean teams.
I don't think we want to read through hundreds of thousands of log records ourselves and come up with something or check for anomalies. It's not possible. You need something that catches these anomalies and tells you what is happening, what really went wrong, and what is different in this particular log.
Then you can look into it, because you have the expertise and knowledge of what it should look like, and remediate it how you think is right. It also gives you that support: okay, this is the right way of doing it. You don't just go out, search everything on the internet, and come up with something that is not right. So yeah, that really helps a lot.
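The anomaly detection idea described here can be sketched very simply: surface the events that are statistically rare so a human with context looks only at those. This toy example flags any (user, action) pair seen in under 2% of events; it is a crude stand-in for the statistical or LLM-backed detection that real security tools provide, and the log data and threshold are invented for illustration.

```python
from collections import Counter

# Toy audit log: (user, action) events. The bulk are routine reads,
# plus one unusual privileged action we want surfaced automatically.
logs = [("alice", "s3:GetObject")] * 50 + \
       [("bob", "s3:GetObject")] * 40 + \
       [("alice", "iam:CreateAccessKey")]

counts = Counter(logs)
total = len(logs)

# Flag any event type that makes up less than 2% of all events as an
# anomaly worth a human look, instead of reading every record by hand.
anomalies = [event for event, n in counts.items() if n / total < 0.02]
print(anomalies)  # [('alice', 'iam:CreateAccessKey')]
```

The point is the division of labor Sneha describes: the machine narrows thousands of records down to a handful of outliers, and the engineer's expertise decides whether each one is actually a problem.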
Host: Yeah, it's not humanly possible to look at, let's say, logs from the last 90 days and determine whether somebody is overprivileged, right? That's why a tool like Access Analyzer helps by doing that analysis for us. So we should utilize those more and more.
Do you see some aspects that will always remain human driven, or in the longer term do you see us losing all of our jobs to AI?
Sneha: I think one thing that will remain human driven is remediation, because I don't trust a machine to actually go ahead and do the right thing. AI is not there yet, and I'm not there yet with my trust levels either. Forget AI, we don't even trust humans with remediation; we go through a whole approval process and everything.
So yeah, I wouldn't trust any AI to do remediations in my environment, and if it's a production environment, I'm even more cautious. I would probably not trust AI to do those at this point.
Host: Yeah, it's funny, but it's very true, right? One of my friends used to work at Tesla, and even though he worked there and could see how the self-driving performs on certain roads, he was like, no, I don't trust the car, I'll drive it myself. So it's very human, right? We wait for a certain threshold in our mind to be met, and then we start trusting. I totally hear you.
So with AI, one challenge organizations often face is: what data is used to train the models? Is there sensitive data, IP, things like that in it? How do you make sure you are not leaking any of your sensitive data when you are, let's say, training models or using AI capabilities? What would you recommend? As an organization, let's say I'm a CISO, how do I stay on top of that?
Sneha: I think first we need to start with reviews, because a lot of these AI components have vulnerabilities, and you need to understand what your data contains. I have been at fintechs and other places where data was key. Data is important to us, and we have to guard it.
We're not going to put our PII data into LLMs and have them do things. So educate the developers about what is good and what is not, and have policies around AI. Once you have built that infrastructure, build guardrails around it: best practices, guidelines, everything. Then you start adopting it, and then you build.
I know it's easier said than done, because adoption moves too fast. It's hard to keep up with how quickly things are getting adopted. But if you can hold out, even for say two weeks, and put those guardrails in place first, then you can go ahead knowing that at least the infrastructure is secure.
The next level would be educating people and telling them what is not allowed to be put into AI. For example, people can't just put PII data into ChatGPT. That is a complete no. But if people don't know that, they go ahead and do it. You do not want those things, and you are responsible: at the end of the day, the security team is accountable.
So you have to know like where you can put those guardrails and see what is going on within the environment.
I've also seen different teams just push things out on an LLM they want to adopt. They want the next shiny thing that's out there; they want to try it out. But you have to put those boundaries in place and say, okay, no, this is not going to work right now, we'll review it first. Once that is done, there are other things within LLMs, like the data that goes in. Have guardrails set there: PII should be redacted or replaced by something before it goes into the training data. How are you going to protect your models? How are you going to protect the data? What data goes into training, and what data should not?
All these things. For example, you could have an enterprise ChatGPT subscription where it says your data is not going into training. With those, at least you know that what you're putting in there is not being used for training.
But if you're not paying for a product, then you are the product, right? So in those cases, you want to make sure you're not putting any personal information or PCI data out there, because that happens, and people do face losses from it.
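The redact-or-replace guardrail Sneha describes for training data can be sketched as a simple substitution pass. The patterns and labels below are illustrative, not an exhaustive PII detector; production systems typically combine many more patterns with ML-based entity detection.

```python
import re

# Hypothetical redaction pass run over text before it is added to a
# training corpus or sent to an LLM. Each match is replaced with a
# placeholder label so the structure of the text survives.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace each PII match with a bracketed placeholder label."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact("Contact jane.doe@example.com, SSN 123-45-6789."))
# Contact [EMAIL], SSN [SSN].
```

Sitting a pass like this in front of the model boundary is one concrete form of the "guardrail around the infrastructure" mentioned above: developers keep their velocity, and the sensitive fields never leave in the first place.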
Host: Yeah. And these tools, right? You can just go and sign up and start using them. There is no real way to restrict a developer on my team and say, hey, you cannot go to ChatGPT, or Claude, or Grok, or whatever it is. That's why some new products are coming up with browser plugins, so that you can restrict usage and enforce policies and things like that.
But yeah, it's a hard problem to solve, because if you have a thousand people, there are so many providers, and they let you use them without even paying. There is no paywall, right? If you go to ChatGPT, you can use it for free. Of course, as you highlighted, if you want your data to stay within your tenant, maybe you get an enterprise license and things like that.
But during those two weeks you mentioned, while that evaluation is happening, somebody can go in there, paste in your PII, and say, hey, summarize this, or whatever the use case is. And you've lost the data, right? You've lost your PII and things like that.
So we've spoken about security quite a bit. One thing I wanted to touch on before we end today is community. You are a big contributor to the community, and you were recently elected as a board member of the ISC2 Silicon Valley chapter as well. So I wanted to understand, for community members or even for outsiders, what's coming up in 2026?
Sneha: I think the trend I've mostly seen is toward AI, MCP servers, and everything around that, like the security of those things, because again, they are moving at a very fast pace. It's hard to keep up, and it's very important that we keep up, because your entire organization is working on protecting the data.
And that is basically the bread and butter of the whole organization. One thing we have learned from all the incidents that have happened in the past is that it costs money. If your data is leaked, it is going to cost you money. You have to follow the different compliance regimes that exist across countries. All these things are associated with a dollar amount, and it's a huge one, too.
So just looking at that value, I think our talks are going to be more around security within AI, because that is the new thing, and it's something we are all trying to understand. Our community needs help with that. And I think, as a security community, we are now all coming together in different communities like these.
It is very important to educate each other. Whatever you do within your company, you can talk about it. Whatever others do within their companies, they come and speak to you about it. You learn something new, and every single meeting is like learning something new, which I think is amazing.
This year's talks were really high quality. I enjoyed learning, and I enjoyed the entire space and the kind of support it gives you. You guys are building a great community out here, too, just by doing talks about different IAM aspects, or anything within security. So I think we are all trying to do the same thing.
Host: For folks listening to the podcast, what I'm hearing from you is that there are two ways to contribute. One is to participate and learn. The second is, if you have learned something, share it with others in the community, so that the community keeps growing and everyone benefits from it.
So that brings us to the end of the podcast.
Learning Recommendations
Host: But before I let you go, I have one last question. Do you have any learning recommendation for our audience? It could be a blog or a book or a podcast or anything that you could recommend.
Sneha: I think one thing that really helped me, because a lot of people I know are in job-search mode, was this one YouTube video a friend sent me. It covered just the basics of TLS, and it spoke for an hour about that. I'll probably send it to you, and you could put it in the description.
It was very helpful for understanding how the basics of TLS work: the handshake itself, what all goes into it, the certificates, the cipher suites they use, and everything. It was so detailed that I found it really valuable. That is one thing I would ask everyone to watch, whether they already know the material or not; it doesn't matter. Just watch it, and it will build some understanding.
On the books side, I'm more into motivational books, and one that stood out was The Fearless Organization. If anyone gets a chance to read it, it was really helpful to me, so I hope it helps.
Host: Okay, yeah, thank you for sharing that. We'll add those to the show notes so that our audience can go and learn from them. And that brings us to the end of the podcast. Thank you so much, Sneha, for taking the time and joining me today for this episode.
Sneha: Thank you. Thank you, Purushottam. Thank you for having me. This was amazing. I hope you guys do more of these so I can see more of them. Thank you.
Host: Absolutely, it is a pleasure. Thank you so much. And to our audience, thank you so much for watching or listening. See you in the next episode. Thank you.
Sneha: Thank you.