IAM in 2026: From Anti-Patterns to Autonomous AI Agents with Advait Patel

TLDR;

  • When it comes to compliance, do not treat it as a checklist. Instead, focus on the security basics and lay a solid foundation; that will help with compliance as well.
  • Security programs should keep the experience of Dev, Engineering, Product, etc. in mind. A bad experience could backfire, and security could lose trust with others.
  • Traditional KPIs do not scale well when it comes to AI in the SOC; focus on quality over quantity of usage and adoption in day-to-day tasks.

Transcript

Host: Hi, everyone. This is Purusottam, and thanks for tuning into the ScaleToZero podcast. Today's episode is with Advait Patel. Advait is a senior SRE at Broadcom, where he balances performance, security, and scalability in cloud environments. He's a founding member of AIVSS at the OWASP GenAI Security Project. He has authored multiple books, like Implementing Security with AI and Implementing Identity Management on GCP.

Thank you so much, Advait, for joining and sharing your insights today around security.

Advait Patel: Hi, Purusottam. Thanks for having me here. I'm excited to be part of this podcast, as I have heard about it from my friends and everywhere I see on LinkedIn. So thank you so much for having me here.

Host: Thank you! You're so kind. Before we get started on the security aspect, I know that I just shared a two-liner as your intro. Do you want to add anything about your journey? Like how did you get in, or what keeps you excited in the field that you are in today? Anything you want to add?

Advait Patel: Absolutely. So as you already shared, I'm a senior SRE at Broadcom, where I wear multiple hats, ranging from a security position, to being the reliability guy, to handling the production environment, be it in AWS or GCP, to being a champion security lead. This is all at work. But I also try to give back to the community by sharing my knowledge, because

I still feel like in the space we are in there are still some missing gaps, and people need to learn from each other, from the mistakes that they made knowingly or unknowingly, intentionally or unintentionally. That's what I try to help with: helping the community as well. So I am very heavily involved with security organizations such as OWASP and CSA. With OWASP, I am a founding member of the Artificial Intelligence Vulnerability Scoring System (AIVSS).

I also founded DocSec, which is a Docker security analyzer tool. Both have incubator project status under OWASP and have both been adopted as well.

Apart from that, I am also a co-lead for a CSA working group where we develop the AI Controls Matrix (AICM), helping organizations with auditing and compliance, because traditional auditing controls won't work with AI systems and agentic AI solutions. That's why we drafted the AICM controls, which can help organizations start thinking in the AI direction.

I'm also involved with ISACA and ISSA through their speaking engagements, and I help them with some research work. I have also been working on my own, trying to fill the gap, that missing puzzle piece between industry and academia, by doing research and providing inputs and experience from the industry side to make this space better and more reliable and secure.

So that's what I have been doing, and throughout this journey I met really great people who were all very smart in their respective fields. I also got the opportunity to speak at flagship conferences such as SANS, OWASP, Blue Team Con, ConCon, et cetera.

And when I meet people, no matter whether it is at a conference, an event, or a seminar, it feels really good, because each person comes with a different mindset and some different key action item that we can go ahead and implement in our day-to-day life.

So it was really nice, and what keeps me going is engineering, because this is not a static field. It continuously grows, just like AI; no one knew two or three years ago that it would become an integral part of our lives. Right?

Before AI it was cloud, and before cloud it was something else. Right? So it continuously challenges me, at least, and it helps me grow every single day. It's not like, okay, I know everything, I learned everything today; tomorrow when I open my laptop there will be something new, there will be new challenges, new operations, new automations that will also help me move upwards and onwards. So that's what keeps me going.

Host: Yeah, there are so many things that you touched on, and those are very relevant today, like AI and the constantly changing nature of our engineering life, or even the community aspect that you highlighted: learning from others. Let's say you and I are chatting here, but when we meet in person we'll have a very different conversation at a personal level, and we can learn from each other as well, which we can implement in our day-to-day life.

And it looks like you contribute to the community quite a bit, and you are part of many organizations. That's amazing, because not a lot of people are able to do that at that scale, and you are. That's amazing for the community; thank you for doing that. We are also beneficiaries of it, so that's why I want to thank you for doing that more and more.

And the conference season has begun, so hopefully we'll run into each other pretty soon somewhere in the US.

So you touched on AI. Let's use that as a way to get into the security aspects of what we wanted to cover, which is AI in cloud security operations.

And one of the things that you highlighted is that security controls are maybe not enough, or too complex, and things like that. So infrastructure security and compliance are often too complex in the SRE world.

What are some real world challenges that you see today when it comes to infrastructure security and compliance?

Advait Patel: Sure. So when you see SREs in the infrastructure cycle at any company, SREs are the closest to the production environment, right? They are the ones who will access the production environment maybe every minute, or every now and then, right?

So SREs work on reliability, security, compliance, cost-related things, automation, everything, right? They are basically everywhere, not just one part of the product. That is also why meeting security and compliance is very difficult, and at the same time very important. Because when you think about compliance, it would be very difficult if you just put compliance into a document checklist: okay, I need to do A, B, C, and D, and then I will be compliant. That's not going to work now, right? Because systems can do anything; we are focusing more on AI, we are focusing more on automation.

So at the same time, we also have to think about how we can keep our systems patched, how we can have visibility into our systems, how we can have logs that collect everything, how we can track down system changes: who accessed what, who made changes, when, and what was affected, et cetera. These sorts of things have to be in place since day one, rather than finding out about something and then implementing that control.
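The day-one visibility described here (who accessed what, who changed what, and when) could be sketched as a simple append-only audit log. This is a minimal illustration only; all class and field names are invented for the example, not any specific product's API.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AuditEvent:
    actor: str        # who
    action: str       # what they did, e.g. "read" or "update"
    resource: str     # what they touched
    detail: str = ""  # what changed, if anything
    at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

class AuditLog:
    """Append-only log: events are recorded, never edited or deleted."""
    def __init__(self):
        self._events = []

    def record(self, actor, action, resource, detail=""):
        self._events.append(AuditEvent(actor, action, resource, detail))

    def events_for(self, resource):
        """Answer the auditor's question: who touched this, and what did they do?"""
        return [e for e in self._events if e.resource == resource]

log = AuditLog()
log.record("alice", "update", "prod/db/users", "added index on email")
log.record("bob", "read", "prod/db/users")
history = log.events_for("prod/db/users")
```

The point of the sketch is the shape of the evidence: if every change is recorded as it happens, answering an auditor's question becomes a query instead of a scramble.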

So that is where compliance is very important. If we treat it purely from a SOC or auditing perspective rather than an engineering perspective, then it will always be difficult, always challenging, to complete it or to prove it at the end of the year. But if we do it the right way, if we have the controls, the guardrails, and the visibility since day one, then you don't have to worry about it. You don't have to think: the auditors are coming next week, what do I do? How should I tackle this problem, right?

But if you have everything in place since day one, then you don't have to worry about it. And when these things are in place, as an engineer you also feel confident. You feel confident about yourself, you feel confident about your product, and eventually you feel confident about passing any security or compliance audit. So if you think of it... sorry, go ahead.

Yeah, so if you treat it as a document checklist, or as some predefined checklist, then it will always be difficult to achieve. But if you think of it as a goal, as an action item, then it will eventually be easier.

Host: Yeah, so one of the things that I got from your response is: do not consider this as just a certification that I need to do and then I am done, right? Do the groundwork so that your foundation is set right.

That helps with certification, but at the same time it helps with your compliance, and it helps with your security guardrails and security basics being put in place the right way.

Now, when it comes to growing organizations, large organizations, it would be very challenging to stay on top of it, right? And today, with AI, I am pretty sure we can automate some of these things. How are you thinking about automating some of the compliance, posture-related things, and security checks, and maybe avoiding some of these bottlenecks?

Advait Patel: Sure. So I will share from my perspective, and how my team thinks, which is basically: there will be so many things coming in and going out, right? So we need to focus on the actual problems we are facing, rather than thinking about everything together and then doing nothing, right?

So our focus would always be to automate, because we don't believe in manual intervention. If it happens once, that is fine. If it happens twice, then we need to think about it. If it happens again and again, then something needs to be automated, some action items need to be taken. So first of all, we design the big picture: whether it will help or not, whether using automation will increase our burden, whether using automation will increase or decrease our efficiency or our visibility of the picture.

Because sometimes what happens is, if I automate something, the rest of the team doesn't know what actually happens behind the scenes, or whether it is actually helping us or not. Automation should not be a checklist item: okay, I automated 50% of my infrastructure tasks. It has to be efficient. It has to be working for you rather than against you.

So if we do it the right way, in a more controlled way, so that it can actually help us, then it will help our team achieve its goals, rather than working in the opposite direction where we don't have any control or visibility. That's what we think, and that's what we try to implement at work as well.

Host: Makes sense. So one of the aspects of SRE, which you lead, is the IAM implementation. And in 2026, we talk a lot about security as a service. When it comes to security, how is the IAM implementation shaping security in, let's say, today's GenAI world? How do you see IAM playing a role?

Advait Patel: So what I believe is that security should not be an afterthought; security should always come first into the picture. It should come in at the design phase. It should be implemented at the time of product building. It should be part of the platform.

That's what I believe when someone says security as a service, or when I say security as a service: it should come right with the platform, right with the infrastructure, not something where you discover you were vulnerable and then you figure something out. That's not security as a service. Security as a service always comes first and is always part of the infrastructure development since day one.

Coming back to IAM: IAM is about making sure people or tools have the right amount of access. They have the access when they need it most, when they are trying to do something. For example, people need access to check the logs, to track the changes, to see what is happening, right? And if those IAM-related accesses and controls are manual, then people will feel it as a burden. They will go: I need to file a ticket, I need to wait for an IT security guy to see the approval, to see if I have enough approval from my managers. And once they check, once they review, then they will approve or reject it, right?

So this entire process, depending on the company size or the process involved, will take maybe a day or a couple of days, right? So developers are getting tired of these workflows, right? And in a way we are unknowingly creating hurdles for them, because as a security person, I want to make sure that my system is secure, but at what cost? At the cost of development, right?

So for security as a service in 2026, this has to go. We need to automate it. We need to add pre-approved workflows so that when a developer needs access, they can simply go and request just-in-time access, which will review the predefined controls and the predefined set of actions and grant the access right away.

Not necessarily instantly, maybe in five minutes or maybe in 10 seconds, but it should not take one day or three days, right? And that way we can also improve product efficiency by unblocking developers, because security is not about locking everything. Security is about locking whatever needs to be locked and granting whatever needs to be granted.
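As a rough illustration of the just-in-time flow described above (predefined controls reviewed automatically, access granted time-boxed or escalated to a human), here is a minimal sketch. The policy table, role names, and permission strings are all invented for the example; real JIT systems would also log the grant and revoke it on expiry.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical predefined controls: which role may auto-request
# which permission, and for how long the grant should live.
POLICY = {
    ("developer", "prod-logs:read"): timedelta(hours=4),
    ("developer", "dev-env:write"): timedelta(days=7),
}

def request_jit_access(role, permission):
    """Grant time-boxed access automatically when a predefined
    control matches; otherwise fall back to human review."""
    ttl = POLICY.get((role, permission))
    if ttl is None:
        return {"granted": False, "reason": "needs manual review"}
    return {
        "granted": True,
        "expires_at": datetime.now(timezone.utc) + ttl,
    }

grant = request_jit_access("developer", "prod-logs:read")   # auto-approved
denied = request_jit_access("developer", "prod-db:write")   # escalated
```

The design choice worth noting is that the fallback is review, not denial: anything outside the pre-approved table still reaches a human, so automation speeds up the common case without widening the blast radius.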

So that is where security as a service comes in, and that is where I think IAM and identity should also follow in 2026.

Host: Yeah, you highlighted a very key thing, which is the developer experience, right? Of course, security is important, but if it is affecting the developers' experience, then developers will never be your champions, right? If you go to them maybe three months down the line and say, hey, I want to roll out a new program, they'll be like, yeah, the last program itself was a bad experience; maybe we don't want to support it anymore. Right?

You lose that trust, and you lose the champions within engineering or product or other parts of the organization. So developer experience is super important.

The second thing that you highlighted, about workflows, just-in-time access, and unblocking developers as soon as you can, goes a long way, totally. So now, let's say you are architecting an IAM program today, compared to maybe what we have seen five years ago. What changes would you introduce?

Advait Patel: Sure. So first of all, today everything is about AI, right? So I would definitely use AI, first of all, to manage and check the guardrails and controls, and to manage the permissions, so that when someone needs access, they have it. They should not wait for X amount of time; they should have it so that they can do their work faster.

And you touched upon a good point, right? It should not be a blocker. It should not hinder anyone's work or progress, right? Because if you think of it as an entire cycle, if you break a point somewhere in between, the other parts will also fall apart, right?

So it should be like: okay, I need to use some sort of tools, some sort of automation, some sort of scripts, whatever it is, to make their lives easier and to make my infrastructure easier. Because from my perspective, I have to make the system secure and reliable, and from their perspective, they have to continuously make the product more useful, more reliable, right?

So we both have to work hand in hand, and that's what the architecture should look like. It shouldn't only focus on security, and it shouldn't only focus on reliability; both teams should work hand in hand, and the architecture should reflect that.

Host: Makes sense. Yeah, I think you're doubling down on that experience: making it easier for the rest of the org to follow the guidelines put together by security, rather than thinking that security is a blocker or a team of no, right? So, makes sense. Now, let's say you have implemented the IAM program, or you are using AI for the SOC, and things like that.

This is one of the questions that we got from our common connection, Valene. And the question is: what are the KPIs that you look at to see how effective the change is? What KPIs would you recommend security leaders look at?

Advait Patel: Sure. So when we talk about traditional KPIs, three things come into the picture: mean time to react, mean time to detect, and mean time to resolve, right? These things were great. I'm not saying that they weren't helpful or useful. But with today's systems, with today's picture, they are not...

I won't say they are not useful, but they are not enough; that would be the right word. They are not enough, right? We also need to measure how AI is actually helping us, what sort of things AI is helping us solve, whether we are actually improving ourselves or not, whether AI is actually working in our favor, on our side, or not, right?

So the first signal would be signal quality, right? Whether AI is improving the existing workflow, or helping us do XYZ in a better way than we were doing it previously. That would be the first KPI that I would put.

The second would be analyst efficiency, I mean, engineer efficiency: after using AI, am I able to solve or fix this problem in X amount of time, when it took Y amount of time before, right? It has to be that efficient. Otherwise, if it's not saving our time, then there is no point in using AI, right? So that would be my second KPI.

The third would be decision quality, right? If I have to make all the decisions myself, even for low-risk tasks and low-risk recommendations, then what's the point of using AI? If you are using AI, it should recommend things to you in a better way, with good practices, so that you can implement them in your system, right? So that would be the third point. And maybe the fourth is automation safety, where you can ask: I implemented this automation, but is it correct? How often are those AI-driven automations correct, right?

So we also have to use AI where it tells you: okay, these are the automations that are in place, which seem correct, which are also doing the correct things. Because as an engineer, you cannot focus on a hundred things at a time, right? You can, but it will take time; you can do it in like seven or ten days. But you need AI to check the correctness of your automation. So that would be one other KPI that we can think about.
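Three of the four KPIs above reduce to simple ratios you can compute from counts your SOC already tracks. This sketch is illustrative only; the function name, inputs, and example numbers are all made up for the demonstration.

```python
def soc_ai_kpis(alerts_raised, alerts_true, mins_before, mins_after,
                automations_run, automations_correct):
    """Compute three of the AI-era KPIs discussed above as ratios."""
    return {
        # 1. signal quality: share of AI-raised alerts that were real
        "signal_quality": alerts_true / alerts_raised,
        # 2. analyst efficiency: fraction of task time saved with AI
        "time_saved": 1 - mins_after / mins_before,
        # 4. automation safety: how often automations were correct
        "automation_safety": automations_correct / automations_run,
    }

kpis = soc_ai_kpis(alerts_raised=200, alerts_true=150,
                   mins_before=60, mins_after=15,
                   automations_run=40, automations_correct=38)
```

The third KPI, decision quality, is harder to compute automatically; in practice it usually needs sampled human review of the AI's recommendations rather than a counter.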

Host: So yeah, I like how you structured them into different KPIs. One question I have is about decision quality, which you mentioned. Any tips on how you can measure that it has improved? Is it based on the volume of decisions we are able to make today? Or is it like the ones you mentioned, mean time to detect, mean time to respond?

Is there a metric that can be used when it comes to decision quality as well? Like do you have something in mind?

Advait Patel: Sure. So for me, and I think if you ask any engineer, most would say: always quality over quantity. Because if your AI agent is giving you maybe 10,000 recommendations, but those are false positives, for you it's just a waste of time. But if it gives you 100 recommendations that are accurate, like 99.99% accuracy, then those 100 recommendations are way above the 10,000 recommendations.

Because those 100 are saving your time. Those 100 are actually leading you somewhere, in a good direction. So for me, it's always quality over quantity; that's what I'm trying to say. When AI recommends something, you have to check the correctness. You have to check whether it is giving you the right predictions or recommendations or not. We will probably talk about this later.
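The quality-over-quantity point can be made concrete with a rough triage-cost estimate: what does a given volume of recommendations cost in analyst time at a given precision? The five-minutes-per-review figure and both scenarios are arbitrary assumptions for illustration.

```python
def triage_cost(recommendations, precision, minutes_per_review=5):
    """Estimate how many recommendations lead anywhere useful,
    and how many analyst minutes the false positives burn."""
    useful = recommendations * precision
    wasted_minutes = (recommendations - useful) * minutes_per_review
    return {"useful": useful, "wasted_minutes": wasted_minutes}

# 10,000 noisy recommendations vs 100 sharp ones
noisy = triage_cost(10_000, precision=0.01)
sharp = triage_cost(100, precision=0.99)
```

Under these assumptions both feeds surface roughly the same number of real findings (about 100 vs about 99), but the noisy one burns tens of thousands of analyst minutes on false positives, which is exactly the argument for preferring the 100 accurate recommendations.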

Because there is a very hot topic right now, AI versus AI, right? Just like we are using AI, the other parties, the bad actors, are also using AI. So whatever AI recommends, whatever it predicts, we also have to check the correctness. At the same time, it's not like: my AI agent is giving me 10,000 recommendations, so they must be good. There always has to be human intervention at some point to check the correctness.

Host: Yeah, it's very similar to the philosophy of trust but verify, right? Even though you trust that the AI systems are providing you close-to-accurate findings, you still need a verification system in place to ensure that it is what it is, right? Makes sense.

So I want to go back to IAM. One question that I forgot to ask is,

When it comes to IAM in AWS or GCP, are there any anti-patterns that you have noticed that apply to the AI-driven operations we are setting up?

Advait Patel: What do you mean by anti-patterns in IAM?

Host: You mentioned the IAM architecture, right? If you are architecting today, then you will follow a specific pattern or architecture for implementing IAM. Similarly, are there any anti-patterns that you would recommend folks not follow? Maybe going very fine-grained with access control, or, as you highlighted, JIT.

Maybe do not enforce JIT on every single access. Have you seen any other anti-patterns used across organizations which you would recommend folks not use anymore?

Advait Patel: Sure. So it actually varies company by company, person by person, right? Because, for example, if something works for a startup with a hundred people, a hundred engineers, it won't work for an enterprise company which has 10,000 users, right? And the other way around: whatever works for 10,000 doesn't work for 100. So it always has to be suitable for your needs, for your company's needs, and most importantly for your engineers, as at the end of the day, those engineers are the ones who will make the product, who will put it in better shape, right? So as for anti-patterns: if JIT works for you, do it.

If you want to start from zero trust, where you don't trust any API calls, where you don't trust any human intervention, then start with zero trust and go upwards. Whoever needs access for whatever purpose, start assigning it to them, rather than starting directly from admin-level or power-user-level access. Start by assigning them roles.

For example, your DBA, your DB team, doesn't need access to the infrastructure, I mean virtual machines, your EC2 instances or GCP VMs. Your DB team needs access to database services such as DynamoDB or Cloud SQL. So you should implement IAM roles based on whatever team each person belongs to.

You can assign them those specific roles, and it doesn't matter if they are just-in-time; if you can manage it, you can assign them roles for a longer time as well, depending on the needs and whatever is suitable for you, right?

And if that person wears multiple hats, if that person is on the DB team but at the same time is also on the K8s team, then you can assign them both roles at the same time.

It's not one or the other, right? But it has to start from somewhere, and it has to start from the bottom rather than from the top. It's not like: okay, I'm giving you admin privileges, and I will track your usage, and if in 30 days you don't use DB services, then I'm going to remove DB services. It's not going to work that way, right? But people used to do that before. I have seen people, I have seen startups, because startups cannot afford dedicated security engineers, right? So they start from the top and then go to the bottom, but it should not be that way.

With AI, there are certain tools you can use. You can always start from a smaller set of permissions and then add more permissions on a need basis.
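The bottom-up, role-per-team approach described here could be sketched like this. The team names and permission strings are purely illustrative, not real AWS or GCP role identifiers.

```python
# Illustrative team-to-permission mapping: start from the bottom
# with scoped roles, never from admin.
TEAM_ROLES = {
    "db": {"dynamodb:read", "dynamodb:write", "cloudsql:admin"},
    "k8s": {"gke:deploy", "gke:logs:read"},
    "sre": {"ec2:describe", "cloudwatch:read"},
}

def permissions_for(teams):
    """A person on multiple teams gets the union of their teams'
    permissions, and nothing more (no default admin grant)."""
    perms = set()
    for team in teams:
        perms |= TEAM_ROLES.get(team, set())
    return perms

dba = permissions_for(["db"])          # DB team: no VM access at all
hybrid = permissions_for(["db", "k8s"])  # wears two hats: union of both
```

Note that the union model handles the multiple-hats case naturally: adding a team adds its permissions, and removing a team removes them, with no admin baseline to claw back later.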

Host: Makes sense, and that's a very good suggestion for how you should approach it when you're small, when you are a startup, and how you should approach it differently as you grow.

And as you highlighted, if a developer is requesting read-only access to a development environment, maybe you don't need that to go through JIT, right? You can just give them access, even for a longer duration. So yeah, it always depends on many factors when it comes to patterns and anti-patterns of IAM.

The next question that I have in mind: everybody is talking about AI, right? And everybody is building AI agents and things like that. I want to connect both the SRE world and the IAM world. As an SRE person,

how do you use agents to do, maybe, root cause analysis when somebody tried to access something and it failed, or somebody is trying to use a service which they generally don't use, like anomalous patterns and UEBA and things like that? How are you using AI agents to perform some of these activities?

Advait Patel: So, AI agents are a favorite for SREs these days as well, because they are helping them. AI agents can be used, as you said, for tracking the changes, tracking the logs, collecting the logs, then finding the patterns, especially in the production environment, especially when you are receiving or working on alerts; that pattern matching is also helpful, and you can solve it with agents.

As I said earlier, SREs are the people closest to the production environment, right? So they also have to be very critical, because when you use agents, you tend to lose visibility. If I'm doing something on my own, I will go: okay, I will click A, B, C, D, E, and F, so I know what is happening at A, B, C, D, E, and F, right?

But when you use an agent, you just run the workflow and sit back and relax. So you don't have visibility into what is happening behind it. But here, you need to know what your agents are doing.

You need to know what they are capable of, so that you don't compromise security or reliability, you don't lose the cost factor, and you don't lose observability. We always start with low-risk tasks; that's how we always use AI. It's not that we need to use AI to replace the security teams.

But we need to make sure that security teams are granted AI tools, that they have the right tools and the right amount of access to use those tools; that's what makes their lives easier, and that's what we can use those AI agents for, right? And I will also speak well of AI: it's not that AI is not going to help. Think of things such as: okay, I need to make sure these operations are completed in X amount of time; or we are in the production environment, we are on call, and we need to find out the root cause of these problems, right?

We need to find out where it is coming from, whether something like this happened previously, or whether something like this will happen in the future. You can use AI agents for this kind of pattern tracking. Also to create the runbooks, the incident runbooks, for your problems.

That is also one useful use case for an AI agent. So these are some smaller things that we can always start with, rather than giving your AI agent your production API key and asking it to take care of everything without any human intervention. That is somewhere we can start.
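The low-risk agent task mentioned earlier, flagging access to a service a principal has never used before, can be sketched with a simple historical baseline. The event format, function names, and service labels are illustrative assumptions; real UEBA systems use far richer features than a never-seen-before check.

```python
from collections import Counter

def build_baseline(events):
    """Learn which services each principal normally uses,
    from a history of (actor, service) access events."""
    usage = {}
    for actor, service in events:
        usage.setdefault(actor, Counter())[service] += 1
    return usage

def is_anomalous(usage, actor, service):
    """Flag access to a service this actor has never touched before:
    the kind of cheap triage an agent can do before a human looks."""
    return service not in usage.get(actor, {})

history = [("alice", "s3"), ("alice", "s3"), ("alice", "dynamodb"),
           ("bob", "ec2")]
usage = build_baseline(history)
flag = is_anomalous(usage, "alice", "iam")  # alice never touched IAM
ok = is_anomalous(usage, "alice", "s3")     # routine access
```

The agent's job here is only to rank and surface the anomaly; whether the flagged access is actually malicious stays a human decision, matching the low-risk-first approach described above.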

Host: Makes sense. You mentioned that there is a possibility that you lose some details when it comes to AI agents. Of course, there are benefits, but there is a possibility that, where an expert might go five steps, AI might miss one or two. This is a question that we got from Jason Kao: do you see AI agents becoming fully autonomous in the near future, where they are not skipping details and are doing as thorough a job as a seasoned SRE or a seasoned security person?

Advait Patel: So, we can clearly see where the industry is headed, right? We can clearly see what the next one or two years will look like, and I can see that some companies are already working in this direction, toward autonomous AI agents, right?

So what I feel is: definitely those things are good, but you should never trust anyone with everything, and especially not AI with your production system, right? Where it compromises your production environment, it touches your customers, it touches your company's reputation.

So definitely you can use autonomous AI, but not everywhere, not with everything. You can maybe try it with some things, as I said earlier: start with low-risk tasks, low-risk items, where you can say, it's fine if I don't have visibility into those tasks; I will let AI do whatever it is doing, and I am fine with this.

I'm fine with the results, right? If you can answer that question confidently, then you can use autonomous AI. But if you are even slightly in doubt, even 0.01% in doubt, I doubt that people will use AI for their production environment.

Host: Interesting. I mean, I agree, and I understand why there would be that hesitation, right? Because when you are an SRE, when you have access to production all the time and you need to analyze a particular issue, you cannot skip any detail, right? It has to be thorough enough that you can trust it. Even if there is, like, one percent doubt, then yeah, folks will not use it, because what if it misses a key aspect in an incident that you are investigating?

Now, speaking of incidents, and you touched on this AI-versus-AI arms race earlier, right? Have you witnessed any breach or attack where attackers are using AI to bypass, let's say, behavioral biometrics, or any anomaly detection that you as a defender have put in place using IAM?

Advait Patel: Sure. So fortunately, and I don't want to jinx it, but at work, no. But last year I spoke at SANS in Denver, and I touched upon this topic. My topic was basically AI versus AI: how we are using AI as defenders.

Bad actors are also using AI. They use AI to inject malicious traffic. They use AI to find more ways to attack the system. They use AI to create patterns. They use AI to make our models and our algorithms faulty, and when algorithms and models become faulty and riskier, you don't know, like we talked about with recommendations, right?

10,000 recommendations versus 100 recommendations: we don't know whether AI is producing them based on false data or correct data. Just like we are using AI to solve things: okay, I used AI to track the patterns, to find the root cause, and then create a runbook, and then our AI agents will use that playbook so that the incident won't happen again in the future, right? But what happens when the very start, when step zero, began from false data? Then your AI agent will carry that falsehood through the entire process, and the runbook you create at the end is based on false data, right?

And when you feed that runbook to your AI agent, the agent will keep messing up the entire thing. So we also have to think about this AI versus AI: just as we are using it, attackers are also developing algorithms and models to make our lives worse, right? And we have seen that.

At work I don't have any personal experience with it, but if I Googled it I would definitely find incidents that happened because of AI versus AI, because it's a very hot thing right now. And given the rapid pace we are at, it will definitely happen more often, sooner rather than later.
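The "step zero starts from a false node" problem Advait describes can be illustrated with a small sketch: before an agent is allowed to auto-generate a runbook from incident findings, require that the root cause be corroborated by multiple independent telemetry sources above a confidence threshold, so one poisoned feed cannot seed the whole playbook. This is a minimal illustrative gate, not anything from the conversation; the `Finding` type, field names, and thresholds are all hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    source: str        # hypothetical telemetry source, e.g. "cloudtrail", "edr"
    confidence: float  # model-reported confidence, 0.0 to 1.0
    root_cause: str    # candidate root cause the model inferred

def gate_runbook_generation(findings, min_confidence=0.8, min_sources=2):
    """Refuse to auto-generate a runbook from a single, low-confidence
    signal: require corroboration across independent sources so that
    one faulty or poisoned feed cannot become "step zero" of the playbook."""
    corroborated = {}
    for f in findings:
        if f.confidence >= min_confidence:
            corroborated.setdefault(f.root_cause, set()).add(f.source)
    approved = sorted(rc for rc, srcs in corroborated.items()
                      if len(srcs) >= min_sources)
    if not approved:
        # Step zero is untrusted: escalate to a human instead of automating.
        return ("human_review", [])
    return ("auto_generate", approved)
```

For example, a root cause reported independently by CloudTrail and an EDR agent at high confidence would pass the gate, while a single low-confidence network signal would fall through to human review. The real design point is simply that the automation's inputs are validated before any runbook is written or replayed.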

Host: Yeah, totally. I'm glad there are no breaches that have happened at your org, and I hope that continues. But I understand what you're highlighting: your foundation has to be solid when it comes to setting up the workflows, so that as we go more autonomous, the foundation either makes the whole playbook successful or makes it fail, right?

So you've been in the industry for over 14 years now and you have seen a lot of trends, as you highlighted: cloud, then mobile, now AI. There has been a lot of progression. What is one trend you saw back in, say, 2012, when cloud was just picking up, that you wish had died already?

Advait Patel: Actually, personally I'm a very big fan of cloud, because I haven't worked on on-prem servers. I don't know the pain and difficulties people used to face back in the day, and I feel very lucky to work directly on the cloud, where it's click, click, click and the instance is ready. So in this journey, I always feel like today is a new day and I'm just getting started.

But over the past decade I have seen trends and products come and go. New names may appear, new products may appear, but people don't focus on the core problems. The core problems are security, reliability, cost effectiveness, and visibility. So when a new name or a new tool appears, I always judge it against these core problems: is it helping me solve them?

Is it helping me do better than the previous tool did at solving these core problems?

If it is, then I will pay attention to it. If it isn't, then I'll just move on. It doesn't matter whether we are talking about cloud, containers, Kubernetes, or whatever technology or tool. Right now it's all AI, right? But if it is not solving the foundational, fundamental problems, then no tool or technology will help your product become the next big thing. So that's how I feel.

And on new trends, definitely it's AI. In any 24 hours we probably speak more about AI than about ourselves these days, right? So we will definitely see more things like autonomous AI, serverless AI, and so on.

We don't know what the next thing will be, right? So we'll see more things coming out, but we also need to think about whether they are actually solving the problem or just introducing new problems.

Host: Yeah, I totally hear you on the pace at which innovation is happening in AI, right? Every two months there is a new model with new harnesses, and the core philosophy on which we had built a product or a solution or a workflow in the organization needs to change.

Yeah, totally. It's a very fast-evolving space we are in, so I'm super excited as well to see what's coming up. One last question, and this is from Jason Kao again. Where do you think AI will be going in terms of cloud security? Where do you see it headed?

Advait Patel: I think I shared a lot of negative points about AI, but I will end on a positive note. I see AI headed in a very good direction. I see people saying good things about AI when they use it in their production environment, and especially when they use it in their SOC.

And I see more and more things being automated in a good way, not a bad way. As I said, AI is not going to replace security; it is going to make security stronger and more effective. So we will definitely see more operations, more workflows, and more agents in these solutions, making the entire security operations space better and better: better in terms of experience for the customers, for the owners, and for the products as well.

So I think we will see more good innovations in this space. And recently, I was reading on the NIST website that they are also working on this intersection of AI and cybersecurity. Once people start paying more attention, once people treat AI as a friendly partner, then we will see more innovation in a good direction.

Host: Yeah, that's a great note to end the podcast on, right? On a high note. I love that. But before we end, I have one last question. Do you have any learning recommendation for our audience? It can be a blog, a book, a podcast, anything you would recommend.

Advait Patel: If you had asked me this question maybe five years ago, I would have said go watch YouTube videos, take Udemy or Coursera courses, those sorts of things, or read blogs on some website. But today the time is different.

There is no time for us to read some manual and assume that just reading it means you know the technology. We are not in that time anymore. The time now is to actually make something, break something, and learn something. You will only really know once you actually do it, once you get hands-on and put yourself into that situation.

So I recommend that everyone who wants to start in this space actually do something rather than just reading stuff and doing nothing. That's not going to help. Sure, you can go ahead and read about what agentic AI is. NVIDIA has a really nice catalog about AI agents and agentic solutions. You can definitely go watch those videos, maybe try to get a certificate or two. But don't stop there; there is a whole new world after that. That is only the first step in this direction.

You have to actually build something, break something, and then see what you can do or innovate in this space. So that's what I would say.

Host: Makes sense. Yeah, I love that. Get your hands dirty, right? The more you play with it, the more you realize how you can leverage it better and what the gaps are, and you keep going.

And that improves your knowledge and also tells you how to fail or how to succeed, either way. So that's a great recommendation. With that, we come to the end of the podcast. Thank you so much for joining and sharing your learnings, and I hope our audience gets a lot of value out of it.

Advait Patel: Thank you so much, Purusottam. Before I go, I would like to say thank you for having me. It was a great experience. Whenever I saw your posts on LinkedIn, I was like, okay, when is my time? When is my time to join this podcast? So thank you so much for having me. And I would also like to request all the audience to please like and subscribe to the ScaleToZero podcast. They are doing really great work, and I would like to see more podcasts in this direction, be it on cloud security or product security. Big fan. Thank you so much, Purusottam, for having me.

Host: Thank you so much. You are too kind. And to our audience: please reach out to Advait if you need any learning guidance or want to see what he is working on. Thank you so much for watching this episode. See you in the next one.

