Securing the SDLC in the AI Era: Challenges and Strategies for Modern Enterprises with Ashish Bhadouria

TLDR;

  • Secure SDLC is as much a Culture challenge as it’s a Process or Tools challenge. Having a strong champions program helps with alignment of Security objectives.
  • Engage early with other teams & stakeholders in the Development cycle. This will ensure Security is not an afterthought rather it’s baked in to the SDLC process.
  • In the AI world, Thread Modeling is important. This ensures security is integrated, the right way, into each layer of AI Application stack including Models, Data, Apps and others.

Transcript

Purusottam: Hi, everyone. This is Purusotham, and thanks for tuning into the ScaleToZero podcast. Today's episode is with Ashish Bhaduria. He is a domain engineering manager for security and privacy at IKEA. He's experienced in designing, deploying, migrating, and supporting critical security infrastructure across industries. Ashish, thank you so much for joining me in the podcast today.

Ashish: Thank you, Puru. Thank you for having me here.

Purusottam: So before we kick off, do you want to add anything to your journey? I know that I just did a two point intro, but anything you want to add, like maybe how did you get into security? What excited you? What keeps you excited to work in security? Anything you want to add.

Ashish: It's a very incidental story actually. In 2008 when I started my career with TCS, know, but TCS is Tata Consultancy Services, one of the India's biggest, or now, should I say, one of the world's biggest IT consultancy services. So, in very beginning of my time with them, I was actually working for them in corporate data center, like their central data centers.

So I used to sit in a floor where I was in the central security team and then there were like, know, application team, the Linux management, the messaging security and whatnot. So the very first learning experience for me was when I was kind of onboarding into TCS was experiencing people and my colleagues dealing with a security incident.

And me being at the center of the, like, you know, the team going through that incident, doing that incident response that basically defined that yes, this is the line, this is the career option I want to do for an entire life. And that's how my journey started. Then I evolved into building more of infrastructure security projects. I built data centers for companies like through T-SYS for companies like Nokia, Starlines, Jaguar and Land Rover. And then eventually ended up working for Skype, Microsoft Office, MS Office Group, Unity Technologies, the Gaming Engine Company, Zalando, and now at IKEA. So all the security career has evolved as the new opportunities came in and I kind of kept taking on more and more responsibilities, more and more challenges. That's how I ended up here. So looking forward to this. Thank you.

Purusottam: Yeah, that's a great story, right? Not many security professionals would have started their career by looking at an incident response, how it was done and being on the same floor as the security folks and see like noticing what is going on and then making you excited, right? So that you can you choose that area in a way.

Ashish: Exactly. Imagine this. You're seeing something happening in an environment which is the corporate data center environment for a company as big as TCS. And then the consequences. So the thrill you go through was unimaginable. And that's why I realized that, yes, this is the line where I want to work and make an impact. So yeah, that has shaped my career trajectory for sure.

Purusottam: Awesome. I hope that we can get some points from that during the conversation today. So today we'll focus on two areas. One is integrating security into SDLC process. And the other one is AI, AI's impact on overall offensive and defensive security. So let's dive in.

So there is a lot of awareness around security as part of SDLC. Shift left was a big wave. Drawing from your experience, Like you have worked across diverse industries, consumer goods, software and cloud. Even today, what are some of the fundamental challenges that you see when somebody thinks about integrating security into SDLC?

Ashish: Traditionally, has been, generally, when we talk about some of the challenges, security, if you talk to any security person across the industry, what you will hear that the security is mostly an afterthought. Security is siloed. We have too many tools in our CI-CD pipeline, or we have too many tools in our SDLC. There is a tools overload.

Then you will hear about the engineering velocity is too high. Like, you we have too much deployments on a weekly basis. We do not have the security resources to actually put that to secure the or to kind of measure the engineering velocity which we have. So all these challenges were traditionally there in the security teams within industry.

Some teams have done great. Some teams have done some level of automation. And within the SDLC, some people have created some standardized structure to kind of measure security vulnerabilities, security findings more, doing more automated threat modeling, creating more check and triggers, and kind of creating a culture where engineering teams know where to involve the security teams.

But majority of the teams still struggle with the basic security ingestion within these secured SDLC steps, we have to do the threat modeling. What threat modeling looks like? How do we do threat modeling which provides the value to us? How long the threat modeling should be? It can run from two hours to a couple of days. So how should we do it? How should we find out, should we plan the mitigations for threat modeling? Where should we invoke the pen testing? How do we mitigate those penetration testing findings? What is critical, what is not critical.

Imagine there is a state of DevSecOps report, 2024, 2025. 2025, if you go and read that report, you will realize that 18 % critical vulnerabilities are the only the ones which needed to be prioritized. So these are the real challenges where people do not know what to prioritize, people do not know how to fix these things, people do not know where to trigger what security activities within the SDLC pipeline.

So these are all the traditional challenges and these are still there in many of the security teams today. So yeah.

Purusottam: So we got a question from one of our common friends Mohit Singh in the same lines. And I want to read it to you to see how you want to help him.

So security teams face a lot of resistance from dev teams, including involved in the SDLC. Like as you mentioned, that security is often an afterthought. So security should be involved early on. But when you do not involve security teams during design discussions or we don't do security reviews or security code reviews. How do you go about them? How do you address that?

Ashish: I really like as a fundamental which or a framework which Jason Chan created from Netflix. He created something called security partnership. In those security partnership, he targets mostly about the security culture within these companies, within these organizations, within these teams.

And that's how I believe that's the right approach, building the right security culture where your developers, your product teams knows which step to actually pull in the security team for. Do I involve them for the security code reviews, threat modeling? Yes, that should be done. But shall I reach out to the security team for doing threat modeling and all the pen testing, XYZ of security activities, or I can do this on my own and then just release in the production. There are both of the options. that partnerships, that culture of having the security integrated into the SSDLC pipeline, that needs to be there.

And how, as a security team, if you ask me that how would I do this, how would I change this culture, how would I target this problem?

My approach would be, to build this relationship with these development teams, engineering teams, but also target their pain areas. Are they, you know, too much tools provided to them, which is creating the alert fatigue for them? Are they getting too many signals, which they cannot process? Are they getting too many vulnerabilities, dependencies, which they cannot prioritize enough? You target those questions, provide slight improvements.

Right now there are so many tools you can actually take backstage as a platform. You have various tools, you can customize what kind of security signals, what kind of security priorities, it secret detections, be it vulnerabilities, be it dependencies, be it any of those signals you want to give the developers within their workflows where they are creating their CICD pipeline. Let them see that information right there.

The actionable actionable metrics, actionable things which you provide developers right there so that they can fix within their developer workflows. They do not need to click 10 different tools or 10 different dashboards to go out and check. That would impact the productivity. So provide those metrics right there within their development workflow, have this improved, and then create that engagement that yes, you have provided with these toolings, you have provided them to improve on their security by 10%, 20%, 30%.

And then you measure, create that culture of self improvement. That's how you can target this problem in my opinion.

Purusottam: Makes sense. So one thing that you touched on earlier is the velocity, right? Often security teams are seen as a team of no or team of like team which blocks everything, things like that. Like they have that perception. And when it comes to bridging that gap between the velocity and the security implementation or the maturity in an organization is often a challenge.

So when you're working with, let's say, leaders or C-suite, how do you advise them to structure their team or the processes so that you can achieve both the speed, but at the same time, you do it in a secure manner? You mentioned about tools, right? Like having a lot of tools could have more negative impact than positive. Yeah, yeah. So what advice do you generally provide to, let's say, other leaders or C-suite when it comes to setting up the team or the tools or the processes in the right way?

Ashish: Right. So first thing first, you said about, you know, security teams being the gate or the gatekeepers. There is a book, I'm sure you have read that, the Phoenix Project on page 39 on there. It says that security teams are often, you know, acting like the authority with the flashing badges to the engineering team, you cannot do this, you can do that, and so on. My whole idea is very against that.

Purusottam: People call me that you are the security guy who is very pro developer, you anti security establishment. That's my reputation for a right reason, because I would like to say to the security leadership or technical leadership that we need to create a culture of engagement. We need to create a culture where people do not hide things from their security team. They come to seek support.

They come to seek us guidance like how do we do this thing or you know we have some of these findings can you help us prioritize like what comes first what comes you know later then if I'm meeting my executives my ask is generally that okay you you help me decide on the on the your triggers or the checkpoints like help us decide one checkpoint before we release a new product in the market.

So have some security checkpoints agreed with the engineering leadership. Then you come down with the engineering teams down below, with the product development teams, and the developers. And then you engage with them on the engineering layer. You have an SDLC. These are the checks you can do, security checks you can do on the engineering layer. With the product teams, you agree that during your sprints, what will be checked. During your sprints, what needs to be prioritized and so on.

There are three different layers you need to target accordingly. Engineering layer, you target engineering metrics. On the product layer, you go with product metrics. On the technical leadership layer, you talk more about cultural metrics, like how many security champions do you have, how many security champions engagement percentage you are seeing month on month, how much NPS score we are having. So these are all the ways you can create a good and a strong, mature engineering program downwards. And that will, of course, improve your security maturity as well.

Purusottam: Well, I like how you structured like in different layers so that depending on maybe which team you are talking to you use that playbook while you are working with them. Let's say you are working with engineering. How can you make sure that you are doing the prioritization right so that they are not looking at 10,000 vulnerabilities versus what you said, right? Like the DevSecOps report said only 18 % should have been looked at. Maybe the others are not even priority.

So based on which team you work with, you should change your playbook in a way. On the similar lines, we got a question from Vivek Raju. what he's asking is, and I think this applies at IKEA level quite a bit, right? Because you have many lines of businesses and things like that. So the question is, are enterprises bringing security elements together when each line of business has their own preference of tooling, methodology, process?

This goes back to what you also mentioned earlier about the tools and things like that. So yeah, how can you bring alignment when it comes to when you're working with multiple lines of businesses and each have their own way of doing things?

Ashish: So in engineering circles, we called it autonomy and people love autonomy, right? From a security point of view, my approach or my push is for aligned autonomy. That needs to be some certain standards which each engineering unit need to follow. There needs to be some security practices that engineering needs to follow.

And then based on those foundational aspects of things, then we can have different level of maturity of security, of engineering maturity and so on. So having some standards is most important. That's what I would say first. And you have different business owners. are doing different business. Their products have different business criticality. Their products are different. Some are customer-facing, some are internal, some are not different products.

So the business units and the unit teams will always define the security requirements and the security priorities as per them. So what you as a security leader need to do or security person needs to do, align with them, understand their context, give them that space where they can do what they are doing, and then you improve their maturity by just aligning with them. Have a strong security champions program. Have the right visibility across their their organization, their operations, build that security champions program which gives you the metrics and the insights, but also triggers at the right point say, no, you need to change priority on this one because we are seeing some trends which are not good. So that way, have some of these elements in there and then you'll have the right recipe.

Purusottam: So you mentioned about security champions program, right? So one of our previous past guests is Dustin Lehr. He's a big advocate of having security champions. And it seems like you are very aligned with what he was also saying, that you need to have champions in each teams. Because if you are an outsider, like outside team, and you're just trying to push through your security agenda, then that that may not fly, but if you have a champion in that team, they will carry that agenda with them, right?

They will make sure that their team is sort of buying that agenda and then incorporating some of the principles that you might want to put together, right? From a security roadmap perspective. So yeah, it makes a lot of sense. Now, yeah.

Ashish: Puru, just to kind of add into what we are saying, there is so good value in having a good security champions program because in my practical experiences, I have learned the good security findings, the good security signals, the good security metrics which I have gained from the security champions has proved their value in many more folds. Like, know, the, yes, imagine the security incidents.

You can of course, you know, get the security incidents and the security detection through the detection engineering through various means and tools. But the kind of signals and the kind of issues which you will get through a security champions, because you know, people in the engineers and engineers know the context, engineers know their environment. When they come to security champions and then security champions bring that to us, that is solving some fundamental kind of, you know, security challenges. And that's… most value as an engineering unit.

So there is no value or there is no comparison to that value. So that's why I would recommend every engineering unit to have those security champions programs already stabilized and more than ever now. Now when the AI adoption, AI is coming into the mix. Now you need to have those champions which will actually give you far more value than just security champions. They need to be there to detect those cases as well.

Purusottam: So yeah, I was about to move to the AI world and thanks for bringing that up. now we spoke about the basics, what needs to be done culturally, technically, how do you align and all of that. Does that change when you go to AI native environments? Let's say you are building agentic AI apps, does that change from all aspects, not just from… SDLC, but from a cultural perspective. Maybe we'll touch on cultural in a bit, but from a SDLC perspective, do you see any change in terms of the challenges and how you address in the AI native environments?

Ashish: Yes, essentially it changes. There is a need to revive or rearchitecture the whole SDLC for AI native era. I will send you picture later. There is a company called Pillar Technologies. I'm not sure if I'm saying the name right, but they have created an AI native SDLC called SAIC, if I remember correctly, SAIL. So I'll send you the picture later. That maps the current DevOps practices with the AI kind of SDLC now. So when you're building the AI tools, when you're building the AI models, what kind of checks and what kind of practices will be there, that is additional layer of complexity into your already existing SDLC.

So things have changed for sure. Things are changing for sure. It's just that now with the AI people who want to try, engineers who want to kind of explore, this has changed things very quickly. So the adoption, the speed has gone crazy and we are still catching up. So that's, we as in like not only us, entire industry in my opinion and security means in the industry.

Purusottam: Yeah, I believe we are at a very early stage of adoption of AI, right? And often like what we have seen with software development also security was an afterthought as you mentioned. Maybe this time we'll not wait for 10 years before we think about security. We start from the get go.

Ashish: I believe I would like to believe what you're saying, but I'm seeing that going reverse again. Like MCP when MCP was created, security was again not part of that, right? So when MCP and MCP happened and things started taking evolution, security was still trailing that, okay, now how do we build this? How do we do this? And people started exploiting those use cases first then than actually building the controls and security first.

Even today, when MCP implementation adoption is there, people are building the security tooling afterwards. Now, as the implementation takes on, you're still thinking that, okay, how do we do security, checking security assessments or security threat modeling? How do we do this for AI environment? How do we do security incident response in AI environment?

These are all very complicated things, which again, have to restart thinking that how do we build those things. That's why you see many startups, many companies are popping up into this space. So I'd like to say that yes, I want to believe that security would be better this time, but historically, let's see how it goes.

Purusottam: Yeah, so I think when MCP initially was rolled out, there was no support for even OAuth, right? And they added it later on. So I mean, can totally understand where you're coming from and why you are hesitating. But hopefully, I'm hopeful that we'll do security better with the AI world.

Ashish: And this time we are saying that S in MCP stands for security. So let's see.

Purusottam: So now, since we're talking about AI and development lifecycle, for folks who have been doing traditional SDLC, it's a big shift, right? Because it's a completely new model. So that means there will be a lot of misconceptions around how to do maybe development lifecycle in a secure way for the AI native world. So what are some of the common misconceptions that you have seen and how would you address them?

Ashish: Misconception. I think there are several misconceptions like, and this goes back to what you were saying earlier. This is very Genesis or a starting phase of AI adoption or AI as an whole trend itself. So if I give you some, some misconception from security point of view, people like, could be, you know, the security issues in the AI? These are just, prompt injections are just SQL injections, right? Making them understand that, nope, SQL injections plus the social engineering plus the 10x impact depending on the context and where those prompt injections are happening.

So there are various misconceptions in terms of people assume that security or the AI models, open source, MCP server for that example, all this tooling which is available all these AI models and everything which is available in the wide open space, they're all secure. And this is biggest misconception. By default, people think that things are secure.

You remember that Microsoft's Thai chatbot incident, right? Where things started going pretty south for Microsoft's Thai chatbot. So people assume that if Microsoft built that, know chatbot and that model AI model this will be secure by default. People are not thinking that you know there would be model poisoning and there could be you know those security issues and ethical issues with those things. So that's the general misconception which is I think it again there are companies there are people who are trying to break and they are already succeeding at it but this will take more and more center stage in terms of breaking those misconceptions that people will start realizing when the awareness increases that yeah not so secure actually so we have to you know start believing.

Purusottam: So by default, we are insecure anyway, and you work towards making it secure. So the example that you gave around prompt injection, the biggest difference at least I see is earlier with SQL injection, only a handful, an attacker, sophisticated attacker could do SQL injection.

But now with prompt injection, if you have a copilot or a chatbot, any individual can get to the end data by writing some fancy prompt, some way to inject it. So the attackers who were able to do it now that the data set has gone way bigger in a way, the way I see it, like the difference between SQL and prompt injection. So now the question is what architectural patterns or design principles that you can put in so that you have security right from maybe get go.

If an organization is thinking about security of their AI workloads, what should they think about? Whether for prompt injection, data leakage, any areas, how would you?

Ashish: There are bigger people than me who are thinking on these lines, but I have some thoughts as well. Like how do we build this? There are many companies throwing buzzwords out there, but I would like to kind of keep very simple as I've learned. The defense in depth actually is one model where I would continuously push people to think about.

How do you build into, think about from perimeter layer, network layer, application layer, the data layer, the model layer. So these layers you think on the perimeter, how do I create the, if I get to gateway where, AI aware detection models are there to detect if we are getting some sort of adversarial prompts or something mischievous, have those there.

Then you go on down below like network layer, create the segmentation for the machine learning workflows, like create this different network segmentation if you can. Then I would say on the application layer, you create those, what do you say, the prompt sanitization, input sanitization, those kind of practices you build on the application layer. Then you come onto the data layer.

You make sure that you have your ethics, your privacy guidance or your privacy practices, your fear practices, your ethics regulations implemented on that data layer. Then you go into the model layer. Then you think about how the adversaries can play with your data model. You make sure that on the model layer, your testing happens, like what can adversaries can do, how to detect, what guardrails to create create that on the model layer.

And then you think on these defense in depth kind of model, that I believe can give you some level of protection as of now. That's how I think this is evolving. So the more clarity we get how people are building these models, how easily people are adopting to this, this will get more maturity. But again, in depth will be one place which I would constantly kind of push for.

Purusottam: Mm-hmm. Makes sense. So you earlier, I think the defense and depth that you're talking about, it all goes back to threat modeling, right? At each layer, what is the worst that can happen at the model layer or application layer or the chat interface that you are providing, let's say, or copilot experience? At each layer, you do sort of threat modeling to find out what can go wrong, and then you work backwards from there.

Now… You mentioned about social engineering attacks and things like that, right? And there are so many social engineering attacks with videos, with emails that we see nowadays. The entire threat landscape has changed, right? So how do you make your organization aware? What are some of the technical indicators that you look for when it comes to staying ahead of the attackers from an offensive perspective?

Ashish: You said it right. And if you look officially at its stats, Microsoft themselves has confirmed that they have seen a slight or not so slight, 57 % of uptick in having the phishing attempts. And those are very sophisticated attempts against their environment and their production environment. If that is happening with them, imagine what is happening on the general level. So previously, you with your naked eyes could detect that, okay, this looks phishing and not to touch that.

But those differences have gone now. So people like you, me, who are at least a bit more tech aware, can find those, cannot find those things anymore. The difference. People are paying 25 million to fraudsters just because they heard a mail or they heard a video recording of their CFO saying that things are happening. Those are all things happening. But as these offensive, malicious actors have got some capabilities, we on the defending side needs to have some more capabilities as well.

We need to have machine learning models for detecting these things faster. We need to have certain kind of automations created. Imagine the attack surface monitoring thing. There were previously some bug-boundary researchers who would create some easy automations to detect subdomain dangling, some easy kind of secret scanings, and some basic workflows and they would create those automations, they would report those bug-wanty submissions very automatically and very fast.

Imagine the same kind of attack surface monitoring within machine learning model out there. Constantly looking at your organization's attack surface, looking for the easy or low-hanging fruit, subdominal dangling and these kinds of cases, and then reports back, even goes one step ahead, and then takes some corrective actions, notifies the teams or reduces or revokes some secrets or disables some wrappers.

Some of those activities can be done on these protection side as well or the defending side as well. These capabilities are improving on the defending side as well, but that goes on engineered again, the capabilities on the organizations like how fast they're evolving, how fast they're seeing these kind of attacks and how far they're automating these kind of detection in their environment.

So to cut short, what I'm trying to say is that yes, any technology you create, the malicious actors grow capable faster, then comes the defending capabilities. Now we are evolving, we are going there and include, it will take some time, but we are already there. There needs to be a bit more, awareness in the industry as a whole about AI security and AI defense mechanism that needs to be a bit more a sink between the companies and organizations about these kind of attacks and you know the threat intel to protect faster. So that I believe will be happening in some more months or years.

Purusottam: In coming months, ok! So we, when it comes to AI, as you pointed out, right, like the attackers move fast. So if we, we can, the same technology can be leveraged on both from attacker side, also from defense. So when it comes to defense, any, any recommendations you have that, or any, anything that you have observed how AI is transforming the threat detection response capabilities or secure SDLC, what patterns are you noticing when it comes to AI and from a defensive perspective?

Ashish: Yeah. One example let's take, right? One example from security operations environment. Previously, there used to be a security analyst sitting on the machine, getting an alert from the CM system, going into the CM system, analyzing that attack and then creating the, okay, this is true positive, false positive, false positive, then goes back to the same cycle again and again.

Now with the AI in the mix, what we can do, there's just one use case which I'm talking about. The majority of this task where identification of the false positive and true positive can be done with the added context to these models. That is one thing. We can increase the rate of detection, the quality of detection, quality of signals coming in from these data lakes, from these CM systems, for example. The false positives will be reduced. The quality of security engineers' engagement will improve.

We can go back and do some more kind of important quality work. So that's one example. Then you see from a security testing point of view or in the SDLC point of view, I have personally seen that threat modeling has been, the use cases in threat modeling itself from collecting the threat scenarios, from creating the diagrams, from creating the automated kind of, for the action plans that has improved the quality of engineers and engineering alignment.

So the things which used to take like, let's say four hours previously have just been one hour or 30 minutes now. You just sit together and then agree on things and then move on. That is another example.

In the security testing within the CI-CD pipelines, there are some automated generated security testing nowadays where you can just know that, okay, these are the security testing. Do that secure core reviews are getting easier for that matter. All these things are improving the defensive security side. There are just some example. But again, it goes to the defending team. What use cases they find? How can they automate? What can they… think of creating those opportunities where they can detect faster, respond faster, contain faster, recover faster. All those things can happen with these different use cases.

So it goes beyond the opportunities are endless, right? If you imagine, if you know what to do, you can improve in your environment. That's how I see.

Purusottam: So last question on the defense, you said that we like the quality of data, quality of signals are much better. let's say I'm taking the first use case that you gave, right? The SOC analysts are more productive. Now, are we heading in a direction where we would not even need SOC analysts in future? Because AI can do all of that. Like not only it gets the alert, it looks at the signals, takes an action. Are we heading in that direction?

Ashish: On the contrary, yes, as I said, the quality of alerts and the signals will improve for sure, but the kind of alert frequency is not reducing. So the more alert or the more attacks will be coming in from various directions. There needs to be some human checks on these things. Let's say we detected one alert. Yes, there were false positive and true positives in traditional environment, but there will be false positive and true positives in AI native environments as well. So there needs to be certain human checks in there as well.

There needs to be certain quality checks in how fast we involve which person, which team, and how fast we involve is this going into the right direction? Is this the… right way to respond to this incident and so on. So there will be some quality checks as well. And what matters most, I've been saying, the context will be king. The more AI adoption happens, the context will be king.

So your human SOC analyst will be kind of, know, enabling themselves with that context, contextual knowledge of your organization. So the more context they have, the better they can target those signals, the better they can improve the security quality of your environment. So that's how I see that, the number of signals might reduce, but the quality of engagement, the quality of work, which SOC analyst will be doing will be improving further.

Right now, as you see, majority of the SOC memes are about that SOC engineers are laden with the tons of false positives and these alerts which are not making any sense, that will change.

Purusottam: Make sense. Yeah. So human in the loop in a way, right? So there is, there is hope for humans to continue. We, it's not that AI is taking over everything.

Ashish: Oh! I think, you know, you have seen this DEFCON, right? Zenity Labs has basically ensured that humans are, you know, humans are there in the loops, because all these scenarios which you are thinking in our heads that how AI could be, you know, disastrous, Zenity Labs and some other players and DEFCON as well, they have proved, they have literally materialized those attacks and those scenarios. And now people are thinking that, yes, We need to think about security in the AI landscape as well.

Purusottam: Amazing! Yeah, so with that note, we come to the end of the security questions. But before I let you go, I have one last question, which is, do you have any learning recommendations? I know that you have already mentioned about the security partnership book from Jason Chen and the Phoenix Project, but any other books or blogs or podcasts that you would recommend our audience?

Ashish: Yes, some. There is one book called Art of War by Sun Tzu-Tzu. Art of War with the traditional Chinese writer. Pretty good book. I love that book. The second is the Engineers Guidebook by Gurgly, my ex-colleague at Skype. But he also writes the Pragmatic Engineer blog and he also hosts a podcast.

I believe that one book and his blog, the Pragmatic Engineer, will actually keep you ahead in the engineering world. So you should definitely kind of, as an engineer, would highly recommend to just, you know, having that, read that, you know, blog and that book for sure. From security specific, there is a blog from Clint Gleyber. He writes this, you know, newsletters and these blogs.

He, TLDR security, he collates. If you just read that one blog, every new release, I think you are 90 % aware of what's going in the security world. He has sections about tools, he has section about what new vulnerabilities and things are going on. So I generally have a culture and a push within my team that we are on week on week, we are talking about what we have been reading, what kind of new things we learn or what new books we are targeting for.

So I'm a huge kind of, know, first I'm a huge admirer of these books and these blogs. So that's why I recommend.

Purusottam: So yeah, I also follow Clint Gribber's TLDR sec and also the unsubscribe to pragmatic engineering blog blog post as well. And yeah, they are amazing. They at least keep you ahead of or keep you in the loop of what's going on from a security standpoint, from an engineering standpoint. So yeah, absolutely. Yeah.

Ashish: From engineering part. I literally anyone who joins my team, I give them the engineers guidebook book just to read and get on the same page my expectations from their expectations from me as a manager. So this already makes them a certain kind of matured engineers that what they can expect and what is being expected from them. So that level of alignment is very good and people who have done is they have become better person and engineers for sure. That's why I recommend them.

Purusottam: Yeah, absolutely. So when we publish this episode, we'll also add it to the show notes so that our audience can go in and purchase the book, subscribe to the blog post, and learn for themselves as well. So with that, thank you so much, Ashish, for joining. It was a wonderful conversation. And yeah, thank you for coming.

Ashish: Puru pleasure was all mine. I loved talking to you and thank you so much for having me here. It was pleasure.

Purusottam: Absolutely. And to our audience, thank you so much for watching. See you in the next episode.

Ashish: Thank you.