Vulnerability management deep dive with Walter Haydock

Host: Hi everyone. Thanks for tuning into our Scale to Zero show. I’m Purusottam, Co-founder and CTO of Cloudanix. For today’s episode, we have Walter Haydock with us. Walter is the founder and CEO of Stack Aware, a SaaS provider that allows organizations to evaluate the risk of known vulnerabilities in their networks. By assessing vulnerability scanner reports, security bulletins, and software bills of materials in a quantitative manner, Stack Aware lets you make informed risk management decisions. Walter, it’s wonderful to have you on the show. For our audience who may not know you, do you want to briefly share your journey?

Walter: Sure. Thanks a lot for having me on, really appreciate it. Looking forward to the conversation. So you did a great job of summarizing what I’m currently working on, but prior to that, I worked at a couple of different enterprise software vendors and saw a lot of the same problems along the way. So I decided to do something about it by launching my own company. And prior to that, I spent most of my career in government, working both on Capitol Hill and as a military officer in the Marine Corps. That gave me a different background than a lot of people have in the technology space. But I think I have a unique approach in terms of my disciplined focus on getting things done, and I think that’s helped me along the way, even though I don’t have a traditional background for a company founder.

Host: Makes sense. Thank you for sharing your journey. So you write a lot about vulnerability management and its challenges. I particularly loved your analysis of Palantir’s container vulnerability management program. And you also recently wrote a blog post with your thoughts on the National Cybersecurity Strategy laid out by the US government. I would like to unpack some of these areas today, particularly how to set up vulnerability management programs, what the best practices and shortcomings are, how to work on response, and things like that.

So there is a lot to discuss, so let’s get into it. So the way we do the show is we have two sections. The first section focuses on security questions and the second section, which is a fun one, is more around rapid fire. So let’s start with the security questions. Right?

So when it comes to vulnerabilities, organizations should ideally be able to address all of them if they have a defined sort of practice or process in place, right. Can you help us understand what’s an ideal process or practice?

Walter: Absolutely. So the first and most important thing I would say is make sure you have a process, because I’ve seen organizations that take a somewhat ad hoc approach to vulnerability management, and there’s some data on this. There was a Ponemon report from a couple of years ago, and it suggested that about half of all organizations use an email and spreadsheet approach to vulnerability management. And that can work when you’re just starting out. But once you hit any sort of scale you need a really crisp and clean process in place, and here are some of the things that it should include. First of all, you should have regular patching and update cadences, because if you can just update a piece of software to remove the vulnerability or to fix it, just do it. That’s the easiest thing to do in a lot of cases, so that should be your response. It gets a little more challenging, though, when you need to make trade-offs with business priorities.

So if you need to suffer downtime because you need to update a certain software asset, or if you need to push out a new release of your own product for your customers, that’s when it gets challenging. So the key thing that organizations should do is have clearly defined thresholds and actions to take ahead of time. I’ve seen in my career that organizations large and small will take problems as they appear and address them in a one-off manner. But the key is to have a process that can address all the conceivable outcomes and have an answer for what to do in every situation that doesn’t require a meeting and doesn’t require a single person or a senior leader to make a decision every time. There will be exceptions, of course, in unforeseen circumstances, but most circumstances can be foreseen. So having clear thresholds and timelines for resolving issues is really important. Happy to talk more about that, but that’s it at a high level.

Host: Okay, that makes sense. And so more around having playbooks for your vulnerability management process. One of the things that you highlighted is that there could be challenges while applying patches or applying updates, right? Because you have some competing priorities. So all of us are sort of constrained by these, right? Either by time or people or budget. So

What are some of the key areas that need to be covered when it comes to vulnerability management and particularly when starting up?

Walter: Yeah, so I would say if you’re just starting your vulnerability management program, you should focus on the exploitability of the vulnerability and use that as your method for prioritization. I know a lot of organizations use a CVSS (Common Vulnerability Scoring System) based approach, and that’s kind of become the industry standard for vulnerability management programs. I would advise not doing that if you’re just getting started, if you don’t have a lot of budget or resources, and you’re just using either open source or low-cost commercial tools. Unless you’re building your own software, in which case you might get static analysis findings,

most of the findings that you’re going to get are going to be CVEs, which are Common Vulnerabilities and Exposures. And there’s a tool called the Exploit Prediction Scoring System (EPSS), whose data is freely available, which gives you a probability of exploitation from zero to one, one being a 100% chance and zero being no chance of it being exploited. And really I would recommend just prioritizing using the EPSS score. The people who put together the model recently put out a paper showing that if you use EPSS instead of CVSS for prioritization, you can basically target about one-twelfth of the number of vulnerabilities that you would have to using CVSS and get the same result in terms of fixing exploitable vulnerabilities. So my recommendation would be, if you’re not super sophisticated, just use EPSS to drive your remediation program, and then as you get more advanced, that’s when you should take into account things like asset value or business value at risk. It’s going to be more challenging, but it’ll give you a more holistic, true risk picture.
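To make that concrete, here is a minimal sketch of pulling EPSS scores for a batch of CVEs and sorting findings by probability of exploitation. It assumes the free EPSS API hosted by FIRST.org, which is not named explicitly above; verify the endpoint and field names against the current EPSS documentation before relying on them.

```python
import requests

def epss_scores(cve_ids):
    """Return {cve_id: epss_probability} using the public FIRST.org EPSS API."""
    resp = requests.get(
        "https://api.first.org/data/v1/epss",
        params={"cve": ",".join(cve_ids)},
        timeout=30,
    )
    resp.raise_for_status()
    return {row["cve"]: float(row["epss"]) for row in resp.json().get("data", [])}

findings = ["CVE-2021-44228", "CVE-2017-5638", "CVE-2020-0601"]  # example CVE IDs
# Fix the findings most likely to be exploited first.
for cve, score in sorted(epss_scores(findings).items(), key=lambda kv: kv[1], reverse=True):
    print(f"{cve}: {score:.3f} estimated probability of exploitation (next 30 days)")
```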

If you can look at the probability of exploitation of a vulnerability and the business consequences of that happening, then you can come up with a financial risk score, and that’s really important for big organizations when you’re making budgeting decisions. If you’ve got a vulnerability that represents a risk of $10,000 a year and the remediation is $100,000 a year, then that might be a risk that you want to accept, because you don’t want to spend $100,000 to save $10,000. Conversely, if the vulnerability represents a billion dollars in loss, which is a realistic amount based on some of the breaches that we’ve seen (the Equifax breach cost more than a billion dollars, and that was a single vulnerability), it’s probably worth a lot of money to fix that, and quickly too. So the key is understanding what the risk of a given vulnerability is and what the cost of remediation is.
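A toy illustration of that comparison, with entirely made-up numbers: multiply the probability of exploitation by the estimated business impact to get an expected annual loss, then weigh it against the remediation cost.

```python
def expected_annual_loss(probability: float, impact_usd: float) -> float:
    """Very rough expected-loss estimate: probability of the event times its cost."""
    return probability * impact_usd

# Hypothetical figures only.
probability_of_exploitation = 0.02      # 2% chance per year
business_impact_usd = 500_000           # estimated cost if it happens
remediation_cost_usd = 100_000

loss = expected_annual_loss(probability_of_exploitation, business_impact_usd)  # $10,000
if loss > remediation_cost_usd:
    print(f"Remediate: expected loss ${loss:,.0f}/yr exceeds fix cost ${remediation_cost_usd:,.0f}")
else:
    print(f"Consider accepting: expected loss ${loss:,.0f}/yr vs fix cost ${remediation_cost_usd:,.0f}")
```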

And then the last thing I would say is time. Most vulnerabilities are never exploited. Most CVEs, most known vulnerabilities, are never exploited in any kind of realistic scenario. They might be exploited by a security researcher in some very narrow circumstances, but most are never exploited by a malicious actor. The ones that are, though, are exploited very quickly, relatively speaking. There’s an initial spike when a vulnerability is first published and then a steady exploitation cadence over time. So the ones that are going to be exploited very quickly, you need to fix fast; you don’t have days, you probably have hours to fix those before you start getting targeted. Understanding that, you can push to the side the ones that are less likely to be exploited, those can be fixed later, but you really need to move quickly on some of these key vulnerabilities as soon as they’re identified.

Host: Yeah, I think we saw that with Log4Shell, right? Particularly when it was announced, there were many attacks around it that made the news. It has died down a little bit now, but there are still attacks happening based on it, because not everyone has addressed it yet. Right, and that goes back to what you were highlighting: rather than focusing on the CVSS score, maybe you look at the exploitability.

If somebody cannot exploit that in your environment, let’s say you are not using Java or the Log4j library, then you’re not affected by it, right? It doesn’t make sense to spend hours and hours to figure out whether you are affected, whether you need to address it, what needs to be addressed, and stuff like that. So, yeah, you’re on point there.

So one of the things that you highlighted, like with Equifax as well, right? There have been many cyberattacks, and over 60% of companies are affected by them. And all of those begin with exploitable vulnerabilities. Right. And organizations have a glaring gap when it comes to their vulnerability management process. So my question is,

When it comes to managing vulnerabilities, where do organizations make mistakes?

Walter: I would say they make mistakes by trying to boil the ocean, if you’re familiar with that phrase, meaning they just say you must fix all highs and criticals, meaning CVSS 7 or greater. And if you look at the National Vulnerability Database, that’s most vulnerabilities; most CVEs are CVSS 7 or higher, and it’s actually rarer to find something below that. So I agree, if you fix most vulnerabilities, you will reduce most of your risk. Yes, that is a true statement. But the problem comes when you’ve got hundreds of thousands of known issues in your network. Security teams will sometimes say to engineering teams, you need to fix all these issues. And then engineering will say, okay, I’ve got 50,000, 150,000, 500,000, potentially even millions of issues. It’s not even realistic to start on that.

The engineering teams will just say, okay, whatever; they’re not even really paying attention at that point. So you need to be very targeted in your recommendations from a security team perspective and say, hey, these are the top risks, these are the criteria. And even if you can’t do a purely quantitative approach, you can have conditional approaches, saying, for example, if we have an Internet-facing asset with a vulnerability that has an EPSS score above this threshold, it needs to be fixed in one day, and if it’s in this other bucket, then it can be fixed in this period of time. Ultimately you’d want to get to the point where you can continuously measure your risk exposure. Every organization should have a defined risk appetite for how much risk they’re willing to accept. And then if you exceed that level of risk, that triggers an action to get back down below the risk appetite, and at a given velocity, which would be your risk tolerance. So I would say taking a boil-the-ocean approach is probably the biggest problem that a lot of organizations have with vulnerability management, and another one would be not having a consolidated picture of their network or their product.

So for example, if you’re running a network, having a full asset inventory is really critical, because you can’t fix vulnerabilities if you don’t know that they’re there, and you can’t find out that vulnerabilities are there if you don’t know what assets you have. So understanding what you have in operation and what SaaS platforms you’re using, those types of things are really important for a vulnerability management program, and then making sure that you’ve got a way to measure the risk from all those things. A SaaS platform is usually difficult to scan for vulnerabilities because you don’t have access to the underlying software. But are you using a security ratings tool, or do you have some sort of agreement with the vendor? Are you doing pen tests on the SaaS platform? Having those in place, and knowing what your sources of risk information are and what you’re going to do with them, can help you avoid some of those common problems.
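As a rough sketch of the conditional, threshold-based policy described a moment earlier: a small function that maps a finding's exposure and EPSS score to a remediation deadline. The thresholds and SLAs here are purely illustrative assumptions, not recommendations.

```python
from datetime import timedelta

def remediation_sla(internet_facing: bool, epss: float) -> timedelta:
    """Map a finding to a remediation deadline. Thresholds are illustrative only."""
    if internet_facing and epss >= 0.5:
        return timedelta(days=1)    # drop everything and fix
    if internet_facing and epss >= 0.1:
        return timedelta(days=7)
    if epss >= 0.1:
        return timedelta(days=30)
    return timedelta(days=90)       # backlog; revisit if the score changes

print(remediation_sla(internet_facing=True, epss=0.73))   # 1 day, 0:00:00
```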

Host: Okay, that makes a lot of sense. There were many things that you highlighted.

I think the one that I like the most is: don’t just try to address everything because it’s labeled critical or high in CVSS, but rather look at the exploitability. Does it apply to you? Maybe it makes more sense to address those rather than trying to address everything. So in this, is it more of a prioritization challenge, more of a cultural challenge, or more of a process challenge that you think organizations are facing?

Walter: Yeah, I would say it’s mostly a cultural or process challenge, not so much a technology challenge. Because like I mentioned, there are a lot of tools available that can allow you to make good risk decisions. But I think it’s really a cultural problem on the security side. The security team will often feel accountable for the security risk, and it’s good that they feel on the hook for it. But frankly, in my opinion, I think that the business leadership, whether that be product management or a business line general manager or a CIO in a big organization or even the CEO, those are the people who should be making the risk decisions.

They shouldn’t be saying, hey, that’s not my problem, security is taking care of it. That’s the wrong mindset, because business leaders need to incorporate every risk that the organization faces, whether they need to make revenue, deal with technological issues, deal with competitors, or deal with regulators. Having a holistic picture of all of the risk is something that a business leader needs to do. Security is just focused, obviously, on security, but they should be the ones who are advising the business leader, and they should be implementing the business leader’s decisions, but they shouldn’t be owning the risk. They should be the ones providing a good picture. They should be illuminating the risk.

I think organizations that move in that direction, where business leadership is much more involved, the business leaders can say, hey, I’m really worried about this happening, I’m not so worried about that happening. And then the security team can say, okay, well, if you’re really worried about that, then here’s what you need to do to fix it and here’s how much it’ll cost. And then the business leader can make an informed decision. I think the challenge is mainly cultural, and then on a process level, not having a process in place. If you need to ping the CEO every time you detect a CVE, the CEO is eventually going to say, hey, stop, figure it out. So you can’t do that. You need to have leadership set a high-level risk appetite and approve a program for dealing with things, and then the security and engineering teams need to go ahead and execute.

Host: Yeah, I love how you put it that security teams are advising but ultimately it’s the business who is accountable, right? We cannot just say that, hey, we have security engineers, they will take care of everything. It’s not our job. Right. At the end of the day, it’s the whole organization who is responsible for it. Makes a lot of sense. One of the things that you highlighted is that for a proper vulnerability management program you need to at least understand what assets you have. Right. Understanding your asset inventory is key.

Nowadays we use a lot of open source libraries, and SBOMs are becoming a key building block in enterprise software, and that is introducing supply chain risk management related issues. Right?

So why do you think organizations need a software bill of materials in the first place? If you can highlight that a little bit.

Walter: Yeah, great point. So I would say at this point most of vulnerability management is software supply chain security, because only if you’re developing your own software in house, which a lot of organizations are doing these days, do you have first-party code security risk; most of it is third-party risk. It’s either open source libraries, or proprietary code from another company that’s running in your network, or maybe it’s a SaaS platform that you have your data stored in. So most security risk, in my assessment, is software supply chain security risk in today’s environment. So understanding that in a way that lets you make good decisions is really important. And the way that most organizations deal with this today, I don’t think, is super effective. Most organizations today take kind of a two-tiered approach.

If they are talking about open source software, they may run software composition analysis on it and see if there are any known vulnerabilities in it. I’ve even seen some kind of hilarious examples of security teams sending open source project maintainers security questionnaires, which they don’t fill out because there’s no buy-in; they’re not getting paid to do it. So there’s that. And then there’s the other case, which is, okay, this is a commercial provider’s proprietary piece of software: I’ll make them fill out a questionnaire, maybe get an audit report if they’re a SaaS provider, get SOC 2 or ISO 27001, or use SecurityScorecard or BitSight or whatever to evaluate them. And it kind of treats the risk as being different based on where it’s coming from. I would say it’s all the same general type of risk: it’s the chance of your data’s confidentiality, integrity, or availability being impacted by a malicious actor. Now, you may have different tools to influence different parties. The open source folks, you kind of need to ask nicely for them to do something for you, or potentially you could fund open source projects, which I’m an advocate for. And on the commercial side, you’ll have contractual obligations, ways that you can enforce things. But coming back to the question about SBOMs: SBOMs allow you to depict that very complex supply chain in a structured way that allows you to talk about it consistently.

And whether it’s a piece of open source software that you’re running in your network, or it’s a SaaS provider, maybe a SaaS provider running a piece of open source software on top of AWS or Azure, an SBOM, specifically the CycloneDX format, can depict that. It can show that this SaaS provider is running this open source library on top of AWS. And that can allow you to make really good risk decisions if you have the tools to evaluate it correctly. We can talk more about this, but SBOMs can also be just a data dump of information; if you don’t know how to absorb it, then you’re not going to be able to make effective decisions. But if you do, you can really understand what all your dependencies are throughout the entire supply chain, down to your fourth and greater parties, your suppliers’ suppliers and their suppliers, and things like that, which can really impact your business continuity.
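For illustration, here is a hand-built sketch of a minimal CycloneDX-style SBOM expressed as a Python dict, showing a hypothetical SaaS product that depends on an open source library and a cloud platform. The component names and versions are invented, and the cloud platform is modeled loosely as an application component; consult the CycloneDX specification for the authoritative schema.

```python
import json

sbom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.4",
    "version": 1,
    "components": [
        {"bom-ref": "saas-app",    "type": "application", "name": "ExampleSaaS", "version": "1.0"},
        {"bom-ref": "log4j-core",  "type": "library",     "name": "log4j-core",  "version": "2.14.1"},
        {"bom-ref": "aws-hosting", "type": "application", "name": "AWS (hosting platform)"},
    ],
    "dependencies": [
        # The SaaS product depends on both the library and the cloud platform.
        {"ref": "saas-app", "dependsOn": ["log4j-core", "aws-hosting"]},
    ],
}
print(json.dumps(sbom, indent=2))
```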

There are examples like Slack: when Slack was using AWS for its hosting and AWS went down, Slack went down as well. So you may say, hey, I’m an Azure shop, I don’t care about AWS. But if you’re using Slack, you actually should care about AWS, because they use AWS.

Host: It’s funny that you mentioned that open source questionnaire thing. I think recently an open source library maintainer tweeted that somebody had asked them to fill out a questionnaire with some 100-plus questions, and they were just laughing at it. So that is on point, right?

So I want to continue the discussion a little further on SBOMs. We have seen that there are multiple uses of SBOMs, right, including security, regulatory compliance, marketing, and sales enablement. In 2023, we have heard many organizations indicate that they want to prioritize SBOM analysis and SBOM data. So my question is,

As far as interpreting and using the SBOMs data effectively, what are some of the biggest challenges you see organizations face?

Walter: Yeah, great question. So having a program in place to consume and analyze SBOMs is the first important step. Before you start asking your vendors to provide SBOMs, you had better have a plan in place to do something with them. Now, I know there are tons of organizations that make their vendors fill out security questionnaires and then don’t read them. I don’t think that’s good practice.

I think that actually hurts security, because then you’re distracting your vendor from doing something potentially more productive. So it’s important that you have a plan in place. For example, which vendors will need to provide SBOMs? How frequently will they need to provide them? If you identify known vulnerabilities in libraries in those SBOMs, what are your demands or requests of your vendors going to be? And then a key piece of that is, how are the vendors going to communicate with you about their SBOM? If they’re just emailing you a PDF or sending you a JSON file in an email, that’s going to be very difficult to manage in any effective way. So having a consolidated system for managing your SBOMs is critical, and then also being able to communicate about the vulnerabilities identified in those components, the pieces of the SBOM, is important. That’s where VEX, the Vulnerability Exploitability eXchange format, comes into play. I mentioned CycloneDX; CycloneDX has the capability to include vulnerability disclosures as part of an SBOM or even separately as an entirely separate document.

And I think that’s really critical, because if you analyze any given SBOM and look at all the libraries included, you’ll probably pretty quickly find lots and lots of CVEs. And we discussed how it’s important to not try to boil the ocean. So the vendor ahead of time can say, hey, we’ve scanned our own software, we know that there are these vulnerable libraries in it and that these libraries have these CVEs in them. However, based on our technical analysis, there’s no way an attacker could access them, because maybe the code is written a certain way, or there’s some compensating control in the case of a SaaS deployment. Using a VEX statement in a way that can be easily consumed by a risk management system can really help streamline the process. Because otherwise, you’re going to have this back and forth where, hey, I got your SBOM and I saw all these vulnerabilities, what’s the deal here? And then the vendor kind of says, oh, I don’t know, I’ll get back to you.

And that wastes a lot of time and it doesn’t really help with risk management. So I’d say having a program in place, and then having a plan to either provide or receive VEX statements, is really critical to using SBOMs effectively.
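To show what such a disclosure might look like, here is a rough sketch of a CycloneDX-style VEX entry in which a vendor marks a known CVE in a bundled library as not exploitable in their product. Field and enumeration names follow the CycloneDX vulnerability model as commonly documented, and the component reference is hypothetical; verify against the current specification before producing real VEX documents.

```python
# Vendor statement: CVE-2021-44228 is present in a bundled library but not exploitable here.
vex = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.4",
    "vulnerabilities": [
        {
            "id": "CVE-2021-44228",
            "source": {"name": "NVD", "url": "https://nvd.nist.gov/vuln/detail/CVE-2021-44228"},
            "analysis": {
                "state": "not_affected",
                "justification": "code_not_reachable",
                "detail": "The vulnerable JNDI lookup path is never invoked; "
                          "message lookups are disabled at build time.",
            },
            "affects": [{"ref": "saas-app"}],  # bom-ref of the affected component in the SBOM
        }
    ],
}
```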

Host: Yeah, that makes sense, because I’m pretty sure all of us have seen the vendor questionnaire that, once it is filled out during the procurement process, nobody looks at again. And there are also the challenges of sending your SBOM data and how organizations consume it.

So a follow-up question on that: organizations are also trying to set up tools, right, so that they can take in the SBOM data from vendors and set up processes around it. So what should organizations do if there are limitations in the SBOM tools or formats, to make sure that they are managing these vulnerabilities and software components properly?

Walter: Yeah, great point. So something that I have proposed, or a term I’ve coined, is a synthetic SBOM. Basically, if a vendor doesn’t have the technical capacity to provide you with an SBOM, or it’s incomplete, or you just want to do your own analysis or your own diligence, you can create an SBOM for a piece of software that you’re using even if the vendor didn’t give it to you. Now, obviously, it’s not the official version, but if, say, I’ve got this SaaS tool that I know runs on top of this AWS service, I can create my own SBOM saying, okay, I’ve got this piece of software, it’s running on top of this cloud platform, and then maybe it has these other dependencies that I know about just from talking to the vendor or from doing some research on them. Then you can build your own structured depiction of software supply chain risk even without the vendor telling you things. Especially for cloud deployments and cloud-based software, sometimes it’s opaque; you can’t entirely see what’s going on. So using a synthetic SBOM, you can create at least the beginnings of a structured depiction of your supply chain risk. That’s one way you can help improve the picture and manage risk holistically.
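A minimal sketch of assembling such a synthetic SBOM from whatever you know about a vendor follows; the product and dependency names are hypothetical, and the structure loosely mirrors the CycloneDX layout shown earlier.

```python
def synthetic_sbom(product: str, known_dependencies: list) -> dict:
    """Build a rough, self-made SBOM for a product the vendor did not document."""
    components = [{"bom-ref": product, "type": "application", "name": product}]
    components += [
        {"bom-ref": dep, "type": "application", "name": dep} for dep in known_dependencies
    ]
    return {
        "bomFormat": "CycloneDX",
        "specVersion": "1.4",
        "version": 1,
        "components": components,
        "dependencies": [{"ref": product, "dependsOn": list(known_dependencies)}],
    }

# Hypothetical: a SaaS CRM you believe runs on AWS and uses PostgreSQL.
bom = synthetic_sbom("VendorCRM", ["AWS", "PostgreSQL"])
```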

Host: Okay, that makes a lot of sense. That’s helpful as well. Even when, as an organization, you do not have access to your vendors’ SBOMs, you can still do the analysis and so on. So, yeah, thank you for sharing that.

So I just want to quickly change the topic to something new, some new technology that’s been very popular lately. ChatGPT, right? Everyone has been playing with it and everyone is raving about it. And in fact, Google recently launched Bard to compete with ChatGPT as well. Right, so now let’s say I’m a security engineer. I can use ChatGPT or Bard to generate, let’s say, security policies or patching instructions and use these in my organization.

Is that sort of a magic trick to solve all my security problems, particularly in infrastructure security, let’s say?

Walter: Yeah, unfortunately I don’t think there’s going to be any true magic trick, but ChatGPT is pretty good at doing certain things. For your security policies, I would try to avoid boilerplate as much as possible; your security policies should be very concise and clear. I’ve tried to create security policies using ChatGPT. I even tried to create a security policy for using ChatGPT with ChatGPT, and I found it to be relatively boilerplate, kind of vague and not specific. That’s because it’s a generative AI tool; it just assembles and amalgamates a lot of information from different sources. But it definitely can be useful for security purposes. Obviously, a key is understanding what data you are feeding it and what’s happening with that data. So making sure you don’t give it sensitive things like secrets or PII is important. And then also the output: make sure the output is valid, don’t take it at face value. It’s just a soulless machine. It doesn’t have feelings, it doesn’t really think, it just provides you something.

But it can definitely help in a lot of areas, especially areas that currently require a lot of manual analysis. So I’ve been using ChatGPT to do things like analyze assets. For example, if you run a vulnerability scanner against your hosts, you get some information about the operating system, the DNS name, the IP address. And if you plug that into ChatGPT and give it some rules, it can actually tell you, oh, this is probably an endpoint, this is probably a server, this is probably a different type of asset. So we’re experimenting at Stack Aware with using that for asset classification, and also for taking unstructured vulnerability reports, so if a company posts a blog post about a certain vulnerability, turning that into structured vulnerability disclosures using VEX. That’s something else that we’re experimenting with, although I haven’t tried it with GPT-4 yet; that’s on my list.

With GPT-3.5, it didn’t do a great job with that, but I think eventually in the future that’ll happen.

So understanding how to use these tools effectively to reduce toil and to reduce manual tasks is going to be important, but at the same time, doing it securely is also critical.
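As a hedged sketch of the asset-classification experiment described above, here is one way it might look using the OpenAI Python client. The model name and prompt are assumptions, not what the speaker actually used, and, per the earlier caution, no secrets or PII should be sent in the prompt.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Scanner metadata only; never include secrets or PII in the prompt.
scanner_record = "OS: Windows 10 Pro; hostname: LT-4821; open ports: 135, 445; DHCP-assigned address"

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumed model name; use whatever model you have access to
    messages=[
        {"role": "system",
         "content": "Classify the asset as one of: endpoint, server, network device. "
                    "Reply with the single category and a one-sentence rationale."},
        {"role": "user", "content": scanner_record},
    ],
)
print(response.choices[0].message.content)
```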

Host: Okay, that makes sense. Another question that I have around it is, like with any new technology, any big innovation, one question that comes to mind for folks is: am I going to lose my job? Right. So do you think these generative AI tools with large language models will take away jobs from security professionals?

Walter: I think, yes. Bottom line, they will. And the key question you should ask yourself, if you’re a security professional who’s worried about generative AI, is: what does my job look like? Am I doing a lot of copying and pasting? Am I transferring between formats? Am I doing repetitive things? If so, then you should probably be worried. And what you should be doing is focusing on the higher-level aspects of your job, the thinking part of your job rather than the doing. Because ChatGPT is going to get very good at digital drudgery in the future, or generative AI in general will be very good at that. But what it’s not really great at is thinking. It’s not, not yet at least, like the Terminator, where it can give itself a mission and do things. So the thinking is really where humans need to add the value. The higher-level abstraction, asking why am I doing this, is really important. And if you can’t answer that question for the task that you’re doing right now, then you should think hard and you should be worried. But for people who know how to employ these tools effectively, I think it’s going to really improve their productivity, and it’s going to make them much more effective and help security in general.

Host: Makes sense. It will aid in improving security engineers’ lives unless you are doing repetitive, boilerplate type of work. So, yeah, that makes a lot of sense. Right.

The last question that I have is around the announcement the US government made on the National Cybersecurity Strategy. Right. It covers a lot, from nation-state threat actors to military security practices to software security and many more areas. Right.

For folks who might not have gotten a chance to read and digest it, what’s your take on it, and how are security practitioners affected by this announcement?

Walter: Yeah, great question. So I think there are three main takeaways from the National Cybersecurity Strategy. One is that the United States is clearly saying that we’re not just going to respond to cyberattacks with another cyberattack; we may respond with a missile strike or an invasion or something like that. So if you’re a business leader, understand that you might find yourself in the middle of either a cyber war or a real war unexpectedly, and you should incorporate that into your risk assessments going forward.

So that’s number one. Number two is that at least the current administration, the Biden administration, is focused on shifting the burden of cybersecurity to software manufacturers. Now, I have a major issue with their approach. I think it is kind of a boil-the-ocean approach, and it’s not a very specific, detailed, or actionable one. My fear is that yet another government standard will come out, and technology companies, and companies in general, already have tons of regulations that they need to comply with; having competing sets of regulations is very difficult, especially for smaller companies. I think this proposed shifting of the burden to the software makers really just crowns the incumbents, the big tech companies, as the dominant players, because they have all these teams of people who can sort through all these regulations and focus on complying with them. And then also, sometimes these compliance frameworks aren’t even helpful to security.

I’ll give an example. The FedRAMP standard, the Federal Risk and Authorization Management Program, which is required for all companies that are selling SaaS or cloud services to the government, basically penalizes you if you find too many vulnerabilities in your software. But the thing is, if I’m a customer, I want my vendor looking for vulnerabilities. I want them to find as many vulnerabilities as possible before an attacker does. It’s not finding the vulnerability that hurts you, it’s the vulnerability being exploited, and you can’t prevent it from being exploited if you don’t find it. So that’s number two.

I push back strongly on the Biden administration’s plans there. I don’t think they’re well thought out. I think they’re just trying to approach the problem simplistically; there’s not a lot of nuance there, and I provided some very concise examples of that in some of my blog posts, which I think you’ll link to in the show notes. So that’s number two. Number three is that I think a national cyber insurance backstop is coming, and what this means for business leaders is that the government is essentially going to step in if there’s a major catastrophic cyber event and bail out the cyber insurance providers.

Host: So that’s a great way to end the security questions section.

Summary

Here are a few important points which stood out for me.

  1. The first one is that the Exploit Prediction Scoring System, also known as EPSS, is a better way to understand and prioritize vulnerabilities, as it uses a data-driven approach to determine the likelihood or probability of a vulnerability being exploited.
  2. The second one is that as part of vulnerability management programs, you should document the process, design playbooks for patches and updates, and define thresholds and actions for identified vulnerabilities to prioritize remediation.
  3. There is no one-size-fits-all approach. Instead, understand your asset inventory and its exploitability.

Rapid Fire

Host: So let’s go to the Rapid Fire section now. The first question in the Rapid Fire section is: if you were a superhero of cybersecurity, which power would you choose to have?

Walter: Yeah, I’d say the superpower I’d love to have would be to be able to snap my fingers and immediately identify the financial risk of any security situation, and also know what the cost of the control would be, not just the initial price tag, but the total cost of ownership. So if you told me that there’s this advanced persistent threat that’s trying to target my systems, and that to mitigate that risk I need to buy an endpoint detection and response tool, understanding the cost of each would be really nice, because then I can make a good decision on how to proceed.

Host: Okay, makes sense. So the next question is what is the biggest myth or misconception you have heard in cybersecurity?

Walter: Yeah, the biggest myth that I usually encounter is "the regulators require this" or "compliance requires this" or "legal requires this." Because I feel like sometimes people will hear someone say something, and then someone else will hear that person repeat it, and it’s kind of a game of telephone where these requirements get passed down between various people. By the end, some developer is trying to do something and asks, why am I doing this? And his manager will say, oh, because compliance or legal says we have to do it, where legal might have said something more general, more vague, or broader and left it up to engineering or security. Then security interprets it as one thing, security talks to engineering, and engineering interprets it as another. And I feel like this can really create a lot of headache for organizations, because, one, sometimes they do things that aren’t really required, and two, sometimes they attempt to do things that are required but don’t do them in a way that actually meets the intent of the standard or the framework or the regulatory obligation. So that’s probably the biggest myth or misconception that I hear. So interpretation, losing the context, is the real challenge, a real problem in the interpretation, yeah.

Host: Makes a lot of sense. Or the real requirement. Yeah. The last question that I have is: what advice would you give to your 25-year-old self starting in security?

Walter: Yeah, the biggest piece of advice I would give is that you should try to get your hands dirty solving problems as soon as possible. And it’s helpful to study; reading books, listening to other people, and reading blogs can help, and it’s certainly something you should do. You should always be a student. But solving real problems will teach you so much, so quickly, in ways you won’t be able to learn from books or schools or courses. So pick a problem, even if it’s just a problem that you face, and figure out how to solve it. And then the key is standardizing and documenting how you solve that problem, and that will really pay you dividends going forward. Because once you’ve solved a problem, you don’t want to have to solve that same problem again, and if you’ve written it down in a way that other people can use, then you’ve created something valuable for everyone. So that’d be my biggest piece of advice: just start solving problems as soon as possible.

Host: That makes a lot of sense, because particularly in security, things change quite quickly, right? So if you just read and don’t practice or experiment, then you will never catch up to the latest developments in the security area. So, yeah, that’s on point. So thank you so much, Walter, for coming on the show and sharing your insights. I particularly learned a lot around vulnerability management and SBOM analysis and things like that. So, yeah, thank you so much for coming on the show.

Walter: Well, I appreciate you having me on and it was a good conversation.

Host: Thank you. And to our viewers, thank you for watching. Hope you have learned something new. If you have any questions around security, share those at scaletozero.com. We’ll get those answered by an expert in the security space. See you in the next episode. Thank you.
