Purusottam: Hi everyone, this is Purusottam, and thanks for tuning into ScaletoZero podcasts. Today's episode is with Matt Tesauro.
Matt is a DevSecOps and AppSec guru who specializes in creating security programs, leveraging automation to maximize team velocity, and training emerging and senior security professionals. When not writing automation code in Go, Matt is pushing for DevSecOps everywhere via his involvement in open source projects, presentations, trainings, and new technology innovation.
It's wonderful to have you with us, Matt. For our audience who may not be familiar with you or your work, do you want to briefly share about your journey?
Matt: Oh my, it's been a journey. Um, so I started out life. I'm going to, I'm going to, I'll try to wrap this up quickly.
I started out life actually as an economics undergrad student at university, because, this is early days, this is the late nineties, computers weren't as much of a thing as they are today. And I got almost through my degree before I realized, wait a minute, people will pay me to fart around on computers, which I already liked to do anyway.
So I did the math and it was quicker for me to graduate with my undergrad in econ and get a master's in MIS, which is what I ended up doing. I worked for a telecom company. It was a very strange beast. Their IT operations were in Texas. Almost all of our customers were in the UK, Netherlands, France and Belgium. So if I messed up as a developer, and I was a software developer initially, I had an angry Belgian calling me at like 2 am yelling at me to get out of bed and go into work.
And yes, this was long enough ago that you didn't remote to work. I actually had to drive my car to work. So that was great. I wouldn't recommend that, and you don't have to do it anymore, so it's okay. I went from being a developer to running a bunch of systems for Texas A&M University, the business school there. That's where I kind of caught the security bug. I went from there to doing pen testing, and I did pen testing for a number of years, and I really enjoyed it. It's fun to creatively think about how to break into systems.
But after a while, uh, it just felt like it was like every Monday, here's your URL, here's your creds go beat up on this web application by the end of the week, write up a report, and next week it was the same.
And after doing that for several years, I mean, I hate to say it because it was still fun, but it also just kind of got monotonous. And so I went and joined Rackspace and started their product security team back when Rackspace had their own cloud.
And so the product security team owned the entire cloud infrastructure for Rackspace, everything that made it run from the iron up. And that was a heck of a challenge and quite a fun place to work. I worked there for a number of years, worked at a couple other large corporations, worked at Duo Security, which is a really great startup. And now I am the CTO and co-founder of DefectDojo Inc.
Host: That's quite a journey all the way from economics to pen testing to running your own company now.
So, I am curious like today what does your day look like?
Matt: Oh, my day? I usually start the day with a large cup of coffee, a very large cup of coffee, just because I'm a bit of a coffee snob.
I roast my own coffee. I have a friend who got me hooked, who taught me how to do that, and now I'm stuck. So I start the day with a large coffee, and then, since back when I was at Rackspace, we've had this thing called DefectDojo, which is an open source vulnerability management platform, and
Every morning I go and look at what's come in overnight, either issues or PRs, and review those. And that's kind of how I start my morning. It sounds like an odd thing to say, but it's a nice, easy start to the day, just reviewing stuff in GitHub. Somehow I enjoy that. And then it's whatever the day has for me.
This could be customer calls. This could be doing something like this, being on an interview. I'm also currently on the board of directors for the OWASP foundation.
So occasionally I'm doing board meetings with the OWASP board or handling some kind of issue with that. And then it's just work stuff, which can be anything from very technical issues, like nuanced bugs that we find in our software, to setting up systems so that we can be successful in doing automation. A lot of what I find, and this is true beyond just security, actually, now that I think about it as a co-founder, is that visibility into what's going on is so huge.
And so I'm currently working on bubbling up as much visibility as I can about the various pieces of software and processes in my current life, so that we can make better judgments. Because a lot of times you sort of get caught, right, at any level as an employee: I have to make a decision, and I don't have really good information. So it's like, either I guess and get lucky, or I find that information, and sometimes finding that information can take forever.
So I like to have things presented, even if they're not necessarily useful every day, all day long. There are times you're like, wait a minute, what is the best or the most used something, right? Well, boom, if I have a list of how many people are using what, I can look that up quickly and answer that question.
Host: Right. Sounds like you have your hands full with a lot of activity, all the way from fixing bugs or reviewing others' code to running a company, right?
So I'm excited to talk about some of the things that you have been doing for some time, right? So let's start with the topic of DevOps. So.
DevOps practices have been followed for years, but security was never a core part of the process, which sort of led to the DevSecOps movement.
The core idea is that you address the security concerns as soon as possible in the development cycle. With that being said,
What's your thought, how is that transition going from DevOps to DevSecOps?
Matt: Yeah. So I'm seeing lots and lots of people adopt the idea of DevOps or DevSecOps or whatever you want to call it. There's all sorts of names for it these days. Um, so it is definitely there and it's a goal for a lot of businesses, but I think there's a, how do you say this politely?
So there's shifting left, which I think is a good thing. And then there's shifting left without the F in shift, which is a different thing to the left, um, that you probably don't want to do.
So I do have some concerns about doing everything as early as possible, because there are some things you can't do early. I ran into this very thing at Rackspace. We were running a cloud. You have many, many services, all on different versions. Finding an environment that matched the new version of, say, compute with the old version of all the other parts of the cloud so you could test it, while another group was launching a different part of the cloud with their new version and all the other things at the same version number, these things get really complex, and you can really only test those interactions in a live running environment. So I think shifting left is great, but you have to do it with a little forethought, because not everything can be shifted left. The other thing is that I think DevOps has been great, and it's allowed a lot of visibility into processes that were just sort of the mystery thing that the sysadmin did back in a corner office somewhere.
Um, and they were all very manual and now that they're repeatable, I think that's a huge win. Um, but the thing is when, when you're doing DevSecOps, you're going to surface a lot of issues, right?
If you start looking around and flipping over rocks, you find things and having a place to put those results and pre-filter them because some of them, like anyone who's ever used a security tool knows they're going to say, Oh my goodness, this is bad, right? And you look into it and you're like, well, actually that's not bad because A, B and C.
That's actually a false positive. I've seen some people, and it just makes me die a little on the inside, want to run a tool and push results directly to the developers. And it's like, stop.
You need to have some place to store and pre-filter those before you push them on. If you are doing CI/CD and breaking builds on things, you still might want to handle the non-build-breaking items, but you don't need to handle all of them, and you need a human in there somewhere to say, okay, of these five things that the CI/CD run is complaining about, three of them are actually actionable, two of them are false positives. And of those three actionable ones, this one we really need to fix because it's spooky.
The other two, we'll put those in the backlog. We'll get them sorted out, but they're not crucial. So that whole idea of cadence and speed has really pushed people, unfortunately, in a direction of, well, let's just shoot it all straight to the developers. But you can't dump that stuff on the developers and expect them to do this thing called developing.
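A rough sketch of that pre-filtering step, keeping raw scanner output away from developers until a human-maintained filter has had a say; the rule IDs, field names, and severity buckets here are purely illustrative:

```python
# Hypothetical triage of raw scanner output before it reaches a developer:
# suppress known false positives, then split the rest into "fix now" and
# "backlog" by severity.

KNOWN_FALSE_POSITIVES = {"RULE-0042", "RULE-0107"}  # tuned by the security team over time

def triage(findings):
    """findings: list of dicts with 'rule_id' and 'severity' keys."""
    actionable, backlog, suppressed = [], [], []
    for f in findings:
        if f["rule_id"] in KNOWN_FALSE_POSITIVES:
            suppressed.append(f)
        elif f["severity"] in ("critical", "high"):
            actionable.append(f)   # push to the dev team now
        else:
            backlog.append(f)      # track it, but don't break the build
    return actionable, backlog, suppressed

raw = [
    {"rule_id": "RULE-0042", "severity": "high"},      # known false positive
    {"rule_id": "RULE-0200", "severity": "critical"},  # the spooky one
    {"rule_id": "RULE-0300", "severity": "low"},
    {"rule_id": "RULE-0301", "severity": "medium"},
    {"rule_id": "RULE-0107", "severity": "medium"},    # known false positive
]
now, later, fps = triage(raw)
print(len(now), len(later), len(fps))  # 1 2 2
```

The point is only that a maintained filter sits between the tool and the developer, not the exact policy.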
Host: And it gets overwhelming, right? And that's where you are highlighting that there has to be that person in the middle who helps with the prioritization: hey, there are, let's say, 20 items; out of those we only need to focus on three, and the rest, the 17, can be put in the backlog.
So if that's the case like how can organizations address that challenge when they are sort of transitioning from DevOps to DevSecOps?
Matt: Yeah. So as you're doing this automation, if you do it right, you're going to produce a lot more results, right? And, and it used to be, I mean, I can remember the good old days of pen testing.
I produced 20, 30 findings in a pen test, probably, and that's a pretty noisy pen test. You write it up, you submit it, and you're done. But if you do automation, you can have 20 or 30 things happen in one review of, say, some source code. Goodness, one time, I jokingly called it the static analysis of the beast, I ran SAST against something and got 666 findings, right? Like the number of the beast.
So handing 666 issues to a developer is just a way for them to go, you know what, this is ridiculous, I give up, I'm just going to go get into my IDE of choice and write some code. So I think you need, like I said earlier, that single source of truth,
where all of the results can go first. And then you can do things like reporting out of that.
Ideally, push things into a bug-tracking system. You can actually learn what's happening across your software, right? Is one particular team really bad about injection attacks, while the other team has got that sorted, but their, say, library management is a mess? You don't know that till you start looking, and getting that visibility really helps you focus efforts on improving.
So having a place that's sort of the security team's area to manage and look at those results before they go to downstream systems, like developers, like management, is very, very important.
I mean, my mantra at Rackspace was, if you ever wanted to find my foot in your backside, you would pass down non-actionable findings to the dev teams, right?
Because it's so hard to get that credibility with the dev teams, and you can blow it with one stupid report of a non-actionable finding or a false positive.
Host: Yeah, yeah, makes a lot of sense. And I want to talk about that in a bit. I have one follow-up question on this.
So, you highlighted that there are several challenges when it comes to adopting DevSecOps. If I am running a FinTech startup, a growing startup, is it necessary?
Do I need to invest in DevSecOps?
Matt: Yeah. So that's a great question. To me, the answer is that it depends on how you define DevSecOps, right? Should you create systems that automatically deploy things in a secure or somewhat hardened state? Oh heck yeah.
You should be using some kind of configuration management or automation to lay out your infrastructure.
That's just fundamental work, right? Should you have blue-green deployments on day one, the first time you push out your software to the world? Oh heck no, right? Those are very mature practices. So to me, it's picking your battles, getting those fundamental things in place. Particularly for a startup, you're limited on people, you're limited on time and resources, you're generally limited on money, so you have to pick your battles carefully. The way I like to think about those things is:
If I'm investing time or money towards a problem, am I avoiding future instances, or am I solving a one-off? I want to put systems in place that avoid those future problems, the whole thing about configuration management and automation. That may not get me a completely hardened deployment of, say, my product, but I have an automated deployment. If I find a problem, I tweak, say, the Terraform, reapply it, and get a hardened version of it.
So it gives me the fundamentals I need to be quick and agile and adjust to a changing world. Whereas if I had somebody who just handcrafted a bunch of, you know, commands in a terminal over an SSH session, yeah, you can set up something, but it's not repeatable. You know, God help you if that person gets hit by a bus, right? There are all those kinds of issues around it that could be very detrimental to a startup.
Host: Okay, that makes sense.
So start small, set the basics right, and if you are setting up the pipeline or the process in a way that helps in the longer term, prioritize that over one-offs.
So yeah, makes a lot of sense. Now, one of the things that you highlighted earlier, right, like organizations started adopting DevOps for speed of delivery.
So when it comes to adding security, sometimes the processes slow down a little bit, right? Not the actual deployment, but the overall integration of security into the DevOps world.
So how can organizations find the right balance between speed and integrating security in DevSecOps?
Matt: Yeah, this is a fun one too. In my mind, this comes down to understanding your risk profile, right? Not everybody is subject to APTs. Granted, there are some baseline things you should do to protect yourself from malware in general and what have you, but not everybody is facing that kind of thing.
So it's really finding your right level of practice, right? Which is a nuanced answer.
I mean, what I like to think about is: are there existential items that you just can't have happen? Right, if you're a financial institution, there are certain things that the bank regulators really, really don't like. Those are the things to avoid. Those are fundamentally non-negotiable; they have to be nailed down. Now, the other issue is tool cadence, right?
So some tools you can run very quickly. SCA tools, software composition analysis, run against a repo are generally pretty quick; those run in almost real time. Static analysis and dynamic analysis, SAST and DAST, can be very long-running processes, depending on the complexity of what you're testing.
So this comes down to a cadence issue, right? I don't want to slow things down, but I still want to know that my deployed app is secure from a DAST perspective or a SAST perspective. How do you handle this? What I did at Rackspace was:
I set a cadence. The most aggressive team at Rackspace was deploying 75 times a week, which is nutty fast. There's no time to even run a tool at that speed, any tool, pretty much.
Well, I mean, maybe SCA, but for most of them it's a non-starter. So what I ended up doing was deciding that for the very first CI/CD run, I'm going to fire off DAST. And I'm going to have a way for the runner to know the scan is still running, and as long as it's running, you just get a pass. And when it finds that the scan is no longer running, I kick off a new one. Right? I'm just continually running that scanner as fast as I can run it, but skipping the times when there's a CI/CD run happening while the tool is still chewing through what's been deployed.
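That cadence, never blocking CI/CD on the scanner, and only kicking off a fresh scan once the previous one finishes, can be sketched roughly like this; the class and method names are illustrative stand-ins, not any real scanner's API:

```python
# Sketch of the "always-scanning" DAST cadence: CI/CD never waits on the
# scanner; a pipeline run only starts a fresh scan if the previous one
# has finished.

class DastCadence:
    def __init__(self):
        self.scans_started = 0
        self._running = False

    def start_scan(self):       # placeholder for firing off a real DAST job
        self._running = True
        self.scans_started += 1

    def finish_scan(self):      # called when the scanner reports completion
        self._running = False

    def on_cicd_run(self):
        """Called on every CI/CD pipeline run; never blocks the deploy."""
        if not self._running:
            self.start_scan()   # scanner was idle: chew on the latest deploy
        return "pass"           # while a scan is in flight, you just get a pass

cadence = DastCadence()
for deploy in range(5):         # five rapid deploys while scan 1 still runs
    assert cadence.on_cicd_run() == "pass"
print(cadence.scans_started)    # 1 - only the first deploy kicked one off
cadence.finish_scan()
cadence.on_cicd_run()
print(cadence.scans_started)    # 2 - next run starts a fresh scan
```

The scanner effectively runs as fast as it can, continuously, while deploys go out at their own pace.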
And this is where that infrastructure automation can really help you. Because if you can have CI/CD fire off sort of a canary deploy that doesn't matter and isn't part of production, your test can take as long as it takes. It doesn't matter, as long as you have a method to clean that junk up after the test is done. So that becomes a huge value to the process. And then the...
Host: I really like that approach. So sorry to interrupt, I really like that approach where you are not blocking the deployment, but at the same time you are achieving the security analysis as well, right? Like you are doing it on the side and as soon as you have the findings, you present it to the developer or the product team so that they can prioritize and work on it. So yeah, that's lovely actually.
Matt: It's a very pragmatic approach, because both teams in that process have something to do, right?
The dev team has things they need to get out the door, and the security team, or product security team, or DevSecOps, whatever you want to call them, needs to know that the thing that made it out the door is as secure as reasonably possible. And when those clash, you can try as a security team to say no and be the no cop, but that gets you nowhere.
So it's much better to say, okay, we both need two things, they're contending with each other, neither one of us can really win; how do I make this so that we both win? And that cadence thing really helped me a ton at Rackspace, because I ran into teams moving so fast I couldn't keep up. And the flip side of it is, if you do have a team that's moving fast, that also means that issues can be fixed really fast.
Right, the worst scenario in my history was a SQL injection on a login page that lasted a year in production, which was a year of having my heart kind of clenched, just waiting for someone to pop it. And then the opposite end was at Rackspace. We had that one team that did 75 deploys a week. One of the guys working for me was testing it, finds an issue, gets on IRC, which is what we used at Rackspace.
He says, hey guys, I found this issue; if I do this, this bad thing happens. They're like, hey, can you send me the HTTP? Yeah, sure, here's the HTTP request-response pair; this is what the attack looks like.
And they're like, okay, cool, thanks. And then 20 minutes later, they were like, we pushed a fix to production. We're like, we haven't even written up the finding yet. It was fixed before we could report it. So although it feels spooky when you have to make these adjustments for fast-moving teams,
You do get a benefit as a security person, right? Instead of waiting, you know, six months to get a thing released, it can happen in 20 minutes, which is kind of cool.
Host: Yeah, yeah, and that's a good segue to my next question, which is like earlier you mentioned that as a security person, if I go to a development team with a non-actionable issue or I go to them with something which is a false positive or something like that, that sort of breaks the trust, right? And that goes back to how the culture is set up in the organization.
Because not every organization focuses on security from the get go.
So I want to understand how you have done it in the past, or how you think about it:
How can organizations ensure that the security mindset at least is baked into the teams from the beginning rather than doing it a year later?
As you highlighted, there was a SQL injection issue which was there for a year. If the organization had had the security mindset, could that have been resolved within a month instead of a year? How do you think about it?
Matt: Yeah. Unfortunately, with that one-year SQL injection, that was about the cadence at which they did releases. So that was just not a high-performing organization. Well, that's not quite true: they did two releases a year, and they couldn't fit the fix into the next one when we found it, so it had to go a full year. It was really painful.
So that's more of an organizational agility issue, and that one year of SQL injection still makes me cringe.
But generally speaking, I think it boils down to being really pragmatic, because a lot of security people try to be purists. I'm as guilty as the next person about doing that early in my career, where I would just rail and fight against the man: no, you can't do that, that's not right, this is broken, we have to drop everything and fix it. And I realized all that got me was not being invited to important meetings and otherwise being shunned by my peers.
I needed to change. And so it came down to very pragmatic thinking and empathy about the situation that the other party is in. So another great example of that. Also at Rackspace, I learned so much there. It was a great crucible to learn stuff in.
We had an issue that affected our entire compute or a large portion of our entire compute cloud. So this is tens of thousands of machines that need to have something done to them to address an issue.
You don't just update and restart 10,000 machines that are actively hosting people's cloud infrastructure. So this is a very complicated process; it was a really ugly problem. How do we resolve it? Well, prior to me getting there, there had been this idea of SLAs around the criticality of the finding, very standard practice: you know, a critical is so long, a high is a little bit longer, that thing. I threw that out.
Because under that SLA, this restart of 10,000 computers was supposed to happen in 24 hours. That physically isn't going to happen, period, right?
Unless we wanted to disrupt all of our customers, which was a non-starter. So what do you do? What I came up with was changing the SLA definition so that "a fix in place by X" became "a mitigation plan in place by X."
So, 24 hours for a critical to have a mitigation plan in place. So what did we do? I sat down with the compute team. We talked through all of the really ugly wrinkles about making this fix happen, and we set a timeline, and I put a date in my calendar: they say they'll have it fixed by this date. And then I backed up three weeks and put another thing in my calendar that said, hey, check with the compute team and see how they're doing on problem X. Right?
And it just changed the mindset. Instead of coming to them and saying, oh no, we have a one-day SLA, you have to have it fixed in a day, which was complete hogwash, that was never going to happen, the conversation became: I don't need you to fix it today, but I need a date I can put in a calendar and a plan in place that I can tell management about. This is why we're working to make this better. That's all it takes. And it completely changed the nature of the conversation at the table, because
unfortunately, the team I was dealing with had dealt with that "critical must be fixed in 24 hours" thing before and pulled off some Herculean efforts that they didn't want to repeat, because it made their lives awful. And so we were able to come to a reasonable accommodation.
And this is that empathy and understanding. Both sides have issues that they have to confront, and how do you find that middle ground?
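The reworked SLA can be sketched as a small scheduling helper: the clock enforces "mitigation plan in place by X" rather than "fix in place by X," and a check-in lands three weeks before the agreed fix date. The durations and field names here are illustrative, not Rackspace's actual policy:

```python
# Sketch of the "mitigation plan, not fix" SLA, with an automatic
# calendar check-in three weeks before the agreed fix date.

from datetime import date, timedelta

# Illustrative plan deadlines by severity.
PLAN_DEADLINE = {"critical": timedelta(days=1), "high": timedelta(days=7)}

def schedule(severity, found_on, agreed_fix_date):
    return {
        "mitigation_plan_due": found_on + PLAN_DEADLINE[severity],
        "fix_due": agreed_fix_date,                          # negotiated with the team
        "check_in": agreed_fix_date - timedelta(weeks=3),    # "how's problem X going?"
    }

s = schedule("critical", date(2024, 3, 1), date(2024, 6, 1))
print(s["mitigation_plan_due"])  # 2024-03-02
print(s["check_in"])             # 2024-05-11
```

The key design choice is that the 24-hour clock applies to producing a plan, while the fix date is whatever the owning team can realistically commit to.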
Host: Yeah, makes sense. I like how you put it: security teams think that all the security issues should be addressed, very much like how QA teams sometimes take it as well, right? Hey, if I have found a bug, you cannot just ignore it, you have to address it. So it's very relatable.
Now the question is any tricks or tips you have for security team members to let's say work with other teams to make sure that things get prioritized properly so that the culture is set up properly?
Matt: Yeah, and a lot of that comes back to what I mentioned earlier: the empathy thing, right? Just understanding where they're coming from.
And I think this is where I was a little bit fortunate, in that in my early background I was a developer. I dealt with this junk. As a developer, I dealt with, hey, somebody hands you a PDF of like 47 findings, a third of which are bunk, and I have to burn a day or two to prove that those aren't really vulnerabilities.
And then go fix the things that are. And by the end of the process, I'm just angry at the world, right? It just didn't work. So I think having that empathy and trying to get in the shoes of the people you're dealing with, I mean, shoot, if you can pull it off, spend a day or two shadowing the person that you're providing vulnerabilities to.
See what their day is like and understand how these reports can derail it. And I mean, honestly, I've never seen a developer have in their quarterly or annual goals "fix all the security stuff." Maybe that's happening; I've never seen it.
So you are also kind of fighting the system, where the developers are incentivized to get stuff out the door as quickly as possible, because that's what the business wants, while your job is to get it out securely.
So you have to find a way to make them understand why: yes, you can get this out today, but there's going to be an issue that we're going to have to fix.
And if you have to circle back on that issue, you're burning more cycles. Let's spend, you know, 10, 20% more now and not fix it in two weeks, when you've forgotten even what you wrote for that feature, which is a lot of what development is like. Yeah, I wrote that a month ago, I don't remember; I've written so much code since then, who knows?
So I think it's mostly that. And then understanding that improvements are going to be iterative, right? You have to start slow. It's the whole boil-the-frog idea. I will say incremental improvement will beat perfection every single time, without a doubt. And then...
Host: Instead of a big bang kind of approach, right?
Matt: Oh yeah. I can't tell you how many people are like, we're going to do a static assessment of all 3,000 of our apps this year, and they get, you know, six months in and they're like, oh my God, we're at 200 apps, we're never going to make it.
And it just peters out. Okay, make it reasonable. I did this at Duo Security: I was able to containerize the running of a SAST tool and wire it into, I was going to say the GitHub, but we actually had multiple Git servers at that time, multiple GitLabs actually. We had like six different Git servers used by different teams, which was an unfortunate thing, but whatever. I wired it into those, and now I can run this very lightweight scan. I didn't turn on all of the features of the SAST tool.
I ran it in its lightest configuration and ran it across all of the apps. With the automation and everything else, I did 46 Python repos in three minutes, which is stupid fast, right? But what does that give me? One, I have a smell test across all of the apps.
I know the ones that are pretty clean, and I know the ones that are really a mess, and now I can prioritize. Okay, these ones are a mess; why are they a mess? Oh, I see what they're doing: their updates aren't happening correctly, they don't have, say, Dependabot or whatever wired in. Let's get that going with these teams and get them shored up. And then I can move on to the laggards, well, not really laggards, the lower-priority ones, and sort them out. So some of it just comes down to getting that visibility, because, I mean, I've never seen a security team that was bored or had too many people, right?
So this is where you have to pick your battles very carefully.
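That "smell test" sweep could look roughly like this; `run_light_scan` is a stand-in for a real containerized SAST run, and the repo names and counts are made up for illustration:

```python
# Hedged sketch of the lightweight sweep: run a scanner in its lightest
# configuration across every repo, then rank repos by finding count so
# the messiest ones surface first.

def run_light_scan(repo):
    # In real life this would shell out to the containerized tool, e.g.
    # subprocess.run(["bandit", "-r", repo, ...]); here we fake the counts.
    fake_results = {"billing": 41, "auth": 2, "reports": 17, "web": 0}
    return fake_results[repo]

def sweep(repos):
    counts = {repo: run_light_scan(repo) for repo in repos}
    # Messiest first: these are the teams to sit down with.
    return sorted(counts.items(), key=lambda kv: kv[1], reverse=True)

ranked = sweep(["auth", "billing", "web", "reports"])
print(ranked[0])  # ('billing', 41)
```

The output isn't a definitive assessment of any repo; it's a prioritized list of where to spend scarce security-team time first.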
Host: Yeah. So we spoke about Rackspace quite a bit, right, and there is a lot of automation in it. But when it comes to automation, let's say you are incorporating security measures through automation, one misconfiguration could introduce vulnerabilities that could be difficult to identify and remedy.
So how should organizations automate their security when it comes to cloud environments so that they do not introduce more misconfigurations or vulnerabilities rather than addressing them?
Matt: Very carefully. So, a great example of that, funny enough also at Rackspace, although I did it in a lot of places after Rackspace as well, is for configuration management. Puppet, Chef, Ansible, Salt, whatever you're using, it doesn't matter. What we would do is: the product security team would bless a specific tagged version in Git of a way to deploy whatever the product was, right? So, version three of Cloud Files.
Deploy one, deploy 10,000 of those, I don't care. Now, any time they made a commit to that repo and changed the configuration, we had a say on the PR, a thumbs up or a thumbs down, so that we knew that once we got it hardened, we weren't having drift. And that changed the problem from "I have to deploy a thing and run a whole bunch of security tooling to assess its configuration and hardening" to "I have to look at a diff of code."
Oh, look, they're just changing an unimportant part of the config to set a message of the day when you SSH in; I don't care, that PR goes. So I've changed the process from being this large body of work to just this little differential change, right? You can do that kind of work, and it's hugely beneficial. And some of this is just, honestly, it's almost silly, but I'd call it pragmatic. I had one developer I worked with at a different corporation where, just because of the way the corporation worked and how findings were passed out, I had to provide PDF reports, which I really don't like. I'd much rather do things in a bug tracker, but whatever, that's what they wanted. Okay, so here's my report of the things I found. And I got pinged by the developer: man, these PDF reports are really a pain, is there any better way for me to look at them?
And we were using Defect Dojo at the time, and you can export the findings in Defect Dojo as CSVs. And I'm like, sure, how about a CSV of all the issues in your product? He's like, oh my God, you can give me that? I'm like, yeah. So boom, I gave him a CSV. Wasn't official, wasn't the normal process, but it made his life much better, and we got stuff fixed.
Right? So that kind of flexibility and pragmatism is really valuable. The other thing that I found can be a real killer, and this happens more in bug bounties than in internal things, although I've seen it internally as well, is you get a report that says, this is bad, right? This has this issue. Okay, how did you come to this conclusion?
Can you show me that it's bad? For a web app, where is the request-response pair? These are things that as a developer I want to see. Like, I put a single tick into the search field and the app blew up. Okay, maybe that happened, but can you show me the response that came back? Was there an error message on screen? Was it an HTTP 500? I need more details.
So providing sufficient details is hugely valuable to dev teams that can really win you a lot of friends because if you give them everything they need to fix it, guess what? They're way more likely to fix it.
Host: Yeah, absolutely. Again, this takes me back to the QA and developer battles as well, right? Without enough context, without enough detail, sometimes developers feel frustrated: hey, I'm not able to reproduce this, or I don't know if it even is critical priority.
So that's where adding that additional context or detail helps win points from a relationship perspective, and it also makes the developer's life easier, right?
Another thing that you highlighted which I really liked is that, let's say you have a hardened environment already set up; for any change, you are just looking at that change rather than the entire hardened environment, because you have already sort of verified and certified it, in a way, from a security perspective. One question related to that is,
nowadays we use open source software quite a bit, right? We are not only dependent on our own code, but we are also using others' code. And I think there was a recent study by Anchore where they highlighted that around 85 to 97% of enterprise code bases use open source.
So in that case, how do you think about one, like securing your applications? And the second thing is,
How should organizations tackle these supply chain security issues?
Matt: Yeah, more great questions, just a nonstop flood of great questions. So, open source software. I think what people need to remember is open source software is free, but it's free as in puppy, right?
You may get it for no dollars, but you still have to feed and water it, take the thing outside, and deal with the maintenance of it. So yes, you don't have to burn dev cycles to write it, which is great, but you still own the problem of keeping it updated and secure.
I think some of it is a mindset thing of, hey, we get this for free and I don't have to worry about it, let's use it. Well, not really. And this is what that 85 to 97%, whatever it was, shows you: a lot of people have that mindset. The truth is that SCA tools have gotten pretty mature. I can remember when they were terrible, and they've gotten significantly better. So the thing here is to use them as early as possible. For example,
for the DefectDojo project, we have Dependabot, Renovate, and another one, shoot, Snyk, I think? I can't remember what the other one is. We have three different tools that look at our dependencies, whether container-based, libraries in Python, or the images that we use as the bases for our containers. All that's wrapped up, and every PR gets run through those.
So we are continually looking at this. Was it a pain in the butt to get up to date the first time? Yes, it was, because we weren't doing that initially. It's a 10-year-old project, and 10 years ago we weren't talking about SCA. So yes, we did have to allocate a good chunk of time to catch up.
But now that we're caught up, like I said at the beginning of this podcast, it's my morning routine. Oh, look, Dependabot says this library is out of date. I'll update it in the dev branch, it can soak for a week, we'll do a release at the end of the week, and we'll know whether it's okay or not based on the QA stuff that we do.
So it is a very solvable problem, if you can get over that initial hump.
The other thing I've seen that's been highly effective is, well, let me back up a step. I was at a DevOps Days in Austin, speaking with the person who at the time was the CIO of American Airlines. We were talking about this very same issue, and he said, you know what I did that did a world of good? I took an inventory.
I guess they were a Java shop, if I recall correctly, don't quote me on that, but I think they were a Java shop. He looked, and they had like six different logging libraries that different teams were using. And he was like, this is kind of stupid. Logging is a thing that was sorted out years and years ago in computer science. We just need one.
And I doubt that any one is so much better than the others that it's worth keeping several. So he challenged his dev teams and said,
you guys need to pick only one. Logging library, authentication library, all the major pieces that you end up using open source for, you guys pick. I don't care which, but we're not having three or seven. We're having one.
And he let the dev teams duke it out however they did, and they came up with an official list. Now, what was really awesome, he said, and what he didn't expect to happen: say you're a new dev who just got hired in to write software for them.
There's a wiki or a webpage or some internal documentation that says, if you need to do logging, we use, you know, Log4j or whatever the library is. Right? And that answers all the questions that inevitably come up when you have a new dev show up.
So he got tons of benefit out of just telling the dev teams: you pick. Yeah, standardize. And he didn't impose it on them. He said, you guys pick, which I think is crucial, right? They owned their fate. But they could only pick one, right, which is important.
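A "pick one" policy like this is also easy to enforce mechanically once the teams have made their choices. A toy sketch of an allowlist check (the approved libraries and the capability map below are made up for illustration):

```python
# Toy check: flag dependencies that duplicate a capability the org has
# already standardized on. Both maps below are hypothetical examples.

APPROVED = {
    "logging": "structlog",
    "http-client": "requests",
}
# Which capability each known library provides.
CAPABILITY = {
    "structlog": "logging",
    "loguru": "logging",
    "requests": "http-client",
    "httpx": "http-client",
}

def violations(dependencies):
    """Return deps that provide a standardized capability but aren't the pick."""
    return [
        (dep, CAPABILITY[dep], APPROVED[CAPABILITY[dep]])
        for dep in dependencies
        if dep in CAPABILITY and APPROVED[CAPABILITY[dep]] != dep
    ]

deps = ["structlog", "loguru", "httpx"]
for dep, capability, approved in violations(deps):
    print(f"{dep}: use {approved} for {capability}")
```

Run in CI, a check like this turns the wiki page of blessed libraries into something new code can't quietly drift away from.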
Host: Yeah, makes a lot of sense. So a similar or related question to that is vulnerabilities, right? It's not just with open source libraries; there is a possibility of having some vulnerabilities in our own code as well, right?
So what does an ideal process or practice look like for finding vulnerabilities, prioritizing them, and addressing them? How do you look at vulnerability management?
Matt: Yeah. As a whole, that's a huge thing. Wow, I don't even know how to start answering that one. There's a whole bunch of angles on it. Some of it depends on the tools, some of it depends on the organization. From a tool perspective, let's say SAST, that's a great one to start with. SAST is well known to produce loads and loads of results.
Right. So let's say you're working with a team, and you run SAST because they haven't done it yet, or they haven't done it in a while, or whatever. And you get, I don't know, thousands of findings. Well, you know right away that's a non-starter. I'm not going to hand them 10,000 findings and say, go fix this in a week, or whatever the SLA is.
So what have I done when that happens? I go and talk to the manager of that team and say, hey, look, you obviously have some issues here.
I also understand that I can't have you sideline your entire team for two months to just knock out all these issues. So here's what I'm going to do: I'm going to run this SAST tool and only produce criticals and highs.
And that gets us down to like 300, or whatever the number is. Right? Let's do that, and let's give ourselves a quarter to work through that 300. Then at the end of the quarter, you hopefully have a better state of that application, and we can talk about turning on the mediums and seeing what happens, right?
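That incremental approach, report only criticals and highs first and widen the net later, is simple to express in code. A minimal sketch, where the finding structure is hypothetical rather than any particular tool's output format:

```python
# Filter a flood of scanner findings down to an actionable first slice.

FINDINGS = [
    {"title": "SQL injection in /search", "severity": "critical"},
    {"title": "Missing HttpOnly flag", "severity": "low"},
    {"title": "Outdated TLS config", "severity": "high"},
    {"title": "Verbose server banner", "severity": "info"},
]

def triage(findings, allowed=("critical", "high")):
    """Keep only the severities the team has agreed to tackle this quarter."""
    return [f for f in findings if f["severity"] in allowed]

first_pass = triage(FINDINGS)
print(len(first_pass), "of", len(FINDINGS), "findings in the first slice")

# Next quarter, widen the net to include mediums:
second_pass = triage(FINDINGS, allowed=("critical", "high", "medium"))
```

The point is the ratchet: the `allowed` tuple only ever grows, so the team's workload stays bounded at each step.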
But it's being incremental and being smart about it. The contra case, the thing you don't want to do, is go buy 12 different tools, run them on everything, produce tens of thousands of results, and then dump them on the teams that need to go fix them.
That is just a non-starter. So for me, the ideal case is I have the ability to run tools, ideally in an automated fashion. If it has to be manual, that's fine too. But all of that output goes to one place. And this is what DefectDojo does.
So I'm somewhat biased in that regard, but I think it's really, really important, because then in DefectDojo, or in that one place, I can decide: you know what, this finding is worth passing on to, say, Jira, so I'll push it there. That other one, we're going to risk-accept, and I'm going to kick it down the road for three months because it's just not worth it right now.
And you can do those things if you have them in a vulnerability management platform that lets you manage them before they go to downstream systems. The other thing that's super important about a vulnerability management platform like DefectDojo is that it normalizes the results. Every tool produces different, weird results, right? Some call it a finding, others an issue, others a vulnerability. Different names, different attributes. It's annoying.
And I want to report those in one standard way. Well, one of the things DefectDojo does is read in from, I don't remember what we're up to now, I think 168 different security tools, and normalize them into one data model.
So now, when I'm feeding downstream systems, I just have to understand what DefectDojo calls a finding to be able to push it downstream. I don't have to understand that, oh, my SCA tool thinks a finding is this, my SAST tool thinks a finding is that, and my cloud posture management tool thinks a finding is this other thing. I tried that at first at Rackspace, and it was painful. I tried to write a thing for each of the different types of tools, and I went, no, this is going badly. So I came up with the...
Host: Yeah, it's better to standardize, yeah.
Matt: Yeah, because then everything downstream is normalized. It all looks the same. At some of the larger organizations I worked at, I would feed compliance tools, RSA kind of compliance tools, from DefectDojo, because it was normalized and it didn't matter what the source tool was.
But it also allows me to do interesting things. Say we find out that our SCA tool, maybe it got acquired by somebody, and the quality's gone down and I don't really like it anymore. I want to shop around for a different SCA tool. Well, if everything lives in DefectDojo, I can swap out that tool and my process doesn't change. Right? As long as it can feed into Dojo, the process is the same downstream. We didn't do this on purpose when we created Dojo, but it lets you tell a vendor to go hit the bricks and get a different vendor that's performing better, without interrupting your process.
Host: Makes a lot of sense. I just have a follow-up question on that. Like you said, if I run SAST I might get a thousand findings, and one way to look at it is to start with critical and high. But let's say I'm running a startup and I have limited time, people, and budget, all of that.
How do I prioritize between those 300? When I filter to critical and high and get 300, which one is first, second, third?
Matt: Right, yeah. This is where what I talked about earlier comes in: understanding your risk profile and what those things are that are existential to the business. Like, I will violate this really important banking regulation if I don't do this thing; well, that becomes the priority. But that's really where you need a human brain.
And that's another reason for that in-between system that lets the security team review and make those decisions. It's not unlike CVSS, right? It has the idea of a base score and then an environmental score, right?
Security tools can give you base scores all day long, but they don't know your environment. That's really the value a security professional provides to that process: they should understand the context. Here's a perfect example. SQL injection in an application is bad, right? We'd all kind of agree with that. SQL injection bad.
So, when I was at Rackspace, building the product security team, one of our new guys came in and found a SQL injection. It just so happened that I was out at a meeting, and the VP in charge of our group happened to be walking by and asked this fellow, hey, what are you doing today? He's like, oh, I found a SQL injection. And they were like, oh my God, and they fired up this whole incident response process.
Well, they had found a SQL injection in the system that allowed you to book meeting rooms at our home office. Not that important of a system. Certainly not worth it.
Host: Yeah, that is where that context, that environment, comes into the picture, right?
Matt: Yeah. So we spun up this whole incident response process for something that was completely unimportant, right? And I don't blame the guy who found it. He was very new, and I'm sure he was excited. I was excited the first time I found SQL injection. I felt cool. Like, oh, look what I found, I am awesome.
Right. But you have to take a deep breath and go, okay, this is bad, but... SQL injection in rackspace.com? Oh my goodness, that's pull-the-andon-cord time.
Yeah, alarms go off. But if people can't book rooms as easily when they're in the home office, that's annoying, but no one's gonna die. Customers probably won't even notice. It's not gonna impact revenue. Not worth getting excited about. Certainly not incident-response levels of excitement.
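The base-versus-environmental idea can be shown with a toy priority calculation. To be clear, this is a deliberate simplification for illustration, not the real CVSS environmental formula, and the weights are invented:

```python
# Toy prioritization: start from a scanner's base severity and adjust
# for the environment the asset lives in. NOT the CVSS v3.1 formula;
# all numbers below are hypothetical illustration values.

BASE = {"critical": 9.5, "high": 8.0, "medium": 5.0, "low": 2.0}

# Hypothetical environmental weights: how much the asset actually matters.
EXPOSURE = {"internet": 1.0, "internal": 0.6}
IMPACT = {"revenue": 1.0, "convenience": 0.4}

def priority(severity, exposure, impact):
    """Scale the base score by where the app sits and what it's for."""
    return round(BASE[severity] * EXPOSURE[exposure] * IMPACT[impact], 1)

# The same "critical" SQL injection lands very differently on the
# public revenue site versus an internal room-booking app:
print(priority("critical", "internet", "revenue"))      # 9.5
print(priority("critical", "internal", "convenience"))  # 2.3
```

Same base score in, very different priorities out, which is the point: the environmental context, not the scanner, determines where a finding lands in the queue.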
Host: Yeah, I totally agree. In CVSS, as you highlighted, there are those three areas: base score, temporal, and environmental. But most folks just look at the base score at face value and say, oh, it is bad, we need to address it. That is where, as you highlighted, adding the environmental aspect helps you with better prioritization.
Matt: It's kind of crucial, because whoever has to go fix those things, be it a dev team or an ops team or DevOps, whoever it is, you're going to have more findings than they can fix in the allotted amount of time they have.
So yeah, having that closer understanding matters. And it's funny, years ago I got asked, what would you tell people new to the field that they really need to think about? And it was context. That's everything.
I still do trainings, but when I used to do trainings on web app pen testing, I used to tell the class: I have a cross-site request forgery in one of the major websites on the internet today. I'm going to share it with you, but I'm going to embargo it. I don't want you to tell anybody.
And then I'd show them how you can do cross-site request forgery against Google, which is irrelevant, right? But I can send you a link that makes it look like you want to search Google for, I don't know, naked penguins or something crazy, right? Like, is that technically a vulnerability?
Yes, it's vulnerable. Is it important? Hardly. So the context is so, so important, right? It really, really makes a difference.
Host: And that helps the security team and the product team prioritize and work on it. As you gave in your example, right? Booking an internal conference room can maybe even wait a few months, versus something on the main application. So yeah, context is definitely key.
Matt: Yeah, it's everything. You have to be inside of Rackspace land to even see that app. So you're already an employee or a contractor, or you have some sort of higher level of access anyway. Yeah, those kinds of decisions.
I mean, I understand the excitement. Like I said, when you first find these things and you're new, more power to you. I'm glad you're excited, because you're likely to stay in the field and keep doing great work. But you do have to pump the brakes and think a little bit about: okay, this is really bad, but how bad really is it, because of A, B, and C?
Host: Yeah, absolutely, and that's a great way to end the security questions.
- Context is key. When looking at vulnerabilities, do not just look at the CVSS base score; instead, understand your risk profile and add the environmental elements for better prioritization.
- In order to adhere to DevSecOps practices, be pragmatic. Instead of a big-bang approach, start small and iterate to incorporate security into existing DevOps practices.
- When it comes to prioritization of findings from SAST, SCA, vulnerability management, or security tools in general, let the security team jump in and add context to help with information overload and prioritization.
We generally do another section which is called Rating Security Practices.
Rating Security Practices
The way it works is I will share a security practice, and you need to rate it 1 through 5, 5 being the best. You can add some context as well, like why you think it's a 1 or a 5. So let's start with the first one.
Conduct periodic security audits to identify vulnerabilities, threats and weaknesses in your systems and applications.
Matt: Yeah, I love this one. So this one I would give a three, and the reason is it's kind of halfway there. Identifying issues and weaknesses: great, that is a perfectly good start. But you have to use that as a feedback loop to then create systems that don't have those problems in the first place. That's the real goal, right? To have systems that launch hardened.
So yes, I love the signal you get from doing those kinds of periodic reviews or vulnerability scans or whatever. But the underlying questions you have to ask yourself are: why am I finding these things, and more importantly, how do I make it so I don't find them in the future?
Host: OK, makes sense. The next one is,
Use strong passwords that contain a mix of uppercase and lowercase letters, numbers, and symbols. Change them frequently, and also avoid using the same password for multiple accounts.
Matt: So this one I'm going to get ranty on. I would give it a one, because this feels like we're back in the mainframe days. That's where this came from, right?
When you could only store eight-character passwords, cracking them took a reasonable amount of time on the very slow computers back in the day.
That's no longer the case. And honestly, MFA and 2FA and, shoot, FIDO are out. They exist. FIDO is pretty new, granted.
But MFA and 2FA, there's nothing new about those. They've been out for years and years and can buy you tons of improvement for very little effort. So I question the value of rotating passwords. In fact, shoot, when was it? It was 2007, where I was working at the time.
A coworker of mine was rolling out an encrypted laptop program. This was when encrypted laptops were like a shocking thing.
And we had to have third-party software to do it. The way he sold it was: you have to have a, it was either 24 or 30, I want to say 24-character password. We're doing these trainings, and I'm sitting in the audience with the other people just to get a vibe on the room. He says this 24-character bit and everyone's like, oh my God. Then he goes, and you never have to change it. They all went:
What? I don't have to change it? He's like, yeah, why would you ever change it? That's dumb. Make it really long; then you don't have to change it. So I think we've gotten fixated on complexity, when honestly a really long, not-so-complex password is harder to crack in a blind attack.
If I'm brute forcing, I'd rather have a complex three-character password to crack than a non-complex 10-character password any day, just in the number of attempts I have to make. The numbers go up crazy fast.
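The arithmetic behind that claim is easy to check: the keyspace is the charset size raised to the password length, so length wins quickly. A quick illustration in Python:

```python
# Keyspace = (charset size) ** (password length).
# Length beats complexity: compare a complex short password with a
# simple long one.

complex_3 = 95 ** 3    # 3 chars drawn from all 95 printable ASCII characters
simple_10 = 26 ** 10   # 10 chars drawn from lowercase letters only

print(f"complex 3-char keyspace: {complex_3:,}")   # 857,375
print(f"simple 10-char keyspace: {simple_10:,}")   # 141,167,095,653,376

# The simple-but-long password's keyspace is over 160 million times larger.
print(simple_10 // complex_3)
```

Because the length sits in the exponent, adding characters grows the search space far faster than enlarging the alphabet does.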
That just drives me nuts. Now, the other half of this I will agree with completely: don't use the same password for multiple accounts. Password managers are built into the OSes now, and there are tons of third-party ones; you have lots to choose from. I use one.
I love it because, shoot, this is another thing that happened a long time ago, but many years ago there was a LinkedIn compromise where they lost all the creds, and people were like, oh my God, I have to change all my passwords. Me, I logged into LinkedIn,
updated my password, and I was done, because all of my passwords are different. It really does make life better. Now, this is not to be pejorative to people that are older than me, but for my parents, who didn't grow up with computers, when they asked me how to do this,
well, two things. One, I'm perfectly fine with you having different passwords for the important sites. But if you're going to localgrocerystore.com and downloading coupons,
you could probably use the same password. It's not that exciting, right? Okay, fine, I'll compromise there, if that makes sense. The other thing that I've told several people who are getting up in years is: go get an address book and write down all your passwords. If you don't want to have a password manager, write them down. And people are like, oh my God, you're a security person.
How can you tell people to write down their passwords? Look at the threat profile. My dad did this. How am I going to get my dad's passwords? I'd have to physically break into his house,
know that the thing in his desk drawer that looks like an address book isn't really an address book but a list of passwords, and then steal it. That's a very different thing than having the same password on every single website.
And, you know, my dad did that, and he has unique passwords for all of his stuff. It worked out great for him. God bless him.
Yeah, and it's reasonable. This is another thing where you have to be pragmatic. Being a purist and saying, oh my God, you should never write down a password for any reason? I don't know about that. Here's a reason where it makes perfect sense.
Host: Yeah, makes a lot of sense. The last one is: develop and regularly test incident response plans to help quickly detect, respond to, and recover from security incidents.
Matt: Yeah, I'd give this one a four, because I know people don't like being, well, let's just say the wet blanket, right? Oh, what are we going to do if this happens? Ah, but it's not happening. Let's think forward, let's think positive.
Well, guess what? Bad stuff happens. The reason you have a spare tire in your car is because cars get flats. Although I did hear that EVs are starting to take them out, which is really weird to me, but that's a whole other story.
So when an incident happens, and I've worked lots of incidents and it's not fun, but when an incident is ongoing, you need to have muscle memory. You need to know what to do.
You need to have a plan. And planning in a panic is planning with your stupid brain, right? You want to plan with your smart brain, when you can sit back, sip a little tea, and think through the problem.
Right? That's what you get if you do these things proactively. So I think developing and regularly testing is a very good thing. The reason I wouldn't give it a five is I honestly don't think many people do it, because it feels like an also-ran.
And it's not a sexy thing, right? You don't go to a conference and go, hey man, we tested our incident response plan this week and it went fine. No one gets excited by it. But when things go crazy and things are on fire and everybody knows what to do, that's hugely valuable.
Host: Yeah, totally, that makes a lot of sense as well. So that brings us to the end of the podcast. Thank you so much, Matt, for joining and sharing your knowledge. I could see the references to your past work and your current work. It was a fun conversation. Thank you so much for coming.
Matt: Oh, I enjoyed it greatly. These were all great questions. I'm happy to share. That's how we all get better.
Host: Absolutely. And to our viewers, thank you so much for watching. Hope you have learned something new. If you have any questions around security, share those at scaletozero.com and we will get those answered by an expert in the security space. See you in the next episode. Thank you.