Digging Deeper on DevOps & DevSecOps with Kyle Fossum

TLDR;

  • Security does add overhead to DevOps practices. Self-serve tooling helps bridge the gap between Engineering and Security.
  • Some DevOps best practices: do not embed secrets in code; standardize via configuration management, automation (IaC), and observability.
  • For MFA, hardware security tokens like YubiKeys are strongly recommended over SMS or time-based authentication tools. Use SSO for user management.

Transcript

Host: Hi, everyone. Thanks for tuning into another episode of Scale to Zero. I'm Purusottam, co-founder and CTO of Cloudanix. Today's episode is focused on DevOps and DevSecOps. To discuss these topics, we have Kyle Fossum with us.

Kyle is a staff DevOps engineer at the Predictive Index, where he's responsible for the engineering organization's developer productivity, Azure infrastructure architecture, and cloud security posture.

When he is not working, he can be found spending time at home in the Boston suburbs with his wife and three cats, exploring the New England landscapes and singing vocals for a garage band. Kyle, it's wonderful to have you on the show. For our viewers who may not know you, do you want to briefly share your journey?

Kyle Fossum: Yeah, thanks so much for having me. Glad to be here. A little about myself. Yeah, I'm a staff engineer at the Predictive Index, like you said. Been doing DevOps for, let's see, about four years now. So I kind of got started actually on the IT side of things, doing user support.

So back when I was a fresh-faced lad, I got started resetting passwords at the help desk. You know, it was a treat to leave the desk and go clear a printer jam, and I made my way to desktop support from there. Started learning some networking, some scripting, and next landed a cloud engineering position after assuming some junior sysadmin responsibilities.

And then from there, I got very, very interested in DevOps and what that meant for developer productivity, and landed my first DevOps position. And I've kind of been bouncing around a little bit since then. Let me check; I've been at PI for just over two years now.

Host: Nice. Yeah, it's a pleasure to have you here. And your journey sounds very similar to a lot of guests that we have had on our podcast. Many of them started as sysadmins, slowly moved to DevOps and DevSecOps, and are now security experts, right?

So I'm looking forward to learning more around DevOps and DevSecOps today. So the way we do the podcast is we have two sections.

The first section focuses on security questions, and the second section focuses on security practices, which you can rate and add some context to as well. So let's start with the security questions, right? If I think about DevOps: traditionally, DevOps practices are geared towards speeding up the production deployment process, and when we add security into it, like when the shift left happens, there is definitely an impact on the speed of the DevOps processes, right? So according to you,

When it comes to DevOps, what can organizations do to find the right balance between speed and security?

Kyle Fossum: Yeah, it's a really interesting question. So security is never convenient necessarily, right? You have to lock your door. It's an extra step. You have to put your seatbelt on. It's an extra step. So it comes down to organizational appetite and really risk management to decide where that line is between security and productivity. Because any time you implement a security control, you are going to be slowing things down a little bit.

So, it's really a business question about where that line is. And that's why it's important to have someone with both a comprehensive understanding of security and then also who is tightly aligned with business objectives so that they can make that informed decision about where that line actually is. So I know a big transition for me at one point. I used to work for a hedge fund and the security requirements that we had in that environment were extremely strict.

We had billions of dollars to protect. So we did not let people have local administrator on their boxes. Anytime anyone wanted to install something, be it a developer tool, there was a careful audit of the vendor for that offering. As opposed to something like a small startup, where velocity is everything and everybody might have access to production because that's what the business demands, right? Going fast is the most important thing in that environment.

Kyle Fossum: Oh no, after you.

Host: So I was saying that I love the seatbelt analogy, right? So what can organizations do? Because if you think about it, when we get into the car before we drive, either we have a habit of wearing the seatbelt, or we are sometimes forced to wear it; otherwise there is that beep, right? It makes a noise. So

How can that type of culture be instilled, so that developers, let's say, writing code think about security as a must?

Kyle Fossum: Yeah, I think it's important to make security as convenient as possible. So, like I said earlier, it's always going to be at least a little bit inconvenient, right? There's some extra thing that you have to do in order to operate in a secure fashion. But if you had to fold a pile of laundry every time you had to put your seatbelt on, you probably wouldn't be wearing your seatbelt, whereas if you just have to, you know, slide the belt over, then it's a relatively easy thing to do.

So I think where DevSecOps comes in is, it's very important to have reliable, repeatable tooling in place that makes it very easy to stay compliant with your security posture. A particular example from one of the organizations I've worked for: we used Azure App Configuration and Azure Key Vault, and developers would frequently have to change the configuration in either of these config stores. DevOps had the keys to the kingdom; we were the only people who had access to actually change the configuration of these resources. And that was very inconvenient, right? The developers would have to get in touch with us on Slack, like, hey, can you put this in here?

And now you're handling the secrets outside of their system of record and outside of where they actually need to be stored. So we wrote tooling that allowed the developers to self-serve on these changes.

The secret gets encrypted in such a way that it's handled securely and sent through the pipeline. The developers don't have to get in touch with anybody from DevOps now; they can send those secrets and get them into the Key Vault in a purely self-service fashion. It's more convenient for everybody that way, and also more secure.
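
For readers who want a feel for what that kind of self-serve tooling can look like, here is a minimal sketch using the Azure SDK for Python. It is not the tooling Kyle describes; the vault URL, secret name, and CLI shape are illustrative, and a real pipeline would add access controls, auditing, and a safer input channel than command-line arguments.

```python
# Minimal self-serve "put a secret into Key Vault" helper (sketch).
# Assumes azure-identity and azure-keyvault-secrets are installed and the
# caller is already authenticated (az login, managed identity, etc.).
import argparse

from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient


def set_secret(vault_url: str, name: str, value: str) -> None:
    # DefaultAzureCredential tries environment variables, managed identity,
    # the Azure CLI login, and so on, in order.
    credential = DefaultAzureCredential()
    client = SecretClient(vault_url=vault_url, credential=credential)
    secret = client.set_secret(name, value)
    print(f"Stored '{secret.name}' (version {secret.properties.version})")


if __name__ == "__main__":
    parser = argparse.ArgumentParser(description="Self-serve Key Vault upload")
    parser.add_argument("--vault-url", required=True,
                        help="e.g. https://my-vault.vault.azure.net")
    parser.add_argument("--name", required=True)
    # Real tooling would read the value from stdin or an encrypted payload
    # rather than the command line, to keep it out of shell history.
    parser.add_argument("--value", required=True)
    args = parser.parse_args()
    set_secret(args.vault_url, args.name, args.value)
```

The point is the workflow, not the code: developers run something like this themselves (or a pipeline runs it for them), and nobody has to paste secrets into Slack.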

Host: Yeah, that's a great example because that happens on a daily basis, right? Let's say your application is interacting with your payment system; then you will have those keys, and those have to be in your production environment. So tooling is a great way to help developers so that it's more secure, right?

What are some other challenges that you have seen and any suggestions that you have so that the relationship can be better between, let's say DevOps and the engineering team when it comes to implementing security?

Kyle Fossum: Yeah, I think something else that's important is to practice blameless postmortems. When we're talking about security culture, you don't want to be pointing fingers and making people feel bad or guilty about making a mistake. It's very common to see credentials committed to source code by accident. It happens everywhere, all the time. You just have to understand that there's going to be that element of human error.

Instead of reacting in a negative way to that, maybe change your processes to account for that risk; understand it and bake it into your entire security posture.

Yes, we understand that there are going to be credential leaks because we accidentally commit secrets to source code, because we're all human. By instead focusing on quick detection of when that happens, and having a robust story for remediating and rotating the secrets that were impacted, you can really foster that sense of trust and camaraderie between the different stakeholders.

And that way you don't feel like a cop who's hassling someone on the street. It's just, okay, we're all on the same team. We made a mistake, let's go fix it.
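
To make "quick detection" a little more concrete, here is a toy pre-commit style scan in Python. The two patterns (an AWS access key ID and a private key header) are purely illustrative; real teams usually lean on dedicated scanners such as gitleaks or trufflehog rather than hand-rolled regexes.

```python
# Toy secret scanner: flags a couple of obvious credential patterns in staged files.
# Illustrative only; dedicated scanners cover far more formats and edge cases.
import re
import subprocess
import sys

PATTERNS = {
    "AWS access key ID": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private key block": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}


def staged_files() -> list[str]:
    # Ask git which files are staged for the next commit.
    out = subprocess.run(
        ["git", "diff", "--cached", "--name-only"],
        capture_output=True, text=True, check=True,
    )
    return [line for line in out.stdout.splitlines() if line]


def main() -> int:
    findings = []
    for path in staged_files():
        try:
            with open(path, encoding="utf-8", errors="ignore") as handle:
                text = handle.read()
        except OSError:
            continue
        for label, pattern in PATTERNS.items():
            if pattern.search(text):
                findings.append(f"{path}: possible {label}")
    for finding in findings:
        print(finding, file=sys.stderr)
    return 1 if findings else 0  # a non-zero exit code blocks the commit


if __name__ == "__main__":
    sys.exit(main())
```

Wired into a pre-commit hook, a check like this catches the easy mistakes before they ever leave a laptop, which pairs nicely with the blameless-postmortem culture Kyle describes.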

Host: Right. Yeah. Makes a lot of sense. That's a very important cultural aspect that you just shared, right? Because sometimes developers or engineers feel vulnerable: hey, I have committed a secret by mistake, will I be scrutinized, or over-scrutinized, for this? So having that blameless retro culture definitely helps. So I just want to move to cloud a little bit, right? Now a lot of workloads are moving to the cloud, and when it comes to cloud, there is a shared responsibility model, right? The cloud provider takes care of some parts of security, and as practitioners, we take care of the other parts. So what have you seen?

What are the challenges organizations face while working with cloud native environments, from a DevOps perspective or DevSecOps perspective?

Kyle Fossum: Yeah, I love that question. It's really important to understand comprehensively the shared responsibility model and how that might differ between the types of services that a cloud offering is providing.

So for example, you're in your favorite public cloud. They offer IaaS, so you have a virtual machine. And that's great. You don't have to handle the data center security. You don't have to check people's badges when they swipe into the rack. You don't have to handle the plugging in of the physical cables, but you are entirely responsible for the configuration of the operating system that's running in that VM.

So that opens you up to quite a bit of potential misconfiguration of that stack, as opposed to something like platform as a service, where now you don't even have to worry about the operating system. You are merely responsible for the workload that's running on this platform on rails.

So having that comprehensive understanding of where exactly that responsibility lies and where it's demarcated is critically important. Because at the end of the day, you can't offload 100% of the management responsibilities for a particular piece of software or service, even for something like SaaS, where you're not even responsible for the platform. You're still controlling the IAM and the access for the product that you're using.

Host: So the challenge that you highlighted is that you have to understand the shared responsibility model comprehensively so that you can act on it.

How do they overcome it? Any recommendations that you have to overcome some of these challenges?

Kyle Fossum: Yeah, I think standardization helps quite a bit. And obviously, this is very dependent on a given environment. But say, for a particular organization, you might use 10 or so different SaaS applications as part of your daily workflow. If all 10 of those applications are using individual user accounts and not tied to a single identity provider in the form of single sign-on,

Then if someone leaves the company, you have to remember to go in and disable their access on each of these 10 separate systems. Whereas if they had a single identity bound to your Google Workspace or your Azure Active Directory, you shut off that one account and that change propagates to all those systems.

So having standardization is very important, and then you're able to kind of force multiply any individual administrative effort.

So yeah. And then also on standardization, something like configuration management can be very powerful. I've worked in environments where I had several hundred different platform-as-a-service web servers running in Azure, and those app services might have up to tens of thousands of different configuration values applied to them across all the environments and all of the app services.

Referring back to our previous example, say we accidentally committed a secret to source code. Now we have to go rotate that secret everywhere. How do you find where that secret has been used across the entire environment if you have up to 12,000 different places? Some of them might not be exercised regularly. So having great observability across your estate is extremely helpful when you need to make these kinds of changes.
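
As a rough illustration of that kind of observability, the sketch below walks every App Service in a subscription and reports which ones carry an app setting equal to a compromised value. It assumes the azure-identity and azure-mgmt-web packages and a subscription ID in an environment variable; the exact management-plane calls may need adjusting for your SDK version, so treat it as a starting point rather than a finished tool.

```python
# Sketch: find which Azure App Services reference a leaked secret value.
# Assumes azure-identity and azure-mgmt-web are installed and the caller has
# read access to the subscription. Not production-hardened.
import os

from azure.identity import DefaultAzureCredential
from azure.mgmt.web import WebSiteManagementClient

LEAKED_VALUE = os.environ["LEAKED_VALUE"]             # the compromised secret value
SUBSCRIPTION_ID = os.environ["AZURE_SUBSCRIPTION_ID"]

client = WebSiteManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

for site in client.web_apps.list():                   # every app service in the subscription
    settings = client.web_apps.list_application_settings(
        resource_group_name=site.resource_group, name=site.name
    )
    hits = [key for key, value in (settings.properties or {}).items()
            if value == LEAKED_VALUE]
    if hits:
        print(f"{site.resource_group}/{site.name}: rotate settings {hits}")
```

Even a crude inventory like this answers the "where is it used?" question far faster than clicking through hundreds of app services by hand.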

Host: Makes sense. So you highlighted some of the best practices, right? Like standardization in terms of config management, observability. Are there any others that you want to highlight as a best practice to implement, let's say DevSecOps in a cloud-native environment?

Kyle Fossum: Yeah, I think infrastructure as code is also imperative. I've been part of kind of a greenfield deployment of infrastructure as code at a few different organizations, and I cannot overstate its benefits.

It really just checks all the boxes. It makes the developers' lives easier. It makes the DevOps engineers' lives easier. Instead of: hey, I need a new service. Okay, let me go click through the menus in Azure for a week, and hopefully I did everything right and in the right order and it works for you. If not, then I'll go spend a day backtracking to find exactly where it went wrong.

Whereas with infrastructure as code, you can just bake all of the good defaults into a single template, and you only provide a couple of parameters that make the configuration unique to that particular service, and you get everything else for free.

And that really has enormous advantages, both in time saved and also from a compliance perspective, because you just have to bake the correct configuration in once, and then you know it's going to get applied correctly everywhere every time.
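
To give a small taste of what "baking good defaults into a single template" can look like, here is a hypothetical Pulumi component written in Python. The function name, arguments, and chosen defaults (HTTPS only, TLS 1.2, FTPS disabled) are illustrative, not a recommendation for any particular environment, and the same idea translates directly to Terraform modules or Bicep templates.

```python
# Hypothetical Pulumi (Python) sketch: a secure-by-default App Service factory.
# Callers pass only a name, a resource group, and an App Service Plan ID;
# the security defaults are defined once, here, and applied everywhere.
import pulumi_azure_native as azure_native


def secure_web_app(name: str, resource_group_name, server_farm_id,
                   extra_app_settings: dict | None = None):
    settings = [
        azure_native.web.NameValuePairArgs(name=key, value=value)
        for key, value in (extra_app_settings or {}).items()
    ]
    return azure_native.web.WebApp(
        name,
        resource_group_name=resource_group_name,
        server_farm_id=server_farm_id,
        https_only=True,                              # good default, baked in once
        site_config=azure_native.web.SiteConfigArgs(
            min_tls_version="1.2",
            ftps_state="Disabled",
            app_settings=settings,
        ),
    )


# Usage: every service gets the hardened baseline "for free".
# app = secure_web_app("orders-api", rg.name, plan.id, {"ENVIRONMENT": "dev"})
```

Each new service only supplies the couple of parameters that make it unique, which is exactly the time-saving and compliance benefit described above.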

Host: Yeah, and one of the things that at least I have noticed is when it comes to, let's say, the Azure portal, it's very flaky, or very cache-heavy. Sometimes it doesn't refresh the data on the page; you have to go home and then come back to a specific page so that the data refreshes. So that's where IaC can help quite a bit, right? You don't have to go through the screens. Rather, with a script and dynamic parameters, you are able to spin up your infrastructure in a much quicker way.

Kyle Fossum: Yeah, absolutely.

Host: So speaking of automation, it's great for spinning up new infrastructure. When it comes to security, let's say you're incorporating some security measures as part of your Terraform script or Pulumi script or something like that. If there is a misconfiguration in the automation, that can introduce vulnerabilities which could be difficult to identify or remediate, because it's part of your automation already. So according to you,

What role does automation play in a DevSecOps world? Let's say if you are a cloud-native heavy infrastructure.

Kyle Fossum: Yeah, so automation is imperative to any competitive company, really. Gone are the days where you can just point and click, and you're really bottlenecked by how quickly and accurately you can point and click.

Say you only have one server: okay, yeah, you can log in and click around. And maybe you have someone watching over your shoulder to make sure you didn't misclick something.

But if you have 100 servers, then you're not going to hire 100 people to go click everything 10 times. So you really need to automate. I don't think anyone's unconvinced about that at this point. But you raise a good point: if you misconfigure your automation, then you're force-multiplying that misconfiguration across your entire estate.

And that's where something like GitOps comes into play and could be hugely advantageous, because you can get peer review on any of your infrastructure changes. It's much harder to review accurately someone doing something in a graphical menu.

One, because it's incredibly boring to watch someone click a menu, right? So it's hard to keep that engagement, right, when you're trying to supervise someone.

And two, you can get multiple peer reviews on any proposed change. And I think something that's also very interesting is, there's this great saying that applies to medicine.

When you hear horses, think... I'm sorry, when you hear hoofbeats, think horses, not zebras. Because if you hear hoofbeats, chances are it's a horse, not a zebra. The inverse is actually true when it comes to computer science.

When you hear hoofbeats, you might want to think zebras, because chances are, across all the different layers of abstraction, all of the really obvious stuff has been ironed out already. So if something is broken or incorrect, then chances are it's something that you haven't seen before, and it warrants a closer look. And I do want to credit a talk Bryan Cantrill gave; it's on YouTube. It's not an original thought, much as I wish it were mine.

Yeah, and one thing that can be said for automation, if you do inadvertently propagate a misconfiguration, you can at least fix it fairly quickly and then propagate the fix just as quickly. So it's advantageous in that respect as well.

Host: Yeah, I echo your statement, right? With automation, if you have an issue and you fix it, the automation gives you the power to fix it across all the environments instantaneously, rather than doing it in a more manual way. One thing that you highlighted, and I want to hear your thoughts on it, is GitOps, right, which can help in finding some of the security issues.

How is that different from, let's say, traditional PR reviews? Let's say as an engineer I open a PR for my Terraform script and you are reviewing it.

Can you not find the security issue or the misconfiguration as part of that process versus the GitOps process that you highlighted?

Kyle Fossum: Yeah, so the two are closely related, I would say. You submit a PR with your Terraform change, and if you're not doing GitOps, maybe the next step, once it's approved, is to pull down the main branch and then execute it on your local machine. But that still invites quite a bit of human error.

How do you make sure that you're targeting the right Azure subscription when you run your Terraform deployment? How do you make sure that... yeah, well, that's the only example that comes to mind right now. Ask me how I know; I've seen that happen.

But when you're doing GitOps, that also allows you to invoke downstream testing as well. So typically in a software environment, you want to have different environments for different purposes, Prod being the topmost, the one that the customers are actually using. And you want to promote your changes through those environments so that you can do validation and regression testing of your changes.

So with GitOps, say you merge to main, and it continuously deploys to the develop environment. And then that might invite some immediate user acceptance testing, where, okay, now the developers who are going to be consuming this infrastructure that was just defined with Terraform are going to go out and use it.

And they might find something fairly quickly in staging, if you're doing regression testing. Hey, we missed a spot here. It's not logging to Datadog for whatever reason. Oh, I forgot to install the Datadog extension on the app service as part of this Terraform module. Okay, I'm going to go fix that.

You know, before that gets to production, you can just nip that in the bud, really shifting that filter left on all of your changes.

Host: Yeah, makes sense. So at least now I understand how they are different from each other, the traditional PR review and the GitOps approach. Thank you for clarifying that for me, because I don't have a lot of experience on the GitOps side. So one question that often comes to mind, for startups, let's say: one of the things you highlighted earlier is that when it comes to DevSecOps and speed versus security, it always depends on the size of the company and the industry that you are in.

So let's say if I'm a startup and I want to start my DevSecOps journey, what advice do you have for me?

Kyle Fossum: Yeah, it's a tough question, right? Because maybe you Google DevSecOps, you're just getting started, you're not quite sure what this newfangled term is. And it can be a little overwhelming at first, right? Because you're taking what were traditionally three separate disciplines, kind of smashing them all together, and asking, maybe in a startup environment, one or even half a person to take responsibility for this.

So I would say, just to start out, it's really important to assess your current state. You need to know where you came from in order to figure out where you want to start going. So maybe you commit all of your secrets to source code so you can just deploy the configuration file. Maybe you want to start there and start using a secret store like Key Vault or something.

You're also going to want to set clear goals and objectives. It's very important to stay aligned with the business on this. Earlier in my career, I spent quite a bit of time implementing security features that there was really just no call for. Especially in a startup environment, it's very important to keep that minimum viable product notion in mind as you're developing these things.

And then, you know, like I was alluding to earlier, you really want to foster that sense of collaboration, open communication, and knowledge sharing, because there are going to be different people responsible for different things. And if everyone is aligned and on the same page about what the priorities are and what you need to do, that's going to be helpful.

And I think from there, anyone in any organization, if they sat down, could probably come up pretty easily with a list of 10 things they could do to improve their security posture. It might be a useful exercise to sit down and kind of tabletop it: okay, what do we want to address? You come up with that wish list and you start triaging.

Maybe you can spend 20% of your time addressing security vulnerabilities; maybe that's only 10%. I think it's important to keep in mind that security is a journey and not a destination. There's always going to be room for improvement.

And the security and threat landscape changes from under our feet; we're in this arms race between the bad guys and the people trying to protect our stuff from the bad guys. So continuous improvement, I think, is one of the most important things you're going to have to bake into the culture, because you're always going to have to be changing and reacting to things as they evolve.

Host: Yeah, I like how you put it, that it's a journey, not a destination. Security should not be thought of as a one-time activity; you have to do it continuously. As long as you are writing code and deploying code, you have to have security in mind, right? So for a startup, let's say, you highlighted one or two things already.

Do you want to add any additional points? Maybe the highest priority for a startup, given they have limited time and budget.

Kyle Fossum: Yeah, I think it's so environment dependent, right? Like depending on how your software is being run, the kind of services that you provide, what industry you're in. If you are a startup fintech firm who's going to be offering like a managed brokerage, like a platform for trading, security is not something that can be an afterthought, right?

It has to be baked right in from the start. It's going to be integral and part of your entire pitch to potential customers is how secure you are.

Whereas if you're, say, a consultancy, then you might have very limited PII. But if you're getting access to customers' environments, you're going to want to be able to prove that you can safeguard those credentials and that you're not exposing your customers to any risk by onboarding you as a contractor or as a consultant.

So I think if I had to come up with just a few, I think secret stores are critically important. Not checking your secrets into source code. And I think cloud misconfiguration is another big one there.

If you have S3, it seems like every other month there's some giant data leak in the news, right? Some huge company that you've heard of: oh yeah, we just had our S3 buckets open to the public and everybody got in. And then, building on that, having some kind of compliance and remediation built in, to maintain some level of introspection over your estate, is also very important.

This kind of speaks to how tightly integrated all of these concepts are. Because if you're not using infrastructure as code and you're not getting peer reviews, then when you're deploying a new S3 bucket, you're relying on someone not checking the box that makes it publicly accessible. In which case,

You might benefit from automated scanning of your S3 buckets to make sure that they are indeed private. So yeah, I think secret stores and configuration management and reporting...
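
For anyone who wants to automate that kind of bucket check, here is a minimal sketch with boto3 that flags buckets without a full public access block. It assumes AWS credentials are already configured, and it is only a starting point; a real compliance tool would also inspect bucket policies and ACLs and feed the findings into remediation.

```python
# Sketch: flag S3 buckets that do not fully block public access.
# Assumes boto3 is installed and AWS credentials/region are configured.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    try:
        config = s3.get_public_access_block(Bucket=name)["PublicAccessBlockConfiguration"]
        fully_blocked = all(config.values())
    except ClientError as err:
        if err.response["Error"]["Code"] == "NoSuchPublicAccessBlockConfiguration":
            fully_blocked = False        # no public access block configured at all
        else:
            raise
    if not fully_blocked:
        print(f"REVIEW: {name} does not fully block public access")
```

Run on a schedule, a check like this is one form of the automated scanning Kyle mentions: it will not stop every mistake, but it shortens the window between a misconfigured bucket appearing and someone noticing.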

Host: Remediation is another one you highlighted. Having some sort of tooling so that you know what misconfigurations are there, so that you can remediate them, should help as well. So that also makes sense. One question that I have: earlier, when you were talking about shared responsibility in the cloud, you highlighted IAM, right? Like how you set up IAM, and that having single sign-on is always better than having users invited to, let's say, the cloud platform. So one of the challenges that we see with user authentication is around multi-factor authentication, right? Because nowadays everybody has started using it; otherwise there are credential theft attacks and things like that.

Do you have any recommendations around how that can be implemented and managed for a medium-sized organization, let's say?

Kyle Fossum: Yeah, definitely. So I think step one, definitely, if you're considering MFA and you don't have it already, you're on the right track. You definitely want MFA. And I think something that people kind of default to because this was popular for a while and still is, is using texts or SMS to receive that second factor.

And I've seen an unfortunate number of social engineering videos on YouTube where you just get a very polite woman on the phone and she can do SIM jacking, an unauthenticated transfer of your phone number to her own phone. And she's off to the races, right? She's now receiving your SMS tokens. So I would probably advocate for something like an authenticator app, preferably one that's baked into a password manager.

I'm really partial to 1Password. I think they are an industry leader within the password management space.

Alternatively, something like a hardware token that you can just plug into your computer would also be great, because no one's going to be able to remotely take your hardware token. Yeah, absolutely. And then you're going to want to...

Host: Yeah, something like this, right? Where you are using YubiKeys. Yeah.

Kyle Fossum: Apply MFA very broadly. I think a lot of people are sensitive to pushback from their users. Like we were talking about earlier: security, oh, it's inconvenient. Yeah, sure. But you really need to protect your assets, right? And employees at all levels have some level of privileged information.

You might be doing customer service, but OK, now you can get into people's user accounts for the software that you're providing. Say you're in finance. OK, I don't touch the cloud platform, but you have access to all of your company's financial data.

Everybody has something that someone wants, regardless of what level you're at within the company. Something you might be able to do to mitigate the inconvenience of it: I know a lot of MFA offerings have adaptive MFA or conditional access.

If you're signing in from the same IP address that you always sign in from, maybe you can delay the 2FA prompt for 30 days. But you sign in from Russia and all of a sudden it's like, whoa, okay, who are you? We're going to need to make sure that you are who you say you are. So that can kind of ease the...

Host: So it's very much similar to how some websites have "remember me", right? So it's like: remember your 2FA validation for some more time, or if you are coming from a known machine, known IP, known region, maybe you are not asked to go through 2FA that frequently. Based on your example, I could figure that out.
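
A toy version of that "known network, recently verified" logic, just to illustrate the decision (the trusted range and the 30-day window are made-up values, and real conditional access is configured in the identity provider rather than hand-rolled like this):

```python
# Toy adaptive-MFA decision: skip the prompt for recent logins from trusted networks.
# Real deployments configure this in the identity provider's conditional access
# policies; this sketch only illustrates the decision logic.
from datetime import datetime, timedelta, timezone
from ipaddress import ip_address, ip_network

TRUSTED_NETWORKS = [ip_network("203.0.113.0/24")]   # example office range (made up)
MFA_GRACE = timedelta(days=30)                      # example "remember me" window


def requires_mfa(source_ip: str, last_mfa_at: datetime | None) -> bool:
    on_trusted_network = any(ip_address(source_ip) in net for net in TRUSTED_NETWORKS)
    recently_verified = (
        last_mfa_at is not None
        and datetime.now(timezone.utc) - last_mfa_at < MFA_GRACE
    )
    # Prompt unless both conditions hold; unfamiliar networks always get a prompt.
    return not (on_trusted_network and recently_verified)


# A sign-in from an unfamiliar address always triggers the second factor.
print(requires_mfa("198.51.100.7", datetime.now(timezone.utc)))   # True
print(requires_mfa("203.0.113.25", datetime.now(timezone.utc)))   # False
```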

Kyle Fossum: Yeah, definitely. Because security needs to be at least convenient enough that people won't try to circumvent it, right? You are still ultimately reliant on compliance from the people who are governed by these security measures.

Because if you make your password requirements such that passwords have to change every three days and need to be 26 characters long, then people are just going to write down their passwords on a sticky note, right?

Host: Yeah, I mean, it has to be a good balance: security, but at the same time usability should not be impacted, right?

Otherwise nobody would follow it, or, as you said, they'd write it in a notepad or something like that so they can easily copy and paste, and then you lose the value of adding those practices.

Yeah, so that's a great way to end the security question section. And speaking of practices,

Let's go to the next section, which is focused around rating security practices.

Rating Security Practices

The way it works is I'll share a security practice and ask you to rate it from one to five. And if you can add context on why you are rating the practice a specific number, that will help our viewers.

Host: So the first practice is DevOps practices are needed to move fast and deploy code to production. Setting up security tests would slow down the DevOps practices. What's your take on that?

Kyle Fossum: Hmm, so I think that's objectively true, right? Security is not necessarily convenient. But let's see, I'm having a hard time putting a number on this. I would say, regardless of things being slowed down, it does need to be a five. You definitely need that in there. But to kind of improve usability, and I have a recent example of this,

So when we deploy a given service, you might want to do dynamic application scanning of the newly deployed service. And you want to do that in a lower environment, right? You want to filter left, before it gets pushed up to the higher environments.

So how do you do this in such a way that does not impede developer productivity unduly? You might be able to do that by parallelizing things. Say it builds and then deploys to dev while it's also doing integration testing, and the scanning happens at the same time. So you're not adding too much time to a given build, but you're still baking those checks in.
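
One way to picture that parallelization outside of any particular CI system: kick off the test suite and the scan at the same time and fail the build if either fails. The two shell scripts below are placeholders, not real tools, and most teams would express this as parallel jobs in their CI configuration instead.

```python
# Sketch: run integration tests and a dynamic scan in parallel, failing the
# build if either step fails. The command names are placeholders.
import subprocess
import sys
from concurrent.futures import ThreadPoolExecutor

STEPS = {
    "integration-tests": ["./run-integration-tests.sh"],  # placeholder script
    "dynamic-scan": ["./run-dast-scan.sh"],               # placeholder script
}


def run(name: str, cmd: list[str]) -> tuple[str, int]:
    try:
        return name, subprocess.run(cmd).returncode
    except FileNotFoundError:
        return name, 127  # placeholder script is not present


with ThreadPoolExecutor(max_workers=len(STEPS)) as pool:
    results = list(pool.map(lambda item: run(*item), STEPS.items()))

failed = [name for name, code in results if code != 0]
if failed:
    print(f"Pipeline failed: {', '.join(failed)}", file=sys.stderr)
    sys.exit(1)
print("All parallel checks passed")
```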

Host: Okay, any number that you would like to give this practice?

Kyle Fossum: I think I'd have to give it a five. You need security every step of the way. Yep.

Host: Okay, so the next practice is: use strong passwords that contain a mix of uppercase and lowercase characters, numbers, and symbols; change your passwords frequently; and avoid reusing passwords across multiple accounts.

Kyle Fossum: I probably have to give that one a two. Yeah, I know that was all the rage back in the day, and a lot of people still remember that, but it's generally not considered a best practice these days. I think NIST has actually issued publications to that effect. So password entropy is a super interesting topic, right? You want to add complexity, okay, but...

It's a combinatorics problem, right? The number of possible characters raised to the power of the length is how many different passwords you have to guess if you're trying to brute force the password. So length is actually the single most important factor, not complexity.

So it's actually much more usable if, instead of a 12-character password that's a mix of uppercase, lowercase, numbers, and special characters, you just have a 20-character passphrase. It can be in plain English. There's an excellent XKCD comic about this: correct horse battery staple.

I know a lot of people like that one. And you have until the heat death of the universe with current computational power to brute force that password. But it's also very easy to remember, so you don't have to write it down, so you're improving your operational security there.

And again, changing passwords arbitrarily: it's still a good password. What do you mean I have to change it? Why? Well, we just said so. There have been no indicators of compromise, it's a well-monitored password, so there's no good reason to change it arbitrarily.
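
To put rough numbers on that combinatorics point, here is the back-of-the-envelope comparison in Python, assuming roughly 95 printable ASCII characters for the "complex" 12-character password and 27 symbols (lowercase letters plus space) for a plain 20-character passphrase. It assumes a character-level brute force; dictionary-aware attacks change the math, but length still dominates.

```python
# Back-of-the-envelope keyspace comparison: length beats forced complexity.
import math

complex_12 = 95 ** 12       # 12 chars drawn from ~95 printable ASCII characters
passphrase_20 = 27 ** 20    # 20 chars drawn from 26 lowercase letters plus space

print(f"12-char complex password : {complex_12:.2e} combinations "
      f"(~{math.log2(complex_12):.0f} bits)")
print(f"20-char plain passphrase : {passphrase_20:.2e} combinations "
      f"(~{math.log2(passphrase_20):.0f} bits)")
# The longer, simpler passphrase has a keyspace about five orders of magnitude larger.
```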

Host: Okay, makes sense. And thanks for highlighting NIST, because a lot of compliance frameworks ask you to rotate your passwords as part of the practice. I'm glad that NIST has revised that recommendation; I hope others follow the same path. And having a passphrase which you can remember finds a good balance between security requirements and the user experience perspective, not just for developers. So yeah, makes sense. The last one that I have is: the same incident never occurs again, so once an incident is resolved there is no need to do a retro analysis.

Kyle Fossum: Yeah, absolutely. I gotta give that one a one. I strongly disagree. So an incident is a thing, right? It's an occurrence, a phenomenon, but it doesn't actually describe why or how that thing occurred.

I keep reverting to car analogies, so I'll throw another one at you. A car crash is an incident. The reason why the car crashed, whether there was ice on the road or the driver was impaired, might be different for any given car accident.

So it's especially important to really digest and sit down with stakeholders and people who understand the problem space to figure out exactly how and why this happened. And then even more importantly, not just understand it, but take those learnings and then change the process and the workstream in such a way that mitigates or improves your ability to respond to these kinds of things in the future.

Host: Makes sense, makes sense. Yeah, so that's a great way to end the episode. Thank you so much, Kyle, for coming to the show and sharing your knowledge. At least I could learn some of the things around, like let's say GitOps versus PR process. So I'm hoping our viewers will learn something new as part of the process as well. So thank you for coming.

Kyle Fossum: Yeah, Purusottam, it was a pleasure. Thanks for having me.

Host: Absolutely. And to our viewers, if you have any questions about security, share those at scaletozero.com and we will get those answered by an expert in the security space. See you in the next episode. Thank you.


FAQs

What is DevOps?

DevOps is an approach that combines cultural beliefs, practices, and tools to enhance an organization's capability to deliver applications and services rapidly. It enables organizations to evolve and enhance their products at a faster pace compared to those using traditional software development and infrastructure management methods.

What role does automation play in a DevSecOps world?

Automation is essential for competitive companies as manual processes are no longer efficient. Automating tasks is crucial, especially when dealing with a large number of servers. Misconfigurations can have severe consequences, which is where GitOps comes in. Peer reviews are easier with GitOps, unlike graphical interfaces. In computer science, uncommon issues are worth investigating. Automation allows quick fixes and easy propagation of corrections.

From a DevSecOps perspective, what are some challenges faced by organizations while working in a cloud native environment?

In a cloud native environment, the DevSecOps team faces several challenges. It is crucial to have a deep understanding of the shared responsibility model, particularly the variations among different cloud services. For example, in Infrastructure as a Service (IaaS), the team is responsible for the operating system configuration of the virtual machine, leaving room for potential misconfigurations. Platform as a Service (PaaS) eliminates the need to worry about the operating system, focusing solely on the workload. However, even in Software as a Service (SaaS), where platform responsibility is relinquished, the team still maintains control over identity management and access for the product. It is essential to comprehend the exact boundaries of responsibility to effectively manage and secure the software and services provided by the cloud.

Best DevSecOps tool for DevSecOps teams

Cloudanix