Master the art of Incident Response, Digital Forensics, and Threat Intelligence with Gerard Johansen

TL;DR

  • Playbooks play an important role across the incident response process, including evidence collection, triage, and retrospection.
  • There are four key tactics to understand in incident response: Initial Access, Execution, Lateral Movement, and Command & Control.
  • Training your engineers on architecture (cloud or hybrid setups), data flows, and evidence collection and preservation methods is essential for effective incident response.

Transcript

Host: Hi, everyone. Thanks for tuning into another episode of Scale to Zero. I'm Purusottam, co-founder and CTO of Cloudanix. Today's topics are incident response, digital forensics, and threat intelligence. And to discuss them, we have Gerard Johansen with us. Gerard is a cybersecurity professional with over a decade of experience in these areas.

He has held a variety of roles and is currently focused on incident response at an MDR provider. Gerard, it's wonderful to have you on the show.

For our viewers who may not know you, do you want to briefly share about your journey?

Gerard Johansen: Thank you very much for having me; I appreciate the opportunity. I got my start in law enforcement, doing digital forensics for two law enforcement agencies in the United States. After a while, I decided to transition into the private sector, taking those digital forensic and investigative skills out of my role in law enforcement. Over the last decade I've worked in a number of enterprises, large and small, and in consulting, focused on those key areas: digital forensics, incident response, and threat intelligence as it emerged. And I'm seeing more and more of it now, as I think the first real generation of law enforcement that...

Host: That's an interesting journey like from law enforcement to private sector.

Gerard Johansen: ...that started in digital forensics is beginning to retire out and make that transition. So what I would say is, if any of the viewers are in that position, it's not as hard as people make it out to be. It's actually a really fun transition. You get to bring a lot of those skill sets into this world, and they're very valuable.

Host: And I can imagine that you can draw parallels between real world and also, let's say the digital world, right? When it comes to incident response, forensics and stuff like that.

Gerard Johansen: Yes. What I have found is that the major advantage I had was an investigative mindset. When you look at incident response and digital forensics, it's an investigation. We're just not touching anything in a physical sense, but a lot of the same principles apply.

Host: Right, right. So I'm interested to learn some of this as part of the podcast today. So let's get started. Before we get into the security questions, I generally ask our guests:

what does a day in your life look like? What does it look like for you today?

Gerard Johansen: So I'm focused on what we call incident response readiness, or incident readiness. It's essentially a component of the service we provide to make organizations better from an incident response perspective. So day to day, I'm conducting research, maybe 90 minutes to two hours, going out to see what the new threats, tactics, and techniques are and really researching those.

And then building out essentially training and enablement programs for our customers. And that's really running them through various scenarios, whether that's looking at how they can detect some of these threats all the way to, hey, if the worst happens, what are the things you need to do to make sure that you're able to adequately respond to it? So it's a lot of a combination of understanding the threat landscape.

drawing on different sources, both internal and external, and then actually applying that for our customers to give them a level of expertise they may not have in-house.

Host: Right, makes a lot of sense. So research, learn yourself, share it with others and help customers as well. So that sounds lovely. So let's get started with the security questions, right? We spoke about incident response. So

What are some of the common challenges or roadblocks that you have seen when it comes to incident response?

Gerard Johansen: I think it's really two things. One is getting over the initial shock of "we have an incident." That's one of the key things to understand: it's a very stressful environment, and whether the incident is an accident or threat actor behavior, when you lose production data and incur downtime, the stress level increases.

So one of the key things is to be able to navigate that. The other is drawing on expertise in specific areas, say, where do I pull data from to gain an understanding of what's actually happening. When you combine that stress with a lack of specific expertise, it creates a multiplying factor that really hampers an organization's ability to cope.

Host: Okay, that makes sense. So in that case, let's say you have both of those challenges at the same time: you have an incident, you're in shock, and you are also trying to find out where the evidence is and all of that.

What techniques do you follow?

Let's say to detect or identify incidents happening in your environments.

Gerard Johansen: So one of the things I think is very important is the use of situational standard operating procedures, call them playbooks, runbooks, however you want to put it. One of the things that will help with the stress is essentially preloading a plan.

So one of the things that I am a major proponent of is getting to containment as fast as possible. Think about a stressful environment like:

hey, we've got an attack going on, but we have an idea of where it is in the network. So we isolate that section off, understanding that we're going to lose that section of the network for a period of time. What that does is slow the lateral movement, but it also gives us some breathing room. Coping with the stress means getting that breathing room, and that may well mean working very quickly to contain the threat actor.

So that is really one of the key ways to deal with the stress challenge is let's get to a position where we can actually take a breath and craft out a better plan.

One of the other key things: there are a lot of resources out there, and dealing with these challenges and roadblocks comes down to preparation. At the end of the day, you have to be prepared in some way, shape, or form.
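The "preloaded plan" idea above can be sketched as a minimal, machine-readable playbook: an ordered list of containment steps that a responder (or automation) walks through, stopping at the first failure so everyone knows exactly where the plan stalled. The step names and actions below are illustrative, not a real playbook.

```python
# Minimal containment playbook: ordered steps executed in sequence.
# Step names and actions are illustrative placeholders.

CONTAINMENT_PLAYBOOK = [
    {"step": "identify_affected_segment", "action": "Map the attack to a network segment"},
    {"step": "isolate_segment", "action": "Cut routing/VLAN access to that segment"},
    {"step": "notify_stakeholders", "action": "Tell the business which services are offline"},
    {"step": "preserve_evidence", "action": "Snapshot affected hosts before remediation"},
]

def run_playbook(playbook, executor):
    """Run each step through `executor`; stop at the first failure so
    responders always know where the plan stalled. Returns completed steps."""
    completed = []
    for entry in playbook:
        if not executor(entry):
            break
        completed.append(entry["step"])
    return completed

# Demo executor that "succeeds" at every step.
done = run_playbook(CONTAINMENT_PLAYBOOK, lambda entry: True)
print(done)
```

The value is less in the code than in deciding the order of steps before the stress hits.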

Host: So, two things you highlighted, right? One is the preparation, and the other is the playbooks: when an incident happens, you know that if it's a particular type of incident, these are the steps to either remediate it or understand its gravity. So I would take that to the cloud a little bit. Nowadays, most companies are moving to the cloud or are already in the cloud.

So now the question is in case of cloud,

How does cloud native security tools help or affect the incident response capabilities?

Gerard Johansen: That's a really good question. And I will say that, from an incident response perspective, as a service provider and even internally in an enterprise, this was something that was really forced on us by the pandemic. Look at what happened: we had a rapid expansion of people working remotely, and if you worked in a metropolitan area, you may have had people 90 minutes to two hours away in terms of distance, even just in a local area.

A lot of it started out as: how do we just push evidence from a remote system up into something as simple as Dropbox? But what ended up happening very quickly is we realized that a lot of the work could be done remotely.

So you started to see more large-scale commercial and even open source tooling that can sit in AWS, Azure, even DigitalOcean, from which we can push out agents and do a lot of this work remotely. What cloud adoption has done is, one, made us faster, as opposed to having to go find that laptop or desktop.

In, say, a campus environment, we literally have the IP address; we can dial right in within 90 seconds. It has also afforded us the ability to do something as simple as isolating endpoints using, say, an EDR tool or similar platforms. So cloud-based security tools have made us a lot faster and a lot more efficient, although we had to go through an adoption phase that might not have been ideal for us.

Host: Right, yeah. So you highlighted two options: you can use a commercial tool or platform, or you can use open source as well. So

when you are deciding which platform or tool to use, what criteria do you generally look for?

Gerard Johansen: There are really two things I would highlight, whether you're going commercial or open source. The first is that isolate or auto-isolate feature I mentioned. You see this a lot with endpoint detection and response tools, the commercial tools that have taken the place of legacy or even next-gen antivirus. What it does is give us the ability to pinpoint our isolation.

So let's take a scenario where we get a detection of Cobalt Strike, or really any type of command and control framework. The ability for a detection analyst or a response analyst to simply click a button and isolate that endpoint from the rest of the network, that's a huge game changer. It allows us to pinpoint and isolate systems that may be the precursor of a bigger attack.

The other thing I look for is evidence collection: the trace evidence you examine to identify four key tactics. What the initial access was, whether it came through a drive-by compromise or email; what the execution looks like; what kind of lateral movement tools they're leveraging; and command and control.

And when you look at pulling something as simple as Windows execution artifacts, whether they're in Prefetch, Amcache, those kinds of key evidence points, they're very small, less than 1% of the disk, but they can give us the vast majority of the insight we need to understand an attack. So when I'm looking at a tool set, that's what I'm looking for: can it extract those, and can I analyze them on a workstation and do this remotely?
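The four tactics Gerard names can be paired with the artifact sources that typically evidence them; that mapping is exactly the kind of table a team would preload into a playbook. A sketch in Python, where the artifact names are common Windows examples, not an exhaustive or authoritative list:

```python
# Map the four key tactics to Windows artifacts that commonly evidence
# them. Entries are illustrative examples, not an exhaustive catalog.

TACTIC_ARTIFACTS = {
    "initial_access": ["browser history", "email attachments", "download folder files"],
    "execution": ["Prefetch (C:\\Windows\\Prefetch)", "Amcache.hve", "Shimcache"],
    "lateral_movement": ["Security log 4624 type-3 logons", "RDP logs", "SMB session logs"],
    "command_and_control": ["DNS cache", "active network connections", "proxy logs"],
}

def artifacts_for(tactic: str) -> list:
    """Return the artifact sources to collect for a given tactic name."""
    return TACTIC_ARTIFACTS.get(tactic.lower().replace(" ", "_"), [])

print(artifacts_for("Execution"))
```

A collection tool that can pull just these categories covers the "less than 1% of the disk" triage Gerard describes.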

Host: Mm-hmm. Okay, lovely. Those are very detailed criteria.

So, let's say I'm a startup, I'm running a startup and we do not have the budget to buy any expensive tool and especially in the current economic situation, right. And I'm looking for open source alternatives.

Do you have any recommendation from an open source perspective so that I can get started and then when I'm ready, I can buy like a commercial tool as well.

Gerard Johansen: Yeah. I think your best option is Velociraptor. It's an open source tool put out by Rapid7, and it's also community-sourced, so there's a lot of functionality that people write, which gives it the ability to expand.

And from a tool-set perspective, it covers those two criteria. It doesn't auto-isolate, but it does give you the ability to isolate endpoints, and it has extensive capabilities in terms of evidence extraction. So if you're looking for an open source tool, I'd pick Velociraptor, and it continues to receive updates. I could probably take anybody here, run them through it, and get them started by putting it in, say, an AWS EC2 instance. You can be up and running in 15 to 20 minutes with that tool.


Host: Oh, nice. Wow, that is very powerful, right? You can set it up within 15 minutes, and it also captures the evidence and all of that. I will definitely tag it when we publish the video so that our audience can benefit from the open source framework as well. Thank you for sharing that. One related question that I have is: there have been many advancements in

Gerard Johansen: Yeah, absolutely. Highly recommend it.

Host: Digital forensic techniques or incident response techniques. So security teams need to be familiar with them so that they can preserve the digital evidence. At the end of the day, that evidence is key for triaging incidents and stuff like that. So according to your experience,

What are some of the critical steps that need to be followed so that there is no data loss when it comes to bringing that to the forensic experts?

Gerard Johansen: So you bring up a really interesting point, and I would say that if you're focused initially on data loss, that's a really good starting position. I'll bring it to a real-world scenario: organizations will often leverage external incident response and digital forensics expertise through third-party agreements, whether that's a retainer or a service provider that does this for them.

Unfortunately, if you look at the chain of events, there's often a time lag of anywhere from 90 minutes to several hours before those teams can even get engaged. So what I've recommended to clients in the past is to have some way of workflowing, or playbooking, just the evidence acquisition. For example, let's take it out of the cloud and go straight to on-prem: I've got an infected server in an ESXi environment, a virtualized environment.

Something as simple as taking a snapshot along with the memory file and offloading it, that's all you would need to do. Essentially, we've got the file system and the running memory of a system that's been infected.

So something as simple as doing that, offloading it onto even a USB storage disk for us to examine later. The other thing I would make widely available is triage scripts: simple batch scripts that go out and grab all of that trace evidence. The major point, I would say, is to have workflows for that.

And these are tools that can be almost automated. I can hand them to a systems admin and say: run this script, then take that zip file off the system, put it on a USB, and store it for us until we can get onsite or get it uploaded to something we can start working with. There may be an intimidation factor where a systems admin with two or three years of experience says, "I don't know anything about digital forensics."

I would say: I'll teach you the first stage, which is evidence preservation. Here are the tools, and within several hours they understand the workflow; that's all they have to do. And the closer to the actual incident indicators you do that,

the better off that evidence is. That's really one of the things that, as a practitioner, I've wanted more and more of my clients to take part in and understand.
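The triage script Gerard describes, something a sysadmin can run and then offload the resulting zip, can be sketched in a few lines of Python. The artifact paths here are stand-ins; a real script would target Prefetch files, Amcache, event logs, and so on:

```python
import os
import tempfile
import zipfile
from datetime import datetime, timezone

def collect_triage(artifact_paths, out_dir):
    """Copy each existing artifact into a timestamped zip so it can be
    offloaded (e.g. onto a USB disk) and preserved for later analysis.
    Missing paths are skipped rather than failing the whole collection."""
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
    zip_path = os.path.join(out_dir, f"triage_{stamp}.zip")
    collected, missing = [], []
    with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for path in artifact_paths:
            if os.path.isfile(path):
                zf.write(path, arcname=os.path.basename(path))
                collected.append(path)
            else:
                missing.append(path)
    return zip_path, collected, missing

# Demo with stand-in files instead of real Windows artifacts.
tmp = tempfile.mkdtemp()
sample = os.path.join(tmp, "Amcache.hve")
with open(sample, "w") as f:
    f.write("placeholder artifact data")
zip_path, collected, missing = collect_triage(
    [sample, os.path.join(tmp, "absent.pf")], tmp
)
print(zip_path, len(collected), len(missing))
```

The handoff instruction then really is "run this, take the zip off the system, store it."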

Host: I love how you connected this to your earlier answer. It's all about your preparedness and the playbooks you have, because with playbooks, even someone inexperienced can follow them to preserve the evidence, so that whenever forensic experts are available, they can look at it and provide their analysis. One question that comes to my mind is: let's say I'm inexperienced.

How do I know when I need to start preserving?

Because logs are also expensive; you have to store them somewhere.

How do we know when attack started or at what point the preserving of the logs should start?

Gerard Johansen: That's a really good question. What I would say is, this is a learning process. Let's take it out of the service provider context and say I'm a security operations manager and you brought that question to me. I would say: if there is any indication, even if it's just the EDR kicking back and saying, hey, I think this is an Emotet or QBot detection,

you go ahead and pull that evidence in and store it. If we realize eight hours later that we don't need it, we can always delete it. So I lean, even in enterprise environments I've been in, toward saying: I would rather you be overly aggressive, grab that evidence now, and then realize we don't need it, versus realizing eight hours later that we do. It becomes a bit of a learning factor. You may find some people who are overly aggressive,

pulling evidence in for everything, and some who are under-aggressive. But I would definitely make it part of our security culture to be more aggressive with that. Because, to get very technical, if you look at a standard triage package, what I would use in assessing a system that has potentially been infected, especially on the Windows side, that triage package is only about 500 megs.

You're talking less than half a gig. So we're talking maybe some disk space, some bandwidth to pull it, maybe some time. I would rather have that 500 megs get deleted and wiped later versus not having it at all.

Host: Right, not having it at all. Yeah. So I like that approach: if it doesn't turn out to be an incident, you can just clean it up, but it's better to capture it the first time. So I want to increase the scale of this a little further. We are in the cloud, and nowadays a lot of practitioners use multiple clouds as well,

due to various reasons. It could be regulation. It could be different services that they need. It could be expertise. Or it could be they acquired a company which uses a different cloud than what you use.

So if you are using multiple cloud providers, working with multiple cloud providers, or you have a hybrid sort of setup, how does incident management change in that case?

Is it the same, or do you have different types of playbooks in that case?

Gerard Johansen: Yeah, so that's a really good question, and I think it's a topic that needs some discussion. Again, you're going to detect a theme, I think, which is preparation. The major challenge that I see, at a macro level with any cloud provider, is that we have an additional layer of activity that we have the ability to log.

Whether that is on by default varies: look at AWS, for example, where there's a lot of logging that's not on by default, and the same goes for Microsoft Azure. So your preparation steps are to, one, understand the architecture and, two, understand where the data is. For example, you have VPC Flow Logs in AWS.

That's roughly the NetFlow we would have from switching and routing on an internal enterprise network. So what I would recommend is, yes, there are definitely going to need to be playbooks and workflows that clearly define, say, the AWS environment we have versus Azure, where Azure networking may give us the same information. We're doing a kind of one-for-one mapping.

The key stumbling block I see with incident response in the cloud is not necessarily the data; the data we can pull, there are techniques for that. It's understanding the environment and where the data is going to reside. It goes back to an architecture discussion, and the only way to do that is to take that preparation step and clearly define where those data sources are in your own environment.
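The "one-for-one mapping" between providers can be captured during preparation as a simple lookup table that responders consult mid-incident. The pairings below are common equivalences offered as a sketch, not official service documentation, and the "unmapped" fallback is exactly the gap preparation is meant to close:

```python
# Per-provider mapping of evidence categories to log sources. Pairings
# are common equivalences, recorded as an example of the kind of table
# a team would build during preparation.

CLOUD_DATA_SOURCES = {
    "network_flow": {"aws": "VPC Flow Logs", "azure": "NSG Flow Logs"},
    "api_activity": {"aws": "CloudTrail", "azure": "Azure Activity Log"},
    "identity": {"aws": "CloudTrail sign-in events", "azure": "Entra ID sign-in logs"},
}

def source_for(category: str, provider: str) -> str:
    """Where to look for a given evidence category on a given provider."""
    return CLOUD_DATA_SOURCES.get(category, {}).get(
        provider, "unmapped: add during preparation"
    )

print(source_for("network_flow", "aws"))
print(source_for("network_flow", "gcp"))
```

Keeping this table in a playbook means nobody is researching "where do Azure flow logs live?" during an active incident.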

Host: Makes a lot of sense. And like the architecture thing that you highlighted, that doesn't just apply to incident response. It applies to even developers who are developing applications. They need to understand the architecture when it is multi-cloud or hybrid cloud setup. Other than, let's say, understanding the architecture, having the playbooks, are there any other complexities which the team should be aware of?

Gerard Johansen: Yes, there are complexities. There are also, I would say, methods for evidence collection and analysis that have changed. For example, in an environment like AWS, what we'll see is firing off a snapshot and actually analyzing that snapshot within the AWS platform. So you see a lot of that type of complexity.

But it also cuts down the amount of time we need. If I had to do that in an on-prem environment, I would have to download that snapshot and then mount it as a disk image in something like Autopsy or FTK. A lot of the time, we can instead push our own tooling into that AWS infrastructure and look at evidence without needing to download it and take the time to do that.

So yes, the complexities are very often there, but they're also an opportunity to build in more efficiency and quicker standard operating procedures and playbooks that allow us to investigate.

What I would say, and this is from my own experience, is that the difficulty is understanding the nuances of each platform, whether it's AWS, Azure, or Google: what they're doing, how they're architecting, and how these things operate from three different perspectives.


That, I think, is going to be the key challenge: understanding that. And it takes a lot of sweat equity in terms of research and looking at those opportunities.

Host: Mm-hmm.

Yeah, and this again goes back to what you said about how your day looks: a lot of time spent on research. So it makes a lot of sense. Depending on the environment landscape you have, whether it's single cloud, multi-cloud, or hybrid, you have to understand the architecture and the nuances and have playbooks ready, so that when there is an incident, your engineers know exactly what to follow to collect the evidence, how to report it, and even how to triage it. So I want to take this a little further. You have been in this space, incident response and forensics, for some time.

How has the landscape evolved in recent years? What's your reading of it?

Gerard Johansen: From an incident response standpoint, I think one of the things we moved to very quickly, and I can't really put my finger on why, though I think it was just the prevalence of ransomware: when I first started getting deep into ransomware, organizations wanted to understand the mechanism of attack; root cause analysis is a good way to put it. We had a lot of time.

And we would see outages of multiple days while we tried to figure out what had happened. What I've seen is a change in the overall approach to incident response: we need to get maybe four data points, and this goes back to our earlier conversation, that initial access, execution, lateral movement, and command and control.

Get that contained and get back up and running. So what you're seeing now is organizations saying: I don't necessarily care how this initial access variant, like Emotet or QBot, functions. I just need to know the hash value so I can block it in the future, and I need to get back up and running because I'm losing revenue or business operations. Or, in the case of healthcare, my nurses are doing all of the notes by hand.

Nurses and doctors are doing manual note-taking; I have to get our patient record management system back up and running. So a lot of it is more focus on getting good information and getting things restored versus taking time to really understand the root cause. And I think that's a natural outgrowth of organizations realizing that with good backups and good restoration processes, they may not be down for days; it may be measured in hours.
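The "I just need the hash so I can block it" workflow boils down to hashing a sample and checking future files against a blocklist. A minimal sketch, where real deployments would push these hashes into EDR or AV policy rather than an in-memory set:

```python
import hashlib

def sha256_of(data: bytes) -> str:
    """SHA-256 hex digest of a file's bytes, the usual blocklist key."""
    return hashlib.sha256(data).hexdigest()

class HashBlocklist:
    """Tiny in-memory blocklist for illustration only."""

    def __init__(self):
        self._blocked = set()

    def block(self, digest: str):
        self._blocked.add(digest.lower())

    def is_blocked(self, data: bytes) -> bool:
        return sha256_of(data) in self._blocked

bl = HashBlocklist()
malware_sample = b"pretend this is a malicious dropper"
bl.block(sha256_of(malware_sample))
print(bl.is_blocked(malware_sample))        # the same payload is caught
print(bl.is_blocked(b"a benign document"))  # anything else passes
```

Note the known limitation: a one-byte change to the sample produces a new hash, which is why hash blocking is a fast stopgap rather than a root-cause fix.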

Host: Right, right. That's a good way to explain the landscape shift. One of the recent shifts affecting everyone is AI. There is a new tool called ChatGPT, and everyone has been using it and has views on it. Google also recently launched Bard to compete with it. So let's say I use ChatGPT:

Can I generate, let's say the playbooks that we talked about, the digital forensic analysis approaches, all of that using ChatGPT, and would that solve all of my pain points?

Gerard Johansen: I would say it's one of those tools, and I don't think we should wholesale abandon it or wholesale adopt it. What I would say is it's going to give you a really good starting point.

The problem with asking ChatGPT "give me an execution analysis playbook" is that it'll give you a generic "do these steps in sequence." It may be 100% on, but it's something you pull out and then modify. It's very much like getting a template from a government CERT; you'll see that Japan, for example, the Japanese computer emergency response team, provides templates. Very similar to that.

The same thing with "hey, I need a Sigma rule or a YARA rule for this malware sample": there are tools that'll do that, but at the end of the day you have to make minor modifications. The pain point it will definitely take care of is the time and energy necessary to write these things out and put them in, say, a Word document or a PDF. With security operations teams, one of the trailing tasks in building out security operations is documenting plans and playbooks, and AI is probably a really good tool for starting that; I've played with it myself. But again, it needs some modifications, and what I would caution anybody going down that road is:

don't rely on it as soon as the ChatGPT engine puts it out there or creates that product. Make sure it fits your environment and make modifications as necessary. But I would not say don't use it at all. No, if the tool is there and it'll save us time, let's use it.
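Treating AI output as a template to adapt can even be mechanized: keep the generic playbook with placeholders and substitute your environment's specifics. A sketch using the Python standard library, where the placeholder names are made up for illustration:

```python
from string import Template

# A generic playbook skeleton, the sort of thing a model or a CERT
# template gives you. Placeholder names are illustrative.
GENERIC_PLAYBOOK = Template(
    "1. Triage the alert in $siem_tool.\n"
    "2. Isolate the endpoint via $edr_tool.\n"
    "3. Collect triage evidence to $evidence_store.\n"
    "4. Notify $escalation_contact."
)

# The modification step: fit the generic template to YOUR environment.
site_specific = GENERIC_PLAYBOOK.substitute(
    siem_tool="our SIEM",
    edr_tool="our EDR console",
    evidence_store="the evidence file share",
    escalation_contact="the on-call IR lead",
)
print(site_specific)
```

The design point is the one Gerard makes: the generated skeleton is the cheap part; the substitution of your environment's specifics is the part only you can do.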

Host: Mm-hmm. So a follow-up question to that. One of the things you highlighted is that we get a template, which we need to extend; these are public language models, so the output is very generic. Nowadays there are many solutions where you can build your own language model that is more context-driven: you can feed your own data into it, so the output is much more relevant. So do you think that as the models improve, they will eventually take away security professionals' jobs, because they'll be able to generate the incident response plans, the playbooks, and all of that?

Gerard Johansen: Yeah, I don't think so. Maybe I'm just a little Pollyannaish, or hopefully optimistic, but I don't think it's going to replace us. What it's going to do, initially and maybe over the next couple of years, is a lot of the low-level work that takes up a lot of time.

For example, every minute or hour your security operations team spends writing incident response plans, they're not in the environment hunting down potentially existing threats or doing research. Let's take it out of that role and look at research: having these types of models that can say, hey, this is our environment, what are the top 10 things I should be worried about this week? That's where we're getting very focused. So I think in the long run,

yeah, we're going to see a lot more of the day-to-day repetitive tasks, as in any occupation, being placed into that model. But honestly, I don't think there's any shortage of work, and it'll allow security operations personnel to focus on the things AI just can't do for us: looking at something that is suspect, or taking it upon themselves to hunt down specific threats or specific indicators in the environment. So I look at it as an enabler. I'm cautiously optimistic it'll do that for us. At the end of the day, it's going to take care of a lot of the grunt work the security operations team has and allow them to focus on more important tasks.

Host: I love the word you used: enabler. It will enable someone who is starting their career, someone who doesn't have a lot of experience, or even folks with experience, to get started on a topic and then take it from there. So yeah, that makes a lot of sense. And that's a great way to end the security question section as well. We spoke about incident response, landscape changes, and even ChatGPT.

Let's go to the next section, which is focused on rating security practices. The way it works is: I'll share a practice, and you need to rate it, adding some context on why you're rating it a one or a five, with one being the worst and five being the best. So let me start with the first one: security processes are a roadblock to business growth.

Gerard Johansen: Okay.

Host: Grant users unrestricted access to systems and applications so that business growth is not affected at all. What's your take on that?

Gerard Johansen: I think that is probably one of the worst security practices you can have. One of the aspects we've really seen in the last couple of years is, and I would say this plainly, I've never worked an incident where credentials were not impacted.

So even just looking at unrestricted access to systems and applications: as they like to say, identity is the new endpoint. The thing to consider is that organizations should have a least-privilege model, and if they need help, then identity and access management needs to get budgeted and move to the top of the budget.

Everything from initial access to lateral movement, and even the cloud-based attacks you see, a lot of them are based around identity. Something as simple as AWS keys in a GitHub repo commit. Those are all things that burn us, and the tools persist: Mimikatz is almost a decade old, and we're still dealing with Mimikatz as a tool that we see.

Even something as simple as running BloodHound in your environment to understand attack paths will disabuse you of this notion very quickly, when you see how fast somebody from the outside can get to domain admin or, heaven forbid, global admin and really wreak havoc.
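BloodHound's core insight, that access relationships form a graph with walkable paths to high-value targets, can be demonstrated with a few lines of breadth-first search. The edges below are an entirely made-up toy environment, not real BloodHound data:

```python
from collections import deque

# Toy access graph: node -> nodes reachable via some privilege
# (session, group membership, ACL). Invented for illustration.
ACCESS_EDGES = {
    "phished_user": ["workstation_1"],
    "workstation_1": ["helpdesk_group"],
    "helpdesk_group": ["server_admin"],
    "server_admin": ["domain_admin"],
    "lateral_user": ["workstation_2"],
}

def attack_path(graph, start, target):
    """Shortest chain of hops from a foothold to a target, or None.
    Standard BFS over the access graph."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == target:
            return path
        for nxt in graph.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

print(attack_path(ACCESS_EDGES, "phished_user", "domain_admin"))
```

Least privilege works by deleting edges from this graph: remove the helpdesk group's path to server admin and the phished user no longer reaches domain admin at all.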

Host: Mm-hmm.

Yeah, that makes a lot of sense, because IAM is the entry point in a way, right? That is how your users or your applications get access to resources. So capping the privileges, following least-privilege best practices, makes your life easy. The second one: conduct periodic security audits to identify vulnerabilities, threats, and weaknesses in your systems and applications.

Gerard Johansen: I think that's one of the best things you can do. Audits will give you a baseline, but really, whether you call it a red team, a penetration test, threat emulation, or something as simple as running Atomic Red Team, these are all good practices for understanding what threats can do.

Can you see them? Can you respond to them? And can they leverage some sort of vulnerability? So that is a really good practice to undertake. I would say that's right up there at the top, number five.

Host: Makes sense. The last one is developing and regularly testing an incident response plan so that you can quickly detect, respond to, and recover from security incidents.

Gerard Johansen: So this is near and dear to me, because it's one of the things I've been working on; it's been my passion over the last six months, really looking at training on incident response processes and plans. I think this is one of the most important things to do. To organizations that say "we can't test our incident response plan, it's a four-hour exercise, we can only do it once a year," I'll leave you with this: you don't have to test the whole thing.

We talked about plans, playbooks, standard operating procedures. I've done some work on this, and there's material out there, that takes literally 20 to 30 minutes a week. You're building muscle memory, if you want to call it that, so that you're able to actually go out and execute these plans and playbooks. Training and testing incident response matters because it's a stressful environment.

And the thing about stress is that it really does impact our decision-making and our ability to do a lot of things. The more training, testing, and exercises we build in, the better trained we are, and the better we'll be able to execute on those plans and playbooks.

Host: Mm-hmm.

So I love that we started with playbooks and we ended with playbooks as well. That's a great way to end the episode. Thank you so much, Gerard, for joining and sharing your learnings with us.

Gerard Johansen: Thank you. Thank you very much, I appreciate it.

Host: Absolutely. And to our viewers, thank you for watching. Hope you have learned something new. If you have questions around security, share those with us at scaletozero.com. We'll get those answered by an expert in the security space. See you in the next episode. Thank you so much.