The Future CISO: AI, Quantum & Becoming a Multidisciplinary Strategist with Patricia Titus
TL;DR
- With the increasing adoption of AI, security organizations will need deep specialists, enabling security experts to not only leverage AI but also defend against AI-driven threats.
- Context is king. With AI, it's essential for security leaders to focus on context-driven, behavioral aspects of security rather than purely policy-driven ones.
- From a governance perspective, NIST and ISO both publish AI governance frameworks. Leverage those as starting points.
Transcript
Host: Hi everyone, this is Purushottam, and thanks for tuning into the ScaleToZero podcast. Today's episode is with Patricia Titus. She's the field CISO at Abnormal AI. With over 25 years of security experience across multiple security disciplines, she's credited with designing, implementing, and transforming information security programs and organizations. So Patricia, or Patti, thank you so much for taking the time to join me on the podcast today.
Patti: I really appreciate the offer to come speak, and I'm glad we could coordinate time. Schedules are crazy. So, very happy to be here.
Host: Same here. Before we kick off, do you want to add anything about your journey? I don't know if I did it justice by introducing it with two short lines.
Patti: You know, I've been a CISO for almost 25 years. And prior to that, I was in the military and the public sector with the U.S. government. I lived overseas for about 13 years in various countries, working not just for the U.S. government but also for other governments. So I have a little claim to fame: I have been on all seven continents, either lived or visited. I ticked off the last one by going to Antarctica. Quite an experience.
Host: Antarctica as well. Interesting. So you have been in the military, you have been to all seven continents, which sounds like an amazing career. And now you are at Abnormal as a field CISO. What does a day in your life look like?
We ask this question to all of our guests, and based on their career, based on their role, often the answer is very unique. So what does a day in your life look like?
Patti: So as a field CISO, it's a little bit different than being an operational CISO and owning operational responsibility in the company. We have a wonderful person, Houston Hopkins, who's taking that charge for us. I report to the CIO, Mike Britton. My job is really to bring my 25 years of being a CISO to bear to help advise our customers and our prospects, not just on our core products,
but across the whole gamut of things they might need guidance or advice on. So my job as a thought leader is to bring those years of doing this job to help our customers and our prospects think through problems and brainstorm. It's really an advisory kind of position where I'm basically a free resource for those individuals who reach out and ask for it. I also speak at a lot of conferences. I love being on podcasts such as yours to share ideas and hear other perspectives as well.
Host: So you mentioned Houston Hopkins. Last week I was at fwd:cloudsec in Denver, and I think I ran into him; we had a short chat. He's an amazing guy. Love to hear that you're working with him as well.
Patti: Yeah! He's super smart, and he's very new to the company. I'm pretty new to the company. So we're just going to be newbies together, I think.
Host: Sounds like a fun team. So for today, we'll be focusing on AI, of course, plus a little bit on quantum and the future CISO. And with all three of those in mind, one of the things you have been talking about for some time is how CISOs should become multidisciplinary strategists. We'll touch on that as well. So let's kick it off with AI.
Speaking of AI, every business team is learning to leverage it. I was at an AWS partner event in Seattle a couple of weeks ago, and one of the things the presenters were highlighting, which I agree with, is that 2023 was all about learning about AI tools, like GenAI tools. In 2024, folks started thinking about POCs. And 2025 is when there is a lot of emphasis on rolling those POCs to production. And when that comes to mind, one of the major concerns is how to do it in a secure manner. So I want to ask you this: how do you envision internal security organizations and departments shifting to accommodate the deep expertise needed for AI security and its related threats? How do you see that in today's world?
Patti: Yeah, so I think to effectively govern and defend during this new AI era, we are going to have to evolve from a fairly flat structure of generalists into more of a modular ecosystem with deep specialists, integrated strategists, and really trusted advisors who are embedded across the business, not just sitting every day in an operations center with their peers. I think we really need to focus on dedicated AI security functions that allow those individuals to go deeper into those disciplines. And these roles don't just live in the security operations center; you're going to need these jobs to interact with your data science teams, legal, and engineering.
We've been talking about privacy by design and security by design. Now we're going to have to talk about AI security by design. So we're going to change it a bit. I think we're going to probably see some title changes. Maybe an AI security architect or a machine learning red team lead, or an AI policy analyst.
So I think we're going to see this modernization of our org chart, actually. And I think that's going to actually lead to a lot of career development for our internal teams. Yeah.
Host: I think this happened with cloud migration as well, right? I know it's not exactly the same. Earlier, when things were on-prem, we did not have cloud architects. Now we have cloud architects and security architects who focus exclusively on cloud. So what I'm hearing from you is that we'll have some of these specialized roles, maybe in the next three months, six months, one year, something like that. Since we are at a very early stage of AI adoption, in security as well, how do you see these specialized roles working together? What's your thought on that?
Patti: I think what we're going to end up doing is creating cross-disciplinary teams. Where your risk team focused on risk management sort of in isolation, although they had to interact with other teams for risk treatment and remediation, I think what we're going to start to see is this crossover between analytics and analysis happening.
AI is going to allow greater visibility into what other teams are really doing. So I think we're going to create cross-governance bodies anyway. We've got to have AI risk oversight across the enterprise and be looking at AI ethics. I think you're going to see the CISO organization become a focal point for legal, your CTO function, maybe your chief data and analytics officer or your chief transformation officer; you're going to see security play a co-chair type of role. But I also think there's going to be a lot of crossover and great visibility created by AI, where we can now start to see that connection between operations and GRC, which have kind of worked independently.
Now, I think AI is going to help fuse more of that together.
Host: So speaking of GRC, I know this is not something we had planned to talk about. Do you think GRC will be the first area to be impacted by AI, or do you see other areas having a far greater impact to begin with? Any thoughts on that?
Patti: Well, we've been using AI in our security tools for a while. AI is pretty much embedded in a lot of the big tool sets we have, like CrowdStrike or some of the Microsoft capabilities. So AI has been used, especially in the operations center. What I think we're going to see is the broadening of that into GRC, even into the programmatic side.
So how do we determine the return on investment of deploying a solution? How are we looking at our teams: do I have the right people doing the right jobs? AI is going to help us with the program itself. It's going to help us think more strategically. I do think governance is probably going to be impacted, because when you can have generative AI update your policies based on a new framework or a new regulation, it's going to happen much faster than having to hire a consultant to come in and do that for you. Does that make sense?
I think we're going to be able to take those people who are policy writers and give them something more interesting to do, and probably save ourselves a lot of money on hiring outside consultants or outside legal entities to help us create briefings. So that's what I think.
Host: Right. Yeah, absolutely. I think hiring and employee impact is a very touchy topic when it comes to AI, right? There are folks on both sides of the spectrum. Some say that AI will impact employment; some say, no, no, we'll still have engineers and things like that. I don't know how to get into that debate. It would be very tricky.
Patti: It's going to give them some interesting opportunities. I mean, we're still going to need people to review what comes out of AI. But, you know, does that allow that person to do something more meaningful? And there are always going to be people who don't want to change their job, and this will give them an opportunity to maybe think differently about how things are being done.
Host: Yeah, absolutely. So now, speaking about deploying the workloads: before you deploy, you definitely need governance frameworks and processes in place. Recently, I was at an AI workshop where participants were asked to list out their concerns about this AI or agentic AI transition. And one of the highest-voted concerns, along with quality, was security.
How do you ensure that your data doesn't leave your tenant boundary, or that there is no PII or PHI leaking via customer entry points? And this is in line with your advice to CISOs, right? Like when you ask: what is the worst thing that could happen if your AI tool goes rogue or goes completely off the guardrails, given the increasing autonomy of AI? So have you thought about what specific governance frameworks or processes CISOs should establish before deploying the workloads, so that they can identify, mitigate, and respond to some of these threats? What have you seen?
Patti: So I think our frameworks are still pretty solid. NIST has an AI Risk Management Framework for that. If you happen to be a NIST shop or an ISO shop, I think looking at the AI guidance coming out of those governing bodies is really important.
You make a great point. Frameworks are not easy. But I do think they're fundamental, because they set the guardrails that allow people to work within them. If you don't have that, it could be a recipe for disaster, to be quite frank. So I do think we have good frameworks. We need to take those frameworks and, for each of the controls, think about which controls really apply to AI. And it's going to depend on your use cases. I think you have to document those use cases, which is another area where lots of companies are not great: they don't document their use cases, or they'll have scope creep on a project.
I think AI is one of those things where you have to be pretty specific about what you're using these models for, to ensure that you're balancing the value proposition, the reward of using the capability, with the risk you could be undertaking. And that's where your governance model has got to lay out things like: when am I retraining my models? How am I thinking about that? How am I governing those models, the use of the model, what data is going into the models, and what am I expecting to come out of them? So behavioral AI, as you and I spoke about via email, has changed the landscape significantly. We have to be really purposeful in this approach to utilizing AI.
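To make that concrete, here is a minimal sketch of what a documented AI use case might look like as a governance record. The `AIUseCase` dataclass and its field names are illustrative assumptions, not taken from any particular framework or product; the point is simply that data inputs, expected outputs, and a retraining cadence get written down before deployment.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AIUseCase:
    """Illustrative governance record for one narrowly scoped AI use case."""
    name: str                  # what the model is used for, stated specifically
    owner: str                 # accountable business owner
    model: str                 # which model or vendor backs the use case
    data_inputs: list[str]     # what data is going into the model
    expected_outputs: str      # what we expect to come out of it
    retrain_cadence_days: int  # agreed cadence for retraining/review
    risk_notes: list[str] = field(default_factory=list)
    last_review: date = field(default_factory=date.today)

    def review_overdue(self, today: date) -> bool:
        # A simple governance check: has the model gone longer than the
        # agreed cadence without being retrained or reviewed?
        return (today - self.last_review).days > self.retrain_cadence_days

# Example: documenting the use case up front, per the advice above.
triage = AIUseCase(
    name="Tier-1 phishing triage summaries",
    owner="SecOps",
    model="internal-llm-v2",
    data_inputs=["reported email headers", "URL reputation scores"],
    expected_outputs="triage summary with a confidence score",
    retrain_cadence_days=90,
)
```

A register of records like this also gives the scope-creep conversation a concrete anchor: anything the model is asked to do that isn't in `data_inputs` or `expected_outputs` is, by definition, a new use case to review.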
This cannot be one of the… You know, cloud was a little bit, oh, we're moving into cloud, and then somebody said, oh, we should probably talk to those security and privacy people. From what I've seen, we are taking a more thoughtful approach with AI than we did with cloud. Cloud was, we're going to save millions of dollars if we move everything to the cloud. That wasn't quite true. AI is a little bit different. And to be honest with you, somebody once said, if you want to make a big mess of something, you need a computer. Well, if you want to make a really big mess of something, you may be able to use AI to do that. So we've got to go at this a little bit differently than we have in other emerging tech spaces.
Host: Yeah. I think with AI, there is a far greater impact, both on the positive side and on the negative side. So that's where, as you mentioned, you have to have some of these frameworks and guardrails in place before you start deploying your workloads in production, right? Say you have a chatbot: somebody can do prompt engineering and get to your PII and PHI data, even though you have a guard in place. So you have to make sure that you have enough guardrails to protect some of these aspects.
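As one small illustration of that kind of guardrail, here is a sketch of an output-side PII filter for a chatbot. The `scrub` and `guarded_reply` names and the regex patterns are hypothetical and deliberately simplistic; a real deployment would layer this with access controls and a proper DLP pipeline rather than rely on regexes alone.

```python
import re

# Deliberately simple patterns for illustration only; real PII/PHI detection
# needs much more (context, checksums, ML detectors, and a DLP pipeline).
PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def scrub(text: str) -> str:
    """Redact anything matching a known PII pattern before it leaves."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[REDACTED:{label}]", text)
    return text

def guarded_reply(model_output: str) -> str:
    # The guardrail sits on the output path: even if a crafted prompt
    # coaxes the model into echoing sensitive data, it is scrubbed here.
    return scrub(model_output)

print(guarded_reply("The customer's SSN is 123-45-6789, email jo@ex.com."))
# -> "The customer's SSN is [REDACTED:ssn], email [REDACTED:email]."
```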
So we'll talk about behavioral AI in a second. Before I touch on that, you mentioned that establishing some of these frameworks is overwhelming. Any specifics, or tips you have, for getting started with these?
Patti: Yep, I agree. I think when you're implementing a governance framework, it's important to think about how you're measuring the governance, right? And there are some great AI governance and metrics dashboards out there.
And I think those are things people need to think about. What do I need to tell the board of directors, or what do I need to talk to my executive leadership about? What is this governance model going to look like? Because that's super important when you're thinking about governance in general: what is it you're trying to govern? Are you trying to govern risk introduction to the environment? Are you trying to govern data loss or data leakage? Hopefully you have a DLP program in place, and then this isn't that big of a lift for that program.
And I think it's really important to ask: am I trying to gain efficiency? Am I trying to boost productivity? Am I trying to reduce friction? What are you using AI for? That's going to help you figure out how much or how little governance you really need. Again, it's different from cloud, because with the control frameworks that run our security programs, it didn't matter if you were cloud or on-prem; the controls were the same, it's just how you applied them in those different environments. AI is not quite like that. AI is going to create a bit of a fundamental shift in how you think about utilizing its output. And I think it's going to be a while yet before we're using truly unsupervised AI, although agentic AI seems to have come out of the shadows; it overpowered the RSA Conference for sure this past year. Those are things we have to think about.
Host: Yeah. And the points you highlighted between cloud and AI make a lot of sense. With cloud migration, one of the major drivers was cost, as it was put in front of us, right? That you can get rid of a lot of your costs if you move to the cloud. But with AI, it's not so much about cost; it's more about productivity, how you build better solutions, and things like that. And I agree with your point earlier, where you said that you need to document your use cases.
Otherwise, you will have scope creep and things like that, because you have a shiny new toy and you're just trying to play with it. You lose track of exactly what value you want to add for your customer, and you just keep playing with it forever. So those are some very important points that you highlighted.
You touched on behavioral AI, and that is super exciting to me. Recently, you wrote a blog post about it as well: it's a game-changer for cybersecurity, and it emphasizes how we should think differently. Instead of focusing on code, we should think about behavioral AI and understanding the context. So from a CISO's perspective, what are the most significant organizational and cultural shifts required to adopt a context-first security mindset, rather than just thinking, I will generate more code, or something like that? What would you recommend there?
Patti: Well, first of all, with behavioral analytics, we're moving away from rule-following to pattern understanding. The traditional culture has been focused on static rules, signatures, and policy compliance: if an alert was triggered by X, then we should do Y. Behavioral AI is shifting that and giving us a new mindset, where security now learns in dynamic patterns, meaning we're learning from how users behave, we're creating entity-level baselines, and we're looking at the intent of the signal versus signature definitions or geography.
We're taking in a lot more telemetry and doing more with it. And analysts have to evolve from being rule executors, meaning I'm going to force this rule to happen, to investigators of context. That's going to shift your organization away from one-size-fits-all policies to risk-based decision-making. It's really going to change the culture, because in the traditional way, risk tolerance was disconnected from actual user behavior. What we did was make decisions based on a flat policy, like the policy says do X or don't do Y, and it was based on a risk appetite of X.
This is all going to change. Our policies have to be flexible enough to adapt based on real-time signals: who's accessing, from where, how often, under what pressure. Security becomes much more personalized, more dynamic, and more situationally aware. This is a really fundamental change to the way we've thought about it before. Your organization is going to have to shift to a risk-informed decision framework versus the flat way we thought about things.
And so even your control framework has to be more dynamic. The cool thing about it is it's really going to empower the team to adapt your controls for your privileged users, your third parties, and your critical assets. So it's going to empower your security people to be more agile in how they think about security controls. And we're going to move away from security as a gatekeeper to security as a business enabler, right? I don't know how many times it was called the office of sales prevention or the department of no. We are going to move, with behavioral
Host: The office of no, right?
Patti: AI, to contextual security that allows frictionless access but still maintains a high level of trust. And it's going to become a competitive differentiator when it's done seamlessly, utilizing the proper intelligence. Make sense?
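To ground the shift from flat rules to entity-level baselines, here is a toy sketch of a per-user behavioral baseline. The `Baseline` class, its z-score threshold, and the login-hour example are illustrative assumptions; production behavioral AI uses far richer telemetry and learned models, but the core idea, scoring deviation from this user's own history rather than checking one flat rule, is the same.

```python
from statistics import mean, stdev

class Baseline:
    """Per-entity behavioral baseline: score deviation from *this* user's
    own history instead of applying one flat rule to everyone."""

    def __init__(self, threshold: float = 3.0, min_history: int = 20):
        self.history: list[float] = []  # e.g. login hour, send rate, bytes out
        self.threshold = threshold      # z-score beyond which we flag
        self.min_history = min_history  # observations needed before scoring

    def observe(self, value: float) -> bool:
        """Record an observation; return True if it deviates anomalously."""
        anomalous = False
        if len(self.history) >= self.min_history:
            mu, sigma = mean(self.history), stdev(self.history)
            if sigma > 0 and abs(value - mu) / sigma > self.threshold:
                anomalous = True
        self.history.append(value)
        return anomalous

# A user whose logins cluster around 9-10am suddenly logs in at 3am:
alice_login_hour = Baseline()
for i in range(30):
    alice_login_hour.observe(9.0 + (i % 3) * 0.5)  # months of ~9am logins
print(alice_login_hour.observe(3.0))               # True: anomalous for Alice
```

Note that the same 3am login would be perfectly normal for a night-shift analyst with a different history, which is exactly the "personalized, situationally aware" point above.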
Host: Yeah, I like that. One question, though: today, most organizations are focused on compliance families. There are controls we need to match, there are policies defined whose requirements we need to satisfy, and things like that. When we move to more context-aware systems, like behavioral AI-based risk evaluation, how do you measure and show the effectiveness of such a system, where the baseline is shifting? How would you do that?
Patti: So, remember I talked about measures and metrics? Now you're going to have the ability to actually go in and build out metrics based on model behavior: drift metrics, model entropy, feature importance, and volatility. You're going to look at things differently in your metrics, and it's going to allow you to really start thinking outside the box. What is the model update frequency? What's your confidence score in the system? Anomalous behavior detection: how are you detecting anomalies? What changes in that? How many events are you seeing? Confirmed incidents that have been escalated, false positive rate, and false negative rate, which are two really important ones for AI, super important, especially in behavioral AI. Very important metrics and measures you need to collect.
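As a sketch of two of the measures named here, this shows how false positive/negative rates and one commonly used drift metric, the population stability index (PSI), can be computed. The function names, bucket proportions, and thresholds are illustrative assumptions; the takeaway is that these are ordinary arithmetic over data a security program already has.

```python
from math import log

def fpr_fnr(tp: int, fp: int, tn: int, fn: int) -> tuple[float, float]:
    """False positive rate and false negative rate from a confusion matrix."""
    return fp / (fp + tn), fn / (fn + tp)

def psi(expected: list[float], actual: list[float]) -> float:
    """Population stability index: a common drift metric comparing the
    score distribution at training time vs. today. A frequent rule of
    thumb: < 0.1 stable, 0.1-0.25 moderate drift, > 0.25 significant."""
    total = 0.0
    for e, a in zip(expected, actual):     # e, a: per-bucket proportions
        e, a = max(e, 1e-6), max(a, 1e-6)  # guard against log(0)
        total += (a - e) * log(a / e)
    return total

# Detector results: 40 caught, 8 false alarms, 940 true negatives, 12 misses.
print(fpr_fnr(tp=40, fp=8, tn=940, fn=12))  # -> (~0.0084, ~0.2308)

# Risk-score distribution has shifted toward the high buckets since training:
print(psi(expected=[0.7, 0.2, 0.1], actual=[0.5, 0.3, 0.2]))  # ~0.18: drifting
```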
And then you also want to look at response efficacy, right? And traceability. That's the other key component: AI is going to give us a level of transparency we haven't had before, through audit and traceability. You've got to watch for, you know, unusual logins from new geographies, a deviated email writing style, right? If all of a sudden my email writing style changes exponentially, what happened? Did I have brain surgery and it's a completely different brain up here, or did somebody harvest my credentials?
You know, you're going to be looking at impersonation of known VIPs. And so it's going to open up a whole new world of decision-making processes. Previously, we would put metrics in front of the board or the executive leadership team and say, trust me, I got these from over here. Now with AI, we're going to be able to actually show the traceability model, so that we can prove to our internal and external auditors where the data actually came from, and then we can cross-reference those. The other one is that we can create decision logs. That's another key component in any behavioral AI model, any AI model, actually: your decision logs on why you made certain decisions using the AI. And if you're not thinking about decision logs, you have a fundamental gap in your framework, because without the decision logs, you can't actually have defensibility with an auditor if something goes sideways. And if something goes sideways, then what are you going to do?
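Here is a minimal sketch of what a tamper-evident decision log could look like, assuming a hash-chained, append-only JSON-lines file. Every field name and the `append_decision` helper are hypothetical; the point is that each AI-assisted decision records its inputs, model version, and rationale, so an auditor can trace how a conclusion was reached, and any after-the-fact edit breaks the chain.

```python
import hashlib
import json
import time

def append_decision(log_path: str, entry: dict) -> str:
    """Append one decision record, chained to the previous record's hash
    so any after-the-fact edit to the log is detectable."""
    prev_hash = "GENESIS"
    try:
        with open(log_path, "rb") as f:
            prev_hash = json.loads(f.read().splitlines()[-1])["hash"]
    except (FileNotFoundError, IndexError):
        pass  # first entry in a new log
    record = {**entry, "ts": time.time(), "prev_hash": prev_hash}
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    with open(log_path, "a") as f:
        f.write(json.dumps(record) + "\n")
    return record["hash"]

# One AI-assisted decision, with the rationale and provenance an auditor needs.
append_decision("decisions.jsonl", {
    "agent": "phishing-triage-v2",      # hypothetical agent name
    "model_version": "2025-06-01",
    "inputs_digest": "sha256:9f2c...",  # digest of the data that went in
    "decision": "quarantine",
    "confidence": 0.97,
    "rationale": "sender domain is 3 days old; style deviates from baseline",
})
```

The `inputs_digest` field is also one simple way to tie each decision back to the data provenance and lineage mentioned next.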
And then you also want to make sure that you've got data provenance and lineage. You've got to be looking at how you trace where the data came from and when it was created. You'll want to look at AI bias and fairness reviews. And you obviously need to do privacy control reviews against GDPR or CCPA or whatever it is. So it's going to kind of burst open a whole new thought process for us.
As if we weren't doing enough in the first place, right? I mean, CISO jobs are so boring, right? But now we're going to have this whole other thing to be thinking about. I know this sounds funny, because AI is supposed to be doing the thinking for you.
But I will say it's going to give us a whole new thinking process around how we strategize about sharing this information. There's going to be a lot more data. And I think it's going to create a whole new opportunity for us.
Host: Yeah, I mean, there are so many new frameworks to think about, and so many new controls you need to put in as well. And one of the key things you highlighted is the decision logs. You're spot on: otherwise, there is no traceability. You cannot trace back what exactly happened, especially with agentic AI, where you have multiple AI agents working. If you do not have that decision log anywhere, then you're right...
Patti: Yes. You can't defend yourself.
Host: Yeah, you're right. When an auditor asks how you came to a conclusion where there were 10 agents playing a role, you can't. Forget about defending yourself; you can't even understand for yourself what exactly happened. So yeah, a decision log is for sure a key tool to have.
You mentioned some of these frameworks and tools. One of the questions we got is from our common connection, Deb Gwynn. And the question is:
How do security leaders stay ahead of some of these things? Because there are many new things coming up that you have to stay ahead of. So how would security leaders stay ahead of these?
Patti: I truly believe that we need to create the future CISO toolkit to lead through this era of change. CISOs have to become part strategist, part ethicist, if there's such a word, part futurist. Our technical fluency still remains critical; we still have to stay technically fluent. But our ability to connect security to trust, culture, and resilience is what's going to set us apart as leaders. And we have to embed ourselves in product strategy, legal foresight, and AI risk governance, not just compliance. So, how to stay ahead of it? It's a great question.
I think you have to be willing to realize that you are going to become a modernized CISO, and that this modernized CISO is going to have to know everything we've had to know for the last 25 years. But you're also going to have to learn AI ethics and risk governance, where maybe you didn't have to before; these are new disciplines that we have to be willing to take on. And what that means is you need to elevate yourself as a CISO and build that level of management below you that basically handles yesterday's problems, so you can think about tomorrow's challenges. Does that make sense?
Host: I like that. Yeah, I like that. As with any technological transformation, you have to find that balance. You cannot just abandon what you need to do to support today's ecosystem; at the same time, you have to learn about what's coming up. With AI, there are two aspects. One is, of course, the technical aspect we spoke about: new tooling, new policies, frameworks, and things like that. There is also a non-technical, or more human, aspect, right? Things like ethics, organizational psychology, or business model innovation, which are absolutely essential for CISOs. And you mentioned having a future CISO toolkit. What are some of the non-technical skills that CISOs of today should start learning about, so that they are ready for tomorrow?
Patti: I think the future CISO isn't a defender of the past; they're the architect of a resilient future. And what I mean by all those fancy words is that we have to realize that we are at an inflection point, that there is a lot of change happening across security disciplines, and we have to be willing to embrace it.
So we need to improve our ability to influence, and to realize that what we need to be doing is helping to boost productivity and design environments that anticipate need before it comes. This has always been the challenge of a CIO, right? To think about what the business is going to need before they need it, so that you've positioned the existing infrastructure to support the transitions that are going to happen. We need to be able to reduce friction, which means, and as introverts we may not be very good at this, we need to empathize with our stakeholders and become a partner, not a blocker. And to be honest, we have to cultivate disciplines that help us survive the disruption and allow us to lead with the confidence, clarity, and creativity that we all have, or most of us do.
Host: Makes sense. Yeah, absolutely. One of the things you mentioned is that future CISOs must evolve into multidisciplinary strategists, where you are not only supporting the existing technologies, you are also thinking about how you can work with future technologies, or leverage them to improve productivity or business value.
What are some of the disciplines or areas of expertise that CISOs must actively cultivate so that they are still relevant, maybe like 5 or 10 years from now? What have you seen or what would you recommend?
Patti: I think a CISO needs to continue to evolve away from siloed leadership into synchronized leadership. We can't operate in only a risk and compliance silo. We have to co-design, co-architect, and co-influence product development.
And by product development, what I mean is those things that support the company. If you're in insurance, it might be a policy system or a claims system. If you're in financial services, it might be a banking system.
So to me, product development is not my product for my company; it is the ecosystem that helps the company generate revenue, which is what most companies are in business to do. You need to be an influencer in that development, meaning you're going to have to get involved in agile sprint talks and business requirements at the highest level. So I think there is this whole opportunity to synchronize the conversations. And I think we need to continue to be the stewards of trust.
For ourselves, along with the privacy officer, that's part of our job: data is the fuel, AI is the engine, and security is the brakes and the guardrails. None of these functions can work in isolation. In an AI-native enterprise, every algorithmic decision is a data governance decision and a tech risk decision, and it has security and privacy implications. So I think what we need to do is replace those lines between who owns what with shared accountability for building systems that are trusted, auditable, and resilient. That's how companies are actually going to grow profits and revenue streams.
Host: Yeah, I mean, you're absolutely right when you said that data is the fuel. Earlier, with the migration from, let's say, on-prem to cloud, you had different layers: an application layer, a data layer, and things like that. So there were always multiple layers before you got to data. But with AI, you are using data at every layer of interaction, and you are at a far greater risk of exposing your PII, PHI, and confidential data to an attacker if security is not embedded at every single layer. So yeah, I love your response on that.
One question that we got from a common friend, Norman Kromberg, is: how can AI and quantum be explained to the C-suite from a security and business risk perspective? If your C-suite is well aware, that makes your life easy. But if they are not, how do you approach that?
Patti: That's a great question. Looking at AI and being able to have a conversation with leadership about what it is and what it's not: first of all, our executives are hearing a lot about AI, and everybody hears, if I'm not doing it, I'm going to fail and my company is going to go away. So there's a lot of angst at the C-level, at the executive level, about, I've got to be doing something in AI. And I agree with what you said earlier: '23 was, we're talking about it; '24, we're POCing it; '25, we're deploying it. There are many companies and businesses that are still stuck in '23, because they haven't clearly defined their use cases.
And not that you're going to have clearly defined use cases for everything, but you've got to start somewhere. What I saw a lot of companies do is say, we're blocking it; we're going to block AI access until we can figure out how we're going to govern it. And I think that was a little bit short-sighted, because people who were crafty and innovative may have figured out how to take data and put it into publicly available models, which wasn't what we wanted them to do. Rather, what I'm seeing now in 2025 is more CISOs and more C-level people wanting to bring the technology into the enterprise and provide a sandbox, so that employees can use it, play with it, try it, and see what they think.
And out of those tests and that playing with the technology, some things will start to materialize for the company: here's how I think it can help us take low-value work and achieve much faster workflows through automation. One could say we've been doing RPA, robotic process automation, for years. Now we're kind of putting it on steroids with AI: how do I take that workflow, put a prompt into a generative AI model, and have it come back and tell me how to do this better, faster, cheaper?
I do think, to be honest with you, we are going to have to get to the point in the next decade where we realize we're not going to reward technical control; we're going to reward collaborative clarity. What I mean by that is building a coalition around shared trust in the future. We're going to have to realize that our future is going to be rewritten by machines and by math. And we're really uncomfortable with that, because we all saw that crazy movie from way back, 2001: A Space Odyssey, or Terminator, and we all think robots are going to take over the world.
I'm more of an AI optimist and enthusiast than a pessimist. But you've got to decide what camp you're in, and I'm strongly recommending people get in the optimist and enthusiast camp, because the pessimists are going to be the ones whose companies won't stick around, because they won't modernize. Hopefully you will modernize as a CISO, although there are probably going to be legacy companies that are going to need, you know, legacy CISOs. So maybe you'll have a job for a while.
Host: I like how softly you put it: you have to adapt or you die, in a way, right?
Patti: Look what happened to the dinosaurs. They didn't evolve, so they're extinct. And we don't want to be dinosaurs. Although I understand in the new Jurassic World movie, the dinosaurs maybe become more monster-like. I don't know.
Host: Yeah, I think the recent one was quite different from the earlier ones, for sure. And this matches what you said earlier: CISOs should not think in silos anymore. Rather, they should be more in sync with other parts of the business, more involved in the product, more involved in engineering, and things like that. The more ingrained security is at every layer of AI, the more secure the organization, and more importantly the data, is. That's sort of a great way to end the podcast.
But before I do that, I have one last question for you. Do you have any learning recommendations for our audience? It could be a blog, a book, a podcast, or anything like that.
Patti: I do have one. I hate to give a shameless plug, but I'm going to. So there are a couple of things. One, I subscribe to an email newsletter called Superhuman, which is focused on AI and has lots of little links in it, similar to what you get when you subscribe to something like SC Magazine or the headline news from David Spark. It's similar to that, only it's taking AI-relevant content and serving it to you, so you don't have to go search for it. So Superhuman is a great subscription to join.
The other one is my company's CEO, Evan Reiser, who actually does a podcast, and it doesn't have anything to do with Abnormal. It really digs into AI, talking to different experts in different industries
about how AI is fundamentally changing what they do. So there are two podcasts: one with just Evan, his own thing outside of Abnormal, and one within Abnormal, where Evan and our CIO, Mike Britton, interview mostly CISOs to talk about how AI is changing security. But I find a tremendous amount of value in listening to podcasts such as yours: how can I think differently about this and maybe expand my mind? Everybody is so busy during the day that it's hard to find time, but I'll plug in a podcast and listen to it while I'm out walking, or at the gym, or trying to have some downtime, if that's possible. It's relaxing to listen to a podcast where I'm not doing the talking and somebody else is.
Host: Makes sense. When we publish this episode, we'll add your recommendations to the show notes, so that our audience can go listen to those podcasts, learn more about AI and security, and learn from the CISOs who come on them as well. With that, thank you so much for coming on the podcast and sharing your knowledge and these recommendations.
Patti: Thank you so much for the invite. It's been a great show.
Host: Thank you! And to our audience, thank you so much for watching. See you in the next episode. Thank you.