Host: Hi, everyone. This is Purusottam, and thanks for tuning into the Scale to Zero podcast. Today's episode is with Jim Manico. Jim is the founder, CEO, application security architect, and lead instructor at Manicode Security, where he trains software developers on secure coding and security engineering.
He's an investor and advisor for multiple security startups, like Edgescan, Nucleus Security, and KSOC, to name a few.
Jim is a frequent speaker on secure software practices and is also an author of Iron-Clad Java: Building Secure Web Applications. Thank you so much, Jim, for joining us today.
For the few in our audience who may not know you, would you briefly share your journey?
Jim Manico: Right now, I'm a secure coding educator at my firm, Manicode Security, and I teach developers to write secure code. I just finished a week of teaching, um, like twelve hours on Monday and eight hours each on Tuesday, Thursday, and Friday. So I had a nice busy week. I taught like 200 developers about different secure coding techniques. And I honestly love what I do. This is a real fun job and I feel very grateful I get to do this.
Host: Lovely. It sounds like you have your hands full.
So other than this week where you have been training, what does a day in your life look like? I ask this question to all of our guests because we get unique answers. What does it look like for you?
Jim Manico: I wake up and I clean my house. I usually keep my house clean, so I wake up, do dishes, and whatever is not clean, I get my house in a clean state. That's just a few minutes to wake me up. Then I have coffee. That's it: clean up, coffee, wake up for a little while. And then I clean up my email.
And if I don't have class, I go to a morning yoga class over at Dance Yogis, down by where I live. So I go to yoga every day, sometimes twice a day, if I can schedule it in.
And if I'm training, I'm usually starting early and I'm teaching a class. Like, I did seven a.m. to three o'clock today, or eight p.m. to three a.m. for Indian companies, or nine to five for California companies, and I'm teaching during the day.
And I fit in a morning or evening yoga class every day of my life that I possibly can, or I practice on my own out on my deck. And then there's a hot spring nearby. If I have the day off, or if I have time in the evening, I'll go to the hot spring.
I'll hang out with friends. I go dancing a lot. There's an ecstatic dance in my area in California and I go dancing like two or three times a week. Like Sunday, I'm going to Fusion, a partner dance training.
And I travel a lot. Some of my customers want me to travel, or sometimes I just have a week off. In my business, I'll have like two super-busy weeks of training and then half a week off, and I'll go on a little trip. Like, I'm going to Vietnam in two weeks just for the fun of it. Are you from Vietnam? Do you want me to give you a secure coding talk in like two weeks? I'd love to find someone who wants it. But I live a good life. I feel very fortunate and grateful for the life that I live. I'm having a lot of fun.
And I'm single. I'm single right now, ladies. I'm single. So I'm trying.
Host: Yeah, sounds fun as well. And you help developers get better at security, which is one of the core mottos of our podcast as well. So yeah, let's get into it.
So today we are going to talk about application security. Before we get into the application part, today we all live in the age of GenAI; everybody talks about GenAI.
When it comes to developers, they use many tools like GitHub Copilot or Amazon CodeWhisperer, and a few others as well, to generate code and security recommendations. And these recommendations come from models that were trained on millions of lines of open source code, right?
So what's your confidence score on these code recommendations?
Jim Manico: Well, it depends on what you ask for. If you say, give me a script to do something, you're gonna get more of a half-assed answer. But if you say, give me a script with rigorous security baked in, guess what? You're gonna get a way more secure piece of code.
So it's gotta be really specific, really specific, the more the merrier, about what you're asking for. And don't trust any code from AI. First of all, there could be licensing problems with AI-generated code; that's still not settled yet.
But besides that, whatever code you generate from AI should go through a DevOps lifecycle: static analysis, third-party library scanning, maybe even a dynamic scanner, plus code review and security review. And the more security-critical the code you're generating is, the deeper the review it should go through.
At the very least I'd run like Semgrep or Bearer or some kind of static analysis engine and fix those security bugs. I'd also look at things like the complexity of the code, what's called cyclomatic complexity, which some code engines will tell you. If you're generating AI code that has a high cyclomatic complexity, I worry about that.
I usually want something that's a bit lower. So I would just use the standard code metric review tools that we have today in a DevOps pipeline before I let any code, especially AI code go live.
Host: Mm-hmm. So there are two things that I could take out of it.
One is, it often misses the context of what you are trying to do. If you ask a very pointed question, you might get a good answer.
The second thing, which is very important in your answer, was that you cannot just trust whatever you got from, let's say, the GenAI tools and then use it for production.
You still should take it through your DevSecOps cycle. Trust but verify, right? That's one of the things that I could read from what you've said.
So when it comes to the context, there was some news that some developers were feeding business-critical information into the GenAI tools as part of their prompts, right?
How would you suggest the developers keep a balance of that when they are providing context to these tools?
Jim Manico: I just wouldn't put any kind of sensitive data or business-critical information in my request to generate code. I would ask for more generic things; I'd use AI more for general utility stuff. If I'm getting into custom business logic, I'm gonna use a lot less AI, even though it's compelling: I can get my requirements really crisp, throw them into AI and say, give me X code from this framework with high security and low cyclomatic complexity, and I'll get it.
And it's really interesting. So, you know, it depends on what I'm building. In some cases, I'd just throw my requirements into ChatGPT, see what comes out, and be okay with it. If what I was doing was a lot more sensitive, kind of some magic, well, the more unique it is, the less AI is going to help anyways, and I'd probably be more private. But if I'm doing generic enterprise development or something more plain, I throw my requirements into ChatGPT, like: write me code for these requirements that has high security and low cyclomatic complexity. And then I review it carefully, security test it, and do other kinds of automated analysis and review, depending on how critical it is. But it's crazy not to use AI.
I think it's an amazing tool to help speed up development. And it's not a common opinion, but it's mine. I love it. I love AI.
Host: Same here, I use it on a daily basis. One of the terms that you have used multiple times is cyclomatic complexity. I am curious, what does it mean?
Jim Manico: Sorry to interrupt you, buddy. High cyclomatic complexity indicates a code base that's so complex it's hard to maintain, test, and understand. It's a red flag for functional bugs and vulnerabilities. So if you're doing security, you wanna keep an eye on this metric. High complexity almost always correlates with a higher chance of security issues. So in general, lower cyclomatic complexity is a more plain way to develop. Less fancy BS. It's easier to maintain and secure.
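To give a rough sense of how tools compute this metric, here's a simplified McCabe-style counter built on Python's `ast` module. Real analyzers such as radon or lizard are more thorough; this is only a sketch of the idea that each decision point adds a path through the code.

```python
import ast

# Node types that add a decision path (a simplified McCabe-style count).
BRANCH_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler, ast.IfExp)

def cyclomatic_complexity(source: str) -> int:
    """Approximate McCabe complexity: 1 + number of decision points."""
    tree = ast.parse(source)
    complexity = 1
    for node in ast.walk(tree):
        if isinstance(node, BRANCH_NODES):
            complexity += 1
        elif isinstance(node, ast.BoolOp):
            # 'a and b and c' adds two extra short-circuit branches
            complexity += len(node.values) - 1
    return complexity

SIMPLE = "def f(x):\n    return x + 1\n"
BRANCHY = (
    "def g(x):\n"
    "    if x > 0 and x < 10:\n"
    "        for i in range(x):\n"
    "            x += i\n"
    "    return x\n"
)
print(cyclomatic_complexity(SIMPLE))   # 1
print(cyclomatic_complexity(BRANCHY))  # 4: the if, the and, the for
```

A function scoring well above 10 on a counter like this is the kind of AI-generated output Jim says he'd push back on.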
Host: OK, makes sense. So when it comes to security for AI in general, it's categorized into a few buckets. And on top of traditional application security guidelines, there are awareness and ownership aspects which come into the picture.
So when it comes to GenAI-related security, how do you think about data security?
And what steps would you take to make sure the data that is being fed or what you are receiving is secure?
Jim Manico: You mean keeping it secure in the database of the AI engine? That's hard, but there are really advanced techniques to lock that stuff down.
Like, I would look at things like differential privacy, where I can keep the data protected in the database but still operate on that data in some way. So again, differential privacy type of engineering.
And so again, it's a way of allowing me to compute on data in an AI data store. So here we go.
So differential privacy is a mathematical framework for quantifying the privacy risk involved in the release of data. This is Cynthia Dwork's work. The core idea is to add carefully calibrated noise to the data or query results, in a way that the presence or absence of a single record doesn't significantly affect the outcome.
So this is important because it preserves data utility: it allows for the use of data in a way that's statistically significant but doesn't compromise individual privacy, and it helps you be compliant with things like GDPR.
And so that's one of the answers for privacy-centric data. There are also different kinds of crypto, like the homomorphic encryption work out of IBM and others, where I can do computation on data even while it remains encrypted, to keep the confidentiality of the data in place. So these are some of the things that we need to get right in AI. I don't know how much the big providers have done this, but strong access control and similar is important.
And that's a quick answer, but it's at the high end of the solutions spectrum, especially when building AI systems from scratch.
Host: Okay, that makes sense. So on top of that, there is the code aspect as well, right? Where you might be using a framework or you might be using something like Lang chain or you're using a vector database, something like that. So you're prone to...
Jim Manico: Yeah, that's horrible. No, it sucks, because you're going to pick up some third-party library in the new space of AI and now you're depending on it. And one of two things happens: there's a security bug and you're forced to update, and things break, so you have to go back and figure out what the hell just broke. And these are like big data libraries. Imagine that: oh, we had a security bug, and the fix screws up your whole system, and you have no idea why.
Using third-party components in a bleeding-edge industry is something we need to do, but you've got to really vet which libraries you're going to use before you bet your company on them. That's number one. And number two, you want to keep that componentry up to date and really hope that those open source developers have a good upgrade path. So just being judicious about what third-party components you use is super important.
And then again, maintaining them and updating them for security and functionality bugs over time. Good luck. I hate this topic.
Because whatever I do, whatever I do, I'm screwed. If I build it from scratch, I get longer dev times. And if I pick the wrong library and have to unwind that decision, it's painful. Very painful.
Host: Mm-hmm. You are absolutely right, because that also means you are dependent on third-party developers, either open source or closed source. And that sort of brings you to the supply chain security again, right?
So on top of what you just highlighted, like keeping libraries up to date or maintaining aspects that maybe are getting delayed,
what other recommendations do you have for a company that is adopting, let's say, some open source technology for GenAI development?
Jim Manico: I would say have a DevOps pipeline where I'm not allowed to issue a merge. Like I can PR, but you don't let me merge if I don't pass a software composition analysis tool that determines that none of the third party libraries I'm using has a security problem.
That's the main thing, right? Automate the defense. Things like Dependabot are helpful. Things like Snyk and JFrog and MergeBase and all the other software composition analysis tools.
Semgrep has a new third-party library scanner.
So automation helps a lot here. But before we even get to that point, be really judicious about what library you're gonna pick in the first place. And if you can get away with it, don't use it. If it's not that much dev time, don't use the library. I'd err on the side of writing my own code and only use a library when it's necessary.
And when you do use one, good luck. Also, I like to write wrapper classes around my use of libraries when I can. Big frameworks, less so. But for individual utility libraries, I'll write a wrapper, so if I have to change functionality, I don't have to change my whole code base.
I'm just modifying the innards of my wrapper, so I can maybe swap out a function or block functions, whatever I need to do.
Those are some of the things. And this sucks: whatever you do, you're screwed. If you write too much of your own code, you get longer dev times, but at least the code is scannable by code scanning tools.
And if you use too many libraries, you have the update burden. You can speed up initial development, but you're gonna pay for that during the update cycle.
So really, I err on the side of using fewer third-party libraries, writing a wrapper around the libraries, scanning the crap out of them, and keeping them as up to date as I can, about a month behind the bleeding edge. This is what I do.
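The merge-gate idea Jim opened with can be sketched in a few lines. The advisory data and package names below are invented for illustration; a real pipeline would consume findings from an SCA tool like Snyk, Dependabot, or Semgrep's dependency scanner rather than a hard-coded list:

```python
# Toy merge gate: block the merge if any pinned dependency matches a
# known advisory. Packages and advisories here are hypothetical.
from dataclasses import dataclass

@dataclass(frozen=True)
class Advisory:
    package: str
    bad_versions: frozenset  # versions known to be vulnerable

ADVISORIES = [
    Advisory("leftpadlib", frozenset({"1.0.0", "1.0.1"})),
]

def gate(pinned_deps: dict) -> list:
    """Return the (package, version) pairs that should block the merge."""
    findings = []
    for adv in ADVISORIES:
        version = pinned_deps.get(adv.package)
        if version in adv.bad_versions:
            findings.append((adv.package, version))
    return findings

deps = {"leftpadlib": "1.0.1", "requestslib": "2.31.0"}
blocked = gate(deps)
if blocked:
    print(f"merge blocked: {blocked}")  # CI would exit nonzero here
```

The point is where the check sits: it runs on the pull request, before merge, so a vulnerable component never reaches the main branch in the first place.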
Host: And that's one of the risk-reward trade-offs of working with bleeding-edge technology, right? You have the risk of using something which is not 100% stable.
At the same time, you gain the rewards when you build something out of it. So I have a follow-up question on that. OWASP has put together a guide for AI security.
So, according to you, how do the OWASP Top 10 or the CWE Top 25 apply to GenAI security?
Jim Manico: Well, there's a specific OWASP Top 10. It's the OWASP Top 10 for LLMs, for large language models. I think that's outstanding. It's a really good document and it helped educate me on AI security. So the OWASP Top 10 in general, not so much, but the actual OWASP Top 10 for LLM Applications, version 1.0.1, came out August 26th, a little more than a month ago. It's
Jim Manico: Very well done. This is written by Steve Wilson, Virtual Steve on Twitter. Great job, Virtual Steve, you rock.
These are things like prompt injection, insecure output handling, training data poisoning, model denial of service, supply chain vulnerabilities, sensitive information disclosure, insecure plugin design, excessive agency, overreliance, and model theft.
This is a great place to start when it comes to developing secure AI systems. Good job, Steve. Good job. Virtual Steve.
I'm going to follow him on Twitter right now.
Host: Yeah, so what we'll do is, when we publish this episode, we'll tag him and also the OWASP Top 10 for LLMs so that our audience can benefit from it.
And one of the things that you highlighted earlier was the wrapper, right? On top of let's say you are using an open source library.
That's a very good software engineering practice in general. It doesn't have to be only for security, right? Even for software engineering, it's a good recommendation.
Jim Manico: So that has saved me so many times. Um, I did it at the UI level when I was doing Java client-side development for the original desktop Java apps. I was using a tree structure, a really complicated UI widget, and I just wrapped the whole thing. It took me a lot of time, but later I literally swapped it out with a new widget, it just worked, and it was like freaking magic.
So I love wrapping third-party stuff.
Host: Yeah, absolutely. Makes sense. So, so far we have discussed that code is getting generated from, let's say, ChatGPT or Bard or something like that, you're using it, and there are pros and cons to it.
What do you think about using generative AI not just for the code, but for building better software architecture, like secure architecture? What's your take on that?
Jim Manico: Give me an example question that you would ask AI. Secure architecture is such a loaded topic. Give me a more specific example.
Host: Sure. Let's say you are creating an application that stores data in an S3 bucket as a temporary placeholder to, let's say, generate thumbnails or something like that.
How do I build a secure architecture there? I'm trying to deploy it as a lambda, let's say.
Jim Manico: So: build me a secure architecture for an AWS app that uses an S3 bucket to save uploaded files, right? Build me a very super secure architecture for an AWS app. And I'm using ChatGPT 4 with PDF readers and web crawlers. So let's see.
So they're saying: ah, designing a super secure architecture for an AWS app with S3 buckets. Here we go. Front end: use React, Angular, or Vue.
Those are my three choices for user interface frameworks today.
Backend: RESTful API built with Java, Python, or Node. Database: AWS RDS with MySQL, Postgres, or Aurora. File storage: S3. And they're giving these architectural notes, things like VPC isolation, security groups, NACLs, and VPN for the network.
Great. S3 bucket: enable server-side encryption in the S3 bucket with the AWS KMS key system, enforce HTTPS everywhere using bucket policies, and use IAM roles and policies to grant least-privilege access. That's from a very high level.
This is a great place to start. And they're giving me everything, like backups and CI/CD. And then I can go in and say, hey, expand, if I don't understand something.
So: expand in super detail on the VPC isolation stuff. See, I always talk about mutual TLS, and some people tell me that VPC isolation can be used in place of mutual TLS.
I kind of debate that. So now I'm getting VPC creation steps for AWS. I've got to vet a lot of this; if it's not in my expertise, I've got to vet it. But this is pretty outstanding, right?
All that it's spinning out in just a matter of a few minutes is a good starting point to map out the other things I need to research in a more detailed way. So I'll say yes.
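One concrete piece of the "HTTPS everywhere using bucket policies" advice is the standard `aws:SecureTransport` deny statement. Here's a sketch that builds such a policy; the bucket name is a placeholder, and attaching it (via the console, CLI, or IaC) is left out:

```python
import json

BUCKET = "example-thumbnail-bucket"  # placeholder name

# Deny any S3 request to the bucket that is not made over TLS;
# this is the standard aws:SecureTransport condition pattern.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyInsecureTransport",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": [
                f"arn:aws:s3:::{BUCKET}",
                f"arn:aws:s3:::{BUCKET}/*",
            ],
            "Condition": {"Bool": {"aws:SecureTransport": "false"}},
        }
    ],
}

print(json.dumps(policy, indent=2))
```

Because it's a Deny with `Principal: "*"`, it overrides any Allow, so even a role with broad S3 permissions can't touch the bucket over plain HTTP.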
Host: Okay, so it makes sense that you feed very detailed requirements into ChatGPT or Bard, you get the output, you use that as a starting point, and then you dig deeper as you work in individual areas, right?
Yeah, and I really liked how you stressed "very secure" in the prompt, right?
Jim Manico: Yeah, exactly. Yeah, it's better than secure. Very secure.
And I can even try other descriptive words. I can even say: hey, give me the same architecture, ultra secure, but lower the cyclomatic complexity, make it more simple. Any kind of "make it more simple" and they'll give me an even more straightforward path.
We got EC2 instances. Yeah. So yeah, it's a good starting place, but again, it doesn't replace individual experts who have experience in the field using this tech.
So I can build the initial build out of this and then bring AWS architects to vet what I'm doing, and see what I'm missing. You know?
Host: Mm-hmm. Yeah, that makes sense. So let's say I got the recommendation. I started writing code now. So let's go into some of the secure coding practices, right? Even from let's say OWASP recommendation.
So one of the key ones is input sanitization or validation. Let's say you have an input for users, a username and password the user is inputting on the screen. What's the… Why should I care? Users will always enter valid values, right?
Jim Manico: No. Well, with input validation, keep in mind that even valid data can cause injection. Input validation sometimes protects your app; sometimes it's just hygiene. Because I can have an email address that has a SQL injection payload, and it's still dangerous. So I wanna validate data, that's good, and restrict data to the least amount of characters and patterns possible. Allowlist validation: here's what good data is, reject everything else.
But if I'm gonna use that data in a parser, like a database SQL parser, I'm gonna use a parameterized query. If I'm gonna use that data in, say, an LDAP command, I'm gonna strictly validate that data to not include certain LDAP-dangerous characters.
Or if I'm gonna use that data in a webpage, I'm gonna either validate, sanitize, or escape the data based on what I need to do with it in a webpage.
So validation is not security; it's your hygiene layer. Security happens when you use the data safely. So even if that data has an attack in it, it's not gonna hurt the use of that data.
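Jim's email-with-a-payload point, shown with Python's built-in sqlite3 driver: the parameterized query treats the hostile string as data, never as SQL.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (email TEXT)")
conn.execute("INSERT INTO users VALUES ('alice@example.com')")

# Hostile input: passing any validator is not the point; the payload is.
attack = "x' OR '1'='1"

# Parameterized query: the driver binds the value, so the payload is
# never interpreted as SQL. The injection attempt matches nothing.
rows = conn.execute(
    "SELECT email FROM users WHERE email = ?", (attack,)
).fetchall()
print(rows)  # []

# String concatenation (never do this) would have built the query
# "... WHERE email = 'x' OR '1'='1'" and matched every row.
```

The same input run through string concatenation returns the whole table, which is exactly the "use the data safely" distinction: the parameter binding, not the validation, is what neutralizes the attack.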
Host: OK, that's a very good way of explaining it. You have to be very careful about SQL injection, so you should keep parameterization and the other defenses in mind.
So a follow-up question to that is: what mistakes have you seen developers make when they are, let's say, implementing these validations? Or, let's say, if they are storing any sensitive data,
like they capture credit card information, or SSN information, or PII. What mistakes do developers make even today?
Jim Manico: So, is validation really the problem?
The problem's typically not validation. The problem is they're not parameterizing their query, or they're storing sensitive data that they don't need to be storing, or they're not encrypting things like a bank account in the database.
That's ultra-sensitive data. Or they're not even classifying their data. Or they're not using HTTPS, or they have a weak password policy or weak access control. I look at the ASVS standard, the Application Security Verification Standard.
There are like 300 requirements around secure development there. A lot can go wrong. You've got a misconfigured XML parser, or you're using a very old library to parse JSON that has deserialization problems. The list goes on and on, about 300 requirements in the ASVS standard.
Host: OK. Taking it to the next level, which is, let's say you have done the input validation and all, but there can always be some man-in-the-middle attack or somebody trying to replay a request or something like that.
How do you handle those? Like, do security headers play a role in web application security or content security policies for a web application? What's your thought on those ideas?
Jim Manico: Yeah, I definitely like implementing Content Security Policy. I like a nonce-based policy, so I specifically nonce each script that's okay, and everything else is not going to run. I like XSS defense.
I still want you to write code securely: use Angular or React, understand where the frameworks aren't going to help you and where they do help you, and learn how to use React and Angular securely.
And on top of that, I love Content Security Policy. It's a way to stop cross-site scripting in most of the browsers we use today in a really powerful way. So I love these technologies. I tend to use a nonce-based Content Security Policy with a strict-dynamic directive to make sure the dependencies load automatically. It's a nice way to use CSP. I love it.
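A minimal sketch of the nonce-plus-strict-dynamic policy Jim describes, built server-side in Python. Framework integration is left out; in a real app the header would be attached to each response and the nonce stamped onto every script tag:

```python
import secrets

def csp_header() -> tuple:
    """Build a nonce-based Content-Security-Policy header value.

    'strict-dynamic' lets scripts loaded by a nonced script run too,
    which is how transitively loaded dependencies keep working.
    """
    nonce = secrets.token_urlsafe(16)  # fresh nonce per response
    policy = (
        f"script-src 'nonce-{nonce}' 'strict-dynamic'; "
        "object-src 'none'; base-uri 'none'"
    )
    return nonce, policy

nonce, policy = csp_header()
# The same nonce must appear on every <script> tag the page emits:
script_tag = f'<script nonce="{nonce}" src="/app.js"></script>'
print(policy)
```

Any injected `<script>` an attacker manages to smuggle into the page lacks the per-response nonce, so the browser refuses to run it.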
Host: And some of the frameworks, like React or Ruby on Rails, they come with nonce-based protections built in already, right?
Do developers have to explicitly do that, you feel? Or that's part of the framework? Maybe they just need to enable a flag or add a plug-in or something like that.
Jim Manico: Um, well, React does natively. You can enable CSP in React in a couple of different ways. Yeah, you can use Content Security Policy with React.
There are a couple of guides out there that explain how to do it, but you also should be using React securely, right? If you're going to use the dangerouslySetInnerHTML function of React, or bypassSecurityTrustHtml in Angular, you're disabling the framework's security.
And then you should use an HTML sanitizer. In Angular, there's a built-in sanitizer; in React, you want to use dangerouslySetInnerHTML together with a library called DOMPurify. So I would use a combination of enabling CSP, but also use these frameworks securely in the first place.
And be careful of the times when you disable a framework security feature, like disabling escaping.
Angular has a little bit better security baked in, but it's really strict how you need to use it. So use CSP, integrate it with your framework, and use your framework in a secure fashion. These are topics I teach in my own security course.
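DOMPurify lives in the JavaScript world, but the underlying rule, escape or sanitize untrusted input before it reaches HTML, is the same everywhere. A minimal server-side sketch with Python's stdlib escaping (a sanitizer is the tool to reach for only when you must allow some user-supplied markup):

```python
import html

def render_comment(untrusted: str) -> str:
    """Escape user input for the HTML-text context before rendering.

    Escaping neutralizes all markup. If you need to allow a safe
    subset of HTML instead, use a sanitizer library, not escaping.
    """
    return f"<p>{html.escape(untrusted)}</p>"

payload = '<img src=x onerror="alert(1)">'
print(render_comment(payload))
# <p>&lt;img src=x onerror=&quot;alert(1)&quot;&gt;</p>
```

The onerror payload arrives in the page as inert text, which is the same outcome React's default escaping gives you until you opt out with dangerouslySetInnerHTML.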
Host: Okay, yeah, and I like how you stressed that you'd use CSPs but at the same time use React in a secure way, right? Go through the security basics and don't get them wrong, no matter which framework you pick. You need to do that exercise with your team before you jump into writing code in a framework.
Jim Manico: Absolutely.
Host: One last question that I have is often, when it comes to application development, there is, let's say, a sign-in, log-in capability, right? And often, organizations try to incorporate two-factor authentication or multi-factor authentication.
What are some of the critical factors one should keep in mind when building that capability?
Jim Manico: You know, the NIST standard for identity, Special Publication 800-63, says you really want to discourage SMS. It's the least secure multifactor, but it's better than no multifactor at all.
So NIST says, if you have sensitive data, go ahead and implement SMS, but make that a secondary choice, and make a more secure option the first choice, like app-based multifactor with some kind of mobile app, or something other than SMS that's stronger.
And at level three, when you have like infrastructure-level security, they recommend a hardware token, like a YubiKey or similar. So my advice there is: use the right kind of multifactor for the level of risk you can tolerate with the kind of app you're building. If it's sensitive data, I'd offer SMS plus a mobile authenticator option for your users.
If it's the critical infrastructure of your company, I'd be using YubiKeys as the main authentication factor.
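For listeners curious what the app-based multifactor Jim prefers is doing under the hood, here is a minimal TOTP sketch (RFC 6238, the algorithm behind most authenticator apps), checked against the RFC's published test vectors:

```python
import hmac
import hashlib
import struct

def hotp(key: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HOTP: HMAC-SHA1 over the counter, dynamically truncated."""
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(key: bytes, unix_time: int, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 TOTP: HOTP with the 30-second time window as the counter."""
    return hotp(key, unix_time // step, digits)

# RFC 6238 test vector: ASCII key, T=59 seconds, 8 digits -> 94287082
key = b"12345678901234567890"
print(totp(key, 59, digits=8))  # 94287082
```

Because the code is derived from a shared secret plus the current time window, it can't be predicted or replayed later, which is what makes it stronger than an SMS code in transit.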
Host: OK, that's great advice. Thank you for answering the security questions.
Thanks Jim for the insightful conversation. Here are a few important points which stood out for me:
- When it comes to Code generation using Gen AI tools, Trust but Verify. Always run those through your DevSecOps pipelines for Static & Dynamic Scans.
- During Prompt engineering, stay away from feeding sensitive information and ask for Low Cyclomatic Complexity recommendations. It’s simpler and easier to maintain.
- On top of adding Security capabilities, when using any framework like React or Ruby on Rails, use them securely and apply application security best practices on top of it. Like OWASP Top 10 recommendations.
Now we jump into the next section, which is around rating security practices.
Rating Security Practices
So the way it works is I'll share a practice, and you need to rate from 1 to 5, 1 being the worst and 5 being the best. So let's start with the first one.
So provide training and awareness programs to employees to help them identify and respond to potential security threats.
Jim Manico: My company is named Manicode Security and I focus on secure coding training, so I'd pick that one to self-serve my own interests, right? So yeah, hire me to teach your developers how to write secure code. That's the most important thing in my life. Now, I'm not sure what's most important for your program, but that's what I do for a living and it's something I really enjoy. Again, go to manicode.com, that's M-A-N-I-C-O-D-E dot com. That's what I do for a living: I teach developers to write secure code.
Jim Manico: That's number one.
Host: Awesome. Yeah. So the next one is: DevOps practices are needed to move fast and to deploy code to production, and security is sometimes seen as not the most important thing. What's your rating on that philosophy?
Jim Manico: I think the ability to push live fast lets you reduce the exposure windows of bugs. If I push out a bug and my post-deployment scanner says there's a bug live, well, I can fix it just as fast. So as long as I have a disciplined DevOps lifecycle, where I have a relatively clean code base that's already scanning clean, where I only stop the developer from merging if we pick up a new bug that they just wrote, and I have the full
Jim Manico: CI, infrastructure building, infrastructure scanners, code scanners, all in line in a mature DevOps pipeline, then going live fast, I think, for a lot of code, is a good idea. For some code, like my authentication code or a credit card repository, I may require manual review. But for a lot of standard enterprise code, I think going live fast, it helps you compete in the market and lets you fix bugs as fast as you can write them to reduce exposure windows. I like it.
Host: Mm-hmm. Okay, makes sense. Let's go to the last one, which is continuous integration is a must for DevOps practices. Security architecture review should be part of it. What's your take on that?
Jim Manico: Yeah. Well, you can't do CD without CI. CI is the medicine that's needed to do CD. So I need to have good continuous integration: security testing, code functionality testing, Selenium, unit tests around my most sensitive features, a really robust automated security and functionality testing framework. That's the price you pay. And then infrastructure construction, Docker builds, Docker security scanning, all that stuff.
Host: Mm-hmm.
Jim Manico: Hot-swap live deployment. And that's the medicine that allows you to do CD in a powerful way. I like it, but you wanna do CI with all this, and it's non-trivial to set up. One of the projects I advise is called DefectDojo, right? D-E-F-E-C-T-D-O-J-O. It's an open source DevOps pipeline tool, and it's not a bad way to get started. It integrates with GitHub and everything.
Host: So it's funny that you mentioned Defect Dojo. Matt from Defect Dojo was in our podcast two weeks ago. We'll be publishing that episode in two weeks. Yeah, I totally agree. Folks should check it out.
Jim Manico: Oh, nice. I'm on his advisory board, and I'm on his board because I believe in what he's doing. He's awesome. So defectdojo.com is a good starting place for DevOps.
Host: Yeah, absolutely. And that's a great way to end the episode. But before we end the recording, I have one last question: do you have one recommended reading, like a blog or a book or a podcast, which you would like to share with our audience?
Jim Manico: Oh, okay, some kind of blog or some kind of book. I'm going to do a little search on this. You know what I do? I go and set up a bunch of Google Alerts on all the security topics I care about.
And I get a curated list of articles every day from all kinds of different sources. So I recommend that. Okay, let me sort by newest arrivals here.
So I'm looking at the latest books to show up. Secure Coding for Software Engineers by James Ma just came out in September, like a few days ago. What's this? It's a guide to building resilient and trusted software systems over the web, just out from James Ma about a week ago. And Michael Murray, I see, Secure Python Coding Fundamentals.
So I went to Amazon, searched on secure coding, and sorted by the newest arrivals: ASP.NET 4.5, network programming with Go, Secure Code Warrior, secure and quality Java coding. There's a lot of interesting books that were released just in the last couple of years. I'd look at some of the more recent ones: James Ma, Michael Murray, and Alice and Bob Learn Application Security from Tanya Janca. Yeah, that interests me.
But I really liked this book from Ma that just came out. And if you're a Python developer, I would totally get Michael Murray's book that was released just a couple of months ago. He's been around the secure coding world for a long time. That's secure Python coding fundamentals. It looks really interesting.
Host: Love it. So what we'll do is, when we publish the episode, we'll tag all of these names so that our audience can benefit from it. And that's a great way to end the episode. Thank you so much, Jim, for joining us. It was a fun discussion.
Jim Manico: It's my pleasure. Thank you so much for having me on the show. It's nice to meet you and nice to be on your show. Thank you so much.
Host: Absolutely. And to our viewers, thank you for watching. Hope you have learned something new. If you have any questions about security, share those at scaletozero.com, and we'll get those answered by an expert in the security space. See you in the next episode. Thank you.