Dev Interrupted

How Marketing Ruined Shift Left | Semgrep’s Tanya Janca

LinearB Season 5 Episode 15

When it comes to securing software, most developers feel like they're playing catch-up instead of setting the rules.

Tanya Janca (SheHacksPurple), author of "Alice and Bob Learn Secure Coding," brings her 28 years of IT and security expertise—spanning counter-terrorism to enterprise training—to Dev Interrupted. She unpacks the common pitfalls teams face when security is treated as an afterthought, highlighting the developer frustration of being held accountable for security without the tools or knowledge needed to succeed.

Explore how transforming security from a final gate into an ongoing practice saves money, reduces conflict, and builds better software through clear requirements and true developer empowerment. Tanya provides concrete advice for developers and leaders on creating internal knowledge libraries, fostering continuous learning habits, and critically evaluating AI-generated code to ensure it meets security standards. 

Speaking of AI's growing role, we're curious how it's reshaping workflows across the industry. Share your own experiences with AI adoption by taking our quick survey to discover your spot on the adoption graph (and what you can do to level up).


Andrew Zigler:

Welcome to Dev Interrupted. I'm your host, Andrew Zigler.

Ben Lloyd Pearson:

And I'm your host Ben Lloyd Pearson.

Andrew Zigler:

This week we're talking about Claude's new learning mode, Microsoft's original source code, and Shopify's controversial AI memo. Ben, what catches your attention?

Ben Lloyd Pearson:

Well, I've already had some conversations on social about this one, so I want to talk about this Shopify memo, 'cause I think it actually is a pretty good story. So, tell us about it.

Andrew Zigler:

So the CEO of Shopify released an internal memo saying that staffers need to prove that jobs can't be done by AI before asking for more headcount or more people to join their team. This obviously drew a variety of reactions from folks within Shopify, many of whom have taken online to talk about it. And it feels like the whole tech industry is kind of spectating and commenting on this mandate that's happened within Shopify. My take on it, just looking at it: it's a good idea, maybe taken to a bit of a functional extreme. How do you prove that a job can't be done by AI? And if a job exists right now, but AI can do it in six months, what then? How do you really justify the longevity of a headcount in that kind of space? And more importantly, after reading the memo, it sparked a little bit of curiosity in me. I'm kind of wondering, what are the workflows at Shopify now that are performing this kind of AI-powered work? What kind of impact is that driving for them? I'd be curious to know.

Ben Lloyd Pearson:

Yeah, absolutely. And you know, I actually kind of agree with Tobias Lutke, the CEO of Shopify, in his general sentiment that AI really is fundamentally disrupting knowledge work. And, you know, like many skills, I think this is a muscle: you need to develop it over time. You really need to work with it regularly in your workflow so that you normalize and habitualize the practice. And the harsh truth is that if you're not doing that today, then you're risking being replaced by others who are. I'm not even just talking about your job; your competition is thinking about these types of things, and they may already be taking action on it. But I also wanna point out that this kind of highlights the disconnect between executives and individual contributors that we talked about last week. How do you prove a negative? How do you prove that AI can't do the job? Like Lutke mentioned, they gave their employees a chatbot, and we've discussed on multiple occasions how chat really is not the best interface for these AI tools.

Andrew Zigler:

Yeah.

Ben Lloyd Pearson:

And if that's all you're doing, then that's not enough. You need to be giving your employees a lot more support, a lot more tooling than that. And I'm not saying that Shopify is not, 'cause from what I'm seeing from them, I actually think that they have done a pretty good job incorporating AI into their workflows. But it's gotta be more than a chatbot.

Andrew Zigler:

Yeah, I see we're speculating about it in the same way. So, you know, we'd love to have someone from Shopify maybe in the future come on and tell us about how AI is transforming them. I think there's a story there for sure that we could all possibly learn from.

Ben Lloyd Pearson:

Yeah. And I wanna take a moment just real quick to plug a personality quiz we published around AI collaboration. So if you've ever wondered how effective you've been at adopting AI and using the new generation of tools, we have this really cool quiz that we'll put in the show notes that you can take. It takes just a couple of minutes to find out. But we've already started to find some really interesting things from some of our early respondents on this. Specifically, 79% of our respondents so far are still in the AI newbie category. So this is a category of people

Andrew Zigler:

Huh.

Ben Lloyd Pearson:

who have barely gotten AI into their workflows, and I think this is the norm. So don't feel like you're completely left behind if you haven't adopted AI. It's kind of interesting to see some of the adoption patterns that are emerging from this as well. The people who are being successful are finding some repeatable generation-type activities, like generating code, generating docs, generating tests. They're getting success with those before trying to move on to other things within the software delivery lifecycle. 'Cause to be frank, we've seen quite a few categories within this quiz where nobody, or practically nobody, is using AI yet. Even in collaboration, AI just hasn't shown up in those parts of their workflow yet. So if you wanna know where you stand on this journey, head over to the show notes, take that quiz. It's really fun. It can teach you something about yourself and help you understand where you are. So what's our next story, Andrew?

Andrew Zigler:

Oh, well the next one I'm really excited to talk about. This is something that's really near and dear to me, and it's about how Anthropic is kind of turning the table on how students engage with AI, and how it can unlock opportunities for learning. So Claude just rolled out a new learning mode that puts students in the position of doing the hard thinking. This goes back to how education has always worked: it's always best if you get that personalized, hands-on attention that's tailored for you and what you are interested in. That's always going to do better and engage a student more. So that really highlights this opportunity that's sitting right in front of us with AI. And you know, as a former classroom teacher, I actually see a lot of potential, and I'm really thrilled that major players in the AI space are taking this seriously and thinking about how this can transform education for our future. I think every student would benefit from it. And it even goes back to a talk that I heard last week. This was at Atlassian Team, which we're gonna talk about a little more later. I listened to a talk by Sal Khan of Khan Academy and Ben Gomes, the SVP of Learning and Education at Google. These are two great minds that have built education giants that help prepare people for the world, educate them, and equip them with real skills. And they're seeing the value in using AI exactly like this to build products and educational experiences that speak to the student on their level. You know, if you have a student and you ask them, oh, you want to be a doctor? Here's why you should still care about literature. A great teacher can make those connections for the student and help them have a well-rounded education. Or, you want to be a Formula One driver?
Here's why you should really care about physics. What better way to engage a student and make them really care about the subject matter.

Ben Lloyd Pearson:

Yeah, and I love how this story sort of flips the script on how I think a lot of people have normalized AI. Today, when AI is used, a lot of the time it just reinforces whatever you feed into it. Whatever you tell it to tell you, it will tell you. And they sort of flip the script and use it to challenge you now, which I think is a really critical way to use AI.

Andrew Zigler:

Right.

Ben Lloyd Pearson:

Like you mention, this is really cool, 'cause the geek in me loves this. It brings us one step closer to the education system that they have in the book Ender's Game, where

Andrew Zigler:

Yes, yes,

Ben Lloyd Pearson:

with AI that adapts to them and teaches them what they need to know individually. That's brilliant to me. I love that.

Andrew Zigler:

yes. I thought the same exact thing. Actually, just having that personalized attention, that personalized tutor, is gonna give you so much of a leg up in life.

Ben Lloyd Pearson:

Yeah. And I feel like really what this is showing us is we've only scratched the surface on how AI is gonna change the world around us. There's been a lot of disruption from AI. We've seen this with how it's disrupting a lot of creative fields. But what happens when those people start getting access to this stuff that just revolutionizes how they can work? You know, an education system where everyone's got a tutor in their pocket. That's an extraordinary thing to think about. I think there's a lot of applications that we're gonna see emerge, like therapy, personal health, nutrition. I think really the thing to take away from this is that we hear a lot about how junior devs are maybe in a tough spot, because AI is really more of a force multiplier for people who are experts, for people who have been around for a while. And if a junior dev is just sort of blindly using AI and not using it to challenge themselves, then there's a real risk that they're going to just not learn the right skills, the right knowledge, to level up to that senior level. So yeah, I think it's just a great example: if you're using AI, especially if you're newer to the field that you're working in, challenge yourself with it. Don't just use it to move faster. Use it to get smarter as well.

Andrew Zigler:

Yeah. Ben, have you ever heard of like rubber duck programming?

Ben Lloyd Pearson:

Yeah, absolutely.

Andrew Zigler:

Yeah, I think that is the real practice here that developers should be bringing to their tools: asking those questions along the way, and just trying to reflect upon what they're putting into action and make sure that they understand what's going on.

Ben Lloyd Pearson:

Yeah. Well, now your rubber duck speaks back to you with whatever

Andrew Zigler:

Right.

Ben Lloyd Pearson:

you tell it to use. All right, cool. So what's our, what's our next story?

Andrew Zigler:

Oh yeah. So this is a fun one. Comes right from Bill Gates' blog. It's a little bit of a deep dive into Microsoft's history, looking at the original source code for, you know, the very first Microsoft product to hit the market, written for the very first personal computer. This was actually a huge mathematical feat. What I found so fascinating about this article, which you should go read, is that it literally has the source code in the article. How cool is that? On a dot-matrix simulated printout. What this really reminded me of is how much of the foundation in our field, in technology and computer science, has been laid by all these mathematical geniuses before us, and how there were these insurmountable problems early on in computer engineering that were solved by brilliant minds applying math and physics, sometimes even theoretical ones, in order to basically trick a rock into thinking. What an incredible achievement. So this is a fun throwback to some of the ancient history around computing, personal computing. I really recommend folks check that one out.

Ben Lloyd Pearson:

Yeah. Tricking a rock into thinking is one of my favorite metaphors of all time for computing.

Andrew Zigler:

Yes.

Ben Lloyd Pearson:

Yeah, so I wanted to include this article just because it's such a cool piece of work. You know, there's ASCII art, there's dot-matrix printer designs, there's these cool animations, and of course there's some really great tech history. So you definitely need to go check it out, just to experience this really cool 50th-anniversary content from Microsoft. So Andrew, I heard you were on the road last week. How did that go?

Andrew Zigler:

Oh yes, Dev Interrupted is certainly going places. Last week I was at Atlassian Team as part of the press group there. I got to go behind the scenes on all of their major announcements. Dev Interrupted met with many leadership folks that were at Atlassian. They took really great care of our listeners, informing us about the big changes that are happening in the Atlassian ecosystem, including their release of Rovo. More importantly, what I learned while I was there is how quickly our field really is changing, but also how collaboratively everyone is coming together to discuss real-world solutions. And Atlassian is no stranger to anybody. This is a household name for all engineers that has a huge impact on the engineering ecosystem and how we build our tools. So for Dev Interrupted to have the chance to go behind the scenes, to understand how things are getting made and why those decisions are happening, was really transformative. And it really speaks to the conversations that we're having here on the show every single week and why they're so important.

Ben Lloyd Pearson:

Yeah. And I just wanna point out that we're working with more companies like Atlassian, like AWS, to get the word out about really cool engineering stuff that's happening out there. You know, if you're listening to this right now and you're like, I have a really cool engineering thing that I've done that I would love to share with the world, Dev Interrupted is here for you, either on this podcast or in our Substack newsletter or even on social media. So just reach out to us. But speaking of being on the road, I will also be on the road next month. If you're gonna be in Miami at the Code Remix Summit in early May, or at the Developer Week Leadership Summit in San Francisco at the end of May, hit me up. Let me know and let's meet up and chat. I always love meeting people from the community. And, you know, Dev Interrupted, we want to do as much of this as possible. So we're out there meeting new people, getting fresh ideas, getting new stories. So keep an eye out for us. We really want to use these as opportunities to engage with our community.

Andrew Zigler:

Oh yes, we're gonna be on the road, and conference season is just picking up. And you may remember our recent guest as well, Adnan Ijaz from AWS. He came on the podcast and talked with us about how Amazon is working with AI agents to transform workflows and applications. They sent us a special video message for you, the Dev Interrupted listeners, about changes coming to Amazon Q and opportunities for you to take advantage of in your own native language. So if you're curious about what that means, definitely be sure to check out Dev Interrupted in places like LinkedIn. You'll be sure not to miss it. Because if you're just listening to this podcast, you're only getting part of the story. We're going on the road to conferences, and we're posting lots of things on LinkedIn and covering all of these news topics on Substack as well. So, like Ben said, please join our conversation, become part of the Dev Interrupted network, and come meet us. We'd love to engage with you.

Ben Lloyd Pearson:

Yeah, I almost feel like this podcast is like a filtered version of all the things that we want to talk about. You know? We only

Andrew Zigler:

filtered or unfiltered. You know, they can decide.

Ben Lloyd Pearson:

Yeah. Maybe we just go back and forth between the two. I don't know. So tell us about our guests this week, Andrew.

Andrew Zigler:

Oh, I'm excited. We're bringing cybersecurity expert Tanya Janca on the pod. Tanya's work makes our world safer, cooler, and more purple.

Ben Lloyd Pearson:

Ready to move beyond Copilot? Join LinearB for a 35-minute workshop that explores how top engineering teams are transforming their workflows with agentic AI. We'll show you how to go from passive assistance to full AI orchestration, beyond the IDE and into real impact. Discover your place on the AI collaboration matrix. Uncover the next initiative that could change how your team works. Don't miss your chance to learn from the leaders at the forefront of AI maturity. The workshop takes place on May 14th and 15th. Reserve your spot and step into the future of engineering.

Andrew Zigler:

Today we're tackling one of the biggest gaps in software engineering, and that's that security isn't a product, it's a practice. Yet too many teams treat security as a box to check or a tool to adopt instead of a skillset to build. And when it comes to securing software, most developers feel like they're playing catch-up instead of setting the rules. Joining me today is Tanya Janca, AKA SheHacksPurple. She's the bestselling author of Alice and Bob Learn Application Security and, most recently, Alice and Bob Learn Secure Coding. Over her 28-year IT career, Tanya has won countless awards, including OWASP Lifetime Distinguished Member and Hacker of the Year. She's spoken at conferences all over the world. Before her tech career, Tanya was also a musician and performer, but she's really done it all in the world of cyber, including counter-terrorism and leading security for the 52nd Canadian general election. So we're really excited to have you today, Tanya. Welcome to the show.

Tanya Janca:

Oh my gosh. Thank you so much for having me, Andrew. I've been looking forward to this for months.

Andrew Zigler:

We've been looking forward to having you here, tapping into some of your wisdom. You know, on Dev Interrupted we talk a lot about the skills and the things that people need to be paying attention to right now in the world of tech, and nothing is more important than security. And I don't feel like security always gets as much attention and time from everyone as it needs. I'm sure you, as a security professional, a security expert, very much feel that same way. So I wanna start by kind of addressing a recent talk that you gave. This was about how shift left doesn't mean anything anymore. You know, what did shift left ever mean?

Tanya Janca:

So shift left was supposed to mean starting security earlier in the system development lifecycle. So when I was a dev, security was: I wanna go live on Thursday, so Tuesday I'm at, like, the CAB meeting, and then security says no. And I'm like, why? And they're like, you didn't do this thing we never told you you were supposed to do. I'm like, my deadline's Thursday, so I'm going live. So I'll just try to do that between Tuesday and Thursday, and you'll just get what I can do. They're like, that's not good enough. And then I would usually go to prod anyway. It was, I ran this stupid tool, 'cause the tools were really stupid then, like 2011 and 2012, 2013; they were not very mature. So I ran this tool and it found this one thing, and I'm like, I'll fix that one thing. And they would say that right when I wanted to go to prod, right? It was always at the end. So then they're like, oh, we hired a pen tester, and this person's gonna come in, your baby's super ugly, and then we're gonna tell you you're a crappy dev, you did a bad job. And it's like, well, how was I supposed to do a good job? And so when you look at the SDLC, like on a piece of paper, so assuming waterfall, or water-fail, depending upon how you pronounce it, so not the eternity symbol for DevOps, but if you write it from left to right, like anglophones and francophones, et cetera, right? Coding comes before release; coding's after, let's say, requirements gathering. So the further left on the page you are, the earlier you are. So some marketing person, or some person, thought it would be smart to come up with shift left, push left, which I think is stupid. I think it should have been, let's start security earlier. But that didn't catch on. So they came up with this idea of shifting left, and lots of marketing people were like, yeah, let's use that. But what they used it for was: if you buy our product, you have shifted left.
If you stick our product in your CI, the devs will magically fix everything and you'll totally be secure, and you don't have to do any other security. Don't worry, no effort required. Not true. And so as a result, a lot of security teams have been very frustrated with tools they bought, because they're like, I was told I could just press three buttons and life would be grand. And it turns out security's a lot harder than that. And then they're like, oh, and now we have a backlog with 40,000 random things that it found, and no one has time to fix all of it. So it's been ruined to not mean anything anymore. But we did shift left as an industry. There are very few companies now that are just pen testing at the end. It is much, much more common to actually give developers some security tools, actually have some sort of security requirements, have some sort of architecture review. Those things are happening now, like they did not when I was a dev. 'Cause I've been doing security, so I did security for a year and a half between 2007 and 2008. Then I switched back to deving, because counter-terrorism, it's really scary, and I was like, I don't want a job that gives me nightmares all night. And so then I switched back again in like 2014 full-time, and I think 2013 I was switching over. It's just improved a lot. It did help. But we're not done, Andrew.

Andrew Zigler:

Yeah, and what you're describing, the initial way that you encountered security, you go through security review like an obstacle at the end, when you felt you were done, right? Well, I'm gonna ship to prod, I'm gonna push it. I think it's good to go, my team thinks it's good to go. And then the security team feels like a blocker at that point. So it sets up this antagonistic relationship between the developers and the security team. And if the developer doesn't prioritize that, then they see it as this extraneous step as opposed to something that's core to building good, usable software. So what you're highlighting is really interesting to me, how it becomes like a marketing hype problem, a marketing situation where they take this concept and they move it. So when you're talking about how companies now do security practices earlier, how do they actually move it earlier in the process without falling victim to hype?

Tanya Janca:

Oh, such a good question, Andrew. So I wrote two books essentially about this, right? Because I'm very, very excited about this, 'cause when I was the dev lead, it was just so abrasive and so crappy, such a crappy situation for all of us. So the way we can start earlier: first of all, if we start earlier, it's better. We will save money. We will have less conflict. We will build better software, period, hands down. We really will. And so what I like to do, so I meet with companies, and I have been doing this, I guess, since late 2018 through this company called IANS Research, and sometimes I do it just on the side, but I meet with companies and we look at their program. And I'm like, so what are the strengths of your team? So if you have someone that's awesome at threat modeling, it makes sense that maybe you wanna add that to your program. But if you have no one with any of those skills, that might be my last choice. Does that make sense? And so it's like, what are our strengths, and what do we think we can support? And I try to start with the easiest things first. So as an example, let's say you're building web apps and APIs, and then one team's doing WebSockets. Cool. Okay. So from now on, I'm gonna make a list of security requirements for every new project that does those things. So you're gonna build an API? Cool. We use this API gateway. These are the settings we expect. This is where you can find a document that will show you all those things. We expect this, we expect that, and it's super clear from the beginning. So they design it in from the beginning, and they know how much work they need to do, instead of us springing surprise work on them later, which no one likes.

Andrew Zigler:

Right.

Tanya Janca:

Having a list of requirements and concise, easy-to-understand technical advice. So, you're gonna do a WebSocket? Cool. Here's some advice that we need you to follow when you build or maintain a WebSocket for us. Starting with requirements, to me, is key, because with those requirements you can say: we expect you to scan it with these two tools and then remediate anything that's high or medium before you go to prod. You can scan it in your IDE, you can scan it at the CLI, you can scan it when you check your code in, you can scan it in the CI, whatever it is that you want. But when it gets to me, it better have passed those things, or I'm gonna embarrass you and be like, your baby's not that pretty, right? So then you've given them control, because I am a control freak, I really am. And I hated that I was not allowed to use the testing tool. I was like, how am I supposed to pass if you haven't let me see the tool? I need to run it myself, fix everything, then you may see it. And they thought I was insane. They're like, why would we let you touch our tools? I'm like, they're our tools. We're one team, dude. We are not enemies. We are on the same team.

Andrew Zigler:

Right, we're working towards the same end result. It's smart how you call out acknowledging what your team can do, what you have the skillset for, and what you have the bandwidth for. It goes back to a tool that maybe leaves you with a big backlog, a pile of stuff to do, instead of addressing things earlier in the process. That sounds to me like when you try to start a habit, and your habit is to just put everything in a bucket or in a list to deal with later. How are you actually gonna build a skill if you're not doing it every day, if you're not integrating it into your life? If you wanna work out, if you want to get into shape, you need to do your little bit every day, or as often as you can. You can't do it all at the end, or be like, oh, I'm going to do it later, it's gonna go in my backlog, right? So you're really calling out how it's about acknowledging and accepting where your team's at and what your skills are. It sounds like what you're keying us in on is almost like building a little internal security library. Like, we use WebSockets; these are the basic security things that we expect when people build WebSockets. And then that's where shift left starts to actually happen within your organization, 'cause now your product managers, your team leads, they can design that into what they're building upfront, right?

Tanya Janca:

And if you're really lucky, I've done this with companies before, like where I worked full-time, it's like, here's our secure coding guideline. You're gonna build a web app? I'm especially thinking of Java, 'cause I worked at this one place where we had 2000 Java apps.

Andrew Zigler:

Oh my gosh.

Tanya Janca:

It's, here's, you know, number one, we wanna validate all the inputs. And so then you could click that link, and me and the devs had made a wiki page with all different examples in Java, like, this is how you validate a phone number, this is how you validate this. It's just like, reuse the code. Do not write your own. This has been tested. Just use this. And we tried to do that for all of the examples we could. And so we made it really easy. It's like, I'm designing a web app, you go here; I'm designing an API, go there. And I find, so you know the expression, you get more bees with honey? I find you get more devs with concise, short, actionable advice instead of vague crap like I sometimes see: session IDs should be ephemeral. I don't know how to code that. That means short-lived. So the first time I saw that, which was from a security team, I had to look up what ephemeral meant. And I'm like, short-lived? I can't code short-lived. What if I think short-lived is 20 days and you think short-lived is 20 minutes, right?
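The kind of reusable, pre-tested validator Tanya describes keeping on an internal wiki might look something like this minimal Java sketch. The class name and the phone format (North American 555-123-4567 style) are assumptions for illustration, not her actual guideline:

```java
import java.util.regex.Pattern;

// Hypothetical wiki snippet: a shared, pre-tested input validator that
// developers copy instead of writing their own regex from scratch.
public final class PhoneValidator {

    // Exactly 3 digits, dash, 3 digits, dash, 4 digits. matches() already
    // requires the whole string to match; the ^ and $ anchors just make the
    // allow-list intent explicit if someone later switches to find().
    private static final Pattern NANP = Pattern.compile("^\\d{3}-\\d{3}-\\d{4}$");

    public static boolean isValidPhone(String input) {
        // Reject null up front rather than risk a NullPointerException later.
        return input != null && NANP.matcher(input).matches();
    }
}
```

The point of the wiki page is less the regex itself than the habit: one vetted implementation per validation task, linked from the guideline, so every team validates inputs the same way.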

Andrew Zigler:

Totally.

Tanya Janca:

I need you to be specific. And they're like, well, we don't know how long. And I'm like, well, then I guess this requirement doesn't exist. Go away. Right? So whenever someone's like, we're gonna write a policy and it's gonna be like 400 pages, I'm like, cool, no one's gonna read that. I'm gonna write a summary that's half a page and be like, please, please read this half page. And then I would hold little workshops where I'd have as many people as I could convince come, and then I would teach them the thing. So I'm like, okay, this is why APIs need protection, and here are the things we want you to do and why we want you to do them. This is how you do this, this is how you do that. And so that's actually how I got into training. I just kept doing talks at work all the time, and then I got hired to be the trainer at work. And then other people were like, you should come train where we work. And I was like, do you have money? I like money. That sounds so appealing. And when I joined Semgrep, I assumed all of that would stop, but people still call me, and I'm like, this is so great. So if you are listening, first of all, shameless self-promotion: if you buy my book, you can take that information and turn it into guidance for your office. That's why I wrote it. I have this whole section on Java, a whole section on Python, et cetera. So take that, make a guideline or make a standard, and then show them how. Train them. Be like, this is a good example, this is a bad example, this is why this is bad, this is why this is good. Here's a cheat sheet on how to do this. I tried to make the book as easy as possible. It's twice as long as my first book, and there's only so much my publisher will tolerate from me; I couldn't just go on forever. But I'm doing the best I can.
I made it specific, and at the end of the chapter I'm like, turn one of these into a secure coding guideline where you work. You have my permission, literally in writing, to steal my work and use

Andrew Zigler:

Steal it, take it, use it, remix it, please apply it. Don't just read it.

Tanya Janca:

Yeah.

Andrew Zigler:

The key ingredient here is training. And that's, like, the secret ingredient, I think, to having a really successful team with strong security practices: you have to have an active training process. You have to have an active conversation at all times about how to apply these things. And you gotta have good examples. Going back a little bit to your experience training and teaching: you know, you've built an entire program, an entire platform, to help people build better, more secure software. And I'm wondering, along the way, what do you think is the biggest misconception that developers always have about security?

Tanya Janca:

So sometimes I run into developers who are like, it's not really a big deal, it's not as bad as you say. And then I'll show them some exploits, and then I almost always bring them to the side of, gosh, it turns out security is important. I just need to have them give me that chance. I find generally, and I hope this does not sound awful, no one write me hate mail, but I find newer developers, so not necessarily younger, but people who have become a developer more recently, tend to prioritize security a bit more and take it a bit more seriously. Whereas ones that have been doing it like 20, 25 years are like, you're overreacting, because they're thinking of the whole 25 year career they've had, and they're like, what, we've had two breaches in 25 years, we're doing fine. It's like, yeah, but when were those? This year and last year? Because attacks are just happening so much more often now, and the attacks that are happening do so much more damage, and so as a result, some of them aren't taking it seriously, but I'd say most of them are now. I would also say the other problem is that they're like, listen, I have a deadline Friday, and the boss was like, this feature goes out Friday or we all die. So I have to do that and I'll totally do security next week. But then their manager gives them another drop dead deadline,

Andrew Zigler:

Right.

Tanya Janca:

another one and another one, and the dev's sort of like, here's a rock, here's a hard place, and there's the dev getting squished. And that's not their fault. So it's really hard if that's what your manager's doing to you, because as a person that isn't the boss, you can't be like, listen, your priorities are all wrong.

Andrew Zigler:

Well,

Tanya Janca:

is gonna

Andrew Zigler:

well, maybe I, I'll, I'll challenge you a bit on that. Maybe, maybe, let's say you're in a development team that you don't think prioritizes security as much as it should. What are maybe some tactics that you've seen a successful developer use to take that back to their team, their manager, their leadership, and be like, we have to take this more seriously.

Tanya Janca:

Okay, so this is what I did when I was a dev, because I was really concerned, and this is probably how I ended up on the security team. So I would ask questions. I'm like, what if this happens? Then everyone would be like, you're overreacting, Tanya. Yeah. So when I was doing the top secret counter-terrorism stuff, there was a thing that happened that I would love to tell you about, but I'm not allowed because non-disclosure agreements slash go to jail. In Canada, they're not like, you know, I would tell you, but I'd have to kill you. They're like, I would tell you, but then you have to go to jail for 20 years.

Andrew Zigler:

I'd tell you, but then you can't have any more maple syrup.

Tanya Janca:

Not to sound prejudiced, but I just don't think I'd like jail.

Andrew Zigler:

No. Probably not. There's not a lot of purple in jail, Tanya.

Tanya Janca:

It sounds so, so sucky. But basically, some of them were doing a thing that did not follow the policy, and so I was like, you can't do this. And they're like, it's not a movie, Tanya, you're overreacting, blah, blah, blah. So I phoned CSIS, which is like the CIA for Canada, and told on them,

Andrew Zigler:

So you just went straight to the top. You know, you had a great example. You were ignored. And it goes back to what you were even saying too, about how nowadays, when there's a security breach, a security problem, it's major, it's big. There's big losses: financial losses, privacy losses, IP losses. And so the stakes are really, really high. And in your case, you had to escalate it all the way to, like, a bureau

Tanya Janca:

yeah,

Andrew Zigler:

of investigation.

Tanya Janca:

I talked to them about it and they wouldn't do anything, and my boss was like, you're just so overreacting. And I'm like, I can't be responsible for this but not have the authority to change this. Right. And so then CSIS came, and I was about to go into the top secret building and they're like, could you hold the door? I'm like, are you effing kidding? No. This is a top secret building. Can I please see your badges? And they're like, no. And I was, like, a real not nice person about it. I was like, what are you doing? Get away from this building. How do you even know where it is? Because we have, like, several stories underground. Like, we're really intense, as you would imagine, for counter-terrorism stuff,

Andrew Zigler:

Right.

Tanya Janca:

I was just like, and they had gotten partway into the building and were trying to get into the top secret area. And I was like, I'm calling security on you, FYI. Show yourself out. Don't let the door hit your butt. And they're like, whoa, whoa, we're just these nice ladies. I was like, now scat. And so I didn't know it was CSIS. And so then I called security and was like, oh my gosh, some people got partway in and they were trying to do this. Then lunchtime happened, and then I come back from lunch and they're like, oh, we're having, like, an all-of-us, stop-what-you're-doing, special meeting. And it was the three women. And they're like, we're CSIS. And they're like, and we wanna call her out. She's the only one that didn't let us in.

Andrew Zigler:

That's too funny. That's a really fascinating story, and it's even funny to think about the, uh, building being way underground. Like, of course it's top secret, it's way underground. But you've been in these environments that are highly secure, that are secure from top to bottom. And so it sounds like you have this mindset about, um, security. It's not just something you're coding, it's something you think about. It's how you think, it's how you act, it's how you breathe, and you've made it part of who you are, Tanya. And so if someone else wants to be more security minded, all in all, you know, there's obviously lots of ways they can get started. They could read your book, they can check out your courses. But ultimately, I think it comes down to habits, right? What are successful little things that you could be doing every day? And our listeners, you know, a lot of them, they're engineering leaders. They're in charge of teams. They're trying to figure out how to navigate their teams. Right now there's a lot of hype, and there's a lot of security concerns too that come with that hype, a lot of AI stuff coming out. So I wanna ask you: in today's kind of fast paced world, what are the things that you are doing to stay proactive about security?

Tanya Janca:

Something that I did when I was a developer that I don't do now, 'cause I have a very weird job, like, I do developer relations like you: basically, I talked to my boss and I was like, listen, I don't wanna take a few days off and go on a training, I don't feel like I have time for that. Can I have a two hour block of time each week? And assuming I've gotten my other tasks done, I do self training. And the first boss I asked said, yeah, that sounds great. Another boss was like, actually, we really need coverage from five till six for incident response. So if you will stay from five till six, you can use that hour always for this, unless there's an incident. Which, by the way, ended up only being about one week a month, because they were on fire anyway. Like, they were literally on fire, so that week I almost never got to use it.

Andrew Zigler:

Of course, I mean, they have a, they have a standing incident response time for you to fill.

Tanya Janca:

But I, but I managed to do three university courses

Andrew Zigler:

Okay.

Tanya Janca:

throughout that year, by doing correspondence courses during that one hour where we just needed coverage. And so, if you can have a regular learning block, for me personally, I find that super helpful. Basically, if everything's just super wild that week and I do feel overwhelmed, I can just roll over that block and catch up on things. But if not, then it's so cool to see yourself progress through different courses and stuff. At the end, we'll give the link to my free online academy, right? You can take courses there. There are so many amazing content creators releasing things for free or almost free that I recommend you search and try. When I was first learning, there was not very much, but now, Andrew, it's so rich. Find some content creators you really like and then just absorb everything that person's ever done. I'm a huge fan of, like, I find an author, I read 100% of their books.

Andrew Zigler:

Yes, me too. Yeah, you find someone that you really identify with, really vibe with that like is writing to you, you know, making stuff for you. and you'll read everything they'll write. that's a really good call out. I love the idea of, of focusing on, individuals with security practices, with security skills, and setting aside, of course the time for yourself to ingest those to work on those week after week. it sounds like a good takeaway too, as if like you're a leader in charge of a team to help maybe carve out. That time proactively for your developers. if you want them to be doing it, maybe suggest, a block of time on a regular basis where they're going through and doing things like security training. Yeah.

Tanya Janca:

That's actually what I did with my dev teams.

Andrew Zigler:

Yeah.

Tanya Janca:

So I was like, okay, I'm the CISO now. Ten until noon every Thursday is our learning block. So one guy was trying to learn French as a second language so he could get a promotion, because that's a whole thing in Canada. Another guy was perfecting his threat modeling skills. And I was like, if someone books a meeting during that time, I'm gonna come and be, like, frowny-face Tanya. Disappointed. Like, just like your mom when you were a teenager, I'm giving you that look.

Andrew Zigler:

Yeah, the classroom teacher vibe.

Tanya Janca:

well.

Andrew Zigler:

It does work well. I mean, you have to obviously carve out the time, you have to make it important for your team, and then when things get in the way, you have to be there for them and stick up for their time. And it goes back to even what you said about the developer who gets all the way to the finish line and is like, oh, I'll do that last security thing next week. But then next week, you know, a whole other new plate's gonna drop on them and they gotta do all this other stuff. So, you know, it's important that when we're managing a team, and we have a whole bunch of moving parts, we still take the time to upskill, to pause, to reflect, and to learn on what we're doing. Another question I have for you, this is another one about, um, kind of like hype and AI specifically. I think about when I am trying to figure out a question as a developer. This has been a big disruption now, where in the olden days, right, the olden days being like two, three years ago, if you had a question, you'd go and you'd Google it, and you'd probably end up on Stack Overflow looking at somebody's not totally relevant example for what you're doing. But maybe it's relevant enough, right? And you can maybe put the pieces together, go between them. This is kind of how developers historically have struggled through learning and have gotten through programming problems as a group, right? Or going to search for how someone's done it before. Nowadays, what's more common is you're in a back-and-forth with an AI. You're working with an AI code assistant, you're getting things that it's suggesting back to you. Maybe you're never even going to hit Google, you're never going to Stack Overflow anymore. You're working in a more closed loop. Do you work that way as a security person? What do you think about that practice?
Kind of curious to get in your head as a security minded person, how you're reflecting on folks that are using AI generated code now.

Tanya Janca:

So literally right before we recorded this, I gave a talk about that. And so I'm just gonna tell you my answers. So, risks to using AI when you develop software. First, shadow AI. That means using AI that's not approved, whether it means actually connecting it to your app, or it means feeding stuff into it that you shouldn't, or any sort of, we don't have a license, what are you doing, you're not supposed to use that one, this is the approved one, use that one. Right? So shadow AI, please don't do shadow AI. Next, don't give it decision making abilities for anything that is irreversible, such as giving a refund, permanently banning someone, any sort of transaction that is final, right? There should be another secondary thing that checks policy that is not the same AI, 'cause you can't have AI check itself. You need to have something else check it and validate that. Next, not giving the AI agency, so don't let it control itself. All of the Terminator movies, like, all of that was just, we gave it agency and look what happened. Now, do I think that's gonna happen? No. But do I think bad things would happen? Yes. I don't think there'll be any Arnold Schwarzenegger robots from the future,
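Tanya's rule here, that irreversible actions get a separate, deterministic policy check rather than the same AI grading its own decision, could be sketched like this. The action names and the refund threshold are invented for illustration, not anything she prescribes:

```python
# Hypothetical sketch: an AI agent may propose actions, but irreversible ones
# (refunds, permanent bans, final transactions) must pass a separate,
# deterministic policy check -- never the same AI validating itself.

IRREVERSIBLE = {"refund", "permanent_ban", "final_transaction"}

def policy_check(action: str, amount: float = 0.0) -> bool:
    """Plain rules, independent of the model that proposed the action."""
    if action == "refund":
        return amount <= 100.0  # small refunds auto-approve; big ones go to a human
    if action in {"permanent_ban", "final_transaction"}:
        return False            # always route these to a human
    return True

def execute(action: str, amount: float = 0.0) -> str:
    if action in IRREVERSIBLE and not policy_check(action, amount):
        return "escalated_to_human"
    return "executed"
```

The point of the design is that the checker is boring, auditable code: reversible actions flow through, while anything final either satisfies an explicit rule or lands in front of a person.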

Andrew Zigler:

Hopefully not

Tanya Janca:

but I do think that it won't go well. Next, we should not feed it sensitive data, and sometimes sensitive data is your code. You should check if you're allowed to feed code into the AI or not. When it writes code, so this is the one you asked about, and the most important one: it is very important, when it writes code, that you understand fully what the code does, and that you review it for any security issues and run, like, a static analysis and a software composition analysis type of tool on it, to ensure that it's for sure safe to add to your code. Just like you shouldn't copy and paste anything from Stack Overflow without understanding what it does, right? And so if we do that, and then we run all the regular tests that we would, we should be okay. The one thing that I wanna add to that is copyright and licensing. So if you're like, oh, I need a function that does this type of derivative, that's not gonna be copyrighted, because math is math, right? Someone can't copyright all of math. But if you're like, I want you to make a game that's like Frogger, except don't name it Frogger, you're gonna end up in potential copyright issues. So if you're having it create an entire app for you, you need to, first of all, ask legal if that's okay. Ask legal if the copyright belongs to you, or if it actually belongs to the AI, or if it actually belongs to someone else. If it's creating an entire system for you, you need to be careful. And so it's very important to not be stealing others' work, which, I'm not gonna comment on how they train their models, no comment on that, but we are not going to steal other people's work. The last thing is, no matter what you ask it to do, always validate that it's true, 'cause it is wrong a lot, and it acts just as confident when it's wrong.
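The review gate Tanya describes for AI-generated code, a human understands it, then static analysis (SAST) and software composition analysis (SCA) both pass, might look roughly like this in CI. The tool names below (semgrep, pip-audit) are example open-source choices, not her prescription:

```python
# Hypothetical sketch of a pre-merge gate for AI-generated code.

def scan_commands(path: str) -> list[str]:
    """Commands a CI gate might run; a real pipeline would execute these."""
    return [
        f"semgrep scan --config auto {path}",  # SAST: flags insecure patterns in the code
        "pip-audit -r requirements.txt",       # SCA: flags known-vulnerable dependencies
    ]

def ready_to_merge(understood_by_human: bool, sast_clean: bool, sca_clean: bool) -> bool:
    # All three conditions must hold -- understanding the code is not optional.
    return understood_by_human and sast_clean and sca_clean
```

The design choice worth noting is that human understanding is a hard gate alongside the tooling: clean scan results alone don't make code you can't explain safe to ship.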

Andrew Zigler:

Oh yes, the fake confidence is the killer.

Tanya Janca:

Like if it gives you references, check the references. Half the time they don't exist.

Andrew Zigler:

Yeah.

Tanya Janca:

I ask it all sorts of things, and it's like, yeah, here's 10 examples and blah, blah, blah. And then, like, eight of them, or sometimes even 10 of them, don't exist. Assume that it is a teenager that has showed up really late at night and smells like booze.

Andrew Zigler:

Mm.

Tanya Janca:

maybe you don't trust everything they say is true

Andrew Zigler:

Right.

Tanya Janca:

were doing.

Andrew Zigler:

Okay. You maybe have to apply a little bit of skepticism. This is maybe what I would expect from your perspective on it, for sure, especially when it comes to security. I would imagine that it knows a lot about security, but could have just as many things that it misses as well. I'll say, for those that are listening, Tanya is shaking her head vigorously at me. So I assume, in your mind, you don't think AI really knows anything about security.

Tanya Janca:

It's not that it doesn't know anything. It's that it's wrong, like, half the time. When I was researching my book, I was like, oh, this is gonna be so great, I'm so excited to use AI to help me research my book. And I am a self-proclaimed expert at secure coding. And so here I am, I'm like, you know, give me the top security tips for JavaScript. And I was like, wrong, wrong, wrong.

Andrew Zigler:

One after another.

Tanya Janca:

Yeah. So, like, two of them, I'm like, okay, these two are good. These two are not fricking JavaScript. And these four, like, don't tell a dev that, please.

Andrew Zigler:

And of course the LLM was, like, so eager and happy to help. And it was so proud of what it generated for you, and it knew that that was, like, the standout list of the 10 best things that you wanted to see right in that moment. And it used its full confidence. And that can be the danger with security, right? Because you have to have full confidence yourself in what you're putting out. It goes back to the ownership. That's where ownership comes from: you have confidence in what you're making and what you're doing. So you can't let the LLM erode your confidence, I think, is the takeaway there.

Tanya Janca:

When I create training, I create bad code, better code, great code. So, like, we learn about input validation, for example, and the bad code is, like, there's none, or it's implemented super bad, right? And then better code, so we're doing some validation, that's cool. And then great code, where I have all sorts of security on it and it's just so hardened and awesome. And so when I create bad code, I just ask the AI

Andrew Zigler:

Ah,

Tanya Janca:

to

Andrew Zigler:

you just use it to go from zero to one, right? And then that's the bad code example.

Tanya Janca:

What it gives me is what I would say is bad code. It's almost always, in my opinion, nowhere near good enough.

Andrew Zigler:

Wow.

Tanya Janca:

I'm like, great, thanks. And then when I ask it to create better code, I'll be like, implement this, add this. And then I'm just like, no, no, I disagree. I'm adding this, no, delete that, blah, blah, blah. And so I usually have to create the better code and great code all myself. I'm trying to figure out how I can get the AIs to learn more, because they have to train on giant models. So anyway, I'm working on this problem. I'm trying to help everyone, but I'm not,
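The bad / better / great progression Tanya uses in her training could be sketched with input validation of a username, her own example topic. The specific rules here (length cap, allow-list regex) are invented for this sketch, not taken from her materials:

```python
# Illustrative only: three versions of the same function, hardened in stages.
import re

def bad(username):
    # bad: no validation at all -- whatever arrives is used as-is
    return username

def better(username):
    # better: some checking, but incomplete (length only, no character allow-list)
    if len(username) > 30:
        raise ValueError("too long")
    return username

USERNAME_RE = re.compile(r"[a-z][a-z0-9_]{2,29}")

def great(username: str) -> str:
    # great: typed, bounded, allow-list validation -- reject anything unexpected
    if not isinstance(username, str) or not USERNAME_RE.fullmatch(username):
        raise ValueError("invalid username")
    return username
```

The teaching value is in the gap between the stages: `better` still happily accepts `"<script>"`, which is exactly the kind of miss a side-by-side comparison makes visible.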

Andrew Zigler:

So stay tuned, everyone. Tanya's on the case. She's gonna make sure that these LLMs get a little more secure. This has been a really great capstone to the conversation, 'cause we've talked about misconceptions in security. We've talked about habits that successful security practitioners use. We've talked about how your background has really shaped and evolved how you view security, how you teach it as well, and why it's so important to have it as a teaching practice. Like many things we have on Dev Interrupted, what we discussed today is a skill. Security is a skill. And like any skill, you can build it, you can practice it, you can bake it into what you and your team are doing. And so I want to just, uh, before we end here, you know, you talked a bit about your book. Congratulations. First off, writing a book is amazing, and you did it twice now. So more kudos to you, that's an incredible accomplishment. If folks wanted to learn more about your writing or your book, where could they go to, uh, see more about Tanya?

Tanya Janca:

If you go to shehackspurple.ca, there's all sorts of info about my books and where you can get them. My newsletter is also available there. And a special weird thing about my books is that I do free lessons. I did it for the first book and it was just so fun and such a smashing success, so obviously for the second book too. And so either April, May, or June, I'm gonna start doing a lesson every month on every chapter of the book. And it's free. I'm gonna stream it on YouTube, but if you want an invite, you gotta join the newsletter, which is also free. The book is not free. Please don't pirate it. Um, I actually found out recently that my last book was being pirated a few months ago, and it was actually malware that then attacked your computer,

Andrew Zigler:

Oh no, of course.

Tanya Janca:

that was not me, just to be clear.

Andrew Zigler:

Clearing the air, everybody, that was not Tanya.

Tanya Janca:

Was not Tanya, but also please don't pirate. Um, I worked so hard, but the rest is free. And so I would love it if people would come. I'm gonna have a bunch of experts on with me for every single chapter. We're gonna answer all the questions at the back of the book, discuss all the topics, and then answer all the questions that the audience has. And then we're gonna save all of those to YouTube, just like the last book, so that if you miss one, it'll be there for you whenever you are ready.

Andrew Zigler:

Great. Well, going back to having a security practice within your team, carving out time, making it an important practice, it sounds like this is a great resource for folks to go and dig into as that regular practice. So we'll definitely include it in our newsletter, and it'll be in our show notes as well for our listeners, so definitely be sure to check it out. And to you, our listener, thank you so much for joining us. Be sure to follow the links that are in our newsletter. Subscribe if you haven't already. Also, please reach out to us on socials. You know, Tanya and myself are both on LinkedIn. We'd love to continue this conversation, get your take, hear your questions on what we covered today, and definitely be sure to check out her book as well. And thank you so much, Tanya, for joining us today.

Tanya Janca:

Thank you so much for having me. I just realized we never said the name of the book. It's Alice and Bob Learn Secure Coding.

Andrew Zigler:

Oh, that's very relatable. That's me. You get all the way to the end and you're like, wait, I didn't even say the name of the book. The name of the book, everyone, is Alice and Bob Learn Secure Coding. So be sure to go check it out, and it'll be in our show notes. And thanks for listening.

Tanya Janca:

Thank you.
