Dev Interrupted

Amazon Q and The Future of Autonomous Development | AWS' Adnan Ijaz

LinearB Season 5 Episode 13

AI is evolving at a breakneck speed, leaving engineering leaders with a critical dilemma: innovate or fall behind. But how do you experiment with AI without risking your credibility? 

Andrew Zigler sits down with Adnan Ijaz, Director of Product Management for Next Gen Developer Experience at AWS, to unpack the power of AI agents. Together they discuss how to leverage autonomous AI in your development workflow, drawing lessons from real-world examples like Amazon Q.

Dive into the evolving role of the developer and discover how to mentor your AI, not just use it. It's time to shift from task-oriented coding to strategic architecture, and this episode shows you how.

But first, co-host Dan Lines frames the conversation by discussing the shift towards measuring the concrete benefits of AI tools in development, rather than just their potential. Dan also provides examples of how to set realistic expectations for AI implementation by focusing on specific tasks and measuring both individual and workflow improvements, highlighting the need for overall workflow optimization.

Welcome back to Dev Interrupted, everyone. I'm your host, Andrew Zigler. And I am Dan Lines, awesome to be here. We have an amazing conversation ahead of us today. I sat down with Adnan Ijaz, one of AWS's top product leaders. He's driving Amazon Q, which is practically a household name for developers already. It's only two years old, but it's already made so much impact. I was actually at re:Invent when they announced it, and the reception in that room was really incredible, with a lot of feedback from everybody there. Now it's reshaping how people are building things, how they're shipping software, and how they're scaling. So for me it's been a real blast to come full circle and bring Adnan onto Dev Interrupted to talk about how teams are actually using and adopting these kinds of tools. And Dan, I'm sure you've seen and heard about Amazon Q out in the wild. Yeah, for sure. Definitely, from the developer perspective. Were you there two years ago? I think it was announced in 2023, right? It was announced in 2023 at re:Invent, yeah. That's awesome. I also see they're expanding Amazon Q beyond development teams. I know they're probably crushing it for development teams, but I saw it's also positioned as: use this across your entire business, let's just make everything AI. Yeah, and we're going to talk about the secret formula they're using to scale and have that much success with it. And, you know, every day at LinearB you help engineering teams automate work and fix the things that are slowing them down, whether that's dependency updates or streamlining pull requests. And I know you got an early listen to our conversation with Adnan, so I'm sure you have lots of ideas about the things he talked about, which were all really fascinating. So let's go ahead and jump in. You know, last time you and I talked, we discussed Dr.
Ashoori's episode from IBM, and you talked about how AI is coming up in eight or nine out of ten customer calls you're on every week. I imagine that's probably still the case. I think it's super interesting, because of course it's still coming up, but the narrative is shifting and the questions being asked are changing. The last time you and I talked wasn't even that long ago. People have been dabbling with AI and using point solutions, maybe coding assistants, maybe something else. Most of the talk now isn't around "should I use it or not" or "how do I use it." It's more like: what did it do for me? Did I get any productivity gain? I owe this back to my business. I spent a lot of money and I need to report on that. So where I see the narrative shifting is away from the early excitement about the untapped potential of what AI could do to transform a development organization, and more toward outcomes: what did it do for me, how can I prove it, and what should I do next? And I think that points to a really big thing Adnan talks about too: aligning people in your organization around their expectations of what you're going to get out of it. Something he points out is that maybe the AI workflow you're trying out isn't going to give you the 100% completely done result you were hoping for, imagining, and maybe even counting on. But if it's giving you 60 or 70%, that's a massive head start, and understanding how to use that head start matters. So it really starts with resetting those internal conversations about expectations. And I know you've worked with lots of leaders who've been burned by hype, and others who've really nailed it in the moment. What do you think differentiates them?
I'll say a few things here. One, the conversation with Adnan is just amazing, and he points out a few things I'm also hearing from customers, so I want to highlight those first, and then we'll talk about what the best DevEx teams are doing. He's saying: let's make sure we can confirm a quick win. Meaning, okay, let's say AI could do anything for you. Narrow it down. I think he talks about documentation or code review. Pick something very pointed, even though it can do anything, and say: I'm looking to get a win here. Let's say it's documentation for developers, because developers don't like to write documentation. Make sure you say, that's my mission, that's my goal, and I'm going to prove it makes everyone more productive. He'll dive into that more, but I'm hearing it and working on it with customers as well, because you have to show impact at the end of the day. That's how the conversation has shifted: it's shifted to productivity now. And then the other part of your question, about what the most effective teams and leaders are doing: it relates back to good expectation setting. I think the leaders getting in trouble right now are the ones saying, okay, I'm going to adopt Copilot and it's going to fix all of our development problems, we're going to move 100x faster, or 10x, or even 5x. That's actually not occurring. As opposed to the more pointed approach: you know what, we're eventually going to adopt AI across our SDLC, but let's first start with AI code review or documentation, because I want to make sure I can show incremental gains to the business. That's who I see doing the best with this right now.
That's, I think, the best way to approach it. Yeah, setting expectations is the biggest key there; you couldn't have put it better. And when you set those expectations, you can get that incremental impact as you figure out, bit by bit, how it's going to change things. I also thought it was really smart how Adnan uses the example of taking a task developers already don't want to do, and maybe aren't even doing, and automating that first. That way you're not intruding on their process, but you're still giving them immediate gains. That's a force multiplier a successful team can use. And when they're doing this, it's obviously really important to track that impact so you can show it. I'm sure that's another big thing you're seeing from teams that stand out in those scenarios. Yeah, absolutely. That goes back to the first thing we talked about. The conversation has shifted to: what did this do for me? Did we actually get more productive? What was the outcome? How can I prove it? That's where the conversation is, which I know Adnan touches on, and he says some smart things. I can give you my viewpoint on ways to measure, which is actually very similar to what he said. Here's what we've seen be successful in the customer base. There are two different ways to look at it. One way: Adnan talks about the individual developer, but I'll adjust it a little bit and say think about an individual task. Right now, for example, our company LinearB has an AI code review, which handles an individual task a human developer has to do: you have to review code. One way to look at productivity gain is, okay, we released an AI code review tool, or it could be an AI documentation tool or whatever you want to insert there, into an individual task.
And if you have a metric-monitoring tool like LinearB or another solution, you'll be able to say: we used to spend this amount of time on code reviews, for example X hours per week, and we've now reduced that by 20%. We're spending 20% less time on code reviews, which returns hours back to the individuals who used to be doing those reviews. That's the first way to measure. Now the second way to measure: because what the business will say to you is, okay, that's really cool, you're more efficient, you're giving developers time back, but what was the outcome, for the team, the business, or the SDLC itself? That's where you need the second measurement, which I like to call process time or flow time. Meaning, okay, with the AI code review we did, yes, it's saving each developer, say, 20 minutes per review. But we also see that end-to-end cycle time, from when coding starts all the way through deployment and release to production, went down by 15%. Okay, that's really interesting to know. Not only are we giving time back to developers on those individual tasks, because AI is doing them now, we also see business value as an outcome: the flow of work, the flow of value, takes less time. And you've got to put those two together. The only other thing I'd mention is where I think some of these AI tools aren't meeting expectations. For example, if I just do your documentation work, or I just do your code review work, that doesn't actually mean your cycle time will decrease. You also have to have other processes or orchestration in place to take advantage of the AI. So what we're doing with customers is saying: okay, you ran an AI code review, and consider your safest code changes.
Since the AI review passed and gave a "looks good to me," we're going to automatically merge that change. Now I'm actually getting time back in my workflow. And for other changes, the AI review runs and might even get to a "looks good to me," but we'll still bring in a human reviewer. What I'm trying to say is, I think that's where some of the expectation gap is. Yes, the AI might be doing something nice for you in a pointed area, but if the rest of the workflow isn't also optimized, that's where the expectation mismatch comes from, where some CTOs are saying, hey, it's going to revolutionize all of our software development. No, you've got to do it in stages and make sure the whole thing is optimized. Absolutely. You've given us a really clear playbook for how a team can start experimenting and then track it back to real gains. At a formulaic level, like you said, it's calculating and understanding how much time developers are saving. But if you don't tie that back to a more aggregate impact across the organization, tie it to the things everyone else cares about, and show what you're gaining with that time back, then it falls short of delivering on those expectations. And the further apart those get, the less successful anything you try is going to be. So this is a really great way of tightly coupling it and doing it in stages. And it blends a bunch of different approaches, because you have to figure out how it impacts the developer, make sure they're getting their gains, and then draw that up into a higher-level gain for teams and the organization more broadly. Yeah. So before we dive into Adnan's conversation, because I know at this point people are definitely very interested to hear what he had to say,
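The staged review-and-merge flow described above can be sketched as a small decision rule: auto-merge only when the AI review approves and the change is low-risk, otherwise fall back to a human reviewer. This is an illustrative sketch, not a real LinearB or gitStream API; the field names, the `next_action` function, and the 50-line risk threshold are all assumptions.

```python
from dataclasses import dataclass

@dataclass
class PullRequest:
    lines_changed: int
    touches_critical_paths: bool
    ai_review_verdict: str  # "approve" or "request_changes"

def next_action(pr: PullRequest, max_safe_lines: int = 50) -> str:
    """Decide what happens after the AI code review runs."""
    low_risk = pr.lines_changed <= max_safe_lines and not pr.touches_critical_paths
    if pr.ai_review_verdict == "approve" and low_risk:
        return "auto_merge"    # safest changes merge themselves: time returned to the workflow
    return "human_review"      # AI flagged issues, or the change is too risky to self-merge
```

The point of the rule is Dan's orchestration argument: the AI review alone saves reviewer minutes, but cycle time only drops once a downstream action (the auto-merge) is wired to its verdict.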
I just want to say thanks for framing how we should think about productivity and measuring the impact of this technology as we go in and understand what it's transformed. For me, it's really shown how, if you remove that friction, people can focus on high-impact work, which is what those more aggregate metrics and ideas are looking at. So after this break, we're going to bring Adnan Ijaz, Director of Product Management for Amazon Q, onto the show. At AWS, they have all the parts of the equation: infrastructure, tools, and context. So stick around to learn how they've transformed over 30,000 applications, completed the equivalent of 4,500 years of developer work, and saved $260 million annually through AI-driven performance improvements. If you've ever struggled to explain developer experience to non-technical leadership, this workshop is for you. Join LinearB to learn how to translate DevEx metrics like developer satisfaction and AI performance into clear business outcomes. We'll give you proven strategies to align engineering priorities with what execs care about most: faster delivery, reduced cost, and ROI on AI investments. Plus, you'll get early access to our CTO board deck template to make your next leadership meeting effortless. Head to the show notes to register. Today we have a really special guest in store. Joining me is Adnan Ijaz, Director of Product Management for Next Gen Developer Experience at AWS, who is at the forefront of AI-driven software development, including projects like Amazon Q. On Dev Interrupted, if you've been listening, you know we've been discussing the dual dilemma every engineering leader is facing right now: AI is evolving at breakneck speed, and if you don't start experimenting now, you're going to fall behind. But if you invest in the wrong AI initiatives, you burn time, money, and, worse, trust.
So how do you experiment without it blowing your credibility? That's what we're going to unpack today. Adnan, welcome to the show. Thank you, thanks for having me. I'm really excited to talk about all the stuff you just mentioned. Yes, really excited to dig in. Today's topic is front of mind for a lot of our listeners. Everyone's talking about AI agents, and even though it's early days, it feels like the term has already started to lose some meaning. I know you've spent a lot of time at AWS thinking about the evolution of this type of tool and what it means. So let's start with the basics: what makes something an AI agent? It's definitely the first question we should take on, because the word has been overused, particularly over the past four or five months. Everybody's trying to fit the word in and use it. The way we describe it, or at least the way we think about it, is: AI agents are AI systems that are able to perform a job autonomously for you, based on the task you assign them. Let me break it down into three parts. At a broad level, AI agents have three components. One, they're able to take a goal or task and break it into sub-goals and sub-tasks. Two, they have the tools and mechanisms to understand the environment they're in and gather information. And three, they use that information to go to work and do the job autonomously. Not with you prompting every single time for them to do certain things; they take the first two, put them together, and do the job completely for you.
So for software development agents, what does that mean? If you ask one to go implement a website or a feature, the agent is able to understand what it needs to do and how it would go about it. It can come up with a plan and break it down into tasks. Then it has tools, maybe its own way of exploring the code, understanding how the code is written and the context it's operating in, and it uses that information to find the best implementation. It can take several minutes to come back with the implementation you need. So that's the definition of an agent. It's also important to talk about what is not an AI agent but still gets marketed as one, at least by the definition I'm putting forward: you have an LLM using a database, a RAG setup, and you're just going back and forth with it. It's still very useful and very helpful. It can give you a lot of information, even on programming questions. But by the true definition, that is not an agent. It's not doing the work autonomously for you. The key really lies in the autonomy, that it can act of its own accord, coupled, of course, with tools. Because when we use a chatbot-style AI tool, we provide context to it; that's what our conversation is. We can even use tools; most conversational chatbots have ways of implementing tools. But it becomes an agent when it's taking those actions autonomously. And that distinction is really important, because if agents are going to take actions of their own accord, it becomes even more important for developers to understand how they work and the types of tasks they're best suited for. Since they're acting on their own, they need a little more oversight on what they're actually accomplishing.
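The three components Adnan lays out (decompose the goal, gather context with tools, then act without per-step prompting) can be sketched as a minimal loop. Everything here is illustrative pseudocode under stated assumptions, not the Amazon Q implementation; `plan`, `run_agent`, and the tool names are hypothetical stand-ins.

```python
def plan(goal: str) -> list[str]:
    # 1. Decompose: a real agent would ask an LLM to break the goal down; stubbed here.
    return [f"{goal} / step {i}" for i in (1, 2, 3)]

def run_agent(goal: str, tools: dict) -> list[str]:
    results = []
    for task in plan(goal):
        # 2. Gather context: use a tool to understand the environment (e.g. explore files).
        context = tools["explore_project"](task)
        # 3. Act autonomously: no human prompt between steps.
        results.append(tools["act"](task, context))
    return results  # the human reviews the results at the end, not each step

# Stub tools: "explore_project" mirrors the idea of a text-based code-exploration tool.
tools = {
    "explore_project": lambda task: f"context for: {task}",
    "act": lambda task, ctx: f"done: {task}",
}
```

The chatbot-with-RAG pattern Adnan contrasts this with would be the same loop with a human supplying the next `task` on every iteration; the agent's defining property is that the loop runs without that.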
We should decouple, or clarify, the autonomous nature of agents from the question of whether the human is missing altogether, with the agent figuring out entirely on its own what needs doing. Humans can summon an agent. They can say, hey, go improve the test coverage for my project or my application. What happens after that is where the difference lies. Are you going line by line, saying here's my function, go write me a test; here's another function, go write me a test; I didn't like this, go change that? Versus: all you say is, here's my project, here's my application, go improve the test coverage for me, and it's able to act autonomously. That's where it comes in, because oftentimes people hear the word autonomous and ask: am I not needed in the process? Is it just going to go off on its own and do everything? But really, the task is coming from you. You're saying what the job to be done is, and then how the AI approaches it is autonomous: it's able to break the job down and figure that out. So that's where the distinction is. You're spot on in your comments. Yeah, and that's a really big shake-up. It changes how developers should be spending their time and what they need to focus on. When you see teams adopting AI agents right now, what are some real-world patterns the successful ones follow? It's still early days, but we're starting to see really interesting and powerful patterns, use cases, and results emerge. I would first start at home, at Amazon. The interesting part of my job comes from Amazon Q Developer, the AI-powered assistant that helps with all aspects of the software development lifecycle, whether you're writing code,
writing tests, doing code reviews, managing your applications in the cloud, or transforming them, going from an older version to a newer one. And because Amazon is also a large engineering company, and a lot of my peers and colleagues are engineers, I get to see how agentic interactions are evolving through that lens, because developers are putting Q Developer's agentic capabilities to use. The biggest example, which we've talked about extensively, is that over the past several months we used the code transformation agents to take Java 8 and 11 applications and modernize them to Java 17. Because these agentic capabilities can do the work autonomously at scale, we were able to save 4,500 years of developer work and $260 million annually in cost savings from the performance improvements, and we modernized over 30,000 applications. So essentially, putting agents to use to modernize old codebases; in this particular case, Java codebases. The other thing I'm seeing, both internally and externally, is teams using these agents to bootstrap projects. It's not like you just say, hey, go write me a website, and it's done and you deploy it. The agent will give you the best possible starting point, and then there's a lot of interaction back and forth. You can say, can you please do this? Can you please add that? Here's my specification, can you change that? Then there's unit test generation, and not just function by function, "take my code and write this," but: here's my entire application, how can you generate unit tests? How can you generate documentation for me? Do the code review.
So these are the four or five common patterns we're starting to see emerge. Yeah, the stats around how Amazon Q is used and the effects you can get from that kind of agentic workflow at scale are really impressive, and I think there are a lot of lessons there. At AWS, you have so much context about the organization and the products within it, but you also have so much tooling. And as we said at the beginning of our conversation, those are two of the critical elements of what makes something agentic. You of course also have the infrastructure to scale the autonomy part. So right there is a very clear anecdote of how those three factors can influence success. One, I think that's amazing. Another thing I heard that really resonated with me was about the different levels of autonomy you might have within different projects; you might have something lower-stakes to get started with. And I think starting with those incremental projects is key to getting buy-in from the folks watching how you adopt and use it. When these agents are working at scale, they probably require a large amount of context and real-time tweaking based on the actions they're taking. So how can teams best think about getting the best work out of their agents? If they're going to mentor their agents to be as effective as possible, how can they work as smarter engineers to help their agents be more effective? A lot of this is in how the agents are built: what kind of additional context they can gather, and what kind of tools they have at their disposal.
For instance, I'll give you an example from the software development agent we have in Q Developer. The way it works is, when you say go write me a shopping cart API in my web commerce site, it's able to take that and break it down, but then it has all these tools at its disposal. One of the tools we built for this agent is a text-based code environment, essentially an IDE represented through text. Just like a human developer writing code would explore the project files, open one file, open another, and build an understanding of what the application is and where to write the new code, the agent does the same, because we've equipped it with a tool for that. So to answer your question more broadly, a lot of this is in how the agents are built. That's why I feel the very first question you asked was so important: these little things are what make an agent a true agent, versus just a back-and-forth conversation. If you build the agent the right way, you'll think about memory in the agent, so it's able to understand its interactions. You ask it to do a code review, it does the code review, you take certain actions, and it's able to learn: okay, here's what we can do in this environment. It's able to collaborate with other agents on certain tasks. So it's important for engineering teams, as they start, to pick the right tool. Even though it's early days for AI, a new tool that can do AI development emerges pretty much every other day; I'm not exaggerating. So it's important to pick the right tool and really explore those agentic capabilities. And then, how would you go about it?
I would say teams should really start where there's low-hanging fruit. For instance, generating documentation for your project. I have not run into very many developers who've told me they enjoy writing documentation. I'm sure there are some, and there's nothing against them, but most developers will tell you: hey, if you can generate accurate documentation for me, that's great. So you summon an agent and say, here's my entire application, can you generate a README file and just keep it up to date? Go do that. That's a task you can start with. So when engineering teams are bringing these agents into their workflow, there are tasks they can pick where the agent isn't intruding on anything developers value, and they can start there and build from it. In the example you gave, you have an AI agent you've summoned to be in charge of maintaining a README or documentation alongside something as it's being developed. That sounds like a lot of freed-up time, and now the developers building those features can focus on the critical things that need more attention. Going back to your AWS example, that seems to be the prevailing takeaway for why teams should be using agents: how many developer hours they can save. And I would say we're starting to see a shift in customer usage behavior. A lot of early AI usage across development teams was inline completions: the autocomplete as you're typing in the editor, where you get the next suggestion and accept it. That's still being used. Then came chat, the conversational aspect, which is still popular. But now we're seeing something new, particularly, I would say, over the past quarter or so.
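The "generate a README and keep it up to date" task mentioned above is concrete enough to sketch: a CI step could detect when the docs have gone stale relative to the source and only then summon the documentation agent. This is a hypothetical sketch; `summon_docs_agent` is an illustrative callback, not a real Amazon Q API, and mtime comparison is just one crude staleness signal.

```python
from pathlib import Path

def readme_is_stale(repo: Path) -> bool:
    """True if any Python source file is newer than the README (or no README exists)."""
    readme = repo / "README.md"
    if not readme.exists():
        return True
    readme_mtime = readme.stat().st_mtime
    return any(p.stat().st_mtime > readme_mtime for p in repo.rglob("*.py"))

def maybe_update_docs(repo: Path, summon_docs_agent) -> bool:
    """Summon the docs agent only when the README has fallen behind the code."""
    if readme_is_stale(repo):
        summon_docs_agent(repo)  # the agent regenerates the README autonomously
        return True
    return False
```

Gating the agent this way keeps it out of the developers' path entirely, which is the low-hanging-fruit property Adnan describes: the task runs only when there is genuinely stale documentation to fix.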
There's a strong emergence of agentic usage, and not just in the options that are available. I'll put in a plug for Q Developer: last year, when we made Amazon Q Developer generally available, we were the first to have a proper agent by the definition I laid out, the software development agent. We've been talking to customers about those capabilities, and customers who use it love it. But now we're also seeing it go mainstream, where developers who were probably using inline capabilities and chat before are exploring more agentic capabilities, which is a natural evolution in their journey. And that's where the real magic starts to happen, because that's where you start getting the real value and the scale. Yeah, I agree. It's an evolution of how we use the tool. Chat was the first stop on the express route to actually utilizing and scaling this thing up, but it can be limiting, limiting for the capabilities the tool gives us. And I think there are so many parallels: chat is maybe an imperfect format for working with AI, and the emergence of other workflows is definitely worth exploring and can more fully use its potential. In my head I'm going back to this: you have all these agents, they're saving you time, you have all this focus, and as I mentioned at the beginning of our conversation, you have scales of autonomy, going back and looking at the work agents are doing, understanding and staying on top of their progress. What do you think the risks are when you have that many agents at scale? Where can things start to get wobbly? My answer is going to be slightly nuanced, because oftentimes development teams won't build their own agents. They'll use agents from different
vendors, including Amazon Q Developer and the different assistants available out there. So I think it starts with ensuring that rigor and responsible AI practices have gone into building those agents. There's a lot of scientific rigor that goes into that: understanding the task at hand, how to break it down, how to do it securely, and what the right interaction points are for bringing the user in. So for all development teams, that's an important part: picking your tool, and understanding how it's been built and whether it follows the right safety and responsible AI practices. But once that's done, it's important to realize that humans are still in the loop. That's the fallback for all of it. In all these systems, for instance in the product I'm responsible for, Amazon Q Developer, if you use the development agent, it's not just going to go off entirely on its own, although autonomy is what's in the definition. It's going to keep you in the loop in the sense that you can see what it's doing. That doesn't mean it asks you every single time what to do; it's still working, but you're paying attention. So humans are there. They can override it, they can see if it's going off the rails, they can review the results that come back. At scale, the risks come when you assume these things are just going to go off on their own and do the work without your oversight. And the second risk, the biggest one, isn't really an actual risk; it's more that you walk in with the expectation that you'll just press a button and everything gets done. The reality is, it's an evolving technology.
It helps you with a lot of things, but you still have to work with it in a certain capacity, so the human stays in the loop. The reason I call it a risk is that it leads to misaligned expectations and disappointment: oh, I expected this agent to do things 100 percent perfectly, and it only did 60 percent. It's important to realize that the 60 percent it did, or the bootstrapping I talked about, that common pattern where people use agents to bootstrap their applications, is still valuable. If you got a 60, 70, 80 percent head start on the application, and then you work with it to complete the remaining part, that's still a huge productivity boost. So it's aligned expectations, as well as choosing the right tool that's been built with a safety-first and security-first mindset; it's the combination of the two that addresses those concerns at scale. Yeah, and it all boils down to awareness, education, and trust. That's what we're doing right now; that's what we're discussing. We're trying to increase awareness of this kind of tool and learn for ourselves how to use it, because it's something that's evolving in real time. You touched on getting over the blank-page problem, and the hours and time saved there. I think that's immense, especially, going back to what I mentioned earlier, if you have a really consistent coding environment and coding practice within your organization. Getting started in that bootstrappy way, you can get further and further if the agent, or the tools you're using, understands what your projects generally trend toward, how they start, and how they evolve. So it's about providing that context.
When you discussed setting expectations, I think that's ultimately the key thing to be discussing internally, especially with non-technical stakeholders. A lot of our audience find themselves in this position: they're implementing these AI tools, they're justifying the expense to people who just see a line item, or they're tempering expectations about what the agent can do for someone who's maybe a little overhyped or not fully keyed in on how the technology works. What you highlighted, the observability, the understanding of what the LLM is doing, seems like a really critical tool for scaling up that level setting across your org. Do you agree? Yeah. I mean, the tool itself needs to provide the visibility. It needs to help you gauge the impact, but it also needs to make clear what it is doing and how it has helped. And then there will still be work that organizations and teams need to overlay on top of that. The challenge you're highlighting is a real one. In fact, one of the common topics I work through with our customers is: "Hey, how do I measure developer productivity? I ran this POC, or I'm using the AI; how do I actually justify that this is something we need to roll out more broadly?" And we talk through that. There are things that a good AI assistant would already provide, say, dashboards and things you would use to measure the impact. But at a certain point, it also comes back to how you measure developer productivity today. When somebody asks me how to measure developer productivity, my first response, before I answer the question, is often: you tell me, how do you measure developer productivity today?
And the answer is generally not very clear. Some have very good answers because they have all these DevOps solutions and integration points, and they look at check-ins, deployments, failures, and whatnot. If you have that infrastructure, then measuring this is also relatively simple, because now you bring in this tooling, maybe you enable one team to use it, and you can look at their metrics, see how they move, and establish correlations. But if you don't have it, then it becomes all the more challenging, and I think that is where your original question generally stems from: you don't have the existing mechanisms, but you need to do this. So what I generally recommend to our customers, and when I talk to developers, is that you need to think about productivity at two levels. One is individual productivity. When you're working with developers, they would tell you. When we started rolling out Q Developer internally, we would get qualitative signal from the team. They would say, "Hey, I feel productive. I started using it. Maybe the first few days were very challenging because it disrupted my workflow, but I'm starting to get the hang of it, and I'm generally productive." So watch out for those signals. Then we also do surveys once in a while to establish the quantified impact, and you can ask certain questions there. So that's how you measure individual developer productivity, and that gives you a lot of information. But the real stuff is when you have a DevOps solution, your check-in system: how do you actually measure that? OK, this team is using AI assistants. How frequently are they checking in? What kinds of deployment issues are they running into? What is their code frequency, and all of that?
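As a minimal sketch of that second, workflow-level measurement, here's how a team might compare metrics before and after an AI rollout. Every metric name and number below is made up for illustration; in practice the data would come from your own check-in and deployment systems, and a single pilot team's deltas only suggest a correlation, not proof.

```python
# Hypothetical sketch: compare a pilot team's weekly workflow metrics
# from a baseline period against a period after adopting an AI assistant.

def percent_change(before: float, after: float) -> float:
    """Relative change from the baseline period, as a percentage."""
    return (after - before) / before * 100

# Illustrative weekly averages (made-up numbers, not real data).
baseline = {"merged_prs": 18, "deploy_failures": 4, "cycle_time_hours": 52}
with_ai  = {"merged_prs": 24, "deploy_failures": 3, "cycle_time_hours": 41}

for metric in baseline:
    delta = percent_change(baseline[metric], with_ai[metric])
    print(f"{metric}: {baseline[metric]} -> {with_ai[metric]} ({delta:+.1f}%)")
```

Pairing a table like this with the qualitative survey signal from the first level is what makes the rollout case concrete rather than anecdotal.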
And that is the second step you have to take to measure the impact. Once you put those two together, it becomes easier to answer the question you're highlighting: how do we actually show productivity at scale? Adnan, you make it sound so straightforward and so simple, but it is the way you're linking it back. At the end of the day, you have to have a strong developer productivity practice within your organization. You have to care about these metrics, you have to be looking at them, and you have to have common definitions for how the entire organization thinks about them. And if you don't have that, then you're going to be rudderless when you're trying to adopt something like agentic AI, because it's going to go really, really fast, but if you don't know what direction you need it to go, then you're just adding risk to your organization, and you're not really accelerating. That seems like a really key distinction, and one that you're seeing from teams that are successful: they're fostering a developer productivity practice. They care about developer experience. They look at these things, talk about them, and measure them over time. So that's a key ingredient to having success with this tool, like any tool. Yeah, absolutely. And I mean, it is not trivial. It's definitely easier said than done to say, "Oh yeah, you should measure the impact"; there is a lot more work that goes into that. But ultimately that is the best way. You should gather the qualitative data points, and that is good, and that can help you make decisions and justify certain investments, but ultimately it comes down to measuring the right developer productivity metrics. Yeah.
And for a future-looking developer, someone who's thinking about being a better developer a year or five years from now, what's one good habit that you would suggest to any developer today? Yeah, I mean, the space is moving quickly, so on that time horizon, nobody knows how quickly software will evolve over the next one year or five years. But I would say it is starting to become clear that in a lot of development, humans would still be at the forefront. They would still be in charge, but their role would shift from writing a lot of code in the editor to a lot of architecting, thinking, and ideating, and then working with the software agents to go deliver those things. And it's not just building new features. You can think of security agents, deployment agents, operators that are managing things in the cloud. So it is going to be humans thinking, "Hey, this is what I need to build. Here's the architecture I want." And even there, AI can help you make those decisions. If you are a developer starting now and you're not yet using any of these AI assistants, I would encourage you to start. And selfishly, I would put in a plug: you can start with Amazon Q Developer. It's available for free; it has a generous free tier that you can use. But jokes aside, pick any and start. I think getting started with this tooling is important. The first few days may be funny, because it changes your workflow, and you might even conclude that it is actually making you less productive rather than more productive. I've heard that. But then it takes a few days, this time period, and you need to get over the hump. So start there. That is definitely important.
But then, what that will allow is that you start thinking less about "I need to write code" and more about what you need to accomplish. You would be worried less about writing unit tests or generating documentation, and more about: what are the things I need to build? Am I specifying it the right way? Am I architecting it the right way? So those are the things that I think will happen, and the sooner you start, the better, because otherwise, if you have not started yet, there's a chance you may fall behind. I'm not trying to be negative or pessimistic, but that is how fast it's moving. You know, the best time to start was yesterday; the second best time is today. So let's start. I love that. Just try it, just get started. You need to get your hands in the sandbox to figure out what you like and don't like, and there are so many things to be learned along the way, so you just have to get started. I think this has been a really great callout too: the skills that folks need to build, and thinking about your impact versus just your tasks. There's more than just the task on your Jira board or your Asana board. There are the underlying reasons that you and your organization are doing them, and focusing on why those matter is kind of the key thing that AI gives you. And, you know, Adnan, this has been a really fantastic conversation for me. I've learned so much about AWS and how y'all are thinking about AI. We've got some really great examples, but you also broke it down into a really clear playbook for both engineering leaders and individual contributors to follow. It's been really great to dive into your head and think about how you're seeing the best teams use this. But before we wrap up, where can people follow your work or learn more about what you're doing?
Search for Amazon Q Developer on your favorite engine, or just go to the Amazon website slash Q, and then you can get to know and learn about Amazon Q Developer. We are moving quickly. There's a lot of exciting stuff. If you visit, you'll see how the agents I've been talking about have been leading the industry benchmarks. So it's really exciting times and an exciting space. I'm excited, and I hope developers are too, in terms of using those tools. So you can follow our work there. Great. We'll include those links in our show notes for our listeners. And to you, thanks for joining us and making it this far into our episode. You know, you stuck around all the way to the end, so you clearly liked it. Make sure that you're subscribed, and share this with somebody. Talk about what you learned today. And also check us out on Substack; we have weekly insights every Tuesday, where we dive into some of the things we've discussed today with Adnan. But we'd also love to hear from you on socials. So reach out to either of us or our guest today; we'd love to hear your thoughts about what we discussed and maybe learn some things your teams are doing. And that's it for this week's Dev Interrupted. See you next time.
