Dev Interrupted

Agentic AI: Dissecting the Future of AI Workflows | Memra's Founder Amir Behbehani

Season 4 Episode 35

Engineering teams are already seeing efficiency gains by leveraging Gen AI solutions like Copilot, but the next wave of AI workflows has the potential to 10X productivity.

This week, we’re exploring the world of Agentic AI with Amir Behbehani, Chief AI Engineer and Founder of Memra. Agentic AI can be defined as AI agents or systems that have the capacity to make decisions or take actions on their own based on the objectives they are programmed to achieve. These AI systems act independently, gathering information, processing it, and then choosing or executing actions without direct human intervention.

Amir shares how Memra is leading the way in developing AI agents capable of handling complex tasks, decision-making, and improving productivity across industries. He also discusses the implications of AI in reshaping how businesses operate, and how organizations can prepare for a future where AI plays a central role in both day-to-day operations and high-level strategic decisions.

Whether you're an AI enthusiast, an engineering leader, or curious about the future of automation, this episode offers a deep dive into the possibilities and challenges of Agentic AI and what it means for the future of work.

Chapters:

  • 01:23 Defining Agentic AI 
  • 07:02 Frameworks for thinking about Agentic AI 
  • 12:52 Unpacking AI as a black box 
  • 13:58 How Agentic AI will benefit software engineers 
  • 22:55 What would be a good starting point to leverage agents on an engineering team?
  • 26:46 Will agents replace freelancers and the gig economy?
  • 36:20 What is the synthetic marketplace?
  • 40:11 How Agentic AI impacts writing code

Amir Behbehani:

The work of an AI engineer is really to build the systems that effectively build systems. If you're building agentic frameworks and then deploying those agents to write code, and the code is for the purposes of building an application, then you're building the systems that build the constituent systems that build the application. And in that regard, it's more like industrial engineering than it is software engineering.

13% of all pull requests today are bot-created, and they're having a unique impact on your SDLC. LinearB's upcoming research exposes the effect bots are having on our teams' developer experience and productivity. Engineering orgs that created a system for managing bot-generated PRs were able to reduce their entire review load by over 6% while also making drastic improvements in their security and compliance posture. If you want to learn how your team can manage bot-generated PRs and get early access to LinearB's data report, head to the show notes to register for our upcoming workshop on September 24th or 25th.

Conor Bronsdon:

Welcome back to Dev Interrupted everyone. I'm your host, Conor Bronsdon. And today I'm joined by Amir Behbehani, a mathematician and AI/ML expert. Amir is both a founder and the chief AI engineer at Memra, and he's had previous exits to both Google and Meta. Amir, thank you so much for joining me today.

Amir Behbehani:

Thank you for having me.

Conor Bronsdon:

Yeah, I'm really excited for this conversation, because we've talked a lot about how AI is impacting software engineering from the perspective of engineering leaders, but we haven't really had the opportunity to dive in depth on what's happening at the forefront of it, and what we see moving forward, with someone who's a deep expert on the research side. So in today's conversation we're going to have that chance to zero in on the future of AI and software engineering, hear from you about the cutting edge of research, plus talk about agentic AI and how it may transform software workflows, or other workflows in the world. But before we jump in, I do want to remind our listeners, I know we say it every time, but it really does matter: if you enjoy this episode, please take one moment to rate and review the podcast on your app of choice, Stitcher, Google Podcasts, Spotify, whichever it is. It helps us bring more insightful conversations with leaders like Amir. And if you really love it, tweet it or zeet it, post it on LinkedIn. We'd love to hear from you. But with no further ado, Amir, let's dive in. Agentic AI is rapidly redefining how we interact with technology right now. Can you start by explaining for our audience what agentic AI means, just to make sure we're all on the same page, and how it differs from prior concepts folks may have heard of, such as RPA, robotic process automation?

Amir Behbehani:

I think of agentic AI as blocks of code that have reasoning capability. So they actually have access to an LLM; that's the first thing. Then they have access to long-term memory, and you can think of that as akin to the RAG layer: the vector databases, plus the graph databases, plus other long-term forms of memory. Then they have access to a sort of short-term memory that increases their adaptability. Then they have access to tasks on which they're trained. And finally, they have access to some sort of integration layer: they integrate into enterprise applications or other workflows. In so doing, unlike, let's say, RPA (I often get the question of how this technology contrasts with RPA), agents are interacting with knowledge as context, as opposed to data inputs, and they can reason their way to a conclusion insofar as these workflows are concerned. So for example, you can say, "Write these documents to a database." That begs a whole suite of questions: What fields are in the database? How do the fields in the database map to this particular document? What do you want me to extract from this document and write to the database? Which database? What's the path to the database? And so on. So the agents have the ability, on the fly, to reason through that imperative and get to an outcome that was desired by the user who's commanding them to do that particular job.
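The anatomy Amir describes has a natural shape in code. Below is a minimal, illustrative sketch, not any real framework's API: `Agent`, `fake_llm`, and the `write_to_db` tool are all invented for this example. It shows an agent as a block of code with an LLM, long-term memory (the RAG-layer stand-in), short-term memory, and tools:

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Agent:
    llm: Callable[[str], str]                                  # reasoning capability: access to an LLM
    long_term_memory: dict = field(default_factory=dict)       # stands in for the RAG layer (vector/graph stores)
    short_term_memory: list = field(default_factory=list)      # session-level adaptability
    tools: dict[str, Callable] = field(default_factory=dict)   # tasks + integration layer

    def run(self, imperative: str) -> str:
        # The agent reasons from a command to an outcome, consulting memory and tools.
        context = f"known: {list(self.long_term_memory)}; recent: {self.short_term_memory}"
        plan = self.llm(f"{imperative}\n{context}")
        self.short_term_memory.append(imperative)
        # Dispatch to a tool if the plan names one.
        for name, tool in self.tools.items():
            if name in plan:
                return tool(imperative)
        return plan

# Toy LLM stand-in so the sketch runs without a model.
fake_llm = lambda prompt: "use write_to_db" if "database" in prompt else "done"

agent = Agent(llm=fake_llm, tools={"write_to_db": lambda doc: f"wrote: {doc}"})
print(agent.run("write these documents to a database"))  # prints "wrote: write these documents to a database"
```

In a real system the LLM would also generate the follow-up questions Amir lists (which fields, which database, which path) before the tool call; here that Socratic step is collapsed into the single `plan` string.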

Conor Bronsdon:

It's an interesting contrast with deterministic code, essentially, where, hey, we expect the output to remain the same, versus this more non-deterministic model of an agent. Or typically non-deterministic: you get so much more capability as far as reasoning, but the array of results you may receive is more varied. And this becomes particularly interesting for folks in software engineering because, as we've seen at LinearB, we've just released a 2024 bot automation research study that found that more than 13 percent of pull requests today are already bot-created. And we expect that number to increase with this continued innovation in AI, more of it happening on the agentic AI side of things. How rapidly do you expect to see fully AI, non-deterministic models come through and actually commit code to a code base?

Amir Behbehani:

I think it's happening as we speak. I've tried a myriad of these tools myself, and these other frameworks. I'm both a consumer of these things and a producer, and I enjoy using them; they're actually quite fun. I think in a lot of cases there's a lot of work needed for these things to fully automate the development and deployment of large-scale applications. However, given the pace at which that work and that innovation is taking place, I don't see it too far in the distant future when at least small-scale, fully functioning applications are developed and deployed through a single prompt, with a bunch of the reasoning, the Socraticisms, happening between the single prompt and the final outcome, in between the command and the result. So I do see that taking place. And actually, that speaks to questions like: How do you maintain context? How do you break down a particular complex function into constituent sets of tasks, as you would with MapReduce, but for a workflow? And what is the right network topology to maintain context and get a job done? There are different types of network topologies. A bucket brigade is a very simple one: there's a house burning here, there's a lake there, and you just move the buckets in a linear fashion. If you're disseminating orders in a complex organization, you want that topology to be hierarchical. If access to information is necessary in near real time, you want a flat, or heterarchical, topology. So it's about identifying a given problem and mapping it to the right network topology, so that you can break the task down into a constituent set of subtasks, have agents that are trained on specific subtasks allocated to those subtasks, and then have, let's say, a master agent as a reducer, summing the sums.
Okay, so if you pull that together, and that's what's missing in a lot of these agentic frameworks that I've tried thus far as a consumer, if that can be pulled off, then yes, I see that in a not-so-distant future you can command the development and deployment of at least small-scale applications.
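The MapReduce-style decomposition Amir sketches, a master agent splitting a job into subtasks, subtask-trained workers handling each one, and the master reducing ("summing the sums"), can be illustrated roughly like this. All the names (`master_agent`, `worker_agent`, `decompose`) are hypothetical, and a real system would have an LLM plan the split rather than hard-coding it:

```python
from concurrent.futures import ThreadPoolExecutor

def decompose(job: str) -> list[str]:
    # In a real system an LLM would plan this split; here it's hard-coded.
    return [f"{job}: part {i}" for i in range(3)]

def worker_agent(subtask: str) -> str:
    # Stands in for an agent trained on one specific kind of subtask.
    return f"done({subtask})"

def master_agent(job: str) -> str:
    subtasks = decompose(job)
    # A flat ("heterarchical") topology: workers run in parallel.
    with ThreadPoolExecutor() as pool:
        results = list(pool.map(worker_agent, subtasks))
    # The reducer step: combine constituent results into one outcome.
    return "; ".join(results)

print(master_agent("build app"))  # prints "done(build app: part 0); done(build app: part 1); done(build app: part 2)"
```

Swapping the parallel map for a sequential chain would give the bucket-brigade topology; nesting master agents would give the hierarchical one.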

Conor Bronsdon:

So my kind of mental framework for this is that we're seeing a shift from something a lot of us have done for a long time, which is, hey, we're going to automate away repetitive tasks, to not only automating away repetitive tasks, but essentially creating agentic employees who are working on your behalf, where you have to really think about the inputs you're giving to those employees. Maybe there's less of the people management, but the information management becomes even more important. Is that an accurate framework to apply here?

Amir Behbehani:

I think if you look at agents as they stand right now, without mentioning any specific agentic frameworks, though there are some well-known ones out there, what they do is reference a function as a tool. And then that tool, with several other parameters, is allocated to a particular block of code. So the block of code, as I said, has an LLM, it can even have an LLM router, it will have some sort of memory, and it will have access to this tool, and that tool is effectively making a function call, or it has access to a function. Now, if you think of these functions as methods that belong to a class, and if you can further abstract so that the classes fit into larger workflows, then you can, and this is actually what we're doing, so I'm speaking from experience here, register into the memory of a master agent not a single function to which it has access through a tool, but whole classes and methods that map to a particular workflow. So effectively, you've taken a master agent and trained it on a set of workflows. Well, think about that in the context of workers: a human has a role, that role maps to a job, that job maps to a set of workflows, and those workflows map to sets of constituent tasks. And right now, I think what we're saying is that if we can take a workflow, break it down into constituent tasks, allocate agents to those tasks, and complete those workflows, then effectively those jobs are being done by the agents, which means these agents have a role to play within an organization. In that case, I would say these agents are effectively digital employees. And those digital employees, paired with a human employee, say an individual contributor, suddenly give that individual contributor multiplicative returns to scale, as hiring a manager does.
Actually, think of it from this perspective. If you have an employee right now using, let's say, ChatGPT, there's marginal improvement in their productivity, because that human is in the loop, maintaining context through the dialogue with the GPT: they're sitting there maintaining context, engaging, winnowing down a set of conversations and the outcomes of those conversations. But if the agent is doing that, that's a much higher return to scale, because the human is out of the loop. Now the human can be on top of the loop, so to speak. And on top of the loop means they're engaging in the imperative: go do this. They're not sitting in the middle engaging in the interrogative; the agents are engaging in the interrogative. That's what I mean when I say there's a Socraticism that happens between the command to go do something and the final output.
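Registering whole classes and methods, rather than a single function, into a master agent's tool memory might look something like the following sketch. `InvoiceWorkflow` and `MasterAgent` are invented for illustration, not Memra's actual implementation:

```python
import inspect

class InvoiceWorkflow:
    """One workflow: its methods map to constituent tasks (hypothetical example)."""
    def extract_fields(self, doc: str) -> dict:
        return {"vendor": doc.split()[0]}
    def write_to_db(self, fields: dict) -> str:
        return f"inserted {fields}"

class MasterAgent:
    def __init__(self):
        self.tools = {}
    def register_workflow(self, workflow) -> None:
        # Register every public method of the class as a callable tool,
        # so the agent is trained on the whole workflow, not one function.
        for name, method in inspect.getmembers(workflow, inspect.ismethod):
            if not name.startswith("_"):
                self.tools[name] = method
    def call(self, tool_name: str, *args):
        return self.tools[tool_name](*args)

master = MasterAgent()
master.register_workflow(InvoiceWorkflow())
fields = master.call("extract_fields", "Acme invoice #42")
print(master.call("write_to_db", fields))  # prints "inserted {'vendor': 'Acme'}"
```

The role-to-job-to-workflow-to-task mapping Amir describes would then be a hierarchy of such registrations, with an LLM choosing which registered method to invoke next.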

Conor Bronsdon:

I mean, it still maintains some of the, I'd say, black-box concerns folks have at the model level, but because of the way you're constructing these agents and how they approach things, it sounds like there's a much better auditing process for what's actually happening. The human employees can then focus on: okay, let's audit to make sure this is working and that we're actually delivering on these processes. Whereas currently, one of the big things we noticed in that bot report I mentioned was that 96 percent of all basic bot-created PRs being sent out right now are not linked to project management tools. Well, if I have an agent doing that, I presume we can simply make sure the agent is using a project management tool to track this, and improve the observability of a lot of this code coming into the code base and a lot of the approaches we're taking here.

Amir Behbehani:

At the present moment, we're able to trace the steps that an agent takes to complete tasks, and in fact we're able to cache and have an auditable record of the reasoning process. So at the workflow level, it's not fully black box. Now, one could argue that, again, these agents have access to LLMs, and the LLMs are deep learning models, and maybe there are some black-box models at the root of the language models. That's probably the case. When I think about the last decade, when we were building machine learning models, some of those non-parametric models are black box; it's really hard to interpret what's actually going on. And that's different from the parametric models, where you have a prior assumption as to the distributions of the fields that comprise the training set. And I'm not sure that in all cases the black-box nature of those models was necessarily an issue.

Conor Bronsdon:

I hear you. It's interesting, though, because I think a lot of survey data will show the concerns: when you talk to a CISO, or just a general software engineer, they're saying, hey, I don't get what's happening here. I mean, we can also talk about the concerns of "is this going to replace me?", and I think we will get to that on the labor economics side of this. But what would you say to those folks who are just worried about implementing this and not being able to unpack when something goes wrong, or who have other concerns on that kind of black-box piece?

Amir Behbehani:

Well, if we're talking about agents and agentic frameworks, then there are layers of reasoning abstracted above and beyond the deep learning models that are the base of these LLMs. That set of reasoning processes, you're able to cache it, you're able to trace it, and you're able to audit it. So at that level, as I think about this, the worker, maybe the software engineer or whatnot, would want access to that layer of reasoning. I'm not sure they necessarily need access to the reasoning, or the inferencing, so to speak, that's happening at the LLM layer.
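Caching an auditable record of the reasoning steps above the LLM layer could be as simple as the following sketch. The `ReasoningTrace` class and the step names are illustrative, not a description of any specific product:

```python
import json
import time

class ReasoningTrace:
    """Caches each reasoning step an agent takes, so the workflow layer is auditable."""
    def __init__(self):
        self.steps = []

    def record(self, step: str, detail: str) -> None:
        # Timestamped so the sequence of decisions can be replayed later.
        self.steps.append({"t": time.time(), "step": step, "detail": detail})

    def audit_log(self) -> str:
        # A serialized record a human reviewer (or compliance tool) can inspect.
        return json.dumps(
            [{"step": s["step"], "detail": s["detail"]} for s in self.steps],
            indent=2,
        )

trace = ReasoningTrace()
trace.record("plan", "split job into extract + write")
trace.record("tool_call", "extract_fields(doc)")
trace.record("tool_call", "write_to_db(fields)")
print(trace.audit_log())
```

The point of the sketch is the separation Amir draws: the agent's plan and tool calls are fully traceable here, even though the LLM inference that produced the plan remains a black box underneath.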

Conor Bronsdon:

Do you think software engineering is going to be one of the disciplines that particularly benefits from this increasing use of agentic AI and its ability to autonomously handle tasks?

Amir Behbehani:

I think so. I think software engineering is going to benefit, as other knowledge work disciplines will, from a sort of industrial engineering paradigm. If you go back in time, you had factories with seamstresses who were designing and fabricating, effectively sewing, clothing. But the industrial revolution came along and said, okay, we're going to design the factories and the systems that output those articles of clothing. Software engineers, similarly: in fact, arguably, the work of an AI engineer is really to build those systems that effectively build systems. If you're building agentic frameworks and then deploying those agents to write code, and the code is for the purposes of building an application, then you're building the systems that build the constituent systems that build the application. And in that regard, it's more like industrial engineering than it is necessarily software engineering.

Conor Bronsdon:

Or I would almost describe it as more of an architect mindset versus a carpenter mindset. And, I mean, plenty of people have an architect title already, and I think we're going to see a push more towards that. What are the other industries where you expect to see a lot of changes coming due to agentic AI improvements?

Amir Behbehani:

Well, in terms of roles within the arena of knowledge work, I envision these agents climbing the career ladder. Initially they can do some data entry, then they can book meetings, then they can send emails out and monitor, maybe, a drip campaign, and eventually they just go up this career ladder. They progress; maybe they write some basic applications. I do think we're far from the agents solving for product-market fit, or doing, you know, maybe designing...

Conor Bronsdon:

Good. I need to keep my job. So,

Amir Behbehani:

I think the more complex and creative job functions, as you say, for example, architecture: that's a layer abstracted away from doing specific tasks. So in that regard, I see a whole myriad of especially task-based work being disrupted. Something I'm realizing when I interact with companies is that there seem to be three or four stages prior to even engaging, and at each of those stages the customers are willing to pay. The first is that enterprises want people to explain to them how AI fits into everything. They don't necessarily have an immediate answer to the use case within the organization that can benefit from AI. And I actually think that type of reasoning is design-of-experiment reasoning.

Conor Bronsdon:

Hmm.

Amir Behbehani:

And you learn about that in human factors engineering; you learn about it in research methods. Certainly if you're doing machine learning work, you have to say, okay, what is the use case? But you're reducing that use case down to a single variable, called the target variable or the dependent variable, and you're aligning a bunch of data that you have to bear results for that target variable. That's a way of thinking about the world, an experimental form of reasoning. And that lends itself to going into an organization, figuring out how things are functioning, and then figuring out how AI can benefit those systems and processes. Right now I notice that companies just want help with that. And for some subset of those companies, if you're an emerging AI startup, those are your pilots, and eventually those are your MSAs. I think Bain Capital put out a report recently that I thought was interesting. It showed some hurdles that have been overcome insofar as AI adoption, and some of those hurdles were InfoSec related,

Conor Bronsdon:

Hmm.

Amir Behbehani:

keeping data private. But the other hurdles have yet to be overcome, and they're sort of the classic things: What's the use case? How do we apply this? How do we wrap our minds around this? It seems like the role of the AI engineer right now is really that, and then eventually that translates into ingesting data, ingesting content, building systems, analyzing the output, figuring out from the output where the refinement is, and then, based on all of that, getting agents to automate processes. There's a lot of work right now to be done just informationally. The work doesn't stop there, by the way, and it really shouldn't. That's not the end-all-be-all, but it's certainly a need right now.

Conor Bronsdon:

It feels like part of what's happening is that we know there's so much potential to apply AI, and there are so many opportunities, that zeroing in on where to start, and the steps to take along the path, is a challenge for a lot of folks right now.

Amir Behbehani:

Precisely. One way I would think about this design-of-experiment reasoning is that you've identified the why, and you can distill that through the what into the how. There's a lot of content out there, so the why is getting to the point of being self-explanatory. How do you distill that down to the how?

Conor Bronsdon:

In the short run, you've noted that AI will encourage engineers to think more Socratically, to ask more questions, to really elicit more. What does that mean for day-to-day development now? And then I'm curious where you think it's going to go in the future.

Amir Behbehani:

By engineers specifically? I don't know. I think, more specifically, the agents themselves will take on that Socratic reasoning.

Conor Bronsdon:

Do you think the shift is already happening to a more managerial mindset for engineers, where it's like, hey, I'm going to manage these agents, I need to make sure they're functioning well, versus let me elicit info from them?

Amir Behbehani:

I think the shift is happening where the human being commands the agents. That's the imperative. And then the reasoning that needs to happen between the imperative and the completed task, no pun intended, begs a lot of questions. It's that interrogative process that I think of as the Socratic dialogue.

Conor Bronsdon:

Got it. And then obviously, on the more individual level, we're all engaging in that Socratic dialogue when we simply use ChatGPT to, like, rewrite an email for us.

Amir Behbehani:

Precisely. And when we're engaging in that Socratic dialogue, we're maintaining context in our minds to guide the conversation towards an outcome, winnowing down the reasoning process towards something that we want with increased specificity. That process, again, depends on access to long-term memory, access to short-term memory for adaptability, and training these agents on particular tasks. And again, there's a difference between knowledge and expertise. So there's a knowledge layer with these agents, and there's an expertise layer. For example, if you want to have a contract written, you can leverage ChatGPT to edit various clauses, but you'd probably want someone with expertise to look it over and see if it's cogent. That is where a lot of this modeling context comes into play. And all of that modeling context, as I see it, is a bus between the foundational layer and the application layer. Really, all of us right now, with regard to these agentic frameworks, are just developing the bus that sits between that, the CPU in the analogy, and the application layer.

Conor Bronsdon:

Like many folks listening, I think, I am at this point only using kind of the Socratic approach. I'm leveraging maybe Copilot, maybe just ChatGPT, to help me with certain test-writing tasks. I'm not yet leveraging agents, but I see that it's going that way, and I want to start leveraging agents within my software engineering team. What would be your advice to those folks who are listening and saying, "Damn, okay, here's the next level. I need to start moving there"?

Amir Behbehani:

One piece of advice: with regard to the labor market transaction, there are two sides, a sell side and a buy side. It will be, I think, increasingly difficult to compete with agents on the sell side of that labor market transaction. If you're on the buy side of that transaction, then these agents are great, because they've reduced your operational expenditure, et cetera. So the way I see it, we as humans will probably leverage these agents to do work. And if we own the product of the labor, and we own the relational structure, then the humans directly benefit from the agents. But if you don't have ownership over that relational structure, and you don't have ownership over that work product, then effectively you're competing with the agents.

Conor Bronsdon:

Yeah. And I think this is where a lot of the fear currently comes in for folks, where they hear that and they go: wait, does that mean I need to run my own company? Does that mean I need to go be an entrepreneur? Does that mean I'm going to not be as strong within the labor market in a couple of years as I am today? Like, what do I need to do to change that? And obviously AI has extremely broad implications for what's happening with the labor economy. How do you see AI agents impacting the broader structure of work and labor contracts in the coming years?

Amir Behbehani:

Well, I think the roles-based economy is too rigid. I see that companies have roles, and again, like I said, those roles map to jobs, to workflows, to tasks, et cetera, and roles offer a degree of economies of scale for these companies. Agents, by the way, actually offer economies of scope, because the way to think about an agent is that you're designing a tooling line that can effectively manufacture various products.

Conor Bronsdon:

To your comparison earlier, I'm building the factory versus I'm building the digital piece on the line.

Amir Behbehani:

Right, right. I generally think that if you're building agents into the enterprise, and that's inclusive of workflows but also cultural and structural, then you may want to rethink this concept of a role. When you're starting your own company, when you're functioning as an entrepreneur, you're not engaging day in, day out at the task level, or at a single-workflow level; you don't just have this one job that maps to this one role. In my opinion, that's very junior. Another way of thinking about it is that the roles can be assigned to these agents. And if the roles are being imparted to agents, how does that redefine the roles-based labor economy? If, again, these effective workers can think Socratically, and in many cases they're polymaths who can move between various functions while the agents are doing the work, why should they sit and do just one role all day long? It doesn't make any sense. So if you're thinking about updating the frameworks of the labor economy, I think you slowly want to start questioning the rigidity of the roles-based labor market. We're not all sitting on a factory line at that level in the knowledge work economy.

Conor Bronsdon:

And yet a lot of how we've built work is based off of that factory model, to your point, which is very early 20th century. Fascinating. I wonder if this is going to exacerbate or remove the trend of freelance work, of gig work, that we've seen increasingly over the last 10 years. Do you think that agents are really going to replace freelancers? Or are we going to see this freelancer, gig-work approach where, hey, I'm running agents on behalf of a company now? Because I know, and you've talked about this before, that many large enterprises are taking an insourcing approach and saying: well, if I can build an agent to solve these tasks for me, I'd rather have broader observability and potentially cost savings, compared to, say, passing this basic data task off to another country where it's cheaper.

Amir Behbehani:

Let me go one step broader. I think there's a case to be made for vertical integration. These agentic frameworks inherently have a multi-layer stack: you have the foundational layer, the long-term memory layer, the short-term memory for adaptability, the agents that are trained on tasks, and the integration layer. And there are feedback loops; in fact, a two-fold feedback loop comes to mind right now. If you're allocating these agents to specific workflows within an organization and they're improving processes, okay, there's an improvement insofar as the returns to scale of those process improvements. But then also the outcome of those process improvements, and the knowledge derived therein, can go back and train the models, and effectively train the agents, to further that process. And it's very, very difficult to have those feedback processes without vertical integration. So that's one point. There's a second point, which is that if you think of this multi-layered stack, go back, say, 10 years and consider an analogy to AI, and a very good analogy is the set of machine learning companies in the early 2010s. For those companies, the ML models were like an afterthought to the use case and the application. You were building at the application layer, and you might have an ML model somewhere benefiting the application, but you were really thinking from a top-down perspective. With these AI systems, everything's happening bottom-up. The foundational layer comes about first, then the RAG layer, then the short-term memory layer, then these agentic frameworks.
And frankly speaking, the AI application layer is maybe still around the corner. So if all of that is happening bottom-up, that begs two interesting questions. One is: where are these AI-first companies with vertical go-to-markets? And then: how is that different from an incumbent at the present moment acquiring the AI capabilities, given that they already have distribution and a sort of monopoly pricing power over those distribution channels? Even a vertical go-to-market, AI-first company has to solve for distribution, and arguably they have to solve for distribution faster than the incumbent solves for innovation. So there's a race problem. Maybe in the short run you're going into these large enterprises and agentifying them, but you have to do that vertically, or at least it would seem there's a case to be made for vertical integration so that you get those feedback loops. That's one thought. And the other thought is about those vertical, AI-first, go-to-market companies. This whole analogy is very similar to Blockbuster. There's an argument for a smarter company going to Blockbuster and saying, hey, we're going to bring something analogous to Netflix, but have that inside of Blockbuster, and bring Blockbuster up to code for the current decade. And alternatively, there's just the paradigm where Netflix disrupts Blockbuster.

Conor Bronsdon:

So it sounds like what you're saying is, if I'm at the startup level, I need to focus on the innovation side. I don't have that vertical integration opportunity quite yet. I can start to move towards it, but I don't have the vast stores of data and resources that the incumbents have. What I can do is, like many startups have in the past, out-innovate these enterprises if they're not careful. Whereas the enterprise needs to look at it and say, hey, I need to insource. I need to really vertically integrate so that I can take advantage of this monopolistic pricing power, these data advantages, these scale economies that I already have. And that's kind of how you would think about it from those perspectives.

Amir Behbehani:

Yeah, when I think about a startup in general, I think of the stock of a startup as having an inherent option premium. It's not an explicit options contract, but there is no profit distilling down to shareholder value. So what is the equity buyer, i.e. the investor, buying? Generally, they're buying the option premium. And as you solve for product-market fit, i.e. you solve for distribution as a startup, you're exercising that option premium and converting it into enterprise value, or effectively equity value. That analogy is sort of like potential energy being converted into kinetic energy. And generally, a lot of startups aren't going to fail because they can't innovate. They fail because they can't solve for distribution. Meanwhile, the vertical integration strategy says the following: the acquiring entity has distribution. Now, interestingly, there's something called a reverse acquihire now, and that's an interesting form of vertical integration where the larger company, let's say the incumbent, is not really buying your stock. They're actually employing the founders and signing a non-exclusive licensing deal for the underlying AI technology. An exclusive licensing deal in U.S. copyright law is a transfer of IP, but a non-exclusive licensing deal allows your product to still exist, and you can have non-exclusive licensing with many different vendors. But here there's an interesting hybrid, because the founders are being employed to work at the incumbent and build that AI from the inside. That's a form of vertical integration. All of this is really kind of interesting in the short run, until those AI-first, vertical go-to-market companies come about.

Conor Bronsdon:

Yeah. And that's certainly coming. So we've been talking about this from the company perspective, the company strategy side, but we've also spoken a little bit about the individual side and trying to make that shift to, hey, I'm managing these agents, I'm helping them, versus, hey, I'm individually executing tasks. How should folks who aren't at that founder level think about this? Maybe an engineering manager listening to this who's thinking, hey, I'm trying to improve in my career, I'm trying to grow. Or even an individual engineer right now, maybe a junior or senior engineer who has been thinking, hey, I want to move into engineering management, but I'm worried now: is my role going to go away? Am I going to have this opportunity? How should they be thinking about their approach to the labor market now and in the next couple of years, as this AI transformation wave continues to roll through us?

Amir Behbehani:

Um, there are two ways that come to mind, just in the short run. One is that you leverage the AI agents, or just the AI frameworks, for marginal improvement over your current productivity. So you can get maybe 20 to 30 percent improvement over what you'd do otherwise.

Conor Bronsdon:

Which is where I think a lot of us are today, where it's like, hey, I'm leveraging Claude or Cursor or whatever else to help...

Amir Behbehani:

That's right. The other way, again in the short run, that comes to mind is effectively designing the AI systems that build that which you're tasked to work on. So it's building the meta-models as opposed to building the models; building the systems that build systems. And that is, at least right now, not something for which these AI agents are best suited. Maybe that'll change in short order. I mean, things are moving very, very fast right now. But those are the two ways I would suggest to the person you mentioned.

Conor Bronsdon:

It is certainly moving really fast. We've got to make sure we get this episode out quickly so that not too much...

Amir Behbehani:

Yeah,

Conor Bronsdon:

changes before we go live. I know another thing that I've heard you talk about is this idea of a synthetic marketplace. I'd love for you to expand on that concept as I'm starting to increasingly see that as kind of where we're going.

Amir Behbehani:

Well, think of two-sided marketplaces: Uber, Airbnb, et cetera. What is a two-sided marketplace? There are two demand curves. In the case of Uber, there's a demand function for passengers and there's a demand function for the drivers. And there's an interesting, recursive, cross-product price elasticity of demand that defines how those two demand curves interact with each other. An example of this would be Adobe Acrobat Reader and Writer. Once upon a time, the Reader had a price and the Writer had a price. If you're not familiar with the Writer, Adobe Acrobat Writer would create the PDFs, and it was sort of the de facto way you would create a PDF, and with the Reader, you just read the PDF. And you would price these two products in accordance with the point on the demand curve where there's unitary elasticity. But what Adobe figured out is: okay, if we give the Reader away for free, and therein maximize consumer surplus, then for each unit of the Reader downloaded, you're pushing out the demand curve for the Writer. And the demand curve for the Writer is fairly inelastic. So price times quantity on the Writer side, that revenue differential, more than offsets the fact that you completely subsidized the Reader. And that's the cross-product price elasticity of demand that's sort of recursive, if you will. And that defines a platform. So a platform, or maybe a marketplace, has those two demand curves, and they're feeding off of each other. Now, if you think of Uber versus Waymo: Uber has two demand curves. It doesn't seem like Waymo has two demand curves, because there's no driver in the car. But it functions like a marketplace. From the user's perspective, they're calling a car and the car comes and picks them up, as does an Uber. So that's what I'm thinking of in terms of a synthetic marketplace.
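The Reader/Writer pricing logic can be sketched numerically. Every number here is invented for illustration, and the demand function is a toy assumption: the only point is that when Writer demand is inelastic in price but grows with the installed base of Readers, giving the Reader away can more than pay for itself.

```python
def writer_quantity(price: float, reader_units: int,
                    base: float = 1000, elasticity: float = 0.2,
                    pull: float = 0.05) -> float:
    # Toy demand curve for the Writer: fairly inelastic in its own
    # price, but pushed outward by each Reader in circulation.
    return base * (1 - elasticity * price / 100) + pull * reader_units

# Scenario A: both products priced (hypothetical figures).
reader_rev_a = 20 * 5_000                          # $20 Reader, 5,000 units
writer_rev_a = 100 * writer_quantity(100, 5_000)   # $100 Writer

# Scenario B: Reader free, so adoption explodes and shifts
# the Writer's demand curve outward.
reader_rev_b = 0 * 200_000                         # fully subsidized Reader
writer_rev_b = 100 * writer_quantity(100, 200_000)

# The extra Writer revenue more than offsets the subsidized Reader.
print(writer_rev_b + reader_rev_b > writer_rev_a + reader_rev_a)  # True
```

Under these made-up parameters, scenario A totals $205,000 while scenario B totals $1,080,000, which is the shape of the trade Adobe made.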
You don't have that inherent chicken-and-egg problem that you would have when you're trying to seed a marketplace with human beings. And with regard to these agents: if these agents have the ability to automate entire jobs, because they've been trained on workflows, then you can effectively build a synthetic marketplace where humans describe the work that is to be done. That's very different than posting a job for a role. But if you could describe the work that is to be done, and then the synthetic marketplace itself allocates agents to doing that type of work, that to me is sort of Waymo meets Upwork. And that goes back to your earlier point about how this stuff disrupts the freelancing space.
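The "Waymo meets Upwork" allocation step could look something like this. It's a hypothetical sketch: the agent catalog, the keyword matching, and all the names are invented, and a real system would presumably match on much richer workflow descriptions. The structural point it shows is that there is only one demand curve (humans describing work), because the supply side is agents spun up on demand.

```python
# Invented catalog mapping workflow skills to trained agents.
AGENT_CATALOG = {
    "bookkeeping": "ledger-agent",
    "data-entry": "extraction-agent",
    "report-writing": "drafting-agent",
}

def allocate(work_description: str) -> str:
    """Match a description of work (not a job posting for a role)
    to an agent trained on that kind of workflow."""
    for skill, agent in AGENT_CATALOG.items():
        if skill in work_description:
            return agent
    # No specialist matched: fall back rather than leave demand unmet,
    # which is what makes the supply side "synthetic".
    return "generalist-agent"

print(allocate("monthly bookkeeping for a small retailer"))  # ledger-agent
print(allocate("summarize customer interviews"))             # generalist-agent
```

Because the supply side never has to be recruited, the marketplace clears from day one; that is the contrast with seeding a two-sided marketplace of humans.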

Conor Bronsdon:

Yeah, it's interesting too, because, I mean, obviously Waymo has other challenges, right? They have capital challenges, they have resource needs, but those are needs that companies can scale more effectively without some of the complications that come with a double-sided marketplace. And frankly, companies already have that challenge: do I have the cash, do I have the resources I need? So it does simplify things for the entity and gives them opportunities. I love the Adobe example you used; I think that's a great way to explain it. So this has been a fantastic conversation on the labor economics side. I want to close this conversation, and I wish we had another hour, but it is what it is, by diving a bit more into where you see the future of AI and agentic AI going. How do you see AI impacting code development and software engineering in particular, in the mid to long term?

Amir Behbehani:

I think in the mid to long term, humans are in the loop, and they are contributing to the code base, and they are doing so quite meaningfully. They're benefiting from AI, and the two work as counterparts. That will probably be one flow. And then there's another flow, probably happening at the same time, where humans are effectively building these frameworks to automate the other flow. And the question then becomes, in the not-so-distant future, how does that bear out? Which flows are actually allocated to the agents and the agentic frameworks, and to the humans who are effectively designing those frameworks? And which flows go to, sort of, an organization with a copilot versus an organization with an autopilot?

Conor Bronsdon:

This seems like it aligns really well with what you're doing with Memra, your company, where you're helping enterprises to leverage agentic AI and kind of scale out operations internally versus outsourcing.

Amir Behbehani:

Correct. I think tactically there's some work that has been going offshore, or to various outsourcing vendors, that I see as low-hanging fruit: work that can be brought back into the enterprise for AI to take on.

Conor Bronsdon:

Fantastic. I really appreciate all these insights, Amir. It's been a wonderful conversation. As we wrap up, could you share any closing thoughts for the software engineers or AI practitioners listening who are looking to stay ahead and get a jump on leveraging AI agents? What should they be doing?

Amir Behbehani:

I think the answer to that is: do. I mean, with a lot of things, maybe there are thinkers and doers, and they're not the same; the thinkers are contemplating and then allocating tasks to the doers. But here, specifically, I think the doers are the thinkers, and there's a lot of thinking that you can do as an engineer as to how to shape the future of agentic frameworks, or the future of work, just by doing. A lot of the very successful AI companies, the startups, are developer-led right now. This model of delegating work doesn't necessarily translate to AI as well as it does to other areas. And that's okay in the short run. In the long run, yeah, you may want to benefit from the returns to scale that come from delegation. But in the short run, there's so much to learn by interacting with these models, by developing the infrastructure to augment the models, by learning about the processes that organizations have, and improving on those processes and linking them to the agentic frameworks and, effectively, the LLMs. There's so much learning that can take place with the LLMs. In fact, some of that learning can fuel these vertical go-to-market, AI-first companies, and some of that learning maybe speaks to the other point about vertical integration. There's a lot of work to be done here. It's super interesting work. It's really kind of fun, to be honest with you; both producing and consuming this stuff is super exciting.

Conor Bronsdon:

Perfect. Well, thank you so much for coming on the show, Amir. It's been wonderful having this conversation with you. I know I've learned a lot, and I hope our audience does as well. For those who want to learn more and follow Amir's work on these topics, you can find him on LinkedIn; he shares a lot of incredible stuff there. You can also check out Memra at memra.co to learn more about AI and agentic systems. As always, thanks everyone for tuning in to Dev Interrupted, and be sure to subscribe to our newsletter on Substack for deeper dives into the topics we discussed today. Amir, thank you so much for coming on.

Amir Behbehani:

Thank you very much.
