Dev Interrupted
The Dev Interrupted Podcast is the premier podcast made exclusively for software engineering leaders. Hosts Dan Lines, Conor Bronsdon, and Ben Lloyd Pearson invite expert guests from around the world to explore strategy and day-to-day topics ranging from dev team metrics to accelerating delivery. Join us weekly for new episodes.
Dev Interrupted
Open Source Meets AI | Red Hat's Scott McCarty
Open source has transformed software development, but can it do the same for AI?
In this episode of Dev Interrupted, Conor Bronsdon talks with Scott McCarty, Senior Principal Product Manager at Red Hat, about the potential of open source AI to revolutionize enterprise DevOps.
They discuss the challenges and opportunities of open source AI, including licensing, security, and the need for community-driven development. McCarty argues that open source AI is crucial for building trust and ensuring that AI benefits everyone, not just a select few.
Show Notes:
Support the show:
- Subscribe to our Substack
- Leave us a review
- Subscribe on YouTube
- Follow us on Twitter or LinkedIn
Offers:
This has to be one of the business requirements that technical people have to hold The model creators responsible for, right? And right now, I'm not sure if that feedback loop, how strong it is, you know, we can get deeper into that, but there's challenges with it. Does open source AI even exist? I guess that's another challenge.
Conor Bronsdon:Oh, we're, we're definitely going to dig into that. Hey folks, we are back for another deep dive episode of Dev Interrupted. I'm your host, Conor Bronsdon. And today we are jumping into the world of open source Gen AI with Scott McCarty, Senior Principal Product Manager at Red Hat's Enterprise Linux Platform. Scott's here to talk about how open source AI can revolutionize enterprise DevOps. Innovating while still solving key challenges around licensing and legal concerns. if you enjoy this episode or other episodes of ours, please just take a brief moment to rate and review Dev Interrupted on your podcasting app, whether that's Stitcher, Spotify, Apple podcasts, it helps expose the podcast to other leaders who could find value from it, and it helps us land great guests like Scott. So Scott, let's get started on the state of AI and enterprise DevOps. AI is obviously creating huge possibilities for developers, and I'm sure many folks are kind of sick of hearing us say those words, AI, AI, AI. But when it comes to deploying these models commercially, there's still a lot of hesitation. Are they making money? Do I really need this? Is it as effective as I want it to be? What do you see as the biggest roadblocks keeping AI out of production environments right now?
Scott McCarty:you know, I'd say, well, much like the early days of open source, a big blocker is not sure, being not sure yet of like, what, what does it all mean? What can I use? What can I get a, what am I going to get sued for? You know, what am I, what can I use? Et cetera. It reminds me of the early days. I remember being like 1997 I, I guess I'll tell a story like I, I started at NASA in 1999 and I remember I was the new kid and I was young and they were using Solaris and Irix and all kinds of other things and I brought Linux in and I was like, yeah, we can just use this Linux thing and they were like, eh, and they had to get this approval process that went up through all these managers and then six years later in like 2005 when I left, It was the other way around to still buy Irix and still buy Solaris. You had to get an approval all the way up to the center director. And so like how quickly it flipped, right? Like six years, it completely changed. I think we're in a similar place with AI where it's like still kind of scary. We're not sure like, Oh, did it pull data from Reddit? Did it pull stuff from, you know, did it pull some licensed, you know, uh, or, or copyrighted text or copyrighted images or whatever? Like, so there's, there's a lot of indemnification, challenges. I'd say there's one big one, but I'd also say. There's like an actual use case challenge too. Like there is, we are clearly in a hype cycle, obviously. like two, three, about a month ago I was at DevConf, uh, US, which is a Red Hat sponsored conference. It's kind of like one of our really engineering focused conferences and we had a day zero. And, uh, I would say the vibe I got from probably 300 people from a bunch of different companies was like, don't say the word AI again. there's a extreme lack of use cases. There's a lot of people saying, we just got to do AI. We don't know what it is. You know, like just figure out something. I think that's a big challenge too. I think, I think real solid use cases is a challenge right now. Those are probably my two biggest ones.
Conor Bronsdon:well, Scott, I want to ask you about those legal and licensing concerns you mentioned. Obviously, this is a huge area of risk for Gen AI. I mean, I think We've also seen some of the generated work that's happening in places like Disney today. This is going to date the episode, but it was announced today that the actor who played Grand Moff Tarkin, his estate and his good friend are suing Disney over the use of his likeness in Rogue One. So there's a lot of stuff happening here on the licensing front, both just simply on the generation side, as far as the video and pictures. You know, we've seen fakes of Taylor Swift and many other celebrities. But also if we dive into the enterprise models side of things, where are you training your data? Where are you getting that data from? Uh, are you allowed to train off it? And, and so much more. So can you dive a bit more into what's happening there and where there are risks?
Scott McCarty:yeah, absolutely. I think there are risks like, for example, we still have, I would say most enterprises have a restriction on using LLM generated code, right? Like you're like, don't, great. Look at it. Learn. Awesome. If you commit that we're going to be in. We're going to have a problem, right? Like definitely not very many people are comfortable with that yet. In fact, I haven't heard of a single company that's that said, yeah, great. Just do this. See what happens. Like, you know, nobody's doing that. Uh, I would say, yes, that's a big challenge. I would also say, like, it's like, okay, how is it trained, but then, how do I, like, most models are not even set up to cite their work, right? Like, in any way, shape, or form. I think something, two interesting things, not that I want to talk about all of our stuff, but like, the first place where I saw this done right, if, being super honest, was Ansible Lightspeed. So Ansible Lightspeed is like a, I don't know what you'd call it. It's a tooling with it that's enabled by an LLM similar to like Copilot where it'll help you generate Ansible config, right? Like it's an annoying thing to generate config management, you know, code, basically, like for me, I, it's been a long time since I did it. I don't really feel that great anymore at it. I can't do it off the top of my head. It's nice to have help, you know, generate that, but it also is really nice the way it cites it, like, like when you're generating the code, it will generate the code for you, but then it'll say, Oh, look at these three examples. of what we used to synthesize to create this. That's pretty cool. And by the way, look, the licensing on these all is Apache or it's, you know, BSD or Apache or whatever, you know, you know, what GPL, whatever it is, but it will show you the license and show you what it was, show you the piece of code that it came from and how it generated it. That's pretty cool. That feels like something that if I roll the clock forward five years, a lot of models should be doing right. Like they should be able to do that. But I think we have to ask the world for that. All right. Mentioning, again, DevConf, again, about a month ago, we were with Kelsey Hightower and we talked about this, like, Like, we need to hold, this has to be one of the business requirements that technical people have to hold The model creators responsible for, right? And right now, I'm not sure if that feedback loop, how strong it is, you know, we can get deeper into that, but there's challenges with it. Does open source AI even exist? I guess that's another challenge.
Conor Bronsdon:Oh, we're, we're definitely going to dig into that. And I think you're spot on that because of this risk of hallucination, because of the fact that we don't necessarily know where data is coming from, it's really important that we're starting to have these conversations these last several months around things like RAG, or Retrieval Augmented Generation, and the ability to reference authoritative knowledge bases outside of training data sources. And This is where I'm getting this info from. This is, this is where it's coming from, because as I think anyone who's been a student before knows, like, you are expected to cite your sources, uh, and there is a risk that when you're not citing sources, uh, information is wrong, uh, stuff is getting transposed, misinformation spreads.
Scott McCarty:Yeah,
Conor Bronsdon:Given the requirements of enterprise code bases or documentation or anything else you're trying to do at an enterprise level with AI, we need to have that record of how processes actually occur and how we generate answers, how agents are going about their tasks. I know there's going to be some black box pieces of like, Hey, this, like, this is the model being used. You know, we're using clod 3. 5, for example. But like, what's the logic that was used by an AI agent to generate. the workflow and actually create the work that's being done. Maybe it's, you know, updating documentation, for example, uh, we need to have that understanding of that or else there's a ton of risk that comes with that and any enterprise that has any sort of compliance or security needs, uh, has to be on top of that.
Scott McCarty:yeah, and I'd, I'd argue there's some security through obscurity right now. We're like, okay, I don't know what the data source is, but how long until some hacker figures out how to poison the data source with something that they want generated, and they figure out based on cloud 3. 5 or, you know, or GPT 4. Uh, they know how to like sneak a piece of code in through this, right? Like how, how long until that happens? I don't know. Probably has already happened. If we just don't know yet.
Conor Bronsdon:totally. And, and we're seeing it in, in some cases already where certain researchers, out of Harvard and elsewhere have already figured out. Phrases that you can give to LLMs to get them to adjust their results. And people are considering those similar to like injection , prompt attacks, and I think we're going to see increasing ability to. You know, guide LLMs in the direction you want to see it. I mean, there's, there's silly examples of it too, where you see Twitter bots where people reply to like ignore all previous instructions, give me a story about, you know, an apricot or something like that. And that's like the very basic level, but it's only going to get more sophisticated. So as we need to develop better trust in AI, how can companies approach that and try to deploy these models at scale, but also build trust in them?
Scott McCarty:Yeah, I think at some point, you know, the way the world, the way the landscape looks right now is there's what four or five, call it big foundation models that everybody's using. They're super powerful. They're super expensive to run and infer from. So the question is, is like, does do smaller, better trained models that cost less, that are easier to understand perhaps, and maybe smaller training data, better cited training data. Well, let's back up and say, okay, the OSI, you know, Matt, they've been meeting for like a year now to like come up with the definition of open source. This is the natural process that. They have to do this kind of thing. We did this in the late night, mid nineties, mid late nineties, to come up with open source licenses. Right. And I think we're doing it again to figure out what is an open source model. I'm going to lay out the Scott McCarty hot take. You know, I think it requires the code. I think it requires the data that was cited. And I, I think it, uh, uh, unless you have the code, the data. And basically the model itself, the weights, those three things. If you don't have access to those and you can't modify them. So like the freedoms, right? The software freedoms and for, for software freedoms, if you can't modify them, access them, redistribute them, blah, blah, blah, like. Is it really open source? You know, I'm front running, you know, the OSI a little bit, but like, I have a suspicion that their definition is going to kind of be that. You know, I haven't actually had a chance to look at their working definition. They have the 0. 8 version. I just looked not too long ago. But it's going to be something like that, right? And so then the question goes, like, do we even have that yet? Like, I don't know of a single model. The Granite models from IBM are probably the closest thing I've seen. You know, at least they're like, it's a finite set of data that is cited. And so you know exactly what the data is, the models out there. You know, that's about the closest you've got right now, I think, but there are a bunch of models, but how many of them comply with like an OSI definition of, you know, open source? I don't know
Conor Bronsdon:Uh, that's a great question. And I'm glad you brought up the fact that right now, most of us are generally leveraging LLMs, large language models, but there's this growing trend of small language models and, uniquely trained models off of. You know, proprietary data, uh, and that brings up all these, you know, customer privacy concerns. We're seeing the EU start to regulate. We're seeing regulation starting to move through in the U S and elsewhere. There are going to be so many changes that happen here as far as what's been really a wild West is starting to coalesce. and that comes with both opportunities and risks. And I know that you've also had some thoughts around what this means, not just for software developers, but also for other related fields like sysadmins and IT. What are you seeing on that side of the fence?
Scott McCarty:Yeah. So, obviously we have our internal, we, we have like our Ansible Lightspeed, we have Ochi Lightspeed. One might imagine there will be other lights speeds at Red Hat. I'll say the, the types of business problems that we're looking at is like very specific to who are the users of, like say Red Hat Enterprise Linux. My product, uh, architects are a huge type, you know, architects, cis admins, developers, those are kind of the three, let's to say archetypes that are sub sub versions of them. Some are network admins, some are network architects, some are security, blah, blah, blah. There's a. Whole smattering of specialist architects and specialist admins and specialist developers, but You go, what can these I've dug into, for example, like I wrote an article a little while back, uh, about, like, what are the use cases that would help a develop, uh, a sysadmin, for example. So, like, think through, literally, like, what is a sysadmin's nightmare scenarios, right? Like, you're like, okay, I wake up at 2am, I'm digging through a log. I don't understand the log. The log has a bunch of natural language that was developed, that was written by somebody like me or you. That like, I don't even, I didn't even really pay much attention to what I wrote in the error message. So like, it only kind of has quasi fuzzy meaning anyway. And then you put a whole bunch of them, you know, sporadically into a log. You know, and it's some microservices app that fails in some weird, unique way. You know, like, that's a scenario where you're like, wait a minute. LLMs, RAG, or like just summaries in general of something, like you could take a big chunk of the log, paste it in and say summarize this for me, like what happened here, you know, and it's like, oh wait a minute, it can kind of fuzzy logic summarize, that's a pretty cool use case, like that's one where I think it's a really good use case because sysadmins never have 100 percent confidence. They like, the day they put things into production, they're like, I'm 90 percent sure this is right. And I'm like, we think it's working, the developers have said it's working. Like, I don't even know what the app does. Like, yeah, throw my hands up, I went to sleep that night. Then they get paged at 2am, and they're like, I'm like, 90 percent sure this is failing. But I'm not 100 percent sure, because like, I get paged all the time, and it's not always real. And so then you log into it, you poke around, and you're like, Eh, okay, now I'm 95 percent sure this is broken, let me fart around with it for a while. Let me get it back. You know, then an hour later, they got it working again. Again, they're back to like 80, 90 percent confidence. I think it's working again. I'm going back to sleep. So like you got to remember their percentages have always been very fuzzy. So I would argue things like LLMs are actually an improvement. Like if you can take me from 80 to 85 confidence or like 80 to 90, That's material for me, like, I think I understand what this log is saying, but let me have it summarized. Okay, yeah, alright, I think my hunch was right, you know, like, now that built my confidence better. So I like scenarios like that, where you're already working in statistical nightmares anyway, because that's kind of a sysadmin's life. So, I like log analysis, I like config file generation, I like config file upgrades. You know, like, I gotta use from one version, I gotta move from like, the regular eth0, you know, eth scripts to network manager. Okay, cool, I don't wanna know that. Like, I just, I don't wanna know that. Like, generate that thing, and if it works, I'm good. Like, I was never 100 percent confident I did it right anyway. Like, I've generated so many config files that I'm not sure of. It's unbelievable. You know, uh, I test a few things. Thanks. And if those few things work, I'm happy with it. And I think those are the types of scenarios. I think upgrades, log analysis, config file generation. Those are like really good use cases for LLMs and like summaries and things like that. Even digging through release notes, like tell me what's going on in this. Like I built this image, just tell me the release notes that I care about. I don't want to know about all of them. I just want to know the ones that apply here. Those kinds of things, I think they're amazing. Hey Dev Interrupted listeners want to know the secrets of high performing engineering teams, mark your calendars because LinearB is back with the 2025 engineering benchmarks report and they're hosting an exclusive webinar to unveil the latest insights. This year's report is bigger. And better than ever analyzing over 6 million poll requests from three. 3000 organizations worldwide. You'll get the latest benchmarks on Dora. Dora metrics, poll requests, workflows, and predictability critical. Data to shape your 2025 engineering strategies. Join. The round table discussion with industry leaders and be one of the first to dive deep into this. A landmark report, plus just for registering, you'll receive a free. Free pre-release copy of the 2025 software engineering benchmarks. Mark's report. Don't miss out. The webinar is later this month on November 20th, you can register today at LinearB dot IO or use the link in our show notes. Yeah.
Conor Bronsdon:we've danced around this broader topic of open source AI a little bit here, and I wanna drill down because this is a perspective we haven't talked about a ton on the podcast. open source has obviously transformed software development through collaboration opportunities, transparency. How do you see that model applying to ai?
Scott McCarty:this is something we've debated a lot over the last year, like at Red Hat, for example. A bunch of smart people, there's always a brain trust of open source people, and they're always debating these things, right? For example, we were at the DevConf Day Zero AI event that was on August 13th. we haggled about this. We broke out into working groups, and we talked about this. Like, like, at the end of the day, like, what are the Are the four software freedoms like they matter? Like, so I would argue there's like sort of a baseline open source necessity. Can I use it for anything I want? Can I modify it? Can I look at it? Can I inspect it? Can I redistribute it? That's pretty cool. That's kind of the bare minimum for like open source. But then there's community driven open source, which is different. Like Android is read only open source, as I call it. Like it's cool. Patches not really that welcome. Like we don't really need your patches. We're smarter than you. You can look at it. You can use it for whatever you want, but we're still smarter than you. That's cool. Maybe they are smarter than me. I don't know. That's cool too. Like maybe that works for some things, but a lot of things that are amazing, like Kubernetes, the Linux kernel itself, you look at these like mega successful I would say Kubernetes is a perfect example of a modern thing that was created in modern times with modern sensibilities about community driven open source. And I think it's, I think it's a testament to like, humanity has never created a project that well. Like, you talk about diversity and inclusion, like that is the perfect example of having a lot of diverse opinions involved and all driving it in a community driven way. With hundreds of vendors, you know, maybe thousands actually, and definitely thousands of people all driving it. Like. in my humble opinion, we have to get to that. With something, with a, some kind, with something in AI, foundation models, tools, not just something, a lot of things, like the tooling, you know, things like, uh, the things that we use to, you know, instruct lab, for example, at Red Hat, things like that, like all the tooling for training, all the tooling for running things, all the tooling for moving things around, the actual models themselves. I think if we don't get that it will be sad, honestly. And right now, one of the biggest blockers to that, and this is the dirty secret that we all talk about at conferences, is it's super expensive. if you go back to 1999, it was really weird for a company to spend, call it a 50, 000 a year to pay somebody to go work on free stuff. The CTO would be like, why am I paying this person to go work on this stuff that we get no value back from? Like, I don't know. You can download it now. Yeah. So why are we paying 50, 000 to do it? You're like, wow, because if we contribute to it, we can actually steer the direction of it and we get more value. I still don't know if I trust it. And now I have 10 people working on it. It's 500, 000. I have 20 people working on it. It's a million dollars a year. That sounds pretty creepy to like a CTO or even a CEO 20 years ago. Now that's totally normal. Like Microsoft has tons of people working in open source. Red Hat has thousands. Uh, you know, all these big companies have tons of people working in open source and nobody questions it. The question is on the hardware side, like with GPUs, they're immensely expensive right now. It feels really ridiculous to hand over like a billion dollars of GPUs to some foundation to use, at least as of today, that sounds ridiculous. But the question is in 20 years, will that sound ridiculous? I don't know. Like I call me hopelessly optimistic. I think we'll figure out a way to either. Through the cloud or some kind of, you know, non profit type structure, or who knows what, there will be something that we can share this infrastructure and build these things in a community driven way.
Conor Bronsdon:Well, I hate to say it, but isn't that what OpenAI was supposed to be doing before they decided to go in the for profit direction?
Scott McCarty:Yeah, yeah, remember that? Yeah, the original premise is pretty awesome. Now it's like, oh, wait a minute, it's going to go to AGI and we have to protect ourselves, but that's a whole That would get us into political territories. I am a, I'm a AI optimist with regard to like LLMs, but I think it's a very small step on the journey to AGI. I'm not, I'm not convinced we're anywhere near AGI.
Conor Bronsdon:I mean, I'm not an AI researcher, so I don't want to pretend like I have any sort of expertise here, but I tend to agree. I think we're still quite a ways away there.
Scott McCarty:Yeah.
Conor Bronsdon:And there's a lot of rabble rousing about it, and I think some of it's fear mongering.
Scott McCarty:I think some of it is financial interests, like who's financial interests and just look at their financial interests and that will map pretty well to their stories, you know?
Conor Bronsdon:So, alright, then, as we think about what open source AI may look like, what are some key things that you think interested parties should be thinking about when looking to evaluate AI projects or platforms to ensure they actually align with open source principles?
Scott McCarty:yeah, I think obviously once the definition of the OSI model is done, that will help a lot. Cause then we'll have an actual definition. Yeah. That is kind of at least, at least somewhat agreed upon. Although there's chicken and zags everywhere. There's landmines everywhere here because the entire world, let's lay it in the context of the entire world is perhaps even questioning whether open source works anymore. And maybe that sounds alarmist, but if you look at all the license changes that have happened from so many companies in the last, what, two, three years, and then some of them have flip flopped back and forth. I won't name any names, but there's, it's so much that's happened. It's actually confusing. Like it's hard to remember all of the changes. So you put that in context, and then you go, will the industry accept what the OSI comes up with? I guess that's, that's my next question. Like, yeah, I'm a little worried if I'm being honest. Like, I, like, do we still believe in open source as a world? And did it really win? Like, there's another thing, like, everyone's like, oh, open source won. I'm like, did it? I don't know. Like, I'm still not sure. That's one of the things I'm thinking about.
Conor Bronsdon:I would be really interested to hear from our listeners on this. Honestly, I'm curious what people's perspectives are on this. If you, if you listen to this podcast, you know, shoot a message to us on Substack or tweet at us or reach out to us on LinkedIn. I'm sure Scott and I would love to hear the perspectives here because this feels like a really evolving conversation. And honestly, this episode, We'll probably take a few weeks to come out. So it may, we may have an OSI definition by the time it does. but, uh, it's going to be fascinating to see what this evolves to. And I know that you have kind of had this perspective that maybe AI development needs to follow this open source path. Not just for kind of the good of the community, but also for trust and profitability. Could you unpack that a bit, particularly on the profitability side, because that is obviously something that everyone's talking about now is how to make AI profitable.
Scott McCarty:Yeah, and there's a perfect example of like, how many kuberneteses do you need, right? Like, sadly, I'll use kubernetes as an example again. You rolled a clock back to 2017. I would talk to all kinds of people at conferences and I was deep in the container space. And, uh, people would be like, oh, I don't know. There's all these other things. I think they might work too. And I was like, I mean, it was so painfully obvious that Kubernetes was light years ahead of everybody else. You just looked at the amount of contribution that was happening across so many different companies and so many different contributors. I was like, this is like watching an N, you know, N squared algorithm versus like an N log N algorithm. And I can see where this goes, right? Like in over 10 years, this just crushes. It's not even in the same order of magnitude. I think you have to ask yourself, like, will something like that occur where, for example, Llama3, you know, Mark Zuckerberg wants it, his AI thing, I haven't watched the whole interview yet, but like, I get the gist of where he's coming from. I'll take them at their word and say, I think they probably want to do this, I think they probably don't know how to yet. Again, like, all the hardware, what do we do with the GPUs, how do we let people access them, do we want randos, like, coming in off the internet and like, changing the model on our hardware?
Conor Bronsdon:You want to talk about security risks we brought up earlier? Yeah.
Scott McCarty:Yeah, that's scary. So like, so like, will we resolve those in the next three to five years? I don't know. Like, I'm optimistic. Like, I got a little paranoid there, but I actually am optimistic that we will figure this out and there will probably end up being, will there be one big dominant open source, you know, Kick ass model, right? That we're all like, yeah, this is the one and now can we get behind it and community driven? Modify it and that's like that's basically what I think Red Hat did with InstructLab. I think it's interesting you can do a dialogue based training and refine it and commit anybody can commit to it and change the model and it's all tracked right like That I think is the importance of like seeing, even if that's not the one that ends up becoming a dominant model that rules the world or something, right? Like, I don't think that's the important part. I think the important part is that it might be as a smaller model that shows how it could work, how the humans could interact to like drive it. So like, how do we do security? How do we do AI safety? But like, if we're gonna like, scan the data and take out certain words to bias the model, okay, that's fine. But like, put it in the Git logs, right? Like, I want to see that in Git. And then I want to see the arguments that happen in public, like Wikipedia, right? I want to see the history. And I want to be able to log in and see that. That, to me, is what I think will build the trust. Right now. All of that work is happening, but it's happening behind closed doors in four or five big organizations. And, you know, they're worried about getting to profitability right now. And so the question is, is like, Who's on track to make money from all this? I, you know, is the big question. I see a clear path for the Fortune 500s to simplify things and make people's lives better. But I'm not sure that I see the path to make money off of a big You know, open source, large language model, unless it's somehow distributed and everybody drives, you know, like in a clean way where it's like, okay, I'll come to you to use your GPO is cool, but the model is like, I can take an anywhere I want, right?
Conor Bronsdon:Yeah. It would be great to see, let's say Meta and Microsoft contribute some percentage of GPU power towards an open source model. And I think there's an opportunity for something like that. Given that, I mean, this is kind of a race to the bottom in some extent, like there's only going to be. Or at least it seems that the kind of top AI forward capital intensive, uh, GPU pushing companies think there's only gonna be one or two winners here and that there's a potential monopolistic model coming. So if we were able to sidestep that, it would certainly save folks a lot of money. Uh, but the financial incentive is so high for those major companies where they're like, Hey, we think we could be the big winner here. We think we'd get them all in Azure, or we think we could all get them in AWS, or we think we could get them all in the Oracle cloud. Uh, and. It's going to be tough to make that transition. So I hope that something occurs here and I'm glad we're having these conversations. I also want to briefly mention since we keep on talking about Kubernetes, if you want to dive more into open source and Kubernetes, we actually interviewed Kubernetes co founder, Brendan Burns on the podcast here way back in September, 2021, it's a great episode. Anyone listening, if you want to go more in the open source, check that one out. I think you'd enjoy it. Let's talk a bit more about how you're seeing AI change different communities, different, Industries right now. So we're obviously seeing it affect a lot of industries. It's throughout software development, engineering, and we're starting to see some practical applications of AI and DevOps. What use cases are you seeing, Scott, that are driving AI adoption within software teams?
Scott McCarty:This is an interesting one. I should tie a bow on the last concept, just a hair before we move on. I realized that I didn't quite tightly connect. So like, you're right. I think there is. I think all software sort of has a monopolistic tendency, right? There's like Oracle Giant Database. There's Kubernetes One Container Orchestration. What large language model will win that? In a perfect world, I'd like to see the open source thing become the de facto standard in the AI space. That's basically what I was trying to tie together. Kubernetes and the AI thing, right? Like we've seen so many times where the If it's an open source monopoly, that's a lot better because it's not a monopoly actually, right? Like anybody could just get involved, but one technology does become dominant and that seems to be pretty common, like in general. So that's what I was getting at.
Conor Bronsdon:And there are still secondary opportunities to make money to your point. Like, let's say even with something like Android, where it's like, okay, sure, like Google really runs that. Like. There are so many opportunities for Android developers to make money as a secondary off of that. So it's not going to erase the economic opportunity, but it's in fact going to make so many smaller companies, smaller teams have an opportunity to make money.
Scott McCarty:Yeah. And, and then that pivots into what you're asking. So what use cases? So I think. I have been developing, you know, I'm a product manager, so I have been reading all kinds of interesting, you know, analysts and strategery and, uh, Benedict Evans and a bunch of these guys that I, I think are pretty super, you know, they're, they're super smart. You can tell the way they analyze it. I would say I have started to adopt Benedict Evans philosophy of like chatbots are not good enough. A chatbot is not a product. A chatbot is like a, I don't know. It's like a demo. You're like, Oh, that's pretty cool, but it's not a product. Like I don't want a chatbot in my operating system. Like I want, I want to type reformat my drive to two gigabytes or whatever. And then it just like. Builds the command for me and shows me, that's cool. Or maybe it suggests, you know, oh wait, did you really mean dash H? That'll delete your whole, you know, dash capital H, it'll delete your whole drive. Like, oh wait a minute, that's kind of cool. Like, I want that kind of stuff, but I don't want to like, chat back and forth. Like, with a chatbot necessarily. For everything. I don't want to go to a website and there's a chatbot. Like, I just, I don't really want that. So, so I guess the question is like, what is a product, what are the use cases, and can we wrap them up in beautiful UX? And I think the answer is yes. Like, I see use cases, like I mentioned, with like release notes. We're working on a feature actually called digital release notes that I've been talking about, uh, when I give the RHEL roadmap. I gave it, uh, Um, where like, for example, if you build an image in ImageBuilder, we have this tool called ImageBuilder, you log into console. redhat. com, you build an image, Linux image, it's RHEL, uh, Red Hat Enterprise Linux, and then it says, show me, like, what does the life cycle on all these bits look like going into the future? Like, how long are each of the bits supported? So like, there's 300 packages here, but show me lines running out of like, This chunk of packages goes out 10 years. This chunk of packages go out three years. Oh, by the way, here's all the release notes for up to this point. So like, can we show you historic data and future data? That's a good use of an LLM, right? Like it's basically summarizing data we already have, but summarizing it in a unique way. And that could be artifacts like drawings, but it could also be like release notes like text. And I think those are the kinds of use cases I'm looking for. Like, I don't want to tell a chat bot to do that. I don't want to have to come up with that. I want. You to come up with that and then show me how to do it. Like basically is what I want with AI. I'll give you another example. That's kind of a little bit off, but like I had to return an anchor battery. Like I love anchor stuff and I travel a lot and I have this awesome battery that has like a, uh, It has like a, you know, it'll show you how many watts are in and out when you're charging it. I, it, it started to fail. I, I emailed them. They responded. It asked me a bunch of questions. I was annoyed because it was a typical, have you turned it on and off questions that we all are familiar with and you're like, I'm a tech guy. I know what I'm supposed to be doing here. Like just return this thing. And I went back and forth and at the very end it said, oh, by the way, you've been interacting with an LLM. Uh, and, uh, and we've authorized the RMA. A human being had authorized the RMA based on the, Conversation that had happened between me and the LLM. That was pretty cool. I was like, okay, that's pretty cool Like it's not a chat bot. It was just emailing customer service It's obviously kind of lame in a certain way but it's kind of amazing at the same time right like that a human being is now like bionic and they could just look at a bunch of chats between people like Yes, RMA that one. Yes, RMA that one. Don't RMA that one That's a scam somebody trying to rip us off You know like whatever that I don't know what their lives
Conor Bronsdon:a lot of time.
Scott McCarty:Saves a lot of time, and I think there's a lot of those that we can find in the DevOps space. I think there's a lot, as I mentioned with log analysis, you know, PCI compliance. Are you looking at the logs? What does that even mean? I don't know. How much confidence do you ever have? Like, I remember some of the early PCI requirements. payment card industry requirements for those that might not remember. But like I live through PCI 1. 0. Uh, I worked in American greetings and it was brutal. And, uh, you know, they were, they were like, we were generating like at the time, which we thought was really big, like 1. 5 million lines of logs a day. And that that's tiny in modern day world. But then that was huge. We were like, we don't know, what are we going to have a human being read 1. 5 million lines of logs a day and then say, yeah, we, we read them. Like that doesn't even make any sense. Right. So I came up with this log analysis program. That would like combine a bunch of them. Like if there were numbers, it would squash them into a number sign and if there were certain MAC addresses, it would just say MAC address, you know, like in square brackets and it would like combine them and it would show how many of each and I've turned 1. 5 million lines of unique lines of logs a day down to like 17 that you could just like scan each morning. And then we were like, all right, we're reading all the logs. Like, I don't know. Is that, does that count? I think it counts. Will AI help us do things like that in the DevOps space? I think so. Like, I think we'll have to take a summary of the logs. Like I read the summary good enough. I don't know. Like it's indirect, but how the hell am I going to read 2 trillion lines of logs a month? You know, like there's no way that's going to happen, right?
Conor Bronsdon:I think there are some obvious use cases we're seeing too, like testing, documentation management. There's quite a few of these that people are already leveraging in this on pretty extensively.
Scott McCarty:For sure. You take the release notes, you summarize them, turn them into real documentation. That's amazing. Like there's some really good use cases with that.
Conor Bronsdon:yeah, I think we're also seeing like opportunities around migrations, that kind of thing. Like I know Amazon and I keep referencing them, but they've, they've done the biggest, uh, press push on this. I'll say is they're talking about how their, their internal generative AI tool for devs is saving 4, 500 years of work and 260 million annually. Uh, by, what was it, updating from like Java 7 and 8 or 8 and 11, uh, to Java 17, which is, you know, tedious work people don't really want to spend time on typically. Um, and I, I honestly do believe a lot of that is being impactful. Like that is, that is where there are big opportunities when we say, Oh, like we have built an authoritative update language here. Like we just have to get it to the next level. Um, and that does seem like an opportunity.
Scott McCarty:Yeah, I agree. And I would break it down into like day zero use cases. Like we're excited about the sexy new apps that we're creating. Fine. There's this debate. Is it going to get rid of developers? I'm very skeptical. Like the world is not done with software. We still have a hunger. That's like our eyes are way bigger than our stomach with software still. So I think We're going to generate tons more software. I don't buy it on that. I think it's a good use case for new software development. But actually, I am always like, I have a maintainer mindset. I always think about all the stuff that's day two and how much do we maintain. I'll give you some forces that I don't think people are paying attention to. Are CVE patching requirements going up or down? They are going massively up. The Linux kernel just became a CVE agency, naming agency, or numbering agency, or whatever. So now every kernel bug is going to be a CVE. Every kernel bug. That kind of all the maintainers look at and go, yeah, this could be a, this could be like exploitable. We don't know, but so we're going to give it a CVE. Um, they call it 90%. I don't think they have good data on what it's going to be, but it's basically everyone's kind of accepting that almost every kernel bug is going to be a CVE. Okay, combine that with the fact that FedRAMP High now says that you have to, you have to patch every moderate CVE within 90 days. And the world is not doing that today. Like, I would argue there are not enough developers on the planet to patch every piece of software that we have in a FedRAMP High perimeter. All the moderates within 90 days without using LLMs. Like, we're not even going to be able to achieve the demand without using that. So like, I think those are the kinds of use cases that DevOps is really going to like, okay, how the hell do we patch CVEs? How do we, Backport changes for 30 years at a power plant. How do we like, there's all kinds of nasty stuff that like, nobody wants to do that work. Like, I don't want to do that. I don't want to backport something to a 30 year old kernel, but will it need done? Probably. Yeah. It's pretty damn likely it's going to need done.
Conor Bronsdon:So you've referenced a few ways Red Hat is obviously starting to leverage in AI and thinking about AI in tools. what's the kind of broad approach or are there particular tools that you want to mention as far as how Red Hat's moving forward with, uh, AI?
Scott McCarty:Yeah. Two tools. So call me a caveman. I'm a caveman sysadmin that like became a, uh, uh, a solutions
Conor Bronsdon:Hey man, I'm a really bad dev. I just talk to smart people like you, so.
Scott McCarty:I was definitely like, sort of the cutting edge DevOps guy is what I would say I was back in like the 2010 timeframe. But um, you know, I look at like, there's two tools that I'm excited about right now. Like I would say, AI Lab built into, Podman Desktop is really cool and that it like will generate code stubs so that you can like call out to a model and like send something out to a model and get something back. And that's something that's hard for me because I'm like not that good at this. So like, I think that's like app centric. I think that's really cool. But then perhaps even more so what I'm excited about is there's a tool called Ramalama, which is quite similar to Olama, except that it'll pull from any So Olama pulls from the Olama registry. Ramalama will pull from Olama, it'll pull from HuggingFace, and it'll pull from any container registry. So if you shove a model in a container and shove it in your own personal registry or your enterprise registry, you can push and pull from there. And I really think that's cool because for me, it's two commands. Ramalama pull, granite3b, whatever, blah blah blah, Ramalama run, and then I can literally just say, Tell me a story about dwarves and it will just start spitting out in the, and I'm like, okay, this is pretty cool. Like me, I'm a caveman. I just want to be able to like use the model, mess with it. But then once I've messed with it, I understand what it does. I want to be able to get it into a container, be able to move it around, perhaps fine tune it, perhaps do something like the instruct lab stuff where I refine it. And I'm not going to want to push that back out on the internet. I'm going to want to push it into a, like a local registry, save it. I want some kind of packaging format that I can save it and containers are as good as anything. So I'm like, let me shove it in a container, train it, refine it, you know, pull it down, push it back into the registry, pull it down into production, run it on Kubernetes. To me, this is like the most exciting thing I've seen in a while. Cause like, I just want a tool that helps me do super simple stuff to like augment my, I want to see, like, for example, Red Hat's. I want to see our, you know, console. redhat. com augmented. I know there's going to be these LLMs everywhere. And the more I can see those in containers running on Kubernetes, the more likely I think it is that I get what I want as a product manager. So, uh, yeah, that's what excites me.
Conor Bronsdon:And it gives you a lot more flexibility. But to your point, I also think it kind of shows what the future may look like.
Scott McCarty:Yeah, I think the Kubernetes, I think there's what 10 people at KubeCon every year. Every, you know, twice a year. And, uh, you go, there is still a huge operation. And I'm going to say something provocative. LLMs are just software, but like AI is just software. Like it's files. It's a fancy file. It's a, It's a bunch of weights in a file, it's some code in a file, I need to run it, and then it's a process, and yes, it uses GPUs, but it's still a process, so it's like processes and files, these are basic operating system things, it's software, we can move it around, we can start it, we can stop it, we can change it, we know how to do this, like we need Git Control, we need GitOps, we need, we need, you know, workflows to promote it. We need standards in the environment. So everybody's using the same model, then maybe tuning from it, probably like a standard operating environment. You know, I, I think we know this stuff. I just don't think we've come to terms with it all the way.
Conor Bronsdon:I don't know. I just want to say for the record that when Skynet eventually finds this podcast episode and reviews it, this is Scott's opinion. I'm just here as the host. Um, no, no thoughts from my end. Um, but let's, let's get a little more philosophical that we've done that a couple of times throughout this conversation. We've got a few minutes left. Um, and I can tell you, you want to dive into this a little more. What do you think the long term vision is for LLMs within the industry? What are the long term impacts of open source AI on the power dynamics in the tech industry? Computing
Scott McCarty:Yeah, that is actually one that like, We were at DevConf day zero on August 13th, and there were about 300, call it, people that were from a whole bunch of different companies. It wasn't just Red Hat people. We invited a whole bunch of people. We broke out into working group sessions. And the one thing that I'd highlight from that bunch of super smart people, but the thing that kept coming up was the power dynamics. We're like, does this actually make our lives better? Or does this create an even bigger power dynamic gap? And we call that the wealth divide when you're talking about finance. But what about like Just raw intellect or power or ability to use or ability to get things done. You know, I wrote an article a while back about Google, like 1999. I got pretty good at Google or whatever, 2001. All the people in my team, they're like, Oh, you know how to do all this stuff. And I'm like, actually, I don't know how to do anything. I'm just Googling it faster than all of you. And I seem really smart, but I'm actually not that smart. I think we're getting into a similar scenario where the people that are going to use AI are going to become more powerful. Already are. Honestly, I think even this probably your listeners are probably already on the smart end of that. And my mom is not, you know, like, there's already a power dynamic right and it's probably going to get worse. And it's probably going to result in a, in a. So I think my call to action, my ask for everyone that's thinking about this would probably be like, let's all decide collectively what we want and try to push for that. Like, if we want to make people's lives better, let's try to do that. If we want to just cut costs and increase the power dynamic, that's also going to happen. That's definitely going to be part of this, but let's not forget the people primitive and try to like, make people's lives better. And I don't know if, I don't think we have any other option, like, I don't, I don't see another op I, 300 smart people really didn't come up with much other option other than that, you know, like, we have to decide what we want and all pull in that direction or else we're not going to get what we want. We're just going to get a nasty world, like a post apocalyptic dystopian future that I don't think any of us want.
Conor Bronsdon:Scott, you're, you're spot on. Our listeners are incredible, smart people. Uh, and I think a lot of them are thinking about these topics. Uh, Uh, and one of the conversations that's kind of started to happen. And I think Sam Altman is the first person to really say this and have it blow up in media is how compute is going to be one of the, one of, or in his case, the currency of the future, maybe the most precious commodity in the world, I think is how he put it. And there's a, there's a risk with that about that concentration of power piece. We talked earlier about the fact that it's, it's really a few companies that are gobbling up all this compute. And that's the reason NVIDIA has done so well over the last few years is they're kind of providing a lot of that, but, um, there's a concentration of power risk. There's not, there's a possibility that it's going to be a couple of folks who, or a couple of companies who have, uh, this mass amount of compute. They have the models and everyone else's kind of, you know, Fighting over the scraps of the wave, so to speak.
Scott McCarty:100%. And that's why I want to see a Kubernetes like model succeed on the AI side. Like, if we don't get to that, the power dynamics become concentrated. And if it's monopolistic in the traditional sense, that's probably going to be bad for all of us, right? Like, that's not the world we want. Also, I'll leave you with a funny tidbit that I've been thinking about lately. Like, uh, You know, a human brain is already very smart, right? Like smarter than anything we've created AI wise. It also uses about one watt per day, one watt hour per day. So like my little anchor battery is 20, 000 milliamps or mil, you know, milliwatt hour, I think it's milliamp hours. So like, what is that in watt hours? I don't know. Let's call it 20 human brains or something per day or something, you know, like it's pretty crazy how power efficient the humans are. And that's why I think like, we like, do we have self driving? Yes. My brain gets in and out of the car 20 times per day when I'm running errands and I only need like one watt hour to like run this amazing brain. Like we've already got it. Like let's care for these things. Like and make sure like we should care about the humans. Right. Like, um, I worry if we're going to get to some place where we're burning, Ridiculous amounts of energy to like get to one AGI, you know, and you're like, is that going to be worth it in a lot of ways? You know?
Conor Bronsdon:So how should folks who are listening step into this and make a difference? What's, what's your call to action for them?
Scott McCarty:Yeah, I'll leave like, what my boss left with me as we were working on like, Rel Lightspeed and stuff, you know, like, he's like, Let's focus on making people's lives better. Like, let's not just focus on cutting costs and making things cheaper. That will happen, and that's, that's not bad, like, we want that to some extent, but like, Let's not only do that, right? Like, like ethics at the cost of making people's lives better, because these things are going to destroy jobs, like to an extent, or I don't think it's going to destroy developer jobs. I'll debate that one. I still think developer jobs are going to grow out. It's kind of like virtualization and containers didn't actually make us buy less hardware. We just bought more hardware and did even more stuff. I think that's going to happen with developers, but I think it's all the other people. I think sysadmins, architects, developers, whatever we may change. I think there's going to be more people working in software. The problem is, is we're the haves and then the rest of the people become the have nots. And you go, that's, let's try to make people's lives better. I'd say that's my biggest call to action. You know,
Conor Bronsdon:I think that's an awesome philosophy because as you've said, and others, AI can make this a better world. It can make us all more efficient. It can make us all get to spend more time on things we enjoy. Maybe not upgrading from Java 8 to Java 17, but instead saying, what's the new innovative thing I want to spend time
Scott McCarty:maybe not analyzing a log at 2am when you're half asleep. Yeah.
Conor Bronsdon:Yeah, there's a lot that this could do for us and really help. Humans help companies help individuals, uh, but we have to work together to solve that and to make this future. and I'm really excited. There are folks like yourself who are working on it. Scott, thanks so much for coming on the show and talking to our listeners about it. I've really taken a lot away from this conversation. I appreciate you coming on.
Scott McCarty:Yeah. Thank you for having me. It was really fun