
Episode 67

AI 101


Show Notes

Everyone’s talking about AI. Everyone says it’s the future. To find out where we’re going, we should know how we got here—and exactly what we’re working with.

We hear a short history of AI development before diving into how it’s already changed the ways we learn and code.

Transcript

00:03 — Christopher Nuland
My junior year at Purdue, my advisor told me that this wasn't going to go anywhere. We were still 30, 40, or 50 years away from this being applicable in industry, and that if I wanted to do artificial intelligence, I would have to go for my PhD or have challenges finding any kind of job in industry.

00:24 — Johan Philippine
Not that long ago, this industry was in the throes of what some called an AI winter; progress was slow and timelines stretched into the distant future. Students hoped to bring AI to life. Christopher was one of those students who were told not to hold out hope. Wins were small, until all of a sudden they weren't.

00:45 — Christopher Nuland
What ended up happening was I got brought on to this team when there was just a few people, like the end of last year, and now we're looking at, you know, potentially 50-plus people. And on the engineering side, we're looking at going from 50 people to, you know, 300 to 400 people, by just even the end of this year going into next year.

01:05 — Christopher Nuland
That just shows you how rapidly this area is growing.

01:08 — Johan Philippine
That's explosive growth. Like it or not, artificial intelligence isn't just a passing fad. It's changing how we work, how we learn, and how we interact with the world at a tremendous pace. It's a gold rush at the frontier. But if we're not careful, we might end up in a heap of trouble. In this series of Compiler, we're going to take the time to consider the requirements, the capabilities, and the implications of using artificial intelligence.

01:38 — Johan Philippine
We'll try to sort the possible from the fantasy and hear from creators how they're already building really cool stuff, and what projects may be possible next.

01:52 — Johan Philippine
This is Compiler, an original podcast from Red Hat. I'm Johan Philippine.

01:56 — Kim Huang
I'm Kim Huang.

01:57 — Angela Andrews
And I'm Angela Andrews.

01:59 — Johan Philippine
We go beyond the buzzwords and jargon and simplify tech topics.

02:03 — Kim Huang
We're figuring out how people are working artificial intelligence into their lives.

02:08 — Angela Andrews
Today, we're hearing about how we got here and where AI could take us.

02:17 — Kim Huang
All right, we're here. We're doing a series on AI. Get used to it. Surprise! Johan, you want to start us off?

02:26 — Johan Philippine
Yeah. I mean, everyone's talking about AI these days, right? And that's something we can't really get away from in the world of tech. So we figured it's something that we'd cover, but maybe in a little bit of a different way than most people are at this point. So let's start off with AI in our own lives.

02:44 — Johan Philippine
How's AI come into our lives as podcasters, as people in technology, you know, in our own daily lives? I'll be honest, I haven't used it all that much yet. And that's probably a little bit of a problem.

02:59 — Angela Andrews
Well, it has run amok in my life, it seems.

03:01 — Johan Philippine
Oh, yeah.

03:02 — Angela Andrews
It really has. I use it constantly. I actually did a conference talk on AI a couple times this year, and it is all the rage. We're using it when we don't think we're using it. I'm introducing people to it, just the possibilities. And I'm speaking about specifically generative AI. But

03:27 — Angela Andrews
this is such a huge area right now to explore and to understand the differences. I'm interested in hearing what this podcast is going to bring to us and what's next. What's the series about? I know we're talking about AI, and Christopher had a very interesting perspective. Tell me more.

03:50 — Johan Philippine
So we're going to explore a little bit about how it came to this, right? I mean, everyone's talking about AI. It seemingly exploded almost out of nowhere. We're going to kind of dive into the history of AI, because it didn't come from nowhere, even though it exploded all of a sudden. There's a whole history that brought it to where we are today.

04:08 — Johan Philippine
And Christopher Nuland is going to help us out with that. So Christopher Nuland, he is a technical marketing manager for artificial intelligence here at Red Hat. He's the one we heard from in the show's opening, and he's going to walk us really quickly through the history of AI and why it got really popular all of a sudden.

04:26 — Angela Andrews
Grab your pens and papers. We're going on a history lesson.

04:31 — Christopher Nuland
Artificial intelligence. I mean, the term goes back to the beginning of computing. I mean, Alan Turing is considered the father of computer science, and he was writing about AI all the way back in the 50s. This is where we get the term the Turing test, which is, you know, you put a human in front of the computer, and are they going to be able to tell whether it's a human on the other side?

04:59 — Johan Philippine
The famed Turing test. And, spoiler alert, ChatGPT has passed it recently, and it's a really big deal because ChatGPT was the first AI to ever pass that test. I don't know about the two of you, but I remember interacting with chat bots in the past, you know, before ChatGPT, and it was pretty clear that you were talking to a machine on the other side.

05:18 — Angela Andrews
Always, always. Yeah. Especially when you start yelling. You know it's not people; you know it's not a human on the other side.

05:27 — Johan Philippine
And have either of you had conversations with ChatGPT recently and seen how realistic it feels?

05:35 — Kim Huang
Yeah, I've done a couple of things. I've been trying to be an early adopter, using ChatGPT and other types of AI tools to just play around, hack my life, so to speak.

05:51 — Kim Huang
I definitely feel the difference between a chat bot, like a customer service chat bot (which is usually when I'm using a chat bot, it's for some kind of customer service function), and something like ChatGPT, a generative AI tool that's pulling from these huge reams of data. You really feel the limitations of one and the wide potential of the other.

06:21 — Johan Philippine
So ChatGPT is two years old; the history of AI is a little bit older than that. So let's go back to Christopher and see what came next.

06:31 — Christopher Nuland
We've slowly seen AI progress over those 70 years. It's been very academic. So the challenge of the term artificial intelligence is that it's both academically focused, but also something that we have incorporated into our own lives through media: movies and books and television shows. So sometimes it's hard to navigate what is fantasy, what is academic, and what is actually being delivered right now within industry specifically.

07:10 — Johan Philippine
We've had decades of media featuring AI with all sorts of ideas of what could be possible, which for most of that time it was kind of taken for granted wouldn't be there for a long time. Right. You've got Jarvis from Iron Man, Skynet from the Terminator movies, HAL 9000, and just so many more.

07:28 — Angela Andrews
Can we please not forget I, Robot?

07:32 — Kim Huang
I don't know, I don't know, can we?

07:36 — Angela Andrews
I mean, I can't. I hope we don't.

07:39 — Johan Philippine
That's the thing, though: do we want to forget about them? Because they teach us quite a lot. And those stories, they're kind of warning signs about what could be possible with AI. But again, you read them and you're like, oh yeah, that's so far away. But even in just the past couple of years, has it become a little bit harder for us to figure out, like, "Hey, is this a real thing?"

08:00 — Johan Philippine
"Could this be a real thing in the next five, ten years? Should I be worried?" Like, is that something that's changed for either of you in your considerations of media and AI?

08:11 — Angela Andrews
I say yes. You know, it's all on the horizon. It's just a matter of time. In what iteration will we see it in our lifetimes? But all of those things: Jarvis, I mean, hello, Alexa, thank you for always being there for me. You know, Skynet? Yeah, that's totally real. I, you know, so...

08:34 — Johan Philippine
Oh no.

08:35 — Kim Huang
Well so much for that.

08:40 — Angela Andrews
In my humble opinion. But I think we're heading there. All those things we've seen in movies and books, someone thought of them. And science is saying, "Hey, you know, we have an idea here. Let's see how far we can run with it." I mean, I, Robot? Robots are here now, and we've seen different iterations of what that looks like. Most of it's creepy, but who knows?

09:07 — Angela Andrews
Who knows what it'll be in the coming years? I mean, I guess we just have to wait with bated breath for all of this technology to come and take off. It's in our lifetimes though.

09:17 — Johan Philippine
I'm sure we're going to climb out of that uncanny valley any time now. But we're still there for now. And then, little sneak peek: we will be talking to someone who's building a Jarvis-like AI in the next few episodes, so stay tuned for that. Right. So, tremendous progress in the past few years has kind of had us reevaluate what's possible with AI and how far off into the future it is.

09:41 — Johan Philippine
And a lot of that is thanks to a paper that Google and the University of Toronto published in 2017, which really changed the conversation about AI research and implementation.

09:54 — Christopher Nuland
At the heart of it, for a very long time, there had been that debate of is more data better or worse? And most people thought it was worse, that you actually needed more quality than quantity. But what we actually end up finding, especially after this paper, is that it really is about how much data you have.

10:14 — Christopher Nuland
And that's where we saw the birth of a lot of these large language models. And that whole area is not just what we would call generative AI, but in academia, it's what we would call deep learning models. They're all based off of a form of artificial intelligence that uses something called a neural network.

10:34 — Johan Philippine
I kind of remember vaguely the buzz around neural networks and deep learning that rose up after this paper was published. Right. There was some news around computers winning at the board game Go against some of the masters of that game, as well as video games like StarCraft 2, but nothing much else came of it until the release of ChatGPT.

10:59 — Johan Philippine
And that's when the hype around generative AI and large language models really took off. Now, like Christopher just said, the key to these generative models is huge amounts of data, right? Feed it everything. And then that neural network learns the patterns underlying the data that it's consuming: speech patterns, facts, etc., so that when you ask a question, it relies on those patterns to provide an answer based on the expectations established by the mountains of data.
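To make that idea a bit more concrete, here's a deliberately tiny sketch in Go. It's a bigram word counter, not a neural network, so take it as an analogy rather than how an LLM actually works; the corpus and function names are our own illustration, not anything from the episode. It "trains" by counting which word follows which, then "answers" with the continuation it saw most often:

```go
package main

import (
	"fmt"
	"strings"
)

// train counts, for every word in the corpus, how often each other word follows it.
func train(corpus string) map[string]map[string]int {
	counts := make(map[string]map[string]int)
	words := strings.Fields(corpus)
	for i := 0; i+1 < len(words); i++ {
		if counts[words[i]] == nil {
			counts[words[i]] = make(map[string]int)
		}
		counts[words[i]][words[i+1]]++
	}
	return counts
}

// next returns the continuation seen most often after word during training.
func next(counts map[string]map[string]int, word string) string {
	best, bestCount := "", 0
	for candidate, n := range counts[word] {
		if n > bestCount {
			best, bestCount = candidate, n
		}
	}
	return best
}

func main() {
	counts := train("the cat sat on the mat the cat ate the fish")
	// "cat" followed "the" more often than "mat" or "fish" did, so that's the answer.
	fmt.Println(next(counts, "the"))
}
```

The point of the toy is that the answer is purely a reflection of patterns in the training data. Scale this up to a neural network with billions of parameters and the statistics get far richer, but the model is still answering from expectations established by the data.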

11:26 — Johan Philippine
That's very useful. We're still figuring out ways that we can benefit from these models. But, and we've talked about this a little bit, I think it's important for us to reiterate that large language models are not the only kind of AI. Right? You've got reinforcement learning models, you've got predictive AI models, we've got machine learning, deep learning, generative AI, and again, large language models.

11:50 — Johan Philippine
These are all distinct terms. A lot of them overlap, but they mean different things. And so even though AI encompasses all of these, we're trying our best to be accurate about which one we're talking about as the series goes on. That said, if we do slip: in general, when people are talking about AI these days, a lot of the time they're referring to generative AI and large language models.

12:14 — Johan Philippine
Now, these fields of AI we've mentioned they're all in various stages of use, but there's one version of artificial intelligence that's really still out of reach at this point.

12:23 — Christopher Nuland
And that goes into the form of AI that would be actual true intelligence. So this would be that, you know, we've crossed a barrier of a machine actually being self-aware, being conscious of its own existence, being able to reason through things more than what we see with LLMs right now.

12:48 — Christopher Nuland
And really, that's kind of the holy grail of, of AI right now.

12:53 — Johan Philippine
Large language models can give you some very good answers to a host of questions. That's undeniable at this point. What they can't do is think; they're notoriously bad at math, for example. If you ask ChatGPT a math question, it might give you a right answer, it might give you a wrong answer. It's not a lot of fun to play around with that.

13:11 — Kim Huang
Oh, that's not okay. That's not a selling point for me. Not at this time in my life.

13:16 — Johan Philippine
Right? Right. So it's very good at a lot of language-based stuff, not so good at the math and the thinking. That's because all these answers are blenderized and reformulated versions of the data that they've been fed. It's not original, and as much progress as we've made in the past few years, we still don't know how far off artificial general intelligence truly is.

13:39 — Johan Philippine
So that's that true, you know, self-aware AI. So that's sort of how we got here as a society. Christopher has a great story about how he got to where he is now on that rapidly growing team we heard about before. Now, we've heard he's been interested in AI since college, probably a little bit before, and he had continued to dabble with AI projects over the years during his personal time. In one of the recent ones, he created an AI to play the game Double Dragon.

14:05 — Kim Huang
Now you're talking my language.

14:07 — Angela Andrews
Uh oh, we have Kim's attention.

14:09 — Kim Huang
We're it. Yeah. I'm here.

14:10 — Johan Philippine
You want to tell us a little bit about Double Dragon, Kim?

14:13 — Kim Huang
Double Dragon is a side-scrolling beat 'em up. That's a game that is 2D, two-dimensional, and you scroll from left to right, fighting enemies as they come to you in waves. And then you fight a boss. You get frustrated. The machine eats your quarters for a while, but other than that, it's a good time.

14:34 — Johan Philippine
Yeah. So it's the kind of game, a lot of the kinds of games, you could find in old arcades and things like that. You can play them on your computer now, but they're really a lot of fun. And, yeah, he decided to do a project to build an AI that would be able to play this game.

14:49 — Johan Philippine
And he'd been working on it, and it was on his GitHub, and it was kind of like a side thing for him. But it sounds like he had a lot of fun with it. We're going to return to that project in a later episode, but right now it's important because it's going to play a role in how he got his current job.

15:04 — Johan Philippine
So one day, not too long ago, he was asked to interview for a brand new team focused on AI here at Red Hat. He had prepared a whole presentation based around Red Hat's products in development.

15:16 — Christopher Nuland
And he gets on the call with me and he just instantly says, "Hey, I looked at your GitHub. I've decided whatever you have planned today, I'm going to throw it away. You're going to talk to me about this project." He sent me the link. It was the Double Dragon project, and it was embarrassing too, because it wasn't like a clean project,

15:43 — Christopher Nuland
I would say. We were halfway through the interview. He wrapped that up and he's like, "Look, we're done. I've got everything I need. If I have a say in this, you've got this job." And he even said, "If you don't get this job, you're going to get another job within this organization, because we need people with your thought process, your ability to tinker, to think outside the box."

16:08 — Johan Philippine
This is the kind of story we hear about all the time, right, Angela? Like a personal project, really influencing the interview process in a positive way.

16:15 — Angela Andrews
It can be a game changer. And again, he admits this was something that he was tinkering with. It wasn't a clean project. It maybe didn't even need to see the light of day in the beginning. But look at what it brought him: an opportunity of a lifetime. I'm so ecstatic for him.

16:34 — Angela Andrews
I've got to hear more about this story. But all of you tinkerers, remember: someone's out there watching your GitHub.

16:44 — Kim Huang
And I don't know, I don't know about that equivalent. For me, it's like, oh, have you seen my LiveJournal? I don't know.

16:53 — Johan Philippine
Well, that's something to keep in mind, and especially as we head into the rest of the episode, because we're going to hear a little bit about how people are using AI in their own careers.

17:11 — Johan Philippine
For those of you in the audience at the start of your careers in tech, this next section features the story of an intern you may relate to.

17:19 — Diego Santamaria
We did have an intern during the summer, and I got some approval to kind of use him as a little guinea pig.

17:27 — Johan Philippine
That's Diego Santamaria. He's a senior software engineer here at Red Hat, and he helped mentor an intern this past summer. Now, for the vast majority of the intern's time here, he learned what interns typically learn and built up a lot of skills in the traditional way. But for the final couple of weeks of the internship, Diego set up what he called an AI experiment.

17:48 — Diego Santamaria
So what I told him is, I gave him a few criteria, which was, we're going to build a three-tier application. And what that is, is a simple application that has a front end, a middle, and a back end.

17:58 — Johan Philippine
So far, pretty straightforward. Now, there were only a couple of constraints for the middle end of the application, and the rest was really open ended. The intern could choose whatever frameworks he wanted as long as they fit the goal.

18:12 — Diego Santamaria
The application in question was something that I kind of wanted him to build even for his personal web portfolio, which is a queue for getting back to people who might be interested in chatting with him. To give you the architecture that I had in mind: this would just have a place for maybe a name, email, and phone number in the front with a submit button. From the front end, that would go into the middle end, which is Golang processing, and then into a database.

18:39 — Diego Santamaria
Now, I've been doing this for years, and I can write that three-tier application from memory. But I also know technologies that I'm comfortable with.

18:48 — Johan Philippine
So, Angela, do you have any initial reactions to this kind of project? Like, is it pretty standard, something that an intern could do?

18:57 — Angela Andrews
Yes, definitely. If he's a comp sci major or something like that, he's probably learned a programming language, some middleware, and some database. And he can probably put those three pieces together. So it's not a hard reach, so I don't think it was like hazing or anything like that. He...

19:15 — Johan Philippine
Oh no.

19:16 — Angela Andrews
...I'm sure it was inside the intern's wheelhouse.

19:19 — Johan Philippine
Sure. There was one more constraint, though.

19:23 — Angela Andrews
You don't say?

19:26 — Johan Philippine
Yeah. So Diego has the experience to whip something up pretty quickly, right? It's something he's been doing for a long time. He's got that expertise to rely on. But the intern doesn't have that, and the constraint really had him try something new while learning those things in the next two weeks.

19:43 — Diego Santamaria
It took me about 4 to 12 hours to get this to a point where I could demo it to him and tell him my restrictions, which were: you're not allowed to do any googling, no Stack Overflow. I want you to just specifically use ChatGPT. Ask it any questions that you want, and from there build out the same application.

20:03 — Angela Andrews
What?!

20:04 — Johan Philippine
Yeah, there it is. There's the surprise. There's the experiment.

20:09 — Angela Andrews
I mean, how do you not Google when you're....

20:13 — Kim Huang
I was going to say googling. I was like, no googling. Wow.

20:17 — Angela Andrews
You just threw him back into the dark ages basically.

20:20 — Johan Philippine
Yeah. This is, this is different.

20:21 — Angela Andrews
Yeah.

20:23 — Johan Philippine
Well, let's hear how the intern and ChatGPT did after two weeks.

20:28 — Angela Andrews
Okay.

20:29 — Diego Santamaria
I have to say, it was nicer than mine. I actually had a bug in my front end that when you hit submit, it wouldn't clear out the textboxes. His would actually refresh automatically. It would clear it out and be ready for the next submission. You know, I'm saying that it took me 4 to 12 hours. I've been doing this now for eight years.

20:50 — Diego Santamaria
It took him two weeks, but he did come out with a new product and he had to implement three different technologies that he's never really like touched.

20:59 — Angela Andrews
Oh, so this was new?

21:00 — Kim Huang
Yeah. New to him.

21:02 — Johan Philippine
New to him. Yeah. Yeah. So he had learned some of the middle layer of the application that was related to his internship, but the front end and the back end, that was completely new to him.

21:14 — Angela Andrews
Yeah, I'm impressed. With just ChatGPT, he basically taught himself how to build this type of application. There is hope for us all.

21:28 — Johan Philippine
Yeah. I don't know if it shaved off, you know, eight years of experience, because, first of all, it may not have taken Diego that long to get to that point. But getting there in the space of two weeks, with only ChatGPT to rely on for research, that's pretty good, right?

21:41 — Angela Andrews
That's amazing. I mean, to have a minimum viable product, as they say, to be able to show somebody how something works, having just taught yourself two technologies that you might not have been familiar with. I think that's impressive, even if it took two weeks.

21:59 — Kim Huang
Yeah, definitely.

22:01 — Johan Philippine
There was, however, a little bit of a downside to this method of learning.

22:06 — Diego Santamaria
One of the questions I had for him is, if you had to fix a bug in this specific part of the application, where would you go? And I'm glad for his honesty. He kind of said, "You know, I really just copied and pasted a bunch of code." Right. He never really learned servlet or the JavaScript framework that he went with.

22:25 — Diego Santamaria
Or even Redis. Right. And I think that is a pitfall, where we're going to be able to produce production-ready applications faster than ever before, but we're not going to have people who understand the lower end of those technologies.

22:41 — Angela Andrews
Why does this sound so familiar to me?

22:43 — Kim Huang
It does.

22:45 — Angela Andrews
This is very familiar. We were talking about people using frameworks as sort of a way to short-circuit the actual learning of the language. And I am on the side of it's better to know the down and dirty. It's easier to troubleshoot; you understand the nuances and things like that. But in this case, when you're trying to get things up and running quickly, that's great, but you're not going to understand it.

23:18 — Angela Andrews
So there is a huge trade-off here, and Diego just exposed it. This is what we lose when we rely on technology in such a way.

23:30 — Johan Philippine
Now, on the other hand, right as time has gone on and software development has gotten more sophisticated, it's also become more abstracted from those base layers, right? Like when we're talking about base layers, you're talking about the base code. But, you know, a long time ago, well, maybe not so long ago, people were still developing in assembly code.

23:55 — Johan Philippine
Right. And we've kind of moved away from that because we have other ways of developing, and we don't need to be at that base layer. Is this the same kind of thing, or does it really still help to know that that base layer, and kind of be able to dig into the code and figure out exactly how everything works?

24:15 — Kim Huang
I don't think it's quite the same.

24:16 — Angela Andrews
I don't think so either. How low can you go, and how low should you go? Do we have to be talking ones and zeros and assembly language? No. The next level of abstraction, which is what a lot of these languages are, really does give you a great basis of understanding. But I don't know if it's that important anymore.

24:42 — Angela Andrews
I don't... I know that people know it, and that's great. But for the grand scheme, does everyone need to know it? Does everyone need that basis of understanding? I don't know.

24:54 — Johan Philippine
This is something that I hear both sides about, right? You hear some of the people talking about the importance of being able to understand how things work at that lower level, so that you can debug it more easily. You can figure out what the bugs might be more easily. You can figure out how to write more efficient code because you know how it all kind of fits together.

25:15 — Johan Philippine
And then you've got the other side of it, which we were just talking about, which is: sometimes you don't need the most efficient code, you don't need the most advanced code. You just need something that works. Right. So yeah, it is that tradeoff: do you really need to be able to do all that kind of lower-level, efficient coding, or do you just need something that works really fast, so you can move on to the next thing?

25:41 — Kim Huang
I feel like the biggest friction areas, when you're talking about a front-end and back-end situation, right, or a front-end, middle, and back-end situation, are happening where these multiple forms of technology meet. And maybe it's not understanding, to Angela's point, the abstraction layer, but understanding how these different types of frameworks work in concert with each other that, I feel, is very important.

26:11 — Kim Huang
And I feel like that may get lost in a situation where someone is just using ChatGPT or some other form of generative AI, or some other tool to just generate code and just copy and paste it. They're kind of losing that understanding of how these different things work in concert with each other.

26:28 — Johan Philippine
Well, that's the perfect segue to the next section, which is that there's actually a little piece of the story that I've left out. So over the course of these two weeks, the intern did run into a wall and wasn't able to figure out exactly what the problem was and how to fix it. And neither was ChatGPT. So when he talked about the problem with Diego, they both decided together to change that part of the application to something that's a little simpler and more widely used, something that ChatGPT would be better equipped to provide answers about.

27:01 — Johan Philippine
But remember how we talked about generative AI being unable to actually think?

27:03 — Angela Andrews
Exactly.

27:04 — Kim Huang
Yeah

27:08 — Diego Santamaria
It actually does a really poor job of understanding a three-tier application, but it does an alright job understanding each one of those separately. But it still requires him, him or us, to kind of know where we need to go to look for debugging. Otherwise he would just be copying and pasting multiple lines of code, going through that predictive model, and maybe just going in circles without really coming up with a solution.

27:41 — Kim Huang
And then you're dependent on using technology that's more widely used, just so that ChatGPT can understand it. So, going back to that, I feel like that was a huge thing. I was like, how much more dependent are we going to become, if we go down this road, on more widely used or widely adopted technology?

28:09 — Kim Huang
Just because the AI tools that we're using can understand them better. Like, that sounds kind of problematic to me.

28:15 — Johan Philippine
Yeah, because that doesn't really leave the door open for new technologies to kind of make some headway if everyone's just kind of relying on what the model knows, which isn't going to be very much about those new technologies.

28:29 — Angela Andrews
And that's the thing about models, you know: someone's going to have to rerun the model, and it's going to have to have more input, with our suggestions, what we think is good or what isn't. And then the model is going to grow, but it has to be, you know, what's the word?

28:48 — Kim Huang
Retrained, retooled.

28:49 — Angela Andrews
There you go. The model is going to have to be retrained with this new influx of information and that's not an easy task depending on how big the model is. So there's going to always be this gap in its knowledge for us...

29:05 — Kim Huang
...and biased.

29:06 — Angela Andrews
And very biased. Depending on what's the new hotness and what information is available, there's always going to be a blind spot. And Kim just mentioned it. It's inevitable.

29:19 — Johan Philippine
Yeah. And that's just for each individual component, right? Putting them together to create something new is likely not something the model's encountered before. And while it could figure out how some of these components may have been combined in other ways, it still needed the intern to figure out how to do that final assembly. All right, so overall, Diego sees the experiment as a success, and not just for the intern learning how to use a large language model to write his code for him.

29:47 — Diego Santamaria
What I wanted him to use this exercise for was to also take a chance to learn a technology really quickly, because I think that's an invaluable skill set to have as an engineer: having that curiosity of going, why am I using this technology?

30:05 — Johan Philippine
Right. So not only was he learning the front end, the middle, and the back end, he's also starting to learn a little bit about how they come together, if they come together. Right, like he was having some problems combining some of these components and figuring out what doesn't really work very well. And he's getting that experience, a little bit over time, or actually a lot in a compressed amount of time, in making decisions and starting to have opinions about which components to use for which tasks.

30:33 — Angela Andrews
I'm a fan of the ChatGPT rabbit hole, and seeing where it takes you, especially for something you're not familiar with, where you're constantly expanding your understanding just a little bit more, just a little bit more. And it can be very time consuming. But in this scenario, when you're really trying to understand something, where it's not returning to you this very clean, precise way to do something, you're learning around it and you're getting the more detailed information that surrounds it, and that only expands your knowledge about this topic or these topics.

31:14 — Angela Andrews
It definitely can't hurt in this scenario.

31:18 — Johan Philippine
Well, we're going to learn about how Diego did just that for his personal project.

31:29 — Johan Philippine
All right, so Kim, why don't you tell us a little bit about how you heard about Diego in the first place and why we wanted him on Compiler?

31:38 — Kim Huang
Yes. Okay. So I reached out to Diego Santamaria after I heard a story from his point of view about creating a bot to help him with a game that he was playing, a game called Palworld. It's an online game, and he was trying to create a bot to help manage his time a bit more efficiently.

32:03 — Angela Andrews
Work smarter, not harder, okay.

32:04 — Kim Huang
Exactly.

32:07 — Diego Santamaria
As an older gentleman at this point, with kids, a family, this is a game that you basically need to be on 24/7 to really enjoy, because it's based on servers with 30 people. But the reality is, you know, I'm working for Red Hat. I have my family. I can't be on the whole time.

32:26 — Kim Huang
Diego's a gamer like me, but he doesn't have enough time to enjoy the game he loves, which has a grind. For those of you who are not familiar with that term, it's basically where you have to play the game for long stretches of time to get the types of things that you want, like the late-game content, different rewards.

32:43 — Kim Huang
But he heard about the work someone else had done to build a bot that would do some of that grinding, that really time-consuming, repetitive stuff. And trust me, I'm interested to learn more, because I have some grinding games myself. So I want to know how he did what he did.

33:02 — Diego Santamaria
There was actually a modder in the community who came up with a mod called human NPCs. And in my mind I said, well, I'm a software engineer. I'm sort of smart. I can probably figure this out. But you know what? I would say the hardest thing to do in this field is really just that first step.

33:24 — Johan Philippine
He wanted to build a bot too, but he didn't know anything about game development. So he turned to that modder and to a large language model for advice on how to get started. What he got was a detailed learning plan: getting started with the game's software development kit, how to develop with C++, how to structure game logic, and much, much more.

33:46 — Johan Philippine
It made a really daunting task just that much less intimidating.

33:51 — Diego Santamaria
I've never touched C++ or Unreal Engine 4, but I can write it for you. And then me with just kind of like the years of experience, I can kind of look at that code and syntax and go, okay, well, here's where I want to make an addition that takes it past the restriction of the generative LLM.

34:09 — Johan Philippine
So Diego can combine his years of software development experience with the information provided by the large language model to make progress much more quickly than he would otherwise be able to. While he hasn't really gotten as far as he'd like to go yet in building this bot, he's really looking forward to it and he's got a plan to get it done.

34:30 — Kim Huang
All right. Today I learned that AI can write in Unreal 4 and, I am going to be making the side-scrolling Metroidvania of my dreams. I'll be right back.

34:44 — Johan Philippine
Oh my.

34:44 — Angela Andrews
So this is interesting. I mean, he put all of this together and he got a learning plan, like, a path in which to learn how to do these things. And to me, that is amazing. And I think that's what some of us would hope from AI: that it could be a teacher and kind of help us work our way through these types of problems.

35:09 — Angela Andrews
And I'm really interested. I hope he shares his GitHub repo with you at some point, Kim, because this is huge. Going from "I don't know how to do something" to looking at the information that's already out there, turning to a large language model, and getting the advice that he needed. This is a game changer. Again, it's not perfect, because nothing AI is, but it's a step in the right direction.

35:38 — Johan Philippine
Yeah.

35:39 — Kim Huang
And some of these games, listen: 40 hours, 50 hours, 60 hours. I have a mortgage.

35:44 — Kim Huang
I can't...

35:46 — Kim Huang
I can't do it.

35:47 — Angela Andrews
I'm not going to do...

35:49 — Kim Huang
Yes. I mean, if it can help me get those loot boxes and I can still keep my job, I need to get on it.

35:58 — Johan Philippine
I think what's really important here from these past two stories is learning how AI isn't... it's not going to replace Diego or the intern in writing the code. It's going to provide a lot of information for them to really accelerate how quickly they're learning and how quickly they're building things, but it's not yet capable of actually putting these projects together and building the things.

36:24 — Johan Philippine
And knowing, you know, what the end goal is and coming up with the ideas and all of that. And that's really, I think, the crux of what I wanted to get to with this episode: AI is here, and it can be a really huge boon for us as an industry in moving forward a lot more quickly than we used to be able to.

36:45 — Johan Philippine
We're going to bring Christopher back to close the episode with a few AI projects that he's helping with that are outside of the tech industry.

36:54 — Christopher Nuland
AI can be used to enhance not just industry, but things outside of industry. I'm involved with some projects with the UN right now on how we can use this for bettering humanity, and it's been a really great process. There are some groups that are using it to help grow the bee population in the world, or to identify certain hot areas for, like, malaria, for example.

37:21 — Christopher Nuland
These are ways that you can incorporate AI in your own disciplines, not just for industry purposes but outside of industry, bettering our world and our day-to-day lives.

37:36 — Angela Andrews
Using the power of AI for good.

37:39 — Kim Huang
Yeah. Grinding in Pal World or saving the bees.

37:42 — Angela Andrews
We're going to see more and more of this as we go on. I think people are going to find those use cases and explore them in a way that makes us realize we have to take advantage of this. Again, Johan said it so perfectly: it accelerates our learning. And now that this curve has kind of been flattened a little bit, this is going to help us make inroads in a lot of these other industries and technologies, using this type of technology.

38:12 — Angela Andrews
So I guess we have to keep our eyes on it and see what happens. Again, you know, using your powers for good. There's always another side to this. You know, we haven't talked about it, but I love to see that Christopher has found his thing. And he's working on these projects that are changing lives. Good for him.

38:32 — Johan Philippine
Yeah, I'm really encouraged that we're getting to the point where that's actually possible, and using AI to help make these projects a reality, right? I mean, that's what you were talking about at the beginning, Christopher: we had this concept of what AI was and how far away it was from being reality, and now we have ideas of how to actually make it real.

38:50 — Angela Andrews
The future is now.

39:00 — Johan Philippine
So that's gonna do it for this episode. But over the rest of the series, in the next episodes coming up, we're going to talk about, again, what's possible with AI, what you need to get started, and what to look out for in this really rapidly developing technology, so that you can be aware of and ready for the potential kinks that haven't been ironed out yet.

39:22 — Angela Andrews
This is so interesting. I hope you loved this episode as much as I did. We missed doing this. We have to hear what you thought about this episode, and you know what to do. You have to hit us up on our socials at Red Hat, always using the #compilerpodcast. What do you think about the use of AI, and have you seen some really cool applications?

39:43 — Angela Andrews
We would love to hear about it.

39:48 — Angela Andrews
And that does it for this episode of Compiler.

39:52 — Kim Huang
This episode is written by Johan Philippine.

39:55 — Johan Philippine
Victoria Lawton is definitely not an AI who's taking over the world.

39:59 — Kim Huang
Thank you to our guests, Christopher Nuland and Diego Santamaria.

40:03 — Angela Andrews
Compiler is produced by the team at Red Hat with technical support from Dialect.

40:08 — Kim Huang
Our theme song was composed by Mary Ancheta.

40:11 — Johan Philippine
If you like today's episode, please follow the show, rate the show, and leave a review. Share it with someone you know. It really helps us out.

40:19 — Kim Huang
All right everybody.

40:20 — Angela Andrews
Take care. Air hugs

40:21 — Johan Philippine
Bye.

40:24 — Kim Huang
Air hugs...

Featured guests

Diego Santamaria 
Christopher Nuland
