Episode Transcript
[00:00:30] Welcome back to AI Today. I'm your host, Dr. Alan Badot, and today we are diving into one of my favorite topics: AI cognitive Personas. Now, you've heard a lot of information, a lot of things folks are saying these systems can do and can't do. We've got the release of DeepSeek right around Christmas, and of course the ramifications that are taking place with that, and the ability to train these models to perform at about the same capability as the larger ones. So today we're going to set a couple of things straight. We're going to lay out the path, and then we're going to walk through some of the really good things these Personas can do, and some things you really have to be careful of from an ethics perspective, because we always want to get the information out there on the ethical use of AI and how folks can optimize what is being deployed, how, and where. From the start, I want to look at it from an analytical perspective and think about the analytical mind. As a thought experiment, think about an AI system that approaches problems just like a chess player. It's calculating, it's recalculating, it's precise, it's methodical, it's super trained, it knows everything you can imagine about chess. Now you take a step back and you say, okay, what if I apply that to an executive coach?
[00:02:14] What if I apply that to some other profession?
[00:02:18] And now you start to train it and train it and train it to be an expert in whatever that field is, but it continues to have that analytical approach to try to solve problems. So when you combine those two, the power that you have is significantly greater than if you just use a normal large language model. It's significantly greater than anything that you can do from a prompt perspective.
[00:02:50] So, you know, I know everybody has run into this, and I've shown you some of the demos before when I would do some live prompts. But what happens is you're telling these large language models, you're saying, I want you to act like an engineer and help me solve this problem.
[00:03:10] Well, all you're doing is affecting it at its surface. You're saying, I want you to behave a certain way, but you're not getting it to continue to iterate. You're not giving it what I call a constitution, driving its behavioral patterns, or training it on its personality or anything like that. You're just saying, I want you to pretend that you are an engineer, and it pretends that it's an engineer. And it'll only do that for a few turns. It'll do okay, but then you start to see a performance degradation; after a little while, it doesn't behave as well.
[00:03:47] It'll potentially start to hallucinate, or confuse things, or just give you wrong answers. That's because you're only trying to influence the surface of these models, as opposed to really ingraining in their DNA what their personality is and what their behavioral patterns should be. Once you have that, then as you layer in new skills and new tools, that's how you get a cognitive Persona, and it's very powerful. I'll be able to show you a live example of an executive coach in the next segment. Then we'll talk about some of the ethical ramifications, and even some of the things we're starting to see from a lawsuit perspective right now. It's really starting to influence the market. It's starting to influence the manufacturers of things like GPUs, computer systems, networking, all those things. And it's really going to make people rethink how they're trying to solve these problems, how they're storing information, and how they're driving home the message they want to get across to folks.

So as we look at these cognitive Personas, what are they generally? What can they be good at? Think about historical perspectives. Say you want to create an AI agent that is George Washington, for example, and you want to ask it questions. So you train it on all the information you can find about him, you give it context, you give it some additional information, and then you start to interact with it. And as you interact with it, as long as it's allowed to continue to learn, it's potentially going to think it is George Washington, and it's learning based on your environment today. That is exceptionally powerful because it gives you, again, perspective. I always use the Battle of Gettysburg in 1863 as a good example. If you could interact with Robert E. Lee, for example, or any of the generals who were at those battles, do you think they'd make different decisions now? Wouldn't it be interesting to ask them what they were thinking? Well, these cognitive Personas can do that. They can give you that perspective if you ask the question the right way.

And that's what's so interesting, because now apply it to a scenario today where maybe you have a product or a service you want to offer, but you're not quite sure what your audience, your buyer, is really interested in. So what you do is train a whole bunch of cognitive Personas based on the demographic you're really trying to sell to, and then you start to ask them questions. You start to see: okay, maybe I want to tweak this color because it makes the can much more attractive and easier for a buyer to see. Or I want to change the flavor of something because it's too sour, and the Personas are all telling me it's not going to be received very well.
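To make that persona-panel idea concrete, here's a rough sketch in Python of how the layering differs from one-off prompting. Everything in it is hypothetical: the generate function is a stand-in for whatever model client you'd actually call, and the constitutions are toy examples. What it illustrates is that each Persona carries a standing constitution and skill set that ride along on every turn, rather than a single "pretend you are" instruction that decays.

```python
from dataclasses import dataclass, field

@dataclass
class CognitivePersona:
    """A Persona defined by a standing constitution, not a one-off prompt."""
    name: str
    constitution: str  # behavioral ground rules, re-applied on every turn
    skills: list = field(default_factory=list)  # layered-in capabilities

    def system_prompt(self) -> str:
        # The constitution and skill layers ride along on every single
        # request, instead of one "act like X" instruction that decays.
        skill_lines = "\n".join(f"- {s}" for s in self.skills)
        return f"You are {self.name}.\n{self.constitution}\nSkills:\n{skill_lines}"

def generate(system: str, user: str) -> str:
    # Hypothetical stand-in for a real model call; swap in your own client.
    return f"[{system.splitlines()[0]}] response to: {user}"

# A toy buyer panel for the product-testing idea above.
panel = [
    CognitivePersona("a budget-conscious parent",
                     "Value price and clarity above all else.",
                     ["grocery shopping habits"]),
    CognitivePersona("a design-driven shopper",
                     "React strongly to color and packaging.",
                     ["visual critique"]),
]

question = "Would a brighter can catch your eye on the shelf?"
for persona in panel:
    print(generate(persona.system_prompt(), question))
```

In a real system, the constitution would come out of the training and grounding described above, not just a prompt string, but the layering idea is the same.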
Or what about if you're writing a proposal, and now you have simulated everybody around the room who's going to be evaluating it? You can ask it: why don't you take a look at this section that I wrote? Do you think it really hits the message we're trying to send, that we'll be a good partner, and all those other things? Can you give me the strengths and weaknesses of the write-up I just provided you? And it'll go through and say, oh yeah, this is a strong point, I would evaluate that as a strong point. But you know what, here are some weaknesses that you have, and these are the reasons why I think they're weaknesses. And you get that from the Persona's perspective. Now, again, you have to have enough information to be able to create these Personas. But when you do, they are exceptionally powerful, much more powerful than a large language model by itself. And then you start to layer in other types of AI: neural networks, genetic algorithms, optimization, even some quantum-annealing-type activities. When you layer those in together, the sky's the limit, because then you can really assess what its performance is, how well it's going to do, and the big drivers around what your audience is trying to engage with and what they're going to buy. So with that, we're going to take a short commercial break, and when we come back, I'm going to introduce you to Stewie, an AI cognitive Persona trained on 45 years of data, everything folks could get their hands on. So stay with us, and we'll be right back.
[00:09:50] Welcome back to AI Today. I'm your host, Dr. Alan Badot, and this week we're talking about cognitive Personas. In the first segment, we introduced what they are, what they're good at, and how you train them. In this one, I'm going to show you a live one that I've built; we're using it in some of our other activities. I want you to imagine that you're a professional, anywhere from a junior manager all the way up to a CEO, and oftentimes you have executive coaches. They're fantastic. They help you get through problems, they help you think things through, they give you different advice, they try to give you a development plan to help you work through all of those things. They're really invaluable as you move up the ladder. And the great thing, too, is that you've got a coach you can bounce ideas off of. Well, unfortunately, they probably have other clients. You're not their only client, so they're not available 24 hours a day, seven days a week. It's unfortunate, but that's just how it is. So one of the things I thought about was: what if I created a cognitive Persona to fill that void? Now, you always hear me talk about having the human in the loop, and it's the same thing with cognitive Personas. You don't want to turn them loose, let them go off on their own and develop whatever they want, and maybe you'll check on them in a year. Right? That's a recipe for disaster. It's awful. So don't do that. However, if you've got the data and you have the ability to train on it, then that's where the strength of the human and the cognitive Persona can really start to take off.
[00:11:58] And so what we did: we went out and we found an executive coaching firm called Stewart Leadership. I've actually taken some classes from them over the years, and a whole bunch of other folks I know have as well. They've got great clients, and their dad, John Parker Stewart, has been doing this for a little over 45 years.
[00:12:26] And I thought, holy cow. The amount of information that I could get from that, the amount that I could pull from videos, from tapes, from every single source, is invaluable.
[00:12:40] And then I can train and build an executive coach that is just like him and all the other executive coaches who are part of that team. So we took 45 years of data, everything we could get our hands on, basically. It was about 50 million different points of information. And by points of information, I mean a blog that was released, a video, a taping, a piece of writing, whatever it was, that allowed us to have a really good baseline for training. Then we were able to continue to build that out: train and refine, train and add, train it on how to be an executive coach, train it on how to interact with people, give it a personality exactly like John Parker Stewart's, and then see how it interacts with students.
[00:13:47] And what's amazing is that you get the same tone, you get the same responses, you get the same interaction, except it's just Stewie on your computer screen and not John Parker Stewart.
[00:14:05] And that may scare some folks. I'll be honest, it scared me the first time I did it, a long time ago. I had built one of my grandfather, for example, because I wanted to see if I could talk to him, and it responded in a way that gave me goosebumps, quite honestly, because it was exactly what he would say to me: pretty much get my head out of my keister and get going. But it's that sort of power that we have. So let me go over to my Stewie dashboard so everybody can see what sort of things are happening. Just as an example, this is what the dashboard looks like. It's very basic, because it doesn't have to have a ton of bells and whistles. You can really quickly hit one of these buttons if you don't quite know what to say, but at the same time, we can just say: hey, Stewie, how are you doing?
[00:15:09] Remember, when you do prompts, make sure you put some punctuation at the end. It makes your score go up, for heaven's sakes. Just like any other interaction you're having with somebody, you want to make sure you're interacting the way you would with a coach. Why? Because Stewie is reading your emotions. Stewie is trying to figure out: what is this user trying to do? What are they trying to ask?
[00:15:32] And Stewie's response is that he's doing well, thank you, and he's ready to assist any way he can.
[00:15:41] You know: how about you? How's your mind today? Now, I would tell Stewie my mind is not in a good place today, but it is what it is, right? So I'm not going to say that, because he's going to learn that, and I don't want Stewie to come back to me later and say, hey, are you feeling better today? No, I'm not in that mood. But as you continue to interact with him... let's ask him. I want Stewie to reflect on some leadership challenges, just to kick things off. Now, again, this is not going to give you the speed of ChatGPT, an immediate answer. Why? Because Stewie has checks and balances in there. Stewie is grounded. Stewie's got so much training that he goes through, because we want to make sure he is striving to give you the right answer and not just an answer.
[00:16:31] Because speed is less important when you get to these kinds of levels. When you are training a cognitive Persona that is a subject matter expert, speed is not the main driver. Accuracy needs to be the main driver, and if it's not, then that's a problem. That also means you probably don't have an SME; you've got an AI agent or something like that, which is very different from a true cognitive Persona, especially around decision making, especially around the different types of information you want it to handle and the different types of problems you want it to solve. You see, with Stewie, I could take a multi-system problem, have a bunch of SME Stewies working on it, and they can collaborate, work together, and communicate information back to me on how to solve that problem more effectively. That's a big driver and a big advantage of these. Now, one of the important things is that I have told you Stewie is an AI cognitive Persona. I'm not playing behind the keyboard, and I don't have bots behind the keyboard pretending to be a real executive coach and giving you answers.
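To make the accuracy-over-speed point concrete, here's a minimal sketch of what a checks-and-balances answer loop can look like. This is not Stewie's actual pipeline: the draft_answer stub and the keyword-overlap grounding test are hypothetical stand-ins, and a production system would use retrieval plus a much stronger verification step.

```python
# A toy corpus standing in for the Persona's grounded training material.
KNOWLEDGE_BASE = {
    "feedback": "Effective feedback is specific, timely, and behavior-focused.",
    "delegation": "Delegate outcomes, not tasks, and agree on check-in points.",
}

def draft_answer(question: str) -> str:
    # Hypothetical stand-in for the Persona's model call.
    topic = "feedback" if "feedback" in question.lower() else "delegation"
    return KNOWLEDGE_BASE[topic]

def grounded(answer: str) -> bool:
    # Toy verification: is the draft actually supported by the corpus?
    # A real pipeline would use retrieval plus an entailment/citation check.
    return any(answer in doc for doc in KNOWLEDGE_BASE.values())

def answer(question: str, max_attempts: int = 3) -> str:
    # Slower than a single pass, by design: every draft gets verified.
    for _ in range(max_attempts):
        candidate = draft_answer(question)
        if grounded(candidate):
            return candidate
    return "I'd rather not guess at that; let me route you to a human coach."

print(answer("How do I give better feedback?"))
```

The trade is deliberate: each answer costs extra verification passes, which is why a grounded Persona feels slower than a raw chat model.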
[00:17:59] You start to see that there are ethical bounds you have to be very careful of. Maybe the clients you're interacting with are having a great experience, but they don't know they're communicating with an AI agent.
[00:18:20] Especially if they're communicating with something like Stewie.
[00:18:28] And it gets even slipperier as you start to use cognitive Personas in more advanced fields: medicine, for example, service desk activities, insurance, things like that.
[00:18:45] Then you really have to make sure that whoever is interacting with that knows that it's a cognitive Persona. There are a number of different lawsuits that are going on right now.
[00:19:02] Folks had signed up for different social media sites. I believe there's a lawsuit going on in California around OnlyFans, for example, and there are other lawsuits out there where people were interacting with these and didn't know they were interacting with a bot. That's a big issue. Now, some people may argue, well, it's a different type of experience, and as long as the experience is what you expect, it shouldn't matter; if you weren't complaining then, why complain now? I can see that side, and I can understand there's a lot of validity to it. However, at the same time, being transparent and making sure that whoever you're interacting with knows what's taking place is always going to be the safer route. It just is, because otherwise you leave yourself open to things you really could have prevented a long time ago. Whether that's a social media site, whether that's an app, whether that's something else, it's always better to err on the side of caution.
[00:20:24] Just let people know: hey, you're talking to a bot. The bot is exceptional because it's a cognitive Persona type, or it's been trained a lot, or whatever that is, but it's just always going to be safer. And the same thing with any sort of proprietary information, any sort of code, for example. That's going to be the next one: you can't go out and scrape everybody's code on a certain site and use it to train your bots just so they get better, when you haven't gotten permission to do that from the actual developers. So we're going to see a lot more lawsuits, a lot more information on this, because it really is going to become a bigger focus. We're leading the market from a cognitive Persona perspective. I've used them for years, quite honestly, and was able to deploy them overseas in a couple of different countries for a couple of different projects they had us working on. They are exceptionally powerful, and they're usually about 25 to 50 percent more accurate, and that's just on average.
[00:21:42] But it's a different age. A different age: your identity, your digital identity, your digital credentials, the information you need to get these bots trained to where they are. It's a lot.
[00:21:57] Do the right thing, use them ethically, keep a human in the loop, and then you'll be successful at the same time that you are ethical. So with that, we're going to take a short break. When we come back, we're going to talk through some more use cases we can get into, and then we'll go from there. We'll be right back in a few seconds.
[00:22:56] Welcome back to AI Today. I'm your host, Dr. Alan Badot. And this week we are talking about one of my favorite subjects, cognitive Personas.
[00:23:05] It's really taking off. People are starting to recognize the importance of these, the power of these types of models. They are fundamentally different in how they are trained from your traditional large language models. They're much more focused, they're much more powerful, and that also means they can be much more dangerous, so you just have to be careful. Last segment, I showed you Stewie, an executive coach trained on 45 years of information and data, about 50 million data points. And it will give you responses that are not only accurate but empathetic.
[00:23:52] It will talk to you about small talk, emotions, just chit-chat-type things. It's exceptionally powerful in what it can do from an executive coach perspective. Now, one of the weaknesses, and I didn't get to talk about this in the last segment, is that I cannot take Stewie, who's an executive coach, and have him go be an engineer.
[00:24:18] Stewie will be a terrible engineer. With these AI cognitive Personas, because we are focusing their training and really getting it into their AI DNA that they are an executive coach, they fundamentally believe they are an executive coach. They have a constitution that tells them: this is what I need to do to be an executive coach, and being an executive coach is all I want to do.
[00:24:45] So trying to take those and use them in other places is a disaster. I can tell you it's been a disaster for a long time; that's why I don't do it.
[00:24:56] That is the weakness that they have. And even with DeepSeek, for example: it does some very good things, it does them very fast, and it is very economical. But I think what we're going to find out has to do with the amount of training that they used. Granted, it's significantly less, and it is more focused. But try to apply it to a multitude of general problems, and you will start to see that its behavior, its answers, are not going to perform quite as well as something more general like ChatGPT. I know this is a lot of folks' first experience with these, and there have been some overreactions around that, but that's fundamentally what's going to happen. I've used these and developed them for, oh my goodness, almost eight years now, and it's been a slow process to get them to where we are, to where we have the compute power to get them to run. It is one of the weaknesses you see in these cognitive Personas. But put a bunch of them together, and they're really going to be able to conquer something.
[00:26:13] As you start to expand, though, into other areas of AI, decision making, emotions, contextual understanding, that's where the power becomes even more evident, because you're allowing something that is exceptionally focused, exceptionally skilled, to expand its AI capability without the same overhead that you get with traditional large language models. And half the time, those won't work anyway. So it's a huge advancement. It's something folks should look at and apply the right way, but be careful. And when I say apply it the right way, I mean really do some analysis to make sure it's in the right place, that you're using it in the right context. I'll give you an example from a customer service perspective.
[00:27:18] Say you have a medical after-hours call center, and you've trained a cognitive Persona to act like an RN or a doctor, whatever that staffing profile looks like, and that cognitive Persona is supporting the call center. It will be able to do things a lot faster. It'll look at a whole range of different events that it could deem an emergency.
[00:27:51] But what happens if it makes that one mistake and it says, no, you don't need to go to the emergency room. It's something else, and you've got no checks and balances there.
[00:28:02] You can immediately see that would be a problem. And on top of that, if you haven't notified the person who is on the phone with that bot, you have really taken their ability to make a decision on their own out of their hands, because usually in those cases, the person on the phone is going to do what the doctor or the nurse tells them they should do.
[00:28:28] So that's where the ethical boundary really lies. If you do not give the human an ability to make an informed decision because you are using a cognitive Persona and you haven't told them, that's an issue.
[00:28:50] From my perspective, that is a big issue. You have to be ethical. That is your job. It's our job as scientists and as engineers to push the boundaries as much as we can, but as soon as somebody else gets involved in that process, you need to make sure they understand, they know, that they are communicating with that type of AI.
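For that medical call-center example, the two guardrails just described, disclosing the AI up front and keeping a human in the loop, can be sketched in a few lines of Python. The emergency terms, the confidence floor, and the confidence score itself are illustrative assumptions, not a clinical standard.

```python
# Illustrative guardrails only; none of these values are a clinical standard.
EMERGENCY_TERMS = {"chest pain", "shortness of breath", "unconscious"}
CONFIDENCE_FLOOR = 0.90

DISCLOSURE = ("You are speaking with an AI assistant, not a licensed "
              "clinician. You can ask for a human at any time.")

def triage(caller_text: str, model_confidence: float) -> str:
    text = caller_text.lower()
    # Guardrail 1: anything that might be an emergency goes straight to a human.
    if any(term in text for term in EMERGENCY_TERMS):
        return "ESCALATE: possible emergency; route to the on-call nurse now."
    # Guardrail 2: the bot never answers when it isn't sure.
    if model_confidence < CONFIDENCE_FLOOR:
        return "ESCALATE: low confidence; hand off to a human."
    return "AI may respond; advice is logged for clinician review."

print(DISCLOSURE)  # disclose before any advice is given
print(triage("I have chest pain and feel dizzy", model_confidence=0.97))
```

The ordering matters: disclosure happens before anything else, and the escalation checks run before the bot is ever allowed to answer.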
[00:29:19] And that doesn't matter if it's an online bot, a call center bot, or even Stewie. That's why, as everybody could see very clearly in the previous segment, Stewie says it is an AI Persona right up front. It says it in the disclaimer. There's a lot that folks need to do to really push that forward, and that's part of the problem with the US and some of the laws we have. We've seen different laws, mandates, presidential edicts, whatever you want to call them, come out, and none of them have any teeth. There's no requirement to disclose that kind of information. And until we do that,
[00:30:12] we're behind the rest of the world, for one. And it means, like every other industry, there are going to be an awful lot of charlatans doing some of this.
[00:30:22] Having an ethical plan, an ethical mindset, an ethical deployment strategy, making sure everybody knows what you're talking about and who you're talking to, is important. And I think that's one thing we really need to do a better job on. It's a shame. It really is. It's unfortunate.
[00:30:46] Just because there's no law doesn't mean it's right, and it's our job to make sure we act accordingly.
[00:30:55] Another use case, really, is education. If you think about it, having an AI tutor would be amazing. I gave the example of George Washington, or the generals at Gettysburg. What if Sigmund Freud were your psychology tutor?
[00:31:16] That'd be amazing. Being able to ask Sigmund Freud different questions on, you know, what's the right way to interpret a dream, for example. Think about the power behind that.
[00:31:31] It's huge. It's huge. It makes you rethink how you can learn information, makes you rethink how you can teach information, because you can teach it from a different perspective each time.
[00:31:44] I think that would help significantly, especially trying to understand the society that we have today where, you know, it's either black or white or something else, if you're lucky.
[00:31:56] Having both sides, having that perspective from an AI, is actually something that is pretty compelling, because then everybody can tell their story. You have to have somebody who watches over it, but you are able to tell a story from different perspectives significantly more easily. And then you're also able to ask questions, because that's always what we're trying to do. We're trying to ask questions. We want to learn more. We want to understand. We have a problem; help us with the problem.
[00:32:23] And if you don't do that, you're not maximizing the experience, and you're really doing a disservice to your customer. So think about that as you go about driving different features, whether it's the empathy of a cognitive Persona, because you've built the empathy models around that, or the trust building that has to take place. You see, that's one thing folks also don't understand.
[00:33:00] I talk about ethical applications and the implications around them, but there also has to be some trust between the cognitive Persona and the human.
[00:33:12] We'll keep going back to Stewie. As I'm interacting with Stewie, for example, if I don't believe what Stewie's telling me, then I'm going to stop using Stewie, I'm going to have to go get a whole bunch of other information, and I'm going to waste time. And time is the one thing, I always tell everybody, that you cannot get back. It is your most precious commodity. When you spend time with somebody to try to help them, when you spend time trying to plan something, when you spend time trying to do something, that is worth way more than money, because you can get more money. You can't get more time. It just doesn't work that way.
[00:33:59] So having that trusted relationship between your cognitive Persona and what you're trying to do with it, what you're trying to solve, believing it, and really understanding it, that is a completely different type of application. It's a different type of experience, and it's a different type of response that you should get back. That is really the next generation of how we are going to interact with AI, how we are going to use it, how we are going to solve problems with it, and quite honestly, how folks like me are going to keep trying to build it and make it better. So stay with us. We'll be right back after a few short commercial messages.
[00:35:17] Welcome back to AI Today. I'm your host, Dr. Alan Badot, and this week we are talking about AI cognitive Personas. I've hammered on the ethical implications and on making sure that folks know when they're talking to a bot. I've also talked about Stewie, a cognitive Persona that's an executive coach built on 45 years of data and information.
[00:35:49] Powerful, exceptionally powerful. Very trustworthy. It gives you the right answers, and it works really hard at trying to give you the right answers. It doesn't behave like your traditional large language models. Now, one of the questions I know I'll get, so I'm going to hit it now: how does it learn? Does it continue to learn? Does it steal my information? What does it do with the information? Does everybody else know what my characteristics are, and all these other things? The easiest answer is: maybe. It depends on what the application is. We'll stick with Stewie, since that's the easiest one for me to describe. In the case of Stewie, if you look back at the second segment, my name was at the top. So that is my own personal Stewie. As I am interacting with Stewie, Stewie is learning from my interactions: I want it to go to these websites, I want it to get this kind of information for me, I ask these types of questions. Stewie is learning from that. Nobody else's Stewie is learning from that. It's compartmentalized. There is no cross-pollination of data, models, any of that stuff. Your Stewie is your Stewie. Nobody else knows what's going on inside of your Stewie. Now, there are some good things about that, right? Because we don't want a situation where you say, oh, I've got an HR problem, my boss is being mean to me or has said inappropriate stuff, and then your boss has access to it. That's bad.
[00:37:27] So we definitely don't want that to happen.
[00:37:31] However, your interactions are going to drive how well it performs in the future.
[00:37:43] So suppose you just tell it you want it to remember everything.
[00:37:43] If you do that long enough, and we're talking months and months of telling it to save everything,
[00:37:51] you are starting to generalize Stewie, and you don't want to do that. You want to make sure that Stewie continues to focus. Now, there are some protections we have built in, of course, to keep Stewie from going too far in that general direction, or to remind Stewie what his constitution is and what he is exceptional at. We do things like that. But it's your interaction that is the most important piece, not my interaction with your Stewie; that doesn't help you. So if you're having a bad day: look back at how Stewie asked how my mind was doing, what kind of mindset I was in today, and I said it's probably pretty crappy.
[00:38:45] But I don't want to tell Stewie that, because it may take a day, or several more interactions, for Stewie to think: you know what? He's having a pretty bad day. I'm going to talk to him from that empathetic view, and that's how I'm going to frame those questions. So you've got to be smart in how you're interacting with the AI. But if you are having a bad day and you want to talk about it, there's nothing wrong with that. You can do that with Stewie, and you can do that with any AI cognitive Persona, if it's trained the right way.
[00:39:22] And that's how these systems are learning.
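Here's a small sketch of that compartmentalized learning idea: each user's Stewie keeps its own memory, nothing crosses between users, and a simple focus guard keeps months of "remember everything" from generalizing the Persona. The coaching-term relevance test and the memory cap are toy assumptions standing in for the real retention protections mentioned above.

```python
from collections import defaultdict

# Per-user, per-Persona memory with a focus guard. The relevance test and
# cap below are toy stand-ins for real retention policies.
COACHING_TERMS = {"leadership", "feedback", "career", "team", "goal"}
MEMORY_CAP = 1000  # keep only the most recent on-mission interactions

class PersonaMemory:
    def __init__(self):
        self._store = defaultdict(list)  # user_id -> that user's memories only

    def remember(self, user_id: str, utterance: str) -> None:
        # Focus guard: skip memories unrelated to the coaching mission, so
        # months of "save everything" don't generalize the Persona.
        if not COACHING_TERMS & set(utterance.lower().split()):
            return
        memories = self._store[user_id]
        memories.append(utterance)
        del memories[:-MEMORY_CAP]  # oldest entries age out first

    def recall(self, user_id: str) -> list:
        # Compartmentalized: a caller only ever sees their own memories.
        return list(self._store[user_id])

mem = PersonaMemory()
mem.remember("alan", "I want feedback on my leadership style")
mem.remember("alan", "my favorite pizza topping")  # off-mission, not retained
print(mem.recall("alan"))  # -> ['I want feedback on my leadership style']
```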
[00:39:26] And that, honestly, is really one of the most powerful things. Imagine if you had a Stewie when you were in sixth grade, and it was your tutor in something, and you could continue to use Stewie as you go through high school, as you get into college, as you get into whatever job field you're going to be in, and you continue to be coached and to ask questions. That is something that is really groundbreaking: that transformation. Nobody's ever been able to really map that transformation and look back and see, oh geez, this is how I would have responded before; now I have Stewie, and Stewie is helping me learn. So not only would you be able to see how your behavior is transforming, but also how Stewie has evolved.
[00:40:20] Now he's smarter. He knows you. He knows what you're going to say. He knows what you're thinking.
[00:40:27] That human-AI relationship cannot get any stronger than that. Well, I guess it could if you have an implant in your brain, but that's a different show; maybe in the next show we'll talk about that. But having that trusted advisor, that happy Stewie sitting on your shoulder, where you can ask it questions and it'll give you a response you know is tailored to you, is powerful. That's what gives these Personas such insight, such capability, and really why the sky's the limit when it comes to those things. The knowledge transfer piece is going to be huge as well, because imagine you're in a group environment where you're all trying to solve different problems and you each have different fields of expertise. Maybe somebody's an engineer and they've got their engineer Stewie.
[00:41:32] Somebody else is a physicist, for example, or a combustion engineer, or whatever that is, and they're trying to solve hard problems. That coupling between the human and the AI is
[00:41:49] fantastic. And that's why making sure it's trusted, making sure it's doing the right thing and performing right, is so important as you start to expand.
[00:42:00] So I've said a lot of things around these cognitive Personas, and around DeepSeek and what I think is going to happen with DeepSeek as we get some more information.
[00:42:12] I've talked about how they can be used. I've talked about the power of these cognitive Personas, provided that they're trained the right way, that they have the right characteristics, the right psychological traits that you're really trying to train into them.
[00:42:31] And making sure that people know they're talking to a cognitive Persona is very important.
[00:42:38] It's only going to become more prevalent in society. It's going to hit our phones. I'm not even going to say the name, because the device will respond and ask me if I need something. But with all the devices that are listening, with all the things that we do online, with all of our interactions during the day on any kind of media platform, these cognitive Personas are going to start to really drive what the ethical conversation is, how they should be used, and maybe how our laws need to refocus and what shape they start to take.
[00:43:25] The challenge, though, is that we don't want to stop advancing the science, because we know our adversaries are not stopping, and our competitors may not be stopping. There are a lot of different implications when I say that. But that doesn't mean we can just throw caution to the wind, charge forward, say whatever happens, happens, and figure out later how to put the broken glass back together. Because these things are so powerful that if the glass really breaks, you're probably not going to be putting it back together. It's just going to cause a nightmare for folks. So be careful with them. Make sure they're optimized, use them the right way, and you'll be successful. Use them the wrong way, and you probably won't be using cognitive Personas after that, because it will go bad very quickly. So get help. Get some advice. I'm happy to do it; everybody knows, send me an email and I'll answer your questions. It doesn't cost anything. It's free. Yes, I do it for free. I love it. I love the questions. Keep sending them.
[00:44:39] You know, DrAlanBadot.ai or DrAlanBadot.com; send them to me. I respond as quickly as I can. I get probably anywhere from 500 to 700 questions a week, and I try to get through as many of them as I can. I've actually got a backlog of about 1,500 or 1,600, but I love them. They're great. Some of them poke a little bit of fun: I noticed my head's not quite as shiny this week as it was last week. I appreciate that comment. Thank you. I took that one to heart. But I love them anyway. So I hope you enjoyed the show. Come back next week; we're going to have a great topic again. And thank you for being here. I appreciate all of you. Have a good one.
[00:45:29] This has been a NOW Media Network feature presentation. All rights reserved.