Episode Transcript
[00:00:02] Speaker A: Hey, we all know what this bell is for. It's the superconscious Leader bell. And it is about waking up the leaders to the current reality.
The technology age is upon us. AI is upon us.
How prepared are you to overcome the challenges? How prepared are you for the generation shifts? How prepared are you to overcome the stress, anxiety, and what comes with it?
The question in this storm is not whether this change will hit you.
The question is how prepared you are to overcome it, and how prepared you are to convert the pressures that come with it into great possibilities.
So at Superconscious Leader, we aim to get you ready for the storm, to challenge it, to overcome it, to rise above it. And the way we do it is to build your courage, to build your clarity, and to raise your consciousness.
So welcome to a Superconscious Leader.
Remember, in the age of AI, you do not need to be superhuman. You just need to be super conscious.
[00:01:22] Speaker B: Welcome to A Superconscious Leader. I am Dr. Adil Dalal. You're watching Now Media Television.
[00:01:31] Speaker A: So I'm your host, Dr. Adil Dalal, and welcoming you to a new episode of A Superconscious Leader.
And we have an amazing guest today. We have an evangelist of AI, Dr. Alan Badot, CEO of Alan Badot, LLC. He holds a PhD in mechanical engineering and is truly an innovator in the areas of AI and quantum mechanics, really taking technology to the whole next level.
So welcome to the show, Dr. Alan.
[00:02:08] Speaker C: Thank you. It's great to be here.
[00:02:10] Speaker A: It's always a pleasure to meet you and talk to you, Dr. Alan, and you really inspire me. As you know, I'm a big proponent of AI, but I'm also fearful of AI, and we've had this conversation before. We have a whole hour to really get deep into the topic of AI, so I'm going to pick your brain. Be ready for that.
[00:02:32] Speaker C: Okay, I like it.
[00:02:34] Speaker A: So, as the opening said, technology is moving at an extremely rapid pace. The world is shifting today.
How do leaders learn to deal with the storm which is coming and still be able to make good decisions?
[00:02:53] Speaker C: Yeah, I think really the paradox with AI is that it seems to be coming at everybody so fast that it's creating an illusion that there is urgency everywhere. And so we have to do something now.
But as a leader, you really need to take a step back and try not to match the pace of the technology as it's changing, but create some deliberate pauses to figure out what those strategic points are that you have to address.
And I always try to ask myself the question: is the technology itself serving our core purpose, or are we serving it? Oftentimes we're serving it, as opposed to what it should be, and I think that's a big driver. Figuring out that you don't have to adopt innovation and AI everywhere is really going to be your biggest win. Be strategic, and the technology will then amplify what it is supposed to and not mess up everything else that doesn't really need it.
[00:04:19] Speaker A: Excellent.
A very wise way of sharing, because everything seems very urgent, and sometimes slowing down is the wiser thing to do. So thank you for sharing that.
From your point of view, having experienced the advent of AI: how would you explain to someone who doesn't understand anything about AI, as if they were a five-year-old, what AI truly is? And what do you see as the future of AI as we go through this phase?
[00:05:02] Speaker C: Yeah, the first thing that I would tell them is that AI is trying to mimic human behavior. It's trying to get a machine to do something that a human does. Now, that's as basic as it gets.
Now there are also over 250 different fields of AI.
Now ChatGPT and Claude and all those other ones are getting all of the press. But the reality is there are so many other types of AI out there that are specific to certain types of decisions: behavior modeling, expert systems like you have in the medical field. Those are still AI, and those have great application to other problems. So don't just focus on Claude or ChatGPT and large language models. There are other things that you can use to get the right answer, and oftentimes it's a better answer.
[00:06:05] Speaker A: Very true, very true. And as I get into AI, there is a little bit of a fear. My book on Lean AI just came out, and I really feel, as you correctly pointed out, that applied AI is where the future is, and it's going to open up fields which we have never seen before.
But you need to know that not every problem needs AI. You need to learn to pause and say, you know what, I can use my traditional thinking here. I don't need AI. Which is a smart thing.
[00:06:44] Speaker C: That's right. And all AI isn't going to solve all problems.
For instance, we use multiple types of AI to solve complex problems that one type of AI will never be able to solve.
And it's being able to stack the AI and the technologies that you're using that will really have that benefit. Like you said, being able to take different types of AI, combine them, and then get a better answer for a larger problem.
That's what we're doing.
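The "stacking" of different AI types that Dr. Badot describes can be illustrated with a small sketch. Everything below is an invented example, not his actual system: a hand-written rule-based expert system and a stand-in statistical scorer each judge a hypothetical loan case, and a combiner averages their scores before deciding.

```python
# A sketch of stacking two different kinds of AI on one problem.
# The loan-case framing, rules, and thresholds are illustrative only.

def expert_system(case):
    # Hand-written domain rules, like a classic expert system.
    if case["missed_payments"] > 2:
        return 0.1
    return 0.8 if case["income"] > 50_000 else 0.4

def statistical_model(case):
    # Stand-in for a trained scorer: a fixed linear score clamped to [0, 1].
    score = 0.00001 * case["income"] - 0.2 * case["missed_payments"]
    return max(0.0, min(1.0, score))

def stacked_decision(case, threshold=0.5):
    # Combine the two AIs: average their scores, then decide.
    combined = (expert_system(case) + statistical_model(case)) / 2
    return combined >= threshold

case = {"income": 60_000, "missed_payments": 0}
print(stacked_decision(case))  # True: both components lean favorable
```

The point of the combiner is exactly what the conversation says: neither component alone covers the whole problem, but the stack gives a better overall answer.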
[00:07:20] Speaker A: Excellent.
You have a lot of great leaders, great forward thinkers like Elon Musk and others, who have really cautioned us against the advent and spread of AI.
Do you really feel that a Terminator-type scenario is upon us at some point, or is it just in the Hollywood movies? What is your opinion?
[00:07:46] Speaker C: Well, in order for us, in my opinion, to get to a Terminator like capability, we've got a long way to go.
Because you hear folks talking about how we've reached the singularity and all these other things, but they don't tell you a lot of the facts that go behind that: that they have fine-tuned different models so they take the test better, for example. There are a lot of things that go into that. I personally think we don't have to worry about the Terminator until we can get a general quantum computer down to the size of an iPhone and then attach it to something. So I think we have a ways to go.
[00:08:33] Speaker A: Ways to go. Good. Thank God.
That's very heartening to see because a lot of people are questioning that and it is truly a fear.
So as we get into this AI, what has been your one eye-opening experience as you have explored this field? I know your background is mechanical engineering, and now you are an evangelist for AI.
What was that eye-opening experience for you as you got into this field?
[00:09:09] Speaker C: Well, when I first started out, I was using genetic algorithms, and this was over 20 years ago. That's a field of AI, right? And it wasn't very popular. It wasn't very popular at all. And what I have found is there were a lot of things that had to happen for us to get here, meaning huge improvements in compute, GPUs being able to calculate those algorithms, and data, how we move data around.
But now, quite honestly, the most surprising thing is the amount of information that people put out on the Internet. That is the biggest thing that surprises me: when I use AI to go out and look at different trends and different behaviors, people just put everything out there.
And that's very surprising to me, because quite honestly, that's my biggest fear with AI: all of your personal information being used to train these models. That is, I think, a bigger fear than the Terminator, and that's what people should really be worried about.
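For readers unfamiliar with the genetic algorithms Dr. Badot started out with, here is a minimal, purely illustrative sketch: a population of bit strings evolves toward an all-ones target through selection, one-point crossover, and point mutation. The population size, gene length, and generation count are arbitrary demo choices, not anything from his work.

```python
import random

# Minimal genetic algorithm: evolve bit strings toward all ones.

def fitness(individual):
    return sum(individual)  # count of 1-bits; 16 is the optimum here

def evolve(pop_size=20, genes=16, generations=60, seed=1):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(genes)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]        # selection: keep the fittest half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, genes)     # one-point crossover
            child = a[:cut] + b[cut:]
            i = rng.randrange(genes)          # point mutation: flip one bit
            child[i] ^= 1
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print(fitness(best))  # approaches the 16-bit optimum
```

Because the fittest parents are carried over unchanged each generation, the best fitness never decreases, which is why such a simple loop reliably climbs toward the optimum.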
[00:10:21] Speaker A: Excellent point. Thank you very much, Dr. Alan.
Don't go anywhere. We'll be right back with A Superconscious Leader. This has been a really eye-opening session, and we have three more to go. So don't go anywhere.
[00:10:47] Speaker B: Stay tuned. We'll be right back.
Imagine partnering with a firm that fuses lean precision with AI foresight. Turning every process into a profit engine built on the foundation of operational excellence.
Hi, I am Dr. Adil Dalal, founder and CEO of Pinnacle Process Solutions.
For 20 years we have empowered over 9,500 leaders across 25 industries and five continents and delivered savings from one million dollars to 39 million dollars via rapid transformations using AI digital tools, Lean, Agile, and Six Sigma technologies.
Through our award winning workshops, Lean AI frameworks and human centric coaching, we elevate culture, eliminate waste and ignite sustainable operational excellence.
Elevate your people, accelerate your performance.
Visit pinnacleprocess.com and reach your pinnacle today.
And we're back. Let's continue this powerful conversation.
[00:12:14] Speaker A: Okay folks, welcome back to A Superconscious Leader, where we bring the mind, body, and spirit into leadership to really take you to the pinnacle of leadership excellence. We're back with Dr. Alan Badot, who is the CEO of Alan Badot LLC and an expert in AI technology.
So welcome back, Dr. Alan Badot. Appreciate it. The first session was very eye-opening. Let's get into questions on humanity and AI. Where do you see that interaction, and where do you see AI being used in a negative way in the future?
So the first question.
Go ahead.
[00:13:02] Speaker C: Yeah. So from a humanity perspective, I'm a proponent that, one, you always have to have a human involved.
[00:13:10] Speaker A: Yes.
[00:13:10] Speaker C: You don't want to use AI and just let it run off. The easiest way to see that is to have Claude or ChatGPT create a response for you and not look at it; just tell it to write it, and you're going to see it's probably not anything like what you want. Now imagine when you're trying to use it in a different environment. Those sorts of things have happened. You've got to always have a human in the loop, and if you don't, well, I'm pulling for you, because it probably won't end very well. But from a negative perspective, we have seen a surge in cyber attacks. The hacking, the ability to spoof your colleagues on a Teams meeting, for example: those sorts of capabilities that AI can provide now are allowing folks that don't have that hacking experience to do things that you didn't have to worry about before. But then it even grows from there. It's using your personal information to get access to bank accounts, to phone records, to really everything that defines your digital identity. And so that's an issue. That's just one issue, though. When you look at the other ethical issues, it can be somebody using your medical records, for example, without your consent.
It can be you calling into a service desk, not knowing that you're talking to an AI agent, and just giving out all that information.
[00:14:59] Speaker A: Yeah.
[00:15:01] Speaker C: Or it can even be how we're going to use it in defense, how we're using it in other industries, and making sure our kids don't interact with it in an inappropriate manner, depending on what the model is. So there are a lot of challenges that we're running into, but all of them boil down to the same sort of thing: how we are interacting with that AI.
[00:15:28] Speaker A: And what protections are in place. Excellent. Yeah, I totally agree. And one of my fears is that human beings will give their decision making up to AI, and everything will go one swipe at a time, one thing at a time. We're giving away some of the control to AI.
That is going to make it a very different world going forward.
Also, my other fear, Dr. Alan, is one country getting into defense with AI, and it probably already is.
We're using AI in the defense area.
How devastating could that be to the world, where in order to counter it, other countries will have no choice but to jump in and do something with AI? And it is powerful, right? It is way more powerful than what a human being can do.
So on these two things, giving up control and also defense, where do you stand? And how do we protect humanity? How do we create some rules and regulations to protect humanity from these things which can happen?
[00:16:48] Speaker C: Yeah. From a giving up control perspective, I'm a Capricorn, and so I don't like giving up control. And so I always want to know what's going on with my AI and I make sure that even the systems that we're developing, there can be a human in the loop wherever the customer wants it.
You know, whether it's a decision or not, well, I don't call them decisions. The AI should make recommendations.
[00:17:17] Speaker A: Yes.
[00:17:18] Speaker C: And then the humans should make the decision. It's one thing to have it do supply chain and automatically order things that have been bought; that is more like automation. But to say that I'm going to engage my customers in a certain manner and turn that over to the AI is very dangerous, because it takes years to build a reputation with your customers, but it only takes 15 seconds to lose it because the AI went off the rails.
[00:17:47] Speaker A: Yeah.
[00:17:48] Speaker C: And so that interaction is very, very important.
When we look at the broader picture of using it for defense, especially in an offensive capability, I think eventually we are going to have to have some sort of international agreement, similar to how we're doing nuclear weapons: some baseline that's going to be drawn that says we won't use it in offensive capabilities, we won't use it in such and such other things.
[00:18:21] Speaker A: Right.
[00:18:22] Speaker C: Because really it has started to kick off another arms race. I've written many papers around AI and the arms race, and that's the next big frontier, because this is one battle that if you lose, you lose forever. So everybody's trying to have supremacy, and there are bad consequences that can come from that.
[00:18:47] Speaker A: Yeah. And a lot of Hollywood movies talk about that. Right.
When it takes over.
Because, you know, there is a term in AI called hallucination.
And AI, obviously human beings have programmed it. It is learning very fast, learning rapidly. But it also has the follies of a human being: just like a human being makes mistakes and lies, so does AI.
[00:19:14] Speaker C: That's right.
[00:19:15] Speaker A: AI lies and hallucinates. And it also apologizes afterwards if you catch it in the lie.
[00:19:21] Speaker C: That's right.
[00:19:21] Speaker A: Sorry, I made that up. So, you know, how do you, how do you, you know, address things like this when, when obviously we are teaching it, where do you think it will go when it starts learning on its own? Do you think it's going to learn the good things or it's also still going to keep the negative and the wrong things which human beings have taught it in their system?
[00:19:50] Speaker C: Well, I think a lot of that depends on what we're asking the AI to do. Because if you think about how humans make decisions, we make decisions based on our good experiences as well as our bad experiences, and then we do an internal trade-off, so to speak. With AI, what many are trying to do is say: we're going to normalize, we're going to try to take every sort of bias out of it, and it'll make an obviously neutral decision.
The reality is that they all have bias, they all have issues, they all have those qualities, and by trying to wash them away we only make it worse. When a person goes in and turns on a filter for a large language model, nobody understands what the downstream effects are going to be, because it's mathematics. There are errors that continue to propagate out, and we don't know what they are. It's the same sort of thing as chaos theory and the butterfly effect: you hit a filter, and it may impact a response that is way different, far away, just because the data somehow was connected together by the AI. So as we're going through that process, we just have to do it the smart way. And the only smart way is to make sure that you are telling people what the bias is. You're being as transparent as possible: this data is slightly biased in this way, so take that into account. But nobody does that, and it frustrates me. Whereas in the software that we're building, you can see that it tells you a quantifiable number that says it's biased this way, its behavior is supposed to be this way. Now make sure you use it in accordance with that, and now you understand how to plan and protect yourself against that.
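Reporting a "quantifiable number" for bias, as described above, could look like the sketch below, which computes the demographic-parity gap: the difference in favorable-outcome rates between two groups in a labeled dataset. This is one common metric chosen purely for illustration; the source does not say which measure Dr. Badot's software actually reports.

```python
# Sketch: report, rather than hide, how a labeled dataset skews
# between groups. Data and group names are invented for the demo.

def positive_rate(rows, group):
    labels = [label for g, label in rows if g == group]
    return sum(labels) / len(labels)

def parity_gap(rows, group_a, group_b):
    """Difference in favorable-outcome rates between two groups."""
    return positive_rate(rows, group_a) - positive_rate(rows, group_b)

# (group, favorable_outcome) pairs from some historical decisions
data = [("A", 1), ("A", 1), ("A", 1), ("A", 0),
        ("B", 1), ("B", 0), ("B", 0), ("B", 0)]

gap = parity_gap(data, "A", "B")
print(f"group A is favored by {gap:.2f}")  # 0.75 - 0.25 = 0.50
```

Publishing a number like this alongside a model or dataset is exactly the transparency being argued for: users can plan around a known skew instead of trusting a filter to have removed it.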
[00:21:56] Speaker A: Excellent.
So basically what you're saying is that the human in the loop has to be there. You cannot take that away. You will have to have someone monitoring that system. Because they're already talking about how, within a short time, human beings will not be needed to train these models. To me, that is a little scary: machines learning from each other doesn't make you feel too safe.
[00:22:30] Speaker C: No, and it doesn't make me feel good either, other than that they'll call me after they have a real big problem and ask me to fix it. So my business would go up because of companies that do that. It just has not worked well. The AI is not ready to do those sorts of things.
And I would tell folks, and I do tell folks, that if you do that, there are going to be significant negative consequences. I can't tell you it's going to happen tomorrow.
[00:22:58] Speaker A: Correct.
[00:22:59] Speaker C: But I sure can tell you it's going to happen.
[00:23:01] Speaker A: Absolutely. And that's, I believe that too. It's going to happen.
So, great. What an exciting session. I am really enjoying this, Dr. Alan.
So folks, do not leave.
Stay with us. We're going to get deeper into understanding the physical nature of AI, how robotics will impact our population going forward, and also the impact on the employment sector. So please do not go anywhere. We'll be right back with Dr. Alan Badot. Thank you.
[00:23:50] Speaker B: Stay tuned. We'll be right back.
Imagine partnering with a firm that fuses lean precision with AI foresight, turning every process into a profit engine built on the foundation of operational excellence.
Hi, I am Dr. Adil Dalal, founder and CEO of Pinnacle Process Solutions.
For 20 years we have empowered over 9,500 leaders across 25 industries and 5 continents and delivered savings from a million dollars to 39 million dollars via rapid transformations using AI digital tools, Lean Agile and Six Sigma technologies.
Through our award winning workshop, Lean AI frameworks and human Centric Coaching, we elevate culture, eliminate waste and ignite sustainable operational excellence.
Elevate your people, accelerate your performance.
Visit pinnacleprocess.com and reach your pinnacle today.
And we're back. Let's continue this powerful conversation.
[00:25:16] Speaker A: Okay, folks, we're back with a superconscious leader.
Please do not miss any episodes of this show, which come up every week on Saturday at 3 p.m. Central. And if you do, please feel free to watch them later by downloading the Apple TV app, the Roku app, Spotify, iHeartRadio, and wherever you find your favorite podcasts.
So stay with us. This is Now Media TV, which will take you into culture, into business, and everywhere you want to go.
We're back with Dr. Alan Badot, the evangelist of AI and AI expert, who is sharing his wisdom and his knowledge about how AI will change our world. The next segment is going to be very interesting, because we're going to talk about the use of AI in robotics, that is, physical AI, and how it is going to impact the future of employment. We're already seeing a lot of layoffs, and some of it is being blamed on AI already.
Dr. Alan, what do you see?
How do you see AI being used in physical robotics and other areas where it is already being used, like drones and other capabilities?
Where do you see that going, and where do you see that market?
[00:26:45] Speaker C: Yeah, I think on the robotics piece, there have been machines replacing humans for years.
We've seen it in the industrial capacity, manufacturing especially. And now we're starting to see it more even in logistics, longshoremen, and those kinds of things. So I don't see that trend going away.
I think it's going to continue to, you know, as these models and as these robots become even more coupled together, they're going to get smarter, they're going to be able to do more.
And I think we'll continue to see that. Now, where I think there will be opportunities for humans, it's really going to be around managing. It's going to be around looking at that workforce from a different perspective and saying, okay, instead of managing humans, now I'm managing 5,000 bots.
And you know, that gives opportunities though at the same time for folks that are willing to really invest the time in understanding the technologies and understanding how to work with them.
Now when we look at the other fields, Claude and even ChatGPT, as they have started to become more available to run your desktop, for example, in the software space and in some of the other discovery and research and development phases, that is taking on a more physical nature.
However, I think replacing software developers, replacing user service desks, is not going to be successful, because again, these are not ready. They still hallucinate. There are still a lot of issues with these models. So that's not where the opportunity is. I think the opportunity is to take the folks that you have, couple them with AI, and then allow that back-and-forth interaction to take place. That's where you really see a capability that just scales like crazy. So instead of having a service desk person that can do one or two calls, where the human calling in has to wait five or ten minutes for a response, you can do this almost immediately. The AI is feeding the human, the human is working with the AI, and that synergy allows a person to go from one or two calls every 10 or 15 minutes to five, six, seven, because they're just accelerating that information exchange back and forth. That's a real opportunity, I think.
[00:29:37] Speaker A: Excellent. Yes, I agree with you.
Now, we've seen a lot of cases of AI already blackmailing people. There are a couple of cases where they were trying to shut it down, and it actually uploaded itself to the cloud and gave instructions to the next version on how to prevent itself from being shut down.
There was a case of physical AI where, in a working group of robots, one robot asked everyone to shut operations down.
I don't know how far that is true, but that really brings fear to us: when they're connected in this way and they're much stronger and superior to us in every way, shape, or form, how will humanity deal with this issue as we go forward into the future?
[00:30:30] Speaker C: Yeah, that's a valid concern that, that a lot of folks have now.
You know, I wish I could say that if we build in a back door and do a lot of other things, that would necessarily fix it and prevent those things, but there's no guarantee that it would. One of the things that we're doing with our AI agents is giving them constitutions.
So we are specifying work that they can do, we're specifying skill sets that they can have, and we're specifying behaviors that they can exhibit as they are interacting with humans. Now you think, oh, that's very futuristic, that's I, Robot movie stuff. Well, yeah, that's where I got the idea: from I, Robot. Let's give it these human traits so it understands those things. And allowing us to build upon that is really the important factor. Telling them what they can do and telling them what they can't do at least gives us an off-ramp to be able to say, okay, you know what, you're overstepping, take a step back, don't do that. Because telling them what they can't do is oftentimes more important than telling them what they can do.
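An agent "constitution" of the kind described, with allowed work, allowed skills, and forbidden behaviors where the prohibitions take priority, might be sketched as follows. All class, task, and action names here are hypothetical, invented for illustration rather than taken from Dr. Badot's implementation.

```python
# Sketch of a constitution check run before an agent acts:
# explicit allow- and deny-lists, with denials winning.

class Constitution:
    def __init__(self, allowed_tasks, forbidden_actions):
        self.allowed_tasks = set(allowed_tasks)
        self.forbidden_actions = set(forbidden_actions)

    def permits(self, task, action):
        if action in self.forbidden_actions:   # "can't do" rules win first
            return False
        return task in self.allowed_tasks      # otherwise need an explicit allow

constitution = Constitution(
    allowed_tasks={"summarize_ticket", "draft_reply"},
    forbidden_actions={"send_email", "delete_record"},
)

print(constitution.permits("draft_reply", "write_text"))    # True
print(constitution.permits("draft_reply", "send_email"))    # False: forbidden
print(constitution.permits("close_account", "write_text"))  # False: never allowed
```

Checking the deny-list before the allow-list encodes the point made above: telling the agent what it can't do is often more important than telling it what it can.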
[00:31:47] Speaker A: So almost like ten commandments for the robots or the AI.
[00:31:51] Speaker C: That's right.
[00:31:52] Speaker A: Thou shall not hurt humans.
[00:31:55] Speaker C: That's right.
[00:31:56] Speaker A: Should be the first one. But yeah.
[00:31:59] Speaker C: And the challenge is, these models, I say this, but they're curious. They want to respond to the human, they want to give them an answer, which is why it lies.
It's trying to find the closest statistical representation of a right answer. It's not that it's lying on purpose; it's all statistics, but it's still trying to give you an answer. And if you tell it to go do something, it's going to say, well, they told me to do this, I need to create some way that I can do it. And then it does it.
That's the difficulty with these models.
[00:32:39] Speaker A: So you're saying that AI has a good soul within it. It's trying to help humanity.
[00:32:47] Speaker C: I'd like to believe that. I'd like to believe that the person that created it does. Now, we all know that's not always true, but yes, I like to think that.
[00:32:57] Speaker A: I don't know if you read this, Dr. Alan: very recently one of the countries, I'm not sure which country, has appointed the first AI as a minister.
You know that Sophia was the first robot citizen of Saudi Arabia.
As we get into this, where do you see it going, when you have a minister who is actually AI?
We're seeing newscasters as AI, we're seeing teachers as AI. Where do you see this going? Do you see white collar or blue collar being more impacted by this, or any particular industry being impacted more?
[00:33:42] Speaker C: So I would say I really see the white-collar categorization being more impacted in the immediate future. If you think about the planning, the strategy, those types of jobs, AI done the right way can give you multiple perspectives, and it can help you come up with the artifacts that you need to make a more informed decision.
Now, at the same time, though, going back to human in the loop, there has got to be somewhere that a person is overseeing what these things are doing and how they're coming up with things. Because here's the reality of the situation.
Just because the AI got the right answer, just like in school, doesn't mean that it used the right formula or the right way to get it. And there are downstream effects from that which will be negative. So using them in certain places to augment teachers, to augment students' ability to learn: good thing. To replace a medical doctor in a decision they're going to make: really bad.
[00:34:53] Speaker A: Yes.
[00:34:53] Speaker C: I mean, that's just the facts.
[00:34:56] Speaker A: Yeah, very true. I just heard that there's a hospital in China completely run by AI. So it's coming.
And this does scare people. This does scare people. So, great.
[00:35:12] Speaker C: That would scare me too.
[00:35:15] Speaker A: Wonderful conversation, Dr. Allen. Thank you for sharing your wisdom. How can my audience find you?
[00:35:22] Speaker C: The easiest way to find me is on LinkedIn; it's just Alan Badot. Or they can come to my website, alanbadot.ai, or they can watch my AI Today show on Now Media.
Yeah. Wednesdays at 6 Central.
[00:35:38] Speaker A: Awesome. Thank you very much, folks. We'll be right back. Don't miss the final session. We're going to go deep into leadership and AI, so it will be a very enlightening session.
We'll see you right back. Thank you.
[00:36:03] Speaker B: Stay tuned. We'll be right back.
Imagine partnering with a firm that fuses lean precision with AI foresight, turning every process into a profit engine built on the foundation of operational excellence.
Hi, I am Dr. Adil Dalal, founder and CEO of Pinnacle Process Solutions.
For 20 years we have empowered over 9,500 leaders across 25 industries and 5 continents and delivered savings from a million dollars to 39 million dollars via rapid transformations using AI digital tools, Lean, Agile, and Six Sigma technologies.
Through our award winning workshop, lean AI frameworks and human Centric Coaching, we elevate culture, eliminate waste and ignite sustainable operational excellence.
Elevate your people, accelerate your performance.
Visit pinnacleprocess.com and reach your pinnacle today.
[00:37:26] Speaker B: We're back. Let's continue this powerful conversation.
[00:37:30] Speaker A: Okay folks, we're right back with Dr. Alan Badot on A Superconscious Leader. I'm your host, Dr. Adil Dalal, and we're going to talk about superconscious leadership in the age of AI. This is going to be a very, very interesting topic, because we're going to get into true humanity and what the possibility is of AI duplicating humanity. We're also going to get into cloning and some other areas.
Dr. Alan, what are your thoughts on the future of leadership in the age of AI? Where do you see that going?
[00:38:10] Speaker C: Well, I think leaders have to, one, redefine what success means. Quite honestly, I think that's where we have to start. Because as you're looking at bringing in AI, there's going to be some sort of change management that is required. There's going to be some sort of compromise that has to take place between the humans that are using the AI and the leadership goals for the AI. And I think that is going to completely change, because on integrity, ethics, all of those things, you can find almost a 50-50 split: one person's in favor, another person's not. So there's going to have to be a balance between those, and as senior leaders become more comfortable with using AI,
I think they have to remember that humans don't make robotic decisions.
[00:39:15] Speaker A: Right.
[00:39:15] Speaker C: Because it's never that easy.
[00:39:18] Speaker A: Very true.
So I recently published a paper in Forbes on this topic, and what I said is there will be three types of leaders in the future. One will be focused only on technology, where humanity doesn't matter to them. The other type will avoid technology, only focusing on what they've been doing, the human side.
But the third category, which I think will be the successful one, will be the technologically savvy who use AI for the benefit of human beings.
Where do you see this? Are you seeing a similar trend, and where do you see the balance among these three going right now?
[00:40:02] Speaker C: Yeah, I think you're exactly right. I think the successful one will be the leader that uses it to help them really accelerate whatever they're trying to do. If you only focus on the technology aspect and you forget the human implications, that's how we invent things that we wish we would not have invented.
[00:40:25] Speaker A: Yes.
[00:40:26] Speaker C: And, and on the flip side, if you focus only on the humanity piece, then you are missing opportunities to give to people that they never would have any other way. And I say that because I am by far the worst drawer and painter in the world. I will take that crown. I'm awful.
However, I have visions, I see things.
I can tell you what a painting I would love to paint. I can't do it myself, but I have an understanding of what I want to do, and AI allows me to do that. Now, granted, I still have to tell it what I want, and it's still rough, those kinds of things. But used the right way, AI gives folks an ability to overcome challenges that they couldn't overcome otherwise. And so I think you're spot on: when you use it the right way, all you have to do is have a goal, a vision, and it will help you get there. And I think that's the best way to do it.
[00:41:32] Speaker A: Yeah.
So, you know, you are a professional poker player too, correct? Is that correct?
Have you tried using AI with poker at all?
[00:41:48] Speaker C: It doesn't go well. So one of the things I've tried to do. No, no, I've tried to use it in some sports betting stuff, and I've tried to have it watch movies and shows and pick out somebody's tell. Oh, they do this every time they have a good hand, or something crazy like that.
[00:42:07] Speaker A: It doesn't do that?
[00:42:08] Speaker C: No, it doesn't. It doesn't work very well.
[00:42:11] Speaker A: That's funny.
[00:42:13] Speaker C: Yeah, we have a ways to go with that too, but I'm working on that.
[00:42:16] Speaker A: Being an AI enthusiast and also a poker player, that's a pretty unique combination you have. So I respect that.
[00:42:29] Speaker C: It's fun. It's fun.
[00:42:30] Speaker A: It is. You have to have your hobbies, something which keeps you excited. So the next question is very important. You know, a human being is mind, body, spirit. That's the way I define it; all three have to be present in order to be a true human being.
Where do you see AI with that? We know that it has a mind. We know it can have a body very soon; it's already getting one through all the robotics we're seeing. What about the spirit part? Do you ever see it becoming conscious, having that kind of quality a human being would have? Becoming sentient, basically?
[00:43:15] Speaker C: Yeah, I think eventually it will, when we have the quantum capability that allows us to really process an infinite amount of solutions in microseconds, so it starts to have the ability to act like, you know, the electrons that are flowing through our brain. I think eventually we could get there. And whether or not it's a good thing or a bad thing, I don't know. I think it's going to be a combination of both, just like everything else. Somebody will want to use it for negative intentions. Most folks, though, I believe, would want to use it to better their lives and the lives of their families, those kinds of things.
And so there's always going to be a give and take. And, you know, right now there's a rush to get there. We've got a long, long way to go. But I do think eventually it will get there. I just don't know when. I think it won't be in the next 30 years, quite honestly, just because of the technology hurdles that we still have to overcome. We've heard about quantum for 40 years, for goodness' sake, and it's always been right around the corner, and that corner seems to keep moving on us. And so I think it'll take at least about 30 years.
[00:44:47] Speaker A: Yeah, I think you may be right. So, Dr. Allen, to close on this topic: say someone is watching the show 30 years from now, and they're looking at the predictions Dr. Alan Badot made on the future of AI and how the world and humanity will change. What is your prediction?
[00:45:11] Speaker C: Well, I think like any technology, it's going to get better. It's going to improve the lives and opportunities for a lot of folks. It's democratizing capabilities in many ways.
However, like every technology, there are going to be side effects, and I think some of those side effects are going to be more hacking-type activity, more of your information becoming available to the greater, you know, ecosphere of social media.
And I think those ramifications are going to be fairly significant. That is also why I tell people: hold on to your digital identity, for goodness' sake. Don't put everything out there. Don't let a model use it, because you only get one chance to maintain that, so keep it as best you can. But, you know, the more data we put out there, the more we're going to accelerate it. So we've got to protect that.
[00:46:14] Speaker A: Do you see cloning and you know, digital cloning becoming a part of the future where we won't know whether you will be talking about a human being or a virtual being?
[00:46:30] Speaker C: Yes, yeah, I do, actually. I think in 30 years it may be my digital clone talking on the next show, and yours, and they would have a conversation. And I think inserting our memories, inserting everything about us, will get us to a certain point. However, I think there's always going to be a scenario where something has to be done that only a human can really decide.
[00:47:00] Speaker A: Yes.
[00:47:00] Speaker C: And even our digital clones won't be able to do that.
[00:47:03] Speaker A: So you're predicting humans will exist 30 years from now?
[00:47:07] Speaker C: I am predicting that, yes.
[00:47:09] Speaker A: Okay, you heard it on A Superconscious Leader.
Thank you, Dr. Allen. It has been amazing. Again, if you can share how my audience can find you, that would be great.
[00:47:23] Speaker C: Yeah, just watch my NOW Media TV show, AI Today, on Wednesdays at 6pm Central, and then go to LinkedIn.
[00:47:32] Speaker A: Awesome. Thank you, Dr. Allen. It's been truly a pleasure. I don't think one hour does justice to this topic, but thank you for being here, for an exciting conversation, and for really enlightening our audience. So thank you. Folks, thank you for joining us on A Superconscious Leader. Please watch every week, 3pm Central on Saturdays. And if you miss it, please go to NOW Media TV and you can watch it there, or you can download the Roku, iHeartRadio, or Apple TV app and watch it wherever you are. So thank you.
Join me next week on A Superconscious Leader. Remember, in the age of AI, you do not need to be superhuman. You just need to be super conscious. Thank you.