Practical Wisdom for Leaders with Scott J. Allen, Ph.D.

Humans in the Loop with Dr. Dan Jenkins & Dr. Gaurav Khanna

Scott J. Allen Season 1 Episode 329


Dan Jenkins, Ph.D., is Professor of Leadership & Organizational Studies at the University of Southern Maine. Co-author of The Role of Leadership Educators: Transforming Learning and author of over 75 peer-reviewed publications, his scholarship spans leadership pedagogy, artificial intelligence (AI), followership, critical thinking, and curriculum design. A pioneer in integrating AI into development, training, and education, he develops innovative courses preparing students for digital-age leadership challenges. Dan serves as Co-Founder of the International Leadership Association’s Leadership Education Academy, Associate Editor of the Journal of Leadership Studies, and co-host of The Leadership Educator and Leaders in the Loop podcasts. An award-winning international speaker and facilitator, he engages thousands of leadership educators, scholars, students, and professionals worldwide on innovative teaching approaches and AI integration.

Gaurav Khanna, Senior Manager, Data Science and Digital Journeys, Cisco Systems, has 25 years of experience in technology and entrepreneurship. During the past five years, he has led efforts to automate business workflows using machine learning and deep learning techniques. His work focuses on using large language models and generative AI to transform how users interact with sales acceleration platforms. Khanna is passionate about demystifying complex subjects and is a frequent speaker on AI/ML topics. He received a BS in physics from Yale and an MS and a PhD in materials science and engineering from Stanford.

A Couple of Quotes From This Episode

  • “We’re beyond the point of innocent wonder about AI.”
  • “How do we align human values with machine learning and generative AI?”

Resources

About The International Leadership Association (ILA)

  • The ILA was created in 1999 to bring together professionals interested in studying, practicing, and teaching leadership. Attend The Global Conference in Toronto, October 28-31.

About Scott J. Allen

My Approach to Hosting

  • The views of my guests do not constitute "truth." Nor do they reflect my personal views in some instances. However, they are views to consider, and I hope they help you clarify your perspective. Nothing can replace your reflection, research, and exploration of the topic.


♻️ Please share with others and follow/subscribe to the podcast!
⭐️ Please leave a review on Apple, Spotify, or your platform of choice.
➡️ Follow me on LinkedIn for more on leadership, communication, and tech.
📜 Subscribe to my weekly newsletter featuring four hand-picked articles.
🌎 You can learn more about my work on my Website.

Scott Allen: [00:00:00] Okay, everybody, welcome to the podcast. Thank you so much for checking in wherever you are in the world today. Fun conversation on the docket. I have two friends here, Gaurav Khanna and Dan Jenkins, and we're talking tech this afternoon. Some different nooks and crannies around tech.

I was just telling these two gentlemen that we've had a number of conversations on the podcast about technology: artificial intelligence, sensor technology, some of the possibilities of how extended reality can impact some of the work that we're doing. And these two have started a podcast, Leaders in the Loop.

And this is a play on the human in the loop, at least that's what I'm assuming; I don't wanna steal their thunder. But Leaders in the Loop. And so they've been on this adventure, I think it started in mid-2025, and they've been studying this intersection of AI, these technologies enabling [00:01:00] disruption, and leadership.

And so I'm really excited for this conversation today. I want the two of you to spend just a couple moments introducing yourselves, and then let's jump in. So Dan, what do you think? 

Dan Jenkins: Absolutely. 

Scott Allen: Who are you? Who are you, Dan?

Dan Jenkins: Who am I? What am I? What's my purpose in life? So I'm a professor of leadership and organizational studies at the University of Southern Maine.

In beautiful Portland, Maine. And I have been in that role for going on, what is this now, about 13, 13 and a half years. Before that I was at the University of South Florida, working on my PhD and teaching in a leadership minor.

And I geek out on all things related to how we facilitate leadership learning. And if artificial intelligence has a hand in that, and there's a way to develop others' capacity more handily through some of those emerging technologies, let's do it. Let's find out what are some ways to augment some of that with some of these new tools [00:02:00] that we have at our disposal.

Scott Allen: Love it. And G, tell me about you, sir.

 Gaurav Khanna: Thank you.

I go by G. I am an AI executive at Cisco Systems, based out here in the Bay Area, San Jose, California, to be specific. So my job is to consult with our customers about their AI journeys and their AI deployments. Normally I get involved very early in the conversation, ideating use cases, all of that.

I also have a part-time gig as an AI instructor at Stanford Continuing Studies. Very grateful to be teaching. So I'm learning a lot from Dan on actually how to teach, because I'm a novice there. And yeah, I wake up in the morning talking about AI, I go to bed at night talking about AI, so it's all AI, all the time for me.

Scott Allen: Oh, that's awesome. Yeah. And you latched onto someone good here if you wanna get good at teaching. Oh yeah. 

 Gaurav Khanna: Of course. 

Scott Allen: And it sounds like, Dan, you've latched onto someone good if you wanna learn about AI. So the two of you are a really interesting duo, bringing two [00:03:00] very unique perspectives to the conversation.

A little bit on the origin story of Leaders in the Loop. What do you think? 

Dan Jenkins: Yeah, it's one of these cold call types of things. And I love the LinkedIn post that G put up when we were first releasing our first episode, trying to get some social media buzz around things. I'm glad I responded to that cold call type of thing.

I was trying to find somebody else on this globe, on this planet, who was also doing something related to, or teaching a course related to, leadership and AI, that intersection there. Because while curious, I didn't have the opportunity to teach something like that at my own institution, but I did have an opportunity as an adjunct at Trevecca Nazarene University to teach a course in their EdD program in leadership, which is titled Introduction to AI and Implications for Leadership.

So the reason I was reaching out with this cold call, if you will, or really a direct message on LinkedIn, was that for my sabbatical in the fall of [00:04:00] 2024, I was working with the International Leadership Association to put together a first-of-its-kind summit on AI and leadership. And because of my own interests, I was able to steer the topical scope of that summit toward leadership education, training, and development. We started planning this with a team of volunteers in about August or September of 2024, and I knew we wanted to have a panel that looked at courses folks were teaching that focus on that intersection of leadership and AI.

And going through Google, looking for folks, Gaurav Khanna's name comes up, working at Stanford Continuing Studies, working with executives, teaching this course on AI and leadership. I'm like, I wonder if he would respond to me. I sure hope he does. He did. We had a couple of Zoom conversations, and we ended up actually enjoying each other's company enough to keep talking.

We ended up co-authoring an article together in the Journal of Leadership Studies early in 2025. And I had been ideating this idea of a podcast around this intersection of AI and leadership, because I was [00:05:00] finding just so much opportunity in what it could do in the pedagogical space, that I said, hey G, I can't think of anybody else I'd rather do this with. Any interest? And maybe I'll let him take it from there.

Gaurav Khanna: Scott, in my lifetime I've had the privilege to enjoy three subjects that have no end to exploration. One is philosophy. There's literally no end. The other is mathematics.

There's literally no end to mathematics. And the third, I think, is AI. And you heard what Dan said: it was a panel discussion, then it was a paper. And every time we were talking, we were like, wow, there's so much to this. And the real privilege here has been learning a couple of things.

One is there's AI for leadership, meaning how can AI help you be a better leader? 

Scott Allen: Yep. 

Gaurav Khanna: And there's leadership for AI, meaning how can people in society, and these will not be data scientists, because AI is too important to be left to the AI practitioners, right? There are a lot of people in society who are gonna influence how AI is [00:06:00] built, implemented, done in a responsible way.

And that circle, AI for leadership and leadership for AI, that's a loop within a loop, if you will, of the theme of the conversation. So exploring those with Dan is what actually led to the panel, to the paper, to the podcast, because we kept saying, wow, there's so much here to unpack.

And, we're just getting warmed up, right? Yeah. So it's been a great journey. 

Scott Allen: And you've stumbled on a fourth topic, because I don't know that leadership has an end either. It's been, at least for me, 20-plus years of... I just have jet fuel, and I'm constantly going.

Even in some of the conversations I've been involved in today, recording some podcasts, it's just endless. I'm constantly surprised at the kind of infinite nature of this topic, for me at least. But I love that framing of how you set that up: AI for leadership, but then leadership for AI.

I was watching a podcast episode with Eric Schmidt last [00:07:00] night, with Steven Bartlett. He does The Diary of a CEO podcast, and I've been watching a lot of Steven Bartlett lately. He's had Tristan Harris on, and he's been having a number of different folks who are doing the circuit and talking about where all of this is headed.

And of course, as the two of you know better than anyone, the ethics and the policy always seem to be lagging well behind the funding and the speed and the pace of where all of this is headed. That's a really interesting conversation in and of itself. So as you've engaged in these conversations... In the context of this podcast, I can go to five or six conversations, moments where I was just like, oh wow, shit, I hadn't thought of that. That is incredible.

Have you had any of those aha moments on the podcast yet? Maybe a couple things that have just stuck with you, that caused you to stay in that place of, wow, that was a really interesting insight, I hadn't [00:08:00] thought of it that way before. Dan, have you had anything come across you? And then G, we'll go to you.

Dan Jenkins: Gosh, where to begin with these folks. I think one of the interesting things that came out of the conversation we had with G's colleague Mandy Steinhardt, who is a futurist, is that she brought this really interesting perspective to the conversation.

We were talking about what types of things you would want to delegate to AI, to generative AI, to bots, to an algorithm, right? And there's this interesting research coming out. There was some from HR Executive, this Gartner study, showing that like 87% of surveyed employees viewed algorithmic feedback as fairer than human feedback.

So you have that on one side. And then there was this other study that was like a counterpoint in some senses, and I can't remember the exact research study, but what they had found was that the one thing folks did not want to outsource to generative AI was leadership and [00:09:00] management, because of the relational approach that humans will take and understand, rather than being technical or diagnostic in their approach.

And there was some adjacent learning, and I think we may have talked about this on one of our episodes, G, where I was analyzing some student feedback from a pilot learning activity that I did in one of my classes. They went through a peer coaching project with some of the students in their classes.

These are master's and PhD students. And I love doing a learning activity where it's "solve my problem"; this is basically the general version of that. But in this case, they bring a leadership failure or an issue they've had to a small group of students. We put them in these little groups of three or four, and they go through a peer coaching experience. And I said, here's what I want you to do: after you go through the peer coaching experience, I want you to take that same problem to ChatGPT, and here's a couple of prompts you can use. Then I want you to reflect on what happened. The overwhelming feedback theme from these student papers was that the chatbot was [00:10:00] very diagnostic, very technical in how it assessed the problem.

Here are some solutions you might have, here's what I'm seeing, but not really having any sense of empathy. Now, could you say, hey ChatGPT, can you be more empathetic? You could, but it's not naturally empathetic. It doesn't lead with empathy. And we want our managers and our leaders to lead with empathy, to have that emotional intelligence.

Can you algorithmically design chatbots to do that? I think there's opportunity there, but I don't think we're there yet. And Mandy really got me thinking about that, so much so that I think I had a different insight than I would've had when I was analyzing those student papers when they came in at the end of the semester, in the fall of '25.

Scott Allen: Interesting. Interesting. G, how about you?

Gaurav Khanna: Yeah, that was a really good insight. There was a lot; our guests are teaching us quite a bit, which is what I would hope happens on a podcast. I'm sure that's happened with you. Maybe let me share a broad insight and then drill down into the details.

It's very clear that, look, we're beyond [00:11:00] the point of innocent wonder about AI, right? It's, oh my God, look at the poem and the limerick I got it to write. All that was good, but really it's getting down to the nuts and bolts, because of how much influence AI is having in our lives.

And the clear message from the guests we've had on, and we hope to keep having guests like this on, is that AI is a contact sport, right? Yes, of course, some of us who are teaching this get into the theory, but really, especially as it relates to leadership, you have to bring things to the table in a way that people can consume, and make it very actionable and real.

So, the things Dan talked about that Mandy had brought up. We had a few other guests on who were talking about how they're actually showing up and implementing: whether it's, here are some clever prompts that I use, or here's how I assess an activity. Dan, you just mentioned, how do you get or push a language model to do this or that?

These are all lived examples, because there actually isn't a closed-form formula. Speaking of mathematics, right? At some point, mathematics is a closed-form formula that can describe a lot of behavior, and even if it's not [00:12:00] deterministic, if it's statistical, that still lends itself to analysis, because you have models that determine how data behaves.

With language models, it can be very non-deterministic. And so you are really in the realm of best practices, not formulas, which is why I say it's a contact sport. Show up, talk to people, show examples, share examples. That's how AI is gonna get done for leadership, in my opinion: a lot of lived examples of people teaching best practice.
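
As a technical aside on G's point that language models are non-deterministic rather than closed-form: at each step a model turns scores over candidate next words into a probability distribution and samples from it, so the same prompt can yield different outputs. A minimal sketch; the vocabulary and scores below are invented for illustration and do not come from any real model:

```python
import math
import random

def softmax_with_temperature(scores, temperature=1.0):
    """Convert raw scores into a probability distribution.
    Higher temperature flattens the distribution (more varied output);
    lower temperature sharpens it (more predictable output)."""
    scaled = [s / temperature for s in scores]
    m = max(scaled)                      # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Invented next-token candidates after a prompt like "The leader was ..."
vocab = ["decisive", "empathetic", "mean", "silent"]
scores = [2.0, 1.5, 0.2, 0.1]

probs = softmax_with_temperature(scores, temperature=1.0)
assert abs(sum(probs) - 1.0) < 1e-9

# Sampling, not a formula: different runs can pick different words.
rng = random.Random(0)
token = rng.choices(vocab, weights=probs)[0]
print(token, probs)
```

This is why "best practices, not formulas" fits: even a fully specified model gives you a distribution over answers, and the temperature knob trades predictability against variety.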

Scott Allen: Wow. Interesting. What else? Anything else from either one of you that stands out, based on the conversations you've had?

Gaurav Khanna: Yeah, we touched on this a little bit. Dan just finished saying how we want leaders to lead with empathy. I've actually been thinking a lot about this problem, and it came up in a couple of podcasts with guests.

How do you get AI to very delicately go into a gray zone where it can be mean and do simulations? And I'm not sure we've cracked that nut yet, because a lot of language models are built not to be mean to you. [00:13:00] So, and I actually had to testify in front of the Alaska House Judiciary Subcommittee, believe it or not, in February.

Okay. So just a pro tip: February is not the best time to travel up to Anchorage, excuse me, Juneau, or Anchorage for that matter. But the chair of the committee, who was very savvy, asked me, he goes, I wanted a chatbot to be a mean coach, to tell me to get off my duff and work out. But it didn't do that. Why not?

But it didn't do that. Why not? And I said, because in your mind the word mean means something innocuous. But to a chat bot, it may not mean that way. But the reason why I got thinking about this problem, Scott, is because one of the things from a leadership point of view you have to teach people is how to handle difficult conversations or difficult in my pace, difficult cus 'cause customers don't always come from a company with a culture like ours where you are polite and nice, right?

They can be very direct, and some companies especially encourage being very direct, which can get into a red zone, a mean area. So, dealing with that [00:14:00] meanness versus being able to simulate it. I would love to be able to simulate that, but in my experience in leadership, people can't get there in a good way.

In simulations, when we get together and do these leadership exercises, people don't go there. They're still too nice.

Dan Jenkins: Yes. 

Gaurav Khanna: So I'm wondering, can you get a chatbot to actually simulate very difficult customer situations and conversations in a way that is meaningful training for both individual contributors and leaders?

And that's hard to do, because the models are built not to go there. And that, I think, is not only a great insight but an interesting problem to be solved.

Scott Allen: It really is. It really is. Because to your point, with humans you're gonna have the spectrum. You're gonna have some clients who are gonna be reasonable, and polite, and kind.

And then you're gonna have individuals who are gonna be incredibly difficult, who want to push, and at times you need to train when to hang up or when to end the [00:15:00] conversation politely. And if we can't get there... I guess I didn't realize that I couldn't say to ChatGPT, play the role of a really mean individual that's swearing at me. Can I not do that? I didn't know this.

Gaurav Khanna: You may not get where you need to get. It depends. It's a really interesting problem. Scott, and Dan, I'd love your input on this, because we talked about this too: ChatGPT may not get to where you need to go. It's funny how society has changed. I'm warned now when I go to certain countries: "People in this country tend to be direct." It's okay, dude, I can handle it, right? But other people may not be. I've had the privilege of living in other countries and growing up outside the United States, so I'm aware of this. But I think you can't necessarily get a chatbot to be mean, like the foundational ones,

'cause they have a lot of guardrails baked in. Now, what can happen is, and this is a really important problem with language models, when you fine-tune language models, their built-in guardrails break. And there's a whole industry springing up to protect enterprises [00:16:00] from models that go off the rails and tell you how to make a bomb or how to murder somebody.

So the problem is you need to strike that gray area. You don't want the models to go off the rails, because what you don't want is to simulate a criminal or a really nefarious person. You want that person who is mean, but still a human. Yeah. And that's not what chatbots can do easily.

So that's something that I think... The reason I bring it up, Scott, is because my experience with humans is that humans are not good simulators for bad people; they're just inherently kind people. It's just hard for good people to simulate somebody who's really bad. And I think we chatted about this.

I was wondering your thoughts. 

Dan Jenkins: Yeah, we did. Two things come to mind. One is, I was reading a post by, I wanna say, Ethan Mollick on LinkedIn. And he's been somebody that I know we've learned a ton from his insights.

And he was talking about this idea that it was an assumption, and they actually did some testing to see... [00:17:00] I thought this was a hundred percent accurate until I saw this study they put out from the Wharton School. I thought that if you say, hey ChatGPT, play the role of an Italian chef, now give me a recipe for lasagna, that I'm going to get a much better recipe than if I just say, hey ChatGPT, give me a recipe for lasagna. No statistically significant differences. And they ran this with 50 different roles: play the role of a mechanic, play the role of a psychologist, play the role of a, fill in the blank.

There were no statistically significant differences in the output accuracy. Where there were differences was in the tone, and I'm not gonna use the Italian chef example, because I'm not gonna do a good Italian accent. But what you get when you ask it to do those types of things and play those types of roles is that it takes on a persona.

But it's not more accurate or less accurate. It's almost exactly the same level of accuracy. It's just playing a character, and hyperbolizing that [00:18:00] character in a way that human experience might suggest that character would act. Because in the end, it's just a large language model. It's just trained on the data it's trained on. It's saying, oh, what do I know from my training data about what an Italian chef should be, or what a Toyota expert mechanic might be? So there's that.

The other one is, with these personas, I was just chatting with this gentleman, Greg Allen, who runs this program called LeadersLab.io, which G and I may have on the podcast in the future.

It's similar to, and I know you've had him on your podcast, Scott, Gary Lloyd with his Leadership Skills Lab and some of the coaching simulations and role plays. Where there was a difference with Leaders Lab was this ability to choose from, I wanna say, 15 or 20 personas. And some of them were like defensive, obnoxious, passive, empath, tech bro. Those are the ones I have in my notes that I can recall.

And so it's, okay, you're gonna have very different conversations depending on the persona that you [00:19:00] choose, based on what you need to practice. And you were also able to simulate a conversation you might have in the future and create a custom role-play experience.

But again, to G's point, and yours too, Scott, those limitations still exist, because the chatbot is only going to be able to simulate these personas as well as the models it was trained on allow, and what it knows about language, right? And whatever the next best limerick or sonnet is that it's trying to construct.

Right. 
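
The persona picker Dan describes maps naturally onto a chat model's system prompt. A minimal sketch, assuming an OpenAI-style message format; the persona wordings and the `build_role_play` helper are invented for illustration and are not LeadersLab.io's actual implementation:

```python
# Hypothetical persona library, loosely modeled on the ones Dan recalls.
PERSONAS = {
    "defensive": "You play a defensive direct report who deflects all feedback.",
    "obnoxious": "You play an obnoxious customer who interrupts and exaggerates grievances.",
    "passive": "You play a passive teammate who agrees outwardly but commits to nothing.",
    "empath": "You play a highly empathetic peer who mirrors feelings before content.",
    "tech_bro": "You play a jargon-heavy tech bro who dismisses non-technical concerns.",
}

def build_role_play(persona: str, scenario: str, opening_line: str) -> list:
    """Assemble an OpenAI-style message list for a custom role-play.
    The system prompt carries the persona plus boundary instructions:
    stay difficult, but remain a plausible human (G's 'gray zone')."""
    if persona not in PERSONAS:
        raise ValueError(f"unknown persona: {persona}")
    system = (
        f"{PERSONAS[persona]} Scenario: {scenario} "
        "Stay in character and be genuinely difficult, but never abusive, "
        "and never drift into criminal or unsafe territory."
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": opening_line},
    ]

messages = build_role_play(
    "obnoxious",
    "A customer escalation about a delayed enterprise rollout.",
    "Thanks for joining. I understand the rollout timeline is a concern?",
)
```

A message list like this would then be sent to whatever chat completion API you use. As G notes, though, how far the model will actually go into the "mean" gray zone still depends on its built-in guardrails, which a system prompt alone cannot remove.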

Gaurav Khanna: And all that goes to show, and this is something we like to highlight on the show, there's absolutely a need for humans in the loop. All this talk about AI is gonna automate this, automate that... we keep coming up with ways that humans are more clever, more empathetic, more attuned to what other humans need than the bots are.

And like I said, we're past the point of innocent wonder. We're three years past the release of ChatGPT. I understand things are moving so fast, but I don't subscribe to this whole idea that humans are gonna just be left in this [00:20:00] void, just being useless in this world of automation. That's not what we see. That's not what I see. I see an increasing need. We almost dismiss just how powerful our brains are, right? This is not something bots can do.

And then on the technical side, I would say this hasn't come up so much on the podcast, but there's increasingly louder chatter that this current generation of language models, built on the transformer architecture, which is pervasive, will at some point peter out. They'll just get better and better at language, but not necessarily better and better at being human.

Because of their architectural limitations. So the question is, what's next? And we'll know in a few years. But it's a really interesting pivot point we're at now.

Scott Allen: And you get into really interesting conversations around AGI, and then ASI, the superintelligence. I was watching a podcast with Steven Bartlett and Mo Gawdat, and he was talking about an AI superintelligence leader that [00:21:00] would save us from ourselves. So just even that conversation... For listeners, I'll put a link to this conversation in the show notes, because it's the first time I've heard someone say, no, it's gonna be better when we have a superintelligence leading us.

How do you all even think about a statement like that? Is it clickbait and just fantasy in your minds?

Gaurav Khanna: I do not agree. And I say this... it's funny, 'cause it comes up. We're sitting with executives, we're supposed to be talking about all the routers and switches and software that's gonna make their systems hum, and then this comes up: hey, are we gonna have this AI overlord? I'm like, oh lord, how much time do you have, sir? Because we've got your network configuration to get through.

But here's what I'll say, Scott. If somebody has that technology right now, they're not revealing it, because it's obviously so important. It would be so competitively [00:22:00] advantageous for them if they can build it; they're not telling us. I thought it was very instructive that Yann LeCun left Meta. He was their chief AI scientist for a long time. That made big waves. He's been one of the godfathers of AI for years, and he has said that these architectures cannot mimic humans.

So let's table, just for a second, what you said about that superintelligent leader. What would it take to just get to some kind of superintelligence that mimics humans? The path there, I believe, is not one single entity. If you look at the best and most intelligent thing we've had in the history of evolution, it's our brains.

And our brains work in a way where there are specialized domains talking to each other. Most language models do one pass: you give it a sentence, and it outputs the next word. But our brains, if you were to map what's going on in our brains right now, multiple regions of the brain are going back and forth before a single word comes out. And there are filters along the way: should I say this, should I not say this? Yes, some people don't have filters, but let's say most people do.

Dan Jenkins: Yep, yep.

Gaurav Khanna: So the brain works in a way that's [00:23:00] very cooperative amongst the different regions. And what I think will lead, one of the things that will lead to superintelligence, is if we build a system that is very cooperative in how it works.

It's not a single entity, but perhaps many entities coordinating, like our brain. That's number one. Number two, we need an architecture that absorbs more of the world than just language. Dan mentioned the chef: it can only mimic a chef because it's read about what an Italian chef is supposed to be like. That's all it knows. But an Italian chef sees and smells and experiences the world. So what does that architecture look like? How do you capture that in a form a model can learn? And that's what Yann LeCun is saying: that doesn't exist with the systems today. Now, whether he's got an answer to that, we'll find out in a couple of years, 'cause he started his own gig, or we'll see who else has it.

But that's really the next level of effort. So I'm waiting to see, Scott, how good that becomes before I make any judgment on whether that thing can be a leader, 'cause we don't even have that thing yet. So I think people are [00:24:00] extrapolating two or three architectures ahead. We don't even know what that next architecture is.

That's why I'm a little bit more human-centric than maybe some of my friends and colleagues: because we're not there yet. And I think we just keep discounting all the things that... let's go back to leadership. All the wonderful things you teach about what a wonderful leader is, right? Nothing exists today that does that. So I'm very skeptical, Scott. I dunno, Dan?
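
G's contrast between the brain's back-and-forth deliberation and a model's single pass refers to autoregressive decoding: the model sees the sequence so far, emits exactly one next token, appends it, and repeats. A toy sketch, with an invented bigram lookup table standing in for a real transformer:

```python
# Invented bigram "model": maps the last token to a single likely next token.
BIGRAMS = {
    "<start>": "leadership",
    "leadership": "is",
    "is": "a",
    "a": "contact",
    "contact": "sport",
}

def generate(max_tokens: int = 10) -> list:
    """Autoregressive loop: each step looks only at the sequence so far and
    emits one next token. There is no inner deliberation between 'regions';
    that single forward pass per token is the architectural limit G points at."""
    tokens = ["<start>"]
    for _ in range(max_tokens):
        nxt = BIGRAMS.get(tokens[-1])
        if nxt is None:  # no known continuation: stop generating
            break
        tokens.append(nxt)
    return tokens[1:]  # drop the start marker

print(" ".join(generate()))  # prints "leadership is a contact sport"
```

Real models replace the lookup table with a learned probability distribution conditioned on the whole sequence, but the control flow is the same: one pass per token, with no separate regions negotiating before a word comes out.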

Scott Allen: I'm reminded of the scene in Good Will Hunting where they're sitting at the pond, and he looks at Will and he says, you can probably tell me everything about Leonardo da Vinci, but you can't tell me what the Sistine Chapel smells like. You can tell me about love, you could probably even quote me a sonnet, but you've never held your dying wife's hand while the people in the room look at you and understand that visiting hours don't [00:25:00] really apply to you, because this is the situation you're in.

You can't tell me how that feels. So you've opened my eyes to a really interesting nuance to this conversation. That's an important one, for sure. Dan?

Dan Jenkins: Yeah, it's that human element. It's funny that you bring up Mo Gawdat. You had actually suggested, I think you texted me a couple of weeks ago and said, hey, you gotta check out this YouTube interview that I saw about the future of AI. And it's now in my Audible library, and it's the next thing I'm reading, ironically enough, after I finish the book I've been trying to get to for a while, which is Nick Bostrom's Superintelligence. Oh, yeah.

Yeah. 

Dan Jenkins: Paths, Dangers, Strategies. That's a difficult...

Scott Allen: One.

Dan Jenkins: Yeah. Oh yeah. It's a difficult one.

And he doesn't read it; there's a narrator who reads it. I consume most books at one and a quarter speed on Audible, because I'm just reading so much in my work and my day-to-day. But the connections between watching some of that podcast interview with Mo and reading through Superintelligence...

Yeah. It isn't ever going to replace humans. Even what Bostrom is arguing is that we don't [00:26:00] really have a superintelligent being like the one we're imagining, like a cyborg or this ultimate, ubiquitous something that we haven't yet imagined.

It's more thinking about, can we super-connect groups of people together so that they can always be communicating with each other and solve problems? And he uses the analogy of building a space shuttle, right? You can't do that alone. You need multiple engineers from different disciplines building all the different parts.

And is that superintelligence? Is that what we're talking about? How do we streamline that? But then to go back to Mo Gawdat's idea in Scary Smart. His book's called Scary Smart: The Future of Artificial Intelligence and How You Can Save Our World.

Yeah. Which I've got up here. He was talking about this idea of how it's going to change our value systems if we let it, or how it might redefine our value systems. Depending on which way we go with this and how it's adopted, [00:27:00] there's a reality where AI may redefine freedom, accountability, connection, economics, reality, innovation, and power.

Other than economics and reality, maybe, I think we talk about all the rest of those things in leadership development, education, and training. We definitely talk about freedom and inclusivity and collaboration, and being able to be open with your thoughts.

Accountability, a hundred percent. What's more important than human connection? Innovation, power, and influence is Leadership 101 when we're talking about relationship dynamics. Where does AI fit in that process? I just got done co-authoring a chapter with two folks where we were rebooting an article we wrote 10 years ago about being at the virtual table, which looked at moving from in-person meetings to Zoom meetings. That was very profound at the time, 10 or 12 years ago, when we were writing the article; it's an afterthought today, because here we are on Zoom recording a podcast. But when you [00:28:00] think about today, now there's a chatbot in our Zoom meeting. It's an avatar. If I had a dime for every time somebody's meeting assistant was in a meeting with me. Or yesterday, I got to a meeting, and

the meeting assistant was there before the person was.

Scott Allen: Yes. 

Yes. 

Dan Jenkins: And I'm like, is such-and-such ever going to show up? So I think about how connection and power and influence, all these dynamics, are changing because of what we choose to outsource. It goes back to Brian Christian's work on values alignment and the alignment problem.

How do we align human values with machine learning and generative AI? What are our values? How much do we care about those values? And how much are we willing to outsource, or allow AI to simulate to the best of its ability? You talk about whether it can simulate emotion.

Can it simulate being mean if no one's ever been mean to it? It's never experienced that, and people learn from those experiences. The AI is not learning from experiences. In machine learning, [00:29:00] there is reinforcement learning, and there's definitely some overlap there, but we're talking about something different.

And I wonder, and I know G knows more about this than I do and is learning more as he goes: quantum computing and that whole area, is that going to be able to do this? Can we build mathematical and algorithmic structures that simulate this stuff? I have no idea.

That's not my wheelhouse.

Scott Allen: I don't know. I think it's all just a wave function. I don't know. G, do you think it's all just a wave function? Is everything just a wave function, including life? It's all a wave function?

 Gaurav Khanna: Good question. I don't know. Can I get back to you?

Because now I'm thinking about whether we're all just in a simulation. But that aside, something you both said is really interesting. Scott, you got me thinking about where this is all headed. There's actually something I think we should worry about more in the immediate term than whether we'll have a benevolent or nefarious supreme AI leader, and it's [00:30:00] actually something I see.

So maybe let me articulate it, because I think this will resonate with the audience. There's increasing talk about who AI is impacting, and how.

Dan Jenkins: Yeah. 

 Gaurav Khanna: And as usual with any technology, it's not evenly spread out. So there's this train of thought that where AI is really going to give you a 10x or 20x return on your own abilities is for people who are high-agency people.

Now, that's a bit of a nebulous definition, but essentially what I'm reading, aggregating different conversations and threads, is that these are people who are already very studious, they're curious, they want to learn, they work really hard, they just crank. For these people, AI has the potential to be like a 10x.

Scott Allen: Yep. 

 Gaurav Khanna: So those are the people who are using AI that way. A lot of people on average, and I'm not saying these are good people or bad people, are more like: help me plan my vacation, help me do this. Or: here's a 50-page document, [00:31:00] I don't have time to read it. Or a teacher: I need quizzes generated, help me generate quiz questions. That's not bad, but what you are learning from these is very different. The reason I'm very interested in this is that there was literally an article in The Atlantic, I think in 2007, asking, "Is Google Making Us Stupid?"

And it made a similar argument: that now, instead of going to the library, going through the Dewey Decimal System, finding a book, and actually reading it, we're just looking at the first few links that come up and going with that. And I don't think Google made everybody more dumb.

I think some people became overly reliant. Dan, you mentioned: how much of this do you outsource? I think some people completely outsourced to Google their discretion about which links are valuable and which are not. Other people were more discerning, and they were able to get the right stuff and go with it.

And I think the same thing's going to happen with AI. So before we have a supreme leader, what I'm worried about is that divide: the people whose lives are really being accelerated, [00:32:00] whose knowledge is being accelerated, whose agency for what they can do is being so accelerated by AI that a group of people sort of leaps ahead and everybody else follows along behind.

So this is a new incarnation of the digital divide. 

Scott Allen: Yep. 

 Gaurav Khanna: Amongst the population. And that, to me, is probably the more immediate thing to think about. And then again, back to leadership for AI and AI for leadership: is that something we can, as a society, start to train people in more? I think Dan would agree: we don't have a large curriculum around this yet.

Maybe it needs to be developed. Even just this: how do you really use AI to impact your life and further it? That, I think, is something that needs to be taught. I don't think you pick it up just by cranking on GPT all day.

Scott Allen: Yeah. So interesting. So interesting. You all, I really appreciate you being with us today.

You're doing a great service to the community by introducing these conversations. For [00:33:00] listeners, we have all kinds of links in the show notes for you to explore. And as you were speaking, G, I was thinking of that scene in the film Hidden Figures.

Now I'm going to go to another movie. The computers were humans back in the day: mostly women at NASA, computing. And then of course the IBM mainframe arrived, and the individuals who had the agency to go learn and master it stayed relevant. So it's interesting: how will this divide occur, and how will individuals choose to stay relevant?

I had a friend of mine who owns a marketing agency in town, and he said, we've just invested about the equivalent of what would've been two junior [00:34:00] marketing associates in technology; we chose to make the investment in the technology. So it's just interesting. Are you a master of the technology, and can you use these tools? That's what's going to keep you relevant in this new space.

But to your point, G, that divide can widen quickly. Very quickly. And you know what, I'm just very thankful for the two of you, thankful for the conversation today. I would like to end with one question: what's the practical wisdom from the two of you? As you think about the conversation we've just had, what do you want listeners to leave with? G, let's start with you, and then we'll go to you, Dan.

 Gaurav Khanna: Stay engaged, stay learning, stay curious. That's how, to your point, Scott, you not only keep up but thrive. And the best advice I can give: get proximate with it, and it will net benefit you.

Scott Allen: Awesome, awesome. Dan. 

Dan Jenkins: Yeah. We brought up Ethan Mollick earlier, and he's known for his book Co-Intelligence and the idea of that "co" meaning partner, right?

How can you [00:35:00] partner with AI? Thinking about the conversation that we had: what is stopping you from experimenting? Find opportunities to do things better, to do things differently. Challenge yourself, and have fun doing it. There are a lot of ways AI can enhance the human experience, but as we talked about, there are also some things that are irreplaceable to the human experience, things AI may never be able to replicate.

Scott Allen: Yeah, and I also have in my mind right now, and I don't know that this is the correct polarity, so for our polarity experts, I'm sorry, but maybe this will spark your own thinking. If we have humanity on one end and technology on the other end, what's that balance, and what is that space? I'm sure there's going to be some overshooting on either end at times as we learn. But I think that's probably always been the case. I don't know, maybe in the caves some caveman looked at the other one and said, you didn't just tackle that lion [00:36:00] yourself and beat it; you used a stick, that's cheating.

Or, G, in your world, mathematics: well, you used a calculator; I did it all by hand. Some of the old architects I work with say, I learned on a T-square; these kids don't know, they miss the fundamentals when they're using Revit. Yeah.

 Gaurav Khanna: Yeah.

But Scott, doesn't that tell you some of these problems have been around before, in a different incarnation? So maybe the lesson here is always be a student of history. There's another topic. Let's end where we started. That has no end whatsoever.

Scott Allen: Exactly. We hope. We hope 

 Gaurav Khanna: That's right. We hope.

Scott Allen: Take care, you all.

Thank you so much. Be well. 

 Gaurav Khanna: Be well. Thank you. 

Dan Jenkins: Thanks, Scott.