Practical Wisdom for Leaders with Scott J. Allen, Ph.D.
Practical Wisdom for Leaders is your fast-paced, forward-thinking guide to leadership. Join host Scott J. Allen as he engages with remarkable guests—from former world leaders and nonprofit innovators to renowned professors, CEOs, and authors. Each episode offers timely insights and actionable tips designed to help you lead with impact, grow personally and professionally, and make a meaningful difference in your corner of the world.
Humans in the Loop with Dr. Karl Kuhnert
Karl W. Kuhnert, Ph.D. is Professor of the Practice of Organization and Management in the Goizueta Business School at Emory University. Karl’s research focuses on how leaders cognitively, interpersonally, and emotionally develop over the life course. He has published over 80 peer-reviewed articles and 13 book chapters, made over 100 conference presentations, and served on numerous editorial and review panels. He teaches industrial and organizational psychology, leadership, organizational change, and professional ethics, and has won numerous awards for teaching and research. Karl also regularly teaches leadership development in the executive education programs at Emory, UCLA, HEC Paris, and UGA. He has served as a consultant to many large and small corporations, nonprofit, and government organizations, including United Parcel Service, the U.S. Department of the Treasury, Siemens, the Jet Propulsion Laboratory, and Cox Automotive.
A Few Quotes From This Episode
- “Every time I have done this, it has freed up experts to do the work they actually want to do.”
- “Tacit knowledge is lived wisdom—it’s what makes an expert an expert.”
- “AI is a tool, it is not truth.”
- “We need to ask how judgments are made, not just whether AI can render them.”
Resources Mentioned in This Episode
- Book: Personal Knowledge by Michael Polanyi
- Book: The MAP: A Practical Guide to Leadership Development by Keith Eigel & Karl Kuhnert
- Article: Training Innovative AI to Provide Expert Guidance on Prescription Medications by Kuhnert
- Article: Teaching Leadership: Where Theory Bridges Practice by Kuhnert
About The International Leadership Association (ILA)
- The ILA was created in 1999 to bring together professionals interested in studying, practicing, and teaching leadership.
About Scott J. Allen
- Website
- Weekly Newsletter: Practical Wisdom for Leaders
My Approach to Hosting
- The views of my guests do not constitute "truth." Nor do they reflect my personal views in some instances. However, they are views to consider, and I hope they help you clarify your perspective.
♻️ Please share with others and follow/subscribe to the podcast!
⭐️ Please leave a review on Apple, Spotify, or your platform of choice.
➡️ Follow me on LinkedIn for more on leadership, communication, and tech.
📜 Subscribe to my weekly newsletter featuring four hand-picked articles.
🌎 You can learn more about my work on my Website.
Scott Allen:Okay, everybody. Welcome to Practical Wisdom for Leaders. Thank you so much for checking in wherever you are in the world. I have Dr. Karl Kuhnert. He is Professor of the Practice of Organization and Management at Emory University. We've had another episode with him, it might be in the 90s, so check that out. Great conversation. One thing I love about Karl is that I've known his work for years, and if there's one word I would use to describe him, curiosity is certainly one of them. He has stayed curious over the course of his career. And Karl, you've had a number of adventures you've been engaged in in recent times, and that's where we're really going to take the conversation today. But I gave you very little of an introduction. What should listeners know about you? What do you want to add to your title?
Karl Kuhnert:Other than the fact that I've been at this for a long time. And the work I'm doing now, I feel, is very exciting and has a lot of implications for how we think about AI and how we're going to use AI in the future.

Scott Allen:Yes. Oh, yeah. I think you mentioned before we started recording that your wife has asked you to retire three times.

Karl Kuhnert:I have retired three times. I started this retirement process, actually, I retired in, I think, 2016 or something like that. And every time, by the way, and this is for everybody out there, every time I want to retire, I find something new to keep me here. So: artificial intelligence, leadership, coaching.

Scott Allen:Let's talk a little bit about that. Maybe let's start with the kernel of the idea, and just take me through what you've been learning. I'm so excited.

Karl Kuhnert:Okay, let me start somewhere at the beginning here, which was really about eight or nine years ago. I got this call from a friend of mine, Mark Keith. He was a financial person, and we enjoy each other's company. He had left where I was at the time, at Georgia, and moved away. And then about a year or so later, he calls me and says, Karl, I've got this software from a company called Merlynn. The guy who actually created the software, his name is Karl Wocke, W-O-C-K-E. And I should warn you that if I say, Scott, that Karl's a genius, I'm referring to him, not to myself in the third person. But anyhow, Mark showed me this, and the idea was very simple. He says, you know what we can do with AI? We can actually point it at a person. Not at a data set, but at a person. And I went, okay, that's interesting. And he says, what we want to do, and what we can do, is duplicate, or digitize, a decision that an expert makes. And then we can scale that decision to the benefit of others. I said, Mark, okay, I have to see how this works. I had these ideas from graduate school about how decision making is done, and of course this blew away everything I had known. I was trying to put it all together: where are the statistics? What kind of models are we using here? I basically had to learn all over again what this does. But I started working with him and Karl Wocke, and I found myself in a number of different places where I was actually duplicating people's decisions. And it was absolutely fascinating. So let me give you just one example. I was working for Homeland Security. Look, I've done ethicists, I've done psychiatrists, I've had a whole lot of people give me their decisions. And by the way, as you said, I'm curious. Every time I sit down with someone, it's like having a birthday. Oh my gosh, you can't get better than this. So I'm working with Homeland Security and thinking, oh, this is really cool. Anyhow, I'm interviewing this person, and it's interesting because I talked to the supervisor and some people at the airport, it's actually at the airport, and this was an agricultural inspector. Her job is that people come in, and she has lots and lots of data that she looks at to decide who might be bringing back bad peanuts from Brazil, that kind of thing.

Scott Allen:Yeah, I always see those signs, but I've never paid attention. Do you have a live chicken? Nope. I'm good.

Karl Kuhnert:But anyhow, she's better at this than anybody.
And they said, you have to interview her. You have to see if we can digitize her. And just to cut the story short, I did. When I was done, I showed her her decision, basically, as an algorithm, and I said to her, is this you? Does this look like you? Does this sound like you? She comes over and hugs me. And she goes, oh my gosh. She says, do you know how much time I spend teaching people what I know, all these new people who come in and are new to this job? And she goes, you know what I'm gonna do now? I can do what I'm really paid to do: try to figure out what the bad guys are gonna do next. And I want to make this point. Every time I have done this, it has freed up these experts to do things that they wanted to do. And very quickly, and I'll share a paper if you want to email me, what's very cool is what I'm really tapping into. Think about an expert, and we can have fun with this just for a second, Scott. I could say to you, hey Scott, you told me recently that you decided to quit your job, stop your podcasting, and what you want to do is flip homes for a living. And you're like, I'm really interested in learning more about this. And say there's a guy down the street who's been flipping homes for 25 years. We can go on with the scenario, but I'll ask you: where would you rather get your information? Would you rather get it from AI? By the way, it's gonna give you a lot of information about flipping homes. But the other thought you have is, maybe I should talk to my neighbor about flipping homes. And the point I want to make here is: what would you gain from talking to your neighbor?

Scott Allen:Localized expertise.

Karl Kuhnert:Perfect. Localized wisdom, specific wisdom.

Scott Allen:So this person could tell me that Elyria is a great hot spot in Northeast Ohio, so to speak. But it would also be bounded in some way, right? It's not just some universal ChatGPT output; it's contained in some way.

Karl Kuhnert:Yeah, but that's the point. You're getting data from this guy that you can't get from AI. And we make this distinction, and this distinction has been around since at least the 1950s. The guy's name is Michael Polanyi, and he brought this idea up in his work called Personal Knowledge. He referred to this personal knowledge as tacit knowledge, and essentially it's lived wisdom and it's intuition, right? You're gonna get that from your neighbor; you're not gonna get it from AI. And so this really fascinated me, because when you're working with experts, you can ask: what makes an expert? It's not explicit knowledge. They have the explicit knowledge, but everybody has explicit knowledge. How do you get at the tacit knowledge? Because that is what makes an expert an expert.

Scott Allen:How do we get into that dimension, that practical wisdom, that lived wisdom? The when is this appropriate, where is this appropriate, how is this appropriate?

Karl Kuhnert:The way you've hit on this, Scott, is the way to think about it. What you get from AI, and again, this is awesome, is knowing what. From the expert and tacit knowledge, you get the knowing how.
Right now, and again, who knows what's gonna happen in six months or a year with AI, I don't think it can know what that tacit knowledge is, because it's not in the data. It's not in the data. And so fast forward. I'm here at Emory, at one of the great research hospitals in the world, and I'm working with this doctor. At the time, this is about a year ago, she was actually a student in my class, because I teach executive MBA students, which is just a joy, by the way. Just a joy. I have this simulation I create with AI for my class, and she comes up to me and says, Karl, we've got to talk. She says, I'm one of the experts at this time on GLP-1s. She is the person at this point on GLP-1s, and she's telling people all over the country, when you have a patient, do you give them a GLP-1 or not? So I sit down with her, and I'll just do this very quickly: it takes me about an hour and a half, and she tells me what her key variables are, as well as the measures. And it's fun, because I ask her tacit knowledge questions: what makes you special here? What do you think about when you're seeing a patient? And it's really interesting, right? She's giving me more than just explicit knowledge. She's giving me other things. And so we create this algorithm for her. And I should probably explain one more thing about how we do this. It's a little hard to explain, but this is the real genius within the software, which we call TOM, the tacit object modeler. What the software does is take all the variables and all the measures, and you just input them. And what TOM does, and this is really unbelievable, I have Dr. Collins actually sitting beside me, is create a scenario, one scenario, with all the variables. By the way, she had something like 22 variables, which is a lot for an expert, and these weren't yes or no questions; sometimes there were six different options for a measure. All of this is in there. So TOM gives us a scenario, and the question at the bottom is: do you give a GLP-1 to this patient or not? She looks at the data. No. Fine. Next scenario, a different scenario. And by the way, by the time we get to the bottom of this, it's 220 scenarios. It's just cranking out scenarios. And here's what's important: it's learning what you value. And at the end, and this is so cool, I actually have TOM test her. She had a hundred test scenarios, and she got a hundred. And it sounds like this would take weeks or months. No, she's looking at these things and making decisions in about 10 seconds each. So this whole process usually takes about three or four days, but not because the sitting takes that long. When I interview someone and get their explicit and tacit knowledge, I actually send them home just to think about it: is there anything you missed?
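For readers who want a concrete picture of the elicitation loop Karl describes (enumerate scenarios over the expert's variables, capture the expert's call on each, fit a model, then test the model against the expert on held-out cases), here is a minimal sketch in Python. It is not the actual TOM software, which is proprietary; the variables, the stand-in "expert" rule, and the choice of a decision tree are all invented for illustration.

```python
# Minimal sketch of an expert-elicitation loop in the spirit of what Karl
# describes: generate scenarios, have the expert label each one, fit an
# interpretable model, then check agreement on scenarios held out from fitting.
# The variables and the stand-in "expert" below are hypothetical.

import itertools
import random
from sklearn.tree import DecisionTreeClassifier

# Hypothetical decision variables and their options (a real expert might use
# ~22 variables, some with six or more options each).
VARIABLES = {
    "bmi_band": ["<27", "27-30", "30-35", ">35"],
    "a1c_band": ["<5.7", "5.7-6.4", ">=6.5"],
    "family_history_medullary_cancer": ["yes", "no"],
    "prior_pancreatitis": ["yes", "no"],
}

def all_scenarios():
    """Enumerate every combination of variable values as a scenario dict."""
    names = list(VARIABLES)
    for combo in itertools.product(*(VARIABLES[n] for n in names)):
        yield dict(zip(names, combo))

def expert_label(scenario):
    """Stand-in for the expert's judgment (in reality, a person answers each
    scenario). Returns 1 for 'prescribe' and 0 for 'do not prescribe'."""
    if scenario["family_history_medullary_cancer"] == "yes":
        return 0
    if scenario["prior_pancreatitis"] == "yes":
        return 0
    return 1 if scenario["bmi_band"] in ("30-35", ">35") else 0

def encode(scenario):
    """One-hot encode a scenario so the model can consume it."""
    return [
        int(scenario[name] == value)
        for name, values in VARIABLES.items()
        for value in values
    ]

scenarios = list(all_scenarios())
random.seed(0)
random.shuffle(scenarios)
split = int(0.8 * len(scenarios))
train, test = scenarios[:split], scenarios[split:]

X_train = [encode(s) for s in train]
y_train = [expert_label(s) for s in train]   # the expert labels each scenario

model = DecisionTreeClassifier(max_depth=4, random_state=0)
model.fit(X_train, y_train)

# "Testing" in the other direction: how often the fitted model agrees with
# the expert on scenarios that were never used during elicitation.
X_test = [encode(s) for s in test]
y_test = [expert_label(s) for s in test]
print(f"Agreement with the expert on held-out scenarios: {model.score(X_test, y_test):.0%}")
```

In the workflow Karl describes, the labels come from the expert in a sitting, roughly ten seconds per scenario, and the useful output is not just the agreement score but which variables the fitted model leans on, which is one way of making "what she values" visible.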
We have this great algorithm that can now be shared with physicians all over the country. And by the way, there are physicians at this time who aren't really up on GLP-1s. This is unbelievable. The way we talk about this, and this is again very important, is that we offer it to physicians as a second opinion. That's all, a second opinion. I'm saying, hey, listen, here's a second opinion from a leading expert at a major research hospital. See what you think. And what they're doing, by the way, is putting in their own patient data once they have the algorithm, so it's a personalized decision. The people who are using this also see what she values in terms of the variables. So everything here is transparent. You can actually look up who the physician is. I like to talk about this in terms of trust, because with AI you never know where the data is coming from. Here, it's coming from this person.
Scott Allen:Yes.
Karl Kuhnert:And it's a second opinion. You can do what you want with it. So the way I talk about it: the data is sourced, it's transparent, and it's personalized, because what the doctor is using is their own explicit knowledge. They're putting in whether the patient has medullary cancer in the family; they're putting in their own data. But the judgment and the decision are hers, and we're capturing that.

Scott Allen:Yeah. I have this visual in my head right now of all of the knowledge in people's minds walking around Earth right now that is, in some cases, never captured. That could be making whiskey, that could be GLP-1s, that could be decision making.

Karl Kuhnert:Scott, I don't have time to get into all of it, but do you know how many companies have called me? Small businesses who said, hey, we have this person who's retiring. We can't lose her. She's an accountant, and we can't replace her right now. We need an app: what would Sharon do? That's what it is. And that's happening all over the country, all over the world, as baby boomers retire. And you're right, there are folks I would love to be able to digitize. So where I am today: this work with the GLP-1s actually got me into Emory Hospital, and now I'm involved in clinical trials. That has been quite an ordeal to go through. But we're going to end up not only with the digitized expert; we're also going to have the opportunity, and I've already done this with other kinds of institutions, like banking and insurance, to collect validity data by looking back on past decisions with physicians. There are a lot of things I can do, but what's really important for me is being able to validate their decisions. And one more thing, and again, it's amazing: Dr. Collins came to me, I think about two months after we digitized her, and said, Karl, there's some new science. I've got to change my model. I said, okay, cool. It took us about a couple of hours. The important thing is that this technology is actually moving at the pace of science. I make jokes with my physician friends: how long does it take you to get a paper out to distribute that knowledge? And they say, oh, a couple of years. We can make this happen in a day. So this is where I am with this, and I'm excited. Maybe you can invite me back in a year or so and I'll explain exactly how this worked out.

Scott Allen:How do you see this connecting to leadership? I'd like to get into your mind about that for a little bit. I see possibilities in my mind, but what are you seeing?

Karl Kuhnert:All right, let me tell you about the other project I'm currently working on. I think I sent you a paper on this; there's a very short paper that went into a business magazine here at Emory. I talked to one of the professors here who works in AI, and he says, hey, listen, Karl, you ought to look at this thing they call a GPT, which is essentially an app you can create. I said, okay, this ought to be interesting. It took me about a month or so, but what I was able to do, essentially, is put in my history and all my work over the past 30 years with leadership, right?
It's built on constructive developmental theory. I have a strong theory about how leaders grow and mature, drawing on the work of Robert Kegan and that kind of thing, showing how leaders grow and mature over the life course. And we know that more mature leaders make better leadership decisions. I put all of that in there. I've also been an executive coach for over 25 years, so I started putting in the transcripts from my coaching sessions. I'm actually coaching in these sessions, so what it's getting is my coaching within the transcript, plus the outcomes and what I think the person should do. And it's cool because I'm actually giving them their own language back, so it helps in terms of feedback, where they are and where they need to be. Anyhow, I have around 80 of these transcripts. I could have more, but I have about 80. I've also put in my classes, and I have the book with Keith Eigel, The Map, a great book, and I put that in there. So it's essentially my career, right? And what I really set out to do was ask: can I use this as a way to help coach my students with their current dilemmas? Again, these are executive MBAs; they've got problems, they've got issues. It was interesting. I had this thought that I'm probably going to have to teach an undergraduate class next year, and I'm thinking, what am I gonna do? Let's talk about your roommates, the athletics, your student organization. I'm not sure I could use it for that. But what has absolutely blown me away: I had a student the first week I talked about this, and I was literally building this the whole semester I was teaching, adding things as the weeks went by, different content I had talked about previously. And this woman, and this was the first week I talked about it, had a problem at work. What the app did was say, essentially, here are a number of ways to think about the problem depending on who your audience is, and you have to basically identify the characteristics of your audience. It gave her three different ways, and the third way was essentially the most mature way to handle it. She goes, oh my god, Karl, I used it. I wasn't really sure I could do this. And she says, I'm gonna give you the problem: her boss kept giving her more projects.
Scott Allen:Okay.
Karl Kuhnert:She was overwhelmed.
Scott Allen:Yep.
Karl Kuhnert:And guess what? She was angry at him for giving her so much to do that she couldn't keep up, even working the whole time. Of course she's angry, and she wants to go talk to him about it. And what the app does is say, hey, hold on here for a second. You don't know why he's giving you all those things to do. He may be thinking you're the next best person for a leadership position. Do you know that? No, I don't know that. And then, and I'm just gonna go to the end here, the most mature, what we would call level five, way of thinking about this is: how can you create a dialogue with him? And it gives her an example of what that dialogue would actually look like. She uses the dialogue and comes back to me and says, Karl, not only did I better understand my boss and what he was doing, we had a great conversation, and that meeting took us off into how we can make a bigger difference in our organization. It changed her career path at that moment. And think about this: in real time, I'm coaching 60 students.

Scott Allen:Yes, it's scaled. Well, okay. Some listeners, I imagine, are thinking right now, oh, AI, it's gonna hallucinate, it's gonna give bad advice. Would you address that a little bit? How are you thinking about that?

Karl Kuhnert:Okay, so you have to understand, here I am on one side criticizing AI and what it can't do, and now I'm telling you what it can do. I have to hold both of those things in my mind. But here's the distinction, and I'll use this word because I don't have a better way to talk about it: my GPT is bounded. It's bounded by me. It's not open to the internet. I kept getting things every week from students, knowing it was going to take more time for me to go through all of it, but it was so fascinating because I was curious how well their problem and the solution aligned with what I would say. And I'll tell you one of the things that really flips me out about this GPT, which I call the Leadership Growth Lab, my LGL: seeing my own words coming out of a machine. It will pull out a quote of mine, and it's a good quote, and now I can't use it in class anymore. It is kind of weird, right? And it's using the theory. And by the way, let's call this out: how long have we talked, as academic organizational psychologists, about how we link theory and practice? I'm doing it. I'm taking constructive developmental theory and tying it directly to people. My content is real to them in real time, because they're using it when they get out of class, on their problem.

Scott Allen:Now, when this woman was using the software and it gave her three ways to think about this, and she goes with level five, how is the system then suggesting paths forward for her? Is that also based on your work, or is it doing some inferring at that point? I'm really super interested in that. As it gets into providing her with actionable advice, is that still 100% you?

Karl Kuhnert:Oh, yes. And I have to tell you, it will modify my language at times in a way that actually makes it better.
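To make the "bounded by me" idea a little more concrete, here is a small sketch of how a corpus-grounded coaching assistant like the one Karl describes might be wired up: the prompt restricts the model to the coach's own materials and asks for responses at several developmental levels. This is not Karl's actual Leadership Growth Lab; the excerpt store, the naive retrieval step, and the prompt wording are assumptions for illustration, and sending the finished prompt to an actual chat-completion model is left out.

```python
# Sketch of a "bounded" coaching assistant: the model is only allowed to draw
# on the coach's own corpus (coaching transcripts, book excerpts, class notes)
# and is asked to answer at three developmental levels.
# The corpus snippets and helper names here are hypothetical.

from dataclasses import dataclass

@dataclass
class Excerpt:
    source: str   # e.g. "coaching transcript #12" or "The Map, ch. 3"
    text: str

# A stand-in corpus; in practice this would be ~80 coaching transcripts,
# book chapters, and course material.
CORPUS = [
    Excerpt("coaching transcript (hypothetical)",
            "Before reacting, ask what your counterpart might be optimizing for."),
    Excerpt("The Map (paraphrase, hypothetical)",
            "More mature meaning-making holds your own view and the other's view at once."),
]

def retrieve(question: str, corpus: list, k: int = 2) -> list:
    """Naive keyword-overlap retrieval; a real system would use embeddings."""
    q_words = set(question.lower().split())
    scored = sorted(corpus,
                    key=lambda e: len(q_words & set(e.text.lower().split())),
                    reverse=True)
    return scored[:k]

def build_prompt(question: str) -> str:
    """Compose a grounded prompt that bounds the model to the coach's corpus."""
    excerpts = retrieve(question, CORPUS)
    context = "\n".join(f"- ({e.source}) {e.text}" for e in excerpts)
    return (
        "You are a leadership coach. Answer ONLY from the excerpts below; if they "
        "do not cover the question, say so rather than improvising.\n"
        f"Excerpts:\n{context}\n\n"
        "Offer three responses to the student's dilemma, framed at developmental "
        "levels three, four, and five, and end with an example dialogue the "
        "student could use.\n\n"
        f"Student dilemma: {question}"
    )

if __name__ == "__main__":
    # The resulting prompt would be sent to whichever chat model you use.
    print(build_prompt("My boss keeps giving me more projects and I'm overwhelmed."))
```

The design point from the episode is the boundedness: the assistant is steered toward the coach's own corpus rather than the open internet, which is also what makes the "does this sound like me?" check meaningful.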
Scott Allen:Wow.

Karl Kuhnert:It does make it better. I'm like, oh, I love the way you said that. And I'll tell you something: it knows enough about the theory we're using. I was just playing with it, and I decided, okay, LGL, what I'd like you to do is explain the move from level two to level three in the form of a poem. But let me get to the last point of the story, and this is a big point. When we think about AI, I'm now at this place where I know it can't know what tacit knowledge is, and that is what experts bring to what they know. What we need is not to ask whether large language models can render expert judgments, but how those judgments are made. Think about yourself: when you make a judgment, what are you considering? You're considering a lot of things based on your experience. So I try to contrast this with LLMs: fluency is not understanding, speed is not wisdom, certainty is not truth. What I'm saying is that when it matters most, we probably need human judgment in the loop for these critical decisions. Think about healthcare. I haven't really pushed on this yet, but we're using a lot of AI in healthcare now, and I'm not sure we should be making critical decisions with it, because it can't get to that tacit knowledge.

Scott Allen:I think that's a really important element to tease out of this whole conversation, that whole tacit domain, right? And so many other things. It is a tool, it is not truth. And to your point, at least for now, that human judgment is incredibly important. You've got to know the source, where it's coming from. I want to know that. At least for now, there's a partnership, because we can both think of plenty of decisions humans have made that were not good decisions and could have been informed by a little more wisdom. And I think of your student, who maybe wouldn't even have conceived of that level five option. She was just mad, she was just angry, and this tool is now providing her an opportunity to practice what it's like to work at level five ways of thinking, ways of engaging, ways of approaching some of these challenges, which I think is brilliant. It's a tool; it's not necessarily truth.

Karl Kuhnert:It's not truth. No, it's a tool. And again, what she told me, she goes, I had no idea I was gonna come out of that meeting with a win-win. And what happens at level five is trying to figure out what a win-win looks like, not just for her, but for the company.

Scott Allen:Karl, I so appreciate this. I think listeners can very clearly see that curiosity in play, and that experimentation, for years and years. I just absolutely love it. I think we're so lucky to have individuals like you exploring some of these tools, understanding the benefits and the potential limitations. You're at the forefront of this work, and that's just incredible. And I will reach out again in the future. I do want to have a follow-up conversation to track your adventures and see what your most recent learnings are, because I think this is super important.
And I'm gonna put a couple of articles in the show notes for listeners so that they can explore and see some of this in action as well. I always close out an episode by asking guests what they've been listening to, streaming, or reading, what's caught their attention in recent times. It may have to do with what we've just discussed; it may have nothing to do with it. What's been on your radar?

Karl Kuhnert:What's really energized me over the past week, and I'll send you another article, one that is actually under review right now. This guy, you can find his work on LinkedIn. His name is Quattrociocchi; okay, that's the best I can do. He's Italian, so thank goodness LinkedIn has a translator button, because I couldn't read it otherwise. He has this article, and you'll see the amount of traction he's getting on LinkedIn just over the past few weeks. It appeared just recently in the Proceedings of the National Academy of Sciences. He takes this farther than I do right now, but his general premise is that what we actually have coming out of AI is basically synthetic knowledge. He makes a brilliant case for it, and it's worth a read. I also have to say it's very tough to read; it's a dense article, but it's going to get a lot of play. It basically brings into question this idea: there are probably decisions where we need a human in the loop, and other things where we don't. So that's the article I'd like people to share.
Scott Allen:Awesome. I will put that in the show notes as well. Karl, thank you so much. Appreciate you, appreciate your work. And I know that listeners are extremely intrigued, so I will have some links in the show notes for all of you. And as always, everyone, thanks so much for checking in. Take care. Be well.

Karl Kuhnert:Thank you, Scott.

Scott Allen:Okay, before we get to my summary of that episode, I have a special guest, and this is Dr. Marcy Levy Shankman. We have been colleagues, co-authors, and friends since probably 2006, back in the day, back in '06. She is helping with ILA's Dialogue Lab. So, Marcy, tell listeners a little bit about this opportunity and how they can get involved, how they can get engaged. New Orleans in January sounds pretty good to me. Tell us a little bit more.
Marcy Shankman:Scott, thanks for asking me to talk a little bit about the Dialogue Lab. This is a really exciting experience. It's only offered every other year, and we're going to be in New Orleans, as you said. This three-day Dialogue Lab is focused on dialogue as a form of leadership. That means we're not going to have panels, we're not going to have workshops, we're not going to have presentations. What we're going to have is true, deep engagement. Individuals will sponsor inquiry sessions, and those individuals are the participants themselves. If you're interested in attending the Dialogue Lab, you can come and participate as a full-fledged member of the community. This is a full-on, co-created learning community. And if you want to bump up your level of engagement, you can propose a topic to discuss. The proposal is simply a question, and that's what we call our inquiry sessions. We're also going to take advantage of being in New Orleans, which means we're going to have this experience grounded in music, food, and civic life, and we'll have opportunities to engage with members of the New Orleans community. We think this is the right time for this gathering. Dialogue is needed in this time of polarization, complexity, and disconnection, and the Dialogue Lab is an antidote of sorts to that. We want people to come who are interested in expressing their curiosity, who have the courage to ask deep questions, practice deep listening, and express their vulnerability. Expertise is not a requirement; a growth mindset is. So we're really excited to invite your listeners to apply to participate. The gathering is three days, as I mentioned earlier, January 30th to February 1st of 2026, and all who are interested in leadership are invited to attend.
Scott Allen:Awesome. And what I love in there is that you mentioned the opportunity to practice. We can practice listening, practice engaging, practice discernment, and truly being present and mindful. Absolutely love it. So for listeners, there is all kinds of information in the show notes, so please feel free to check that out. And you know what, Marcy, thank you so much for being a part of the leadership team that's putting this on, and thanks so much for stopping by today. I hope it goes awesome.
Marcy Shankman:Thanks, Scott.
Scott Allen:I don't have too much to add other than reinforcing the word curiosity. I think Karl beautifully represents that word. He is an individual who, over the course of his career, has stayed curious and is continuing to experiment and learn. And what I love is that he is running toward this technology, trying to better understand how it works, and continuing to see how it can be leveraged, but leveraged in an ethical way. That notion of having a human in the loop, I think, is critical; it's incredibly important. When the technology is moving much more quickly than the policy or the ethics, we need people like Karl who are working to better understand it, along with Gary Lloyd, Jonathan Reams, Dan Jenkins, and other individuals in our space, that leadership space, who are really, truly exploring this. My hat's off to them, because I think it's all of you who are going to help us better understand what all of this means. As always, thanks so much for checking in. Take care. Be well.