Episode Transcript
[00:00:00] Good morning, everyone. Today I want to talk about the psychographic, psychological, soft-skills aspects of AI. And this is early research. We're talking to a lot of people, but I want to get you started thinking about it, because everybody we talk to is going through some form of an AI transformation. We had more than 100 companies with us on Friday at the Big Reset, and many, many, many companies are going through the same thing, and that is trying to build fluency or awareness or capabilities of using AI and understanding prompting and how these systems work. At the same time, we're trying to encourage or cajole or force people to become AI literate and improve their productivity. And in, I would say, 75 to 80% of the cases, there isn't much of a top-down strategy yet. There's a lot of experimentation and frontline innovation expected. And so we're expecting employees at all levels, by the way, including senior people, to figure it out, quote unquote, and use it. And as I've talked about the last couple of weeks, these are very powerful programmable systems, because they speak English or whatever language you speak, so you can tell them to do what you want. So as you learn how they work, you become the master of the AI and the AI becomes your robot. So imagine, whatever your job is, that you could speak or state in words what you want the AI to do and save that command, as long as it may be. Once you get it to do what you want, you've just created a robot for yourself. But how many of us are prepared for this? And how many of us feel comfortable doing this? And how many of us feel empowered to do this? Or are we more intimidated by the idea? And you know, I don't want to keep going back to this old paradigm, but it's very valuable, which is that the spreadsheet, which is a blank grid of boxes on a screen, is a huge programming system. And there are companies that teach financial analysts and others how to build complex models and budget spreadsheets and pivot tables and other things in spreadsheets, empowering them to be superworkers. Well, you all have one of these now, and the programming language is your language. You get to iterate and improve and develop and create what you need to do in your job.
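To make that "save the command and you've built yourself a robot" idea concrete, here is a minimal sketch in Python. Everything in it is illustrative: call_model() is a hypothetical stand-in for whatever AI platform your company actually provides, and the saved prompt is just an example of the kind of instructions someone might write once and reuse.

def call_model(prompt: str, data: str) -> str:
    # Placeholder: in practice this would call your company's approved AI platform.
    raise NotImplementedError("Connect this to your AI platform")

# The saved command: written once, refined until it works, then reused.
SAVED_PROMPT = (
    "You are a financial reporting assistant. Summarize the expense data "
    "below by department, flag any line item more than 20% over budget, "
    "and draft a short note for each flagged item."
)

def monthly_expense_robot(expense_text: str) -> str:
    # The prompt is the program and the model is the runtime:
    # run the same saved instructions against this month's data.
    return call_model(prompt=SAVED_PROMPT, data=expense_text)

Once the saved prompt does what you want, running it each month is the "robot"; rewording the prompt is how you iterate and improve it.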
[00:02:28] Now, you know, the expectation under the covers is that the AI system that you're using has the correct data in it and it is accurate and unbiased. So before I kind of go into this psychological stuff, the company has to have some form of strategy for a platform that you can use and trust. And we recommend, if you're in HR, you use Galileo, because we have all that in there and we're adding more data all the time. In fact, you know, in early August we're going to announce the next release of Galileo with some new data added that's even more interesting, validated by us. So what does it mean psychologically to have this kind of power? Well, I would say there's three or four things. First of all, you are a creator now. So if you've been used to doing work in the way that you were told and waiting for instructions or waiting for directions, or maybe a little bit fearful of doing things outside of the box, and you were sort of an execution-oriented person and not a creation-oriented person, that's probably going to hold you back, because you're going to end up waiting for somebody else to build the AI solution that you want. I'm not saying your job's going to be threatened, but it could be, because if the job is that rote or repeatable, then somebody else might automate it. If you're a financial analyst, for example, or you do budgeting or accounts receivable, or, you know, running general reports on the financial system and sending out invoices, that sort of stuff. Yeah, that stuff could be automated by someone. And most likely, if you're not the person to do it, you would really want to work with your manager or your team and say, let's talk about what we could do as a team to automate some of this routine work. And you may not do it yourself, but you should do it as a team. And we had a lot of people this week talking about this in L&D and recruiting and employee experience and performance management, all sorts of different parts of HR. And what you end up finding is that if you put together a list of things you need to do and how you might want to make them better as a group, and get somebody on the team who knows something about AI, chances are you're going to build something new. I mean, we have a project going on right now with a company to help them redo their performance management process, which everybody seems to always want to play around with. And the primary goal is to create more feedback, but also to reduce the amount of time spent in writing reviews. And it's going to be fun. We're going to figure it out with them, and we're going to try to use Galileo, and I think it'll work fine. Once they get to the point where it's doing mostly what they want, they can change it and improve it and add features to it and do other things without waiting for Workday or Oracle or SAP or somebody else to fix the software they have, because that won't happen. You know, these big software companies in HR, they can't work on every feature of every module every release. They just don't have enough engineers. So, as I've said, you know, many times, this is a massive change in the empowerment and the individual development we're going to have as companies. Now, the second thing that happens when you're in the job of rethinking how you do things is what I would call challenging the dogma or challenging the religion. And that is: maybe the way we do something is not the issue, it's the fact that we're doing it at all.
Now, Elon Musk likes to call this first principles, but what I like to think of it as is falling in love with the problem. You know, maybe we have some process we're doing. I mean, I could point out things in our company, for example, that we do just because it's the way we've always done them, and we're kind of in this routine and everybody knows how to do it. And, you know, it might be kind of slow and clunky, but whatever, it works for us at the rate we're going now. But if we doubled or tripled our rate, or wanted to double or triple our quality, it would not work. So this is what I would call testing or challenging the core. And in most business processes, the core is embedded in tools and systems we already have. You know, for example, our publishing process. We have a bunch of tools for that. Our podcasting process, our research process, our survey process. And I, as a CEO and kind of an innovator type, I look at these things and I'm constantly saying, wow, why are we doing it this way? This seems so dumb because it's taking so long. How come this vendor tool is so hard to use? How come it doesn't have this feature and it's forcing us to do it by hand? The people part of it is easy to adapt. We can teach people how to use things pretty quickly, but we can't change these fundamental tools very easily, if at all. So this is the second aspect of the superworker company, which is challenging the fundamentals. Now, not everybody thinks that way. Not everybody wants to do that. And it's typically more of a management role than an individual role. But I think we're at that point where automating the current way of doing something isn't going to get you where you want to go. And you know, that's one of the flaws of turning on Microsoft Copilot and just using it to do your emails faster. Maybe you shouldn't be sending so many emails in the first place, and maybe you shouldn't be using email at all. Or, you know, there's a thousand examples of this, and some of the AI discussions and analyses coming up from various consulting firms are all along the lines of reducing tasks, automating tasks, automating steps, et cetera. Well, that's fine, but some of those steps maybe shouldn't exist at all. And the reason they exist is because we don't have the automation. So I won't go through too many examples, but I'm sure when you design a self-driving car or any other highly autonomous system, the first thing you do is go back to basic principles. You don't think about how the driver moves his hands on the wheel, because that's not what you're really automating. You're not automating the ability for the driver to use the steering wheel, you're automating the ability for the vehicle to go from point A to point B and ignoring the steering wheel. And you know, we have a lot of steering wheels in our companies that we're, you know, kind of hassling with that probably shouldn't exist. So my experience with that kind of work is the way to do it is to have a meeting for a day or a half day, or find somebody who's really a creative thinker, and keep challenging yourself: why do we do it this way? Why do we do this at all? And you know, in the L&D domain, where the heat for transformation is getting red hot, this is going to be revolutionary.
And you know, we've now had a lot of client implementations of Galileo Learn, and we've moved all of our Josh Bersin Academy customers over, and they are finding that they're going to be able to change the roles of massive numbers of people in L&D to do more exciting, important things than sit around and do instructional design or content creation, for example. That's just one example. Okay, so that's number two. Number three is what I would call curiosity and iteration. These AI systems are not deterministic. What I mean by that is if you give it A, you cannot predict with 100% accuracy that it is going to respond with B. It might respond with B one day, but then the next day it might respond with B-plus, because the data behind it changed. They are learning systems, just like human beings, and no two human beings respond the same way. Well, even one person won't respond to the same input the same way every day either. There's all sorts of other factors we have which are much more complex than AI. So we have these systems that get smarter over time, or dumber, depending on how we train them. And we have to learn how to live in that world. And so, you know, once you get something working the way you want it, it may not work that way the next time you do it. So, you know, you would call that error handling as a software engineer. But we have to kind of be curious about why this thing changed. What happened that made it work better or not as well, or what could be the reason that I'm getting a different answer? Is there a good reason behind it, or is it a flaw? Is it a bug? Is it something we need to worry about or not? And I think curiosity, generally speaking, is becoming and has become just a massively important business capability in this new world of change all the time. Because the way I think about it is the more curious you are, the more likely you are to come up with a better solution to a problem. Now, some of us, I always kind of describe it as two sides of the brain. The creation, innovation, curious side of the brain, and the get-stuff-done, keep-your-head-down, execute side of the brain. And you know, my particular brain has both sides. I'm a little bit more on the first than the second. I would say sometimes I focus on the second, but usually in the morning I wake up, I want to do more stuff on the first, and then later in the day, I want to do more stuff on the second, just depending on my energy level. I find execution work satisfying, but not energizing. I find creation work energizing and sometimes frustrating. And you know, I'm the type of guy that will find some new tool or technology and I'll stop what I'm doing and I'll spend a half an hour working on it, because I think to myself, wow, if I can get this to work, it's going to be transformative for the company and for me. And if it doesn't work, fine, I'll just stop and try to cut my losses and move on. Some people don't do that. They're just not wired that way. They don't want to spend their time geeking around with stuff the way engineering-type people like me do. But I think that's an issue: some of you are going to feel really comfortable with this experimentation, curiosity mode of work, and some of you aren't. Because the way we've mostly treated work is we assume that somebody else did that and it wasn't our job. Somebody else figured out the process for this, and we sat around and griped and just complained, but we weren't really empowered to do anything. So after a while we just stopped complaining.
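That "error handling" mindset can be sketched in a few lines of code. This is a minimal illustration only, assuming a hypothetical ask_model() function standing in for whatever AI platform you use; the point is the validate-and-retry loop around a non-deterministic answer, not the specific call.

import json

def ask_model(prompt: str) -> str:
    # Placeholder: wire this to the AI system your company actually uses.
    raise NotImplementedError("Connect this to your AI platform")

def get_structured_answer(prompt: str, max_attempts: int = 3) -> dict:
    # Ask for JSON, check the shape of what comes back, and retry if needed,
    # because the same prompt can return something slightly different each run.
    last_problem = None
    for _ in range(max_attempts):
        raw = ask_model(prompt + "\nRespond with valid JSON containing a 'summary' field.")
        try:
            answer = json.loads(raw)
        except json.JSONDecodeError as err:
            last_problem = err  # malformed output this time; try again
            continue
        if isinstance(answer, dict) and "summary" in answer:
            return answer  # the answer has the shape we expect
        last_problem = "valid JSON but missing the 'summary' field"
    raise RuntimeError(f"No usable answer after {max_attempts} attempts: {last_problem}")

The curiosity part is what happens when that retry fires: you look at what changed, decide whether it's a flaw or an improvement, and adjust the prompt or the check accordingly.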
So this is the third thing that's really different now. Sometime in the future, all of these AI things will settle down and become standard tools. So the hand-waving detection device at Whole Foods that I've talked about, it just works. Nobody kind of questions it. You know, the credit-checking application sort of works and you don't question it, et cetera. But we've been given AI at a personal level, and I really do believe that we are all going to have our own agents on our phones. You know, I use Galileo on my phone, I talk to it all the time. So this is never going to really stop. There's going to be creative activity all the time. You know, sort of like, again, going back to the Excel thing. I think the fourth category of soft-skill, leadership, psychological issues with AI is what I would call the super manager. And we're starting a research project, Julia's going to lead it, on identifying what a super manager is, how we have to think about management and leadership skills and capabilities and development in a new way. Because if you're a manager and you're being held accountable for numbers or results or projects or whatever, you don't want your team doing too much experimentation if it's getting in the way of your execution. On the other hand, if they come up with a better way of doing something, why would you stop them? In our company, for example, where we're not super big, we do kind of encourage people to figure things out on their own and come back with new ideas. And they do that. And I would say the vast majority of people in our company do it a lot, because we don't mind it. We're sort of small enough that every incremental new idea is worth it. We're not gigantic in the sense of having highly integrated processes like big companies do. But if you're in a big company and you're a manager, director, VP, whatever, your boss may or may not care how you get stuff done. They just want to make sure you are getting it done and what the impact of your work is on the rest of the company. So you get to decide as a manager: how much creation and innovation will you allow, will you facilitate, will you be a part of it, will you bring people together to share good ideas or not? Are you expecting each individual to do it on their own, or are you going to assign it to one person, or are you going to do it as a group, or are you going to do workshops? I would say that as a leader who's been running small companies for a while, you have to be involved in this, you have to push it, you have to sponsor it. You need to decide and help people decide what's important and what's not important. You have to give people time to learn these new tools and resources to learn these new tools. And you have to be fairly hands-on yourself. Now, I think for some people like me, that's natural. For other people it's not. You don't have to be the guru to have a guru. Like a lot of the software engineering we do in our company, we outsource it and I don't do it, but I kind of know enough about it and I know what needs to happen.
[00:15:10] So I'm involved in a lot of the design issues.
[00:15:12] So there's this fourth super manager capability model to consider. And I know most of you have management development models of different shapes and sizes, and they usually talk about culture and values and quality and customers and service and ethics and diversity and things like that. Those go without saying. This is a new one. This is a creator one for managers. So Julia's going to be working on this, and any of you that have cool management models or ideas or data, let me know, let her know, let us know, and we'll get together a whole bunch of interesting information on this. One more quick topic and then I'll break off here. Kathi Enderes and I are working on a big project with the help of Workday to assess the state of maturity of AI. We are actively talking about this all day with companies. We want to help come up with methodologies and processes and governance models on how to do this, just like everything else we've done in the past. So we're pretty comfortable with how to go through this process. So watch for this in the next couple of weeks; it may be a few weeks before we get the survey out. But there'll be a survey and we'll want to do interviews. So if you are really excited about your AI project or implementation, reach out to us. We definitely want to talk to you. We already have more than 50 case studies, maybe 60. Some of them are small, some of them are big. We have a working group on job transformation going on, and other things. So our ears are open and we would love to talk to you and collaborate with you. Have a great weekend, everybody. Talk to you next week.