Episode Transcript
[00:00:06] Hello everybody. Today I want to have a slightly philosophical conversation about two things. One is the direction of generative AI, leaving aside the drama over at OpenAI for a minute, which I think has passed, and the other is how AI changes the role of an expert. The conversation I'm having is based on two sources of information: the research that I've done with all of you and across the market about what's going on in AI and how it works, and our deep experience with Galileo, which is our AI platform built on the corpus of knowledge that we've developed over the last 27 years. And our corpus is very deep and very rich. We cover roughly 100 topics in management and HR, and all of the content is based on research, case studies, conversations with the vendors, conversations with clients, and implementation stories. There's no punditry, there's no dogma, there are no assumptions in any of it. And so the corpus hangs together very, very well. It's consistent from place to place. In other words, the way we talk about goals in our performance management research is the same way that we talk about goals in our compensation research, and it's the same way we talk about goals relative to the impact on employee engagement and productivity. So let me talk about AI in general first, and then I want to talk about how it changes the role of expertise.
[00:01:34] What I have discovered, and this is not new to many of you, but it may be to some of you, is that the mathematical elegance and power of generative AI is really spectacular, and it's getting better all the time. As you know, it takes vast amounts of information, identifies patterns, classifies things, and generates answers to questions based on statistics and so forth. The result is a highly conversational, highly interactive system for generating content, answering questions, finding things, and so on. And there's no question in my mind that the mathematics and the power of the compute behind it are going to go up and up and up, so it's going to get smarter and smarter and faster and faster. But it's all dependent on the content. Even the smartest, most accurate AI that never hallucinates, if it is built on faulty content, will be worthless or dangerous. So under the covers of all of this energy to consolidate, assimilate, analyze, and generate content is the source. And with ChatGPT, of course, we don't know where its content comes from, and they don't want to tell us; I think that would be dangerous for them as a company. So the clearer the content, the clearer the result. But that's a magnificent thing, because within the firewalls of a company like ours or yours, you know what content is real and what content is not real. And when you generate and organize your content in a meaningful way, generative AI can do spectacular things. Now, on the public Internet we have a mess: we have advertising, we have lying, we have right-wing propaganda, left-wing propaganda, information warfare, chaos. So we have all been manually trying to be AI machines when we read what's on the Internet, trying to figure out whether it's any good.
[00:03:37] So in some sense, these generative systems have the potential to clean that up, because what the AI could do is look at the sources and determine whether they're reliable and credible, just like Google does. Now, most of the generative AI systems in the market don't show you where the content came from. Ours does. Galileo doesn't say anything without showing you the source. We actually put a lot of time into the inline referencing, so you can see why an answer came out the way it did, and you can dig in and try to determine whether it's meaningful for you. Generally speaking, generative AI might be the nirvana that we've been looking for to clean up the Internet at last, if the vendors, or at least the large ones, use it in this fashion, and I think they will. However, I am a little disappointed that Bard is already giving me ads in line with the generated answers, and that is just a ticket to disaster. I'm not going to discuss the business model of Google, but anyway, there's a magnificent amount of power here. I have seen products that can take a book or a document and generate courses and quizzes or essays, or translate them into other languages. In fact, all of that technology is available in Galileo. So we have a system filled with HR, management, training, and technology vendor information that can be used to generate RFPs, generate implementation plans, generate checklists, generate interview questions, and look at, characterize, and prioritize skills. All of the things that we try to do with hand waving, Galileo does based on our corpus of information. And we're just going to keep adding more data to it in the same credible fashion. So my general commentary here is that this is not a joke, and it is not a dangerous technology. It is a really powerful technology that allows you to take advantage of something that analyzes data thousands of times faster than you can. Those of you that do analytics work, or are analysts like me, know what it's like: you capture a lot of information, you read it, you assimilate it, you try to draw the relationships between it, and you come to conclusions based on that. This can be done by generative AI extraordinarily fast. And that leads me to my second topic: what is the value of an expert when you have a tool like this? Now, when we launched Galileo, we deliberately named it the AI-powered expert assistant for HR. The reason we used the word expert is because it is an expert: it is based solely on comprehensive, deep, rich research, case studies, examples, real stories, real interviews, and real implementations in HR and in management. Does it perform as an expert? I think that's up to you to test, but my experience is that it is far surpassing our expectations. We have about 50 companies using it, and we're going to be checking in with them over the next couple of weeks as we prepare to launch the system publicly in January or February. And the things it is doing are mind-boggling. If you think about recruiting, training, HR technology, pay, diversity, or wellbeing, every one of those topics has dozens of subdomains beneath it: employment brand, job descriptions, pay levels, communication patterns between managers and employees, culture. And as I like to think of HR, there is essentially a nest of connections between all of these different topics. They don't stand apart.
HR and the management work of running companies is an interconnected process, and all of these things are related. And Galileo sees and uses those relationships. We have been testing Galileo on a whole bunch of use cases, and it is rather surprising what it can do. It can give you a list of best practices in a particular industry, it can give you a list of best practices for a particular scenario or problem area, it can give you examples, it can give you vendors that are appropriate to that domain, and so forth. Which raises the question: what is an expert? So let me talk a little bit about that. Many of you listening to this are experts in many, many topics because you've spent your careers studying certain things. And what does that give you? It really gives you three things. Number one, experts have deep domains of knowledge. They have been accredited, they have gone to courses, they have read, they have studied, they have learned. They have met with many, many people, and they know a lot about the topic. They often know the research, they know the history of the topic, they know how the domain stands today versus how it did in the past, they know some of the advanced practices, and they know the science and the math and the psychology behind the topic. And many of the experts that you talk to, and of course us, have written books, have written many, many articles, and have been validated by their peers for their level of expertise and understanding. By understanding, I mean they really understand the fundamental principles behind the domains that they are expert in. But that's not enough. Number two, experts have lots and lots of experience. They have used and implemented and observed and studied many, many examples and implementations and problems and errors and mistakes in their domain. So they have learned through these experiences which processes and approaches typically don't work versus the ones that do. The science or the core expertise doesn't tell you that. That's why when you hire a consulting team from a consulting firm, and it's usually staffed with people in their 20s who graduated from college recently, they might understand the concepts of the domain, but they haven't done it before, so they're learning how to do it, and what will and won't work, on your implementation. You hope you have an experienced team, but if they're not experienced, that expertise is not that valuable. And that goes for creative people and software engineers too. I've met a lot of software engineers in my career, and the best ones are not only smart and well read, they learn very, very fast. They try things, they see what doesn't work, and they understand that in a given situation, problem A can be solved by this and problem B needs to be solved by that, and they don't necessarily need the same solution. They see the big picture. And the reason they have that expertise is because they have a lot of experience. That is why the word wisdom tends to be associated with people who are older: as you experience more things in your life, you do become wiser. Most of us do. And that is based on this collective experience of seeing many, many things and observing and learning from each one of them. The third thing about an expert is that they're a good listener, a good consultant, a good problem solver. What good is an expert if you ask him or her a question and they simply answer it but don't help you solve the problem? They're not doing you much good.
I mean, they might be good in their own little research area, but great experts have learned how to teach others how to diagnose problems, how to listen, how to get to the root of a problem, and how to creatively solve it. That's a set of consulting expertise that comes from their knowledge and their experience. The consulting expertise can stand alone; there are expert consultants who are good at consulting on anything, but without domain expertise, they tend to be less valuable than all three together. So I don't care whether you're talking about an HR consultant, an advisor, a technologist, an accountant, a tax specialist, or a lawyer. All of those experts have those three things: they have deep levels of domain expertise and knowledge, they have lots and lots of experience, and they've learned over time to listen to the question and the problem you have and give you a solution based on that knowledge and that experience. Okay, given that, and most of you understand this concept, especially if you've worked as a consultant or an advisor, and I'm sure all of you have been a consultant and advisor to your children and your friends and your peers, where does AI fit? Well, as I use Galileo, add more content to it, and observe its behavior, I'm realizing it actually plays in all three areas, much more in the first and much less in the third. We have found in our particular implementation of AI that all of the collective knowledge we've developed over the last 27 years or so is available to Galileo, and it can find it. It doesn't always find it; it depends on the question and how you ask it. The interesting thing in generative AI is that the way you form your question, the prompt, determines the answer. The more detailed and clear and explanatory your prompt is, the better the result. For example, rather than saying, what are the ten best practices in recruiting, you might say to Galileo: I am trying to hire a senior engineer in cybersecurity, I know that the job title and the job description accurately reflect the skills, however, many of the candidates we are recruiting are asking for more money and refusing to come to our company; what are three or four of the top practices that you would recommend, and please give me examples of companies that have done this. Believe it or not, it answers that question exceedingly intelligently, because it's drawing upon a lot of case studies, a lot of examples, a lot of analysis and research we've done in those areas, and it's assimilating lots and lots of information. It's not just going to tell you to do a better job of finding a sourcing tool. It's going to talk to you about employment brand, about career growth, about marketing, about interviewing, about all sorts of things, because those are all domains of that problem that are interconnected. And anybody with a corpus of knowledge that broad would know that it isn't a simple question with a simple answer. The second area, experience, is where you can ask the AI for examples. One of the things we've done for really two and a half decades is build our research around case studies. We do lots of quantitative research to generate maturity models, best practices, and so forth, but without case studies, they're not particularly actionable. So we interview hundreds of companies per year. I mean, it's really dozens of companies per week.
And we capture their stories, and sometimes they want us to share them and sometimes they don't. But what happens is we start to see the patterns, and we realize that, gee, companies that do it this way tend to have this problem, and companies that do it that way tend to have that problem, and we understand the patterns. What we've done with Galileo is that many of those case studies are written up in our research, and many of them are recorded with clients that have agreed to let us share them. We've been putting that into Galileo, and it's getting very, very smart, and it's learning a lot from experience. In other words, when you ask a generic question like how to better hire an engineer, it will give you examples of how GE solved that problem, how Reuters solved that problem, how Accenture solved that problem, how Google solved that problem, and it'll go back through 25 years of research and give you those examples. The nice thing about those examples is that you as a professional can learn from them just by reading them quickly, because you'll see the subtle contours of a solution that a consultant would tell you about. Now, I'm not saying it's as wise or intelligent as a real consultant, because it doesn't really understand the contours of each example and how it may or may not apply in this one, but it's getting better all the time. And in our particular case, we're going to make sure that the case studies we put into Galileo are very, very detailed so that it understands that. It also knows a lot about vendors. As you know, the most confusing part of HR for many of you is: is there a tool that does this? Will my current tool do this? Should I buy a new one? Should I throw away the one I have? What's the difference between vendor A and vendor B? Well, it's very good at that, at comparing the vendors and telling you which one would fit a given situation. And by the way, I'm not trying to sell it here and overpromise, because this thing is a work in progress. We're in the process of training it, and we will be training it constantly with more content to make it better and better. But I would say in the area of experience, it's pretty doggone good. Now, the third area of expertise is really this idea of being a consultant. From our perspective, and I think you know this if you've done consulting, people don't want to hear the whole story; they just want to know how to solve the problem they have. So before an expert tells you everything they know, which they sometimes just like to do, they need to ask you what you want to know and what the problem or topic is that you're trying to address. And that is actually a very human process, because the individual asking for help doesn't always know how to define the problem. Sometimes they're talking about a symptom, and the symptom isn't the problem; the symptom is the symptom. We spend a lot of time working on this idea we call falling in love with the problem, which is going to the root cause, which, by the way, if you look at Elon Musk's experience with SpaceX and some of the other things he's done, is what he keeps doing. He keeps going back to the root. And so can the AI do this? It's not clear that it can yet, but we are working on more advanced prompts with Sana to teach it how to diagnose your problems in more detail, and that will be coming in upcoming releases of Galileo as we get it further along.
And I think if you look at tools like Paradox, for example, which has been building a conversational AI system for over a decade, I think it's more like 15 years, they have gone deep, deep, deep into many of the real problems that you have in recruiting. And their AI is smart enough to prompt you and answer very complex questions, not simply generate a generic response. It will find data, it will collate and consolidate data, and it will give you all sorts of things you need to know as a recruiter or a job candidate, because it's been trained and has learned over many years what questions people ask. So where does that leave us as experts, and all of you? Well, I would say there are a couple of things we have to consider. Number one, if you're not using tools like this, you're going to be falling behind, because people that read the Dummies book on topic A are certainly smart enough to know what it is and address it at a high level. But if you're relying on the Dummies book to get started, and then you're in a complex situation at work where the implications of the correct or incorrect answer are high, and your solution is 75% right and 25% wrong, or maybe it's 50% right and 50% wrong, and it blows up in your face, people may not have a lot of tolerance for that. And I think, to be honest, that consulting firms are going to struggle, because the level of expertise that's going to become available in these deep domain AI experts is going to far surpass what an individual consultant may know. Now, if I were running a consulting firm, I would do the kind of thing we're doing, and I would build those pockets of expertise very carefully. And I'm sure that they are thinking about that. The second thing it does is raise the question of what area of expertise you are focused on as an expert. Are you a domain expert? Are you an implementation expert? Are you a change management expert? Are you somebody who convinces and inspires people? Are you somebody that implements things? Are you a good listener? Are you a good doer? What part of the problem or domain are you particularly focused on and good at? The AI will help you do that. I think going forward, people like tax accountants or lawyers or politicians in different areas are going to have very, very deep specialties, and they're going to use AI to get smarter and smarter and smarter in those specialties. If you're a generalist and you don't have deep expertise, yes, you may be more or less replaced by AI, but that raises the question of why you are a generalist. You don't need to be a generalist anymore. We all need to be full-stack professionals and keep learning every day, and the AI systems will help you do that. And I think the third area of focus for those of us in the expertise business is content. If you decide to use an AI system like Galileo or another, and you rely on it to do your work, you need to make sure it's accurate. You need to make sure that your content and your expertise are loaded into it, and that it becomes a smarter and smarter and more useful version of you. It's interesting: when we first started with our Copilot project last year and we loaded it up, the initial reaction people had was, it's a fantastic search engine. You can find case studies and examples and reports and articles and podcasts on topics that you never knew existed, immediately pull them up, and listen to them or watch them or read them. And then after a couple of weeks, people said, wow, that was good.
But now I've asked it more complicated questions, and it's generating more solution-oriented answers. And then the third thing they realized was, well, I can use it to generate tools and artifacts and materials that I need to do my job. It's gone even further. So what happens as an expert is you're going to go down the same path. You're going to realize initially that the AI in your domain is a useful tool, like a search engine. And then you're going to realize, wow, I can use this to generate tests, I can use this to generate graphics, I can use this to generate simulations, I can use this to generate action plans, I can use it to write articles, I can translate from language to language. And I think your personal level of expertise will go up. So I am not at all concerned that these AI systems are going to replace human expertise. There will be times when they make decisions in a more timely and maybe higher-quality way than we would; a self-driving car hopefully should be as safe as a human. But even when it does that, we will continue to evolve, because I believe, at least from my career, that every individual I've ever met in the business world is always trying to do better. And if the tool teaches you or tells you something that you didn't know, you get smarter and you can interpret it in new ways. I am, for example, constantly amazed at the Amazon checkout where you can wave your hand over the cash register and it knows who you are. That has not eliminated the role of the checker. The checker I go to at the Whole Foods near me is a wonderful woman, and because she doesn't have to do any of that manual labor, she focuses on loading the bags and very carefully sealing the fruit and vegetables so that I can get them home safely, which the machine doesn't do, because she doesn't have to click and type a lot of things into the cash register to get me through the process. And eventually there will be more and more of that kind of automation taking place. So I hope this has been an interesting conversation. I really urge you to look at tools like Galileo, because you're going to be surprised how useful and intelligent they are; I think ChatGPT has certainly taught us that. The problem we have with ChatGPT in general is that we don't know where the content came from. We can't always find the source, and at some point we might find out that the content wasn't legally acquired, and we may be taking legal risks as a business if we use it. That is why most companies don't let their employees put content into ChatGPT: it is a way to leak information into an unknown supply chain. But nevertheless, I think you'll be surprised that whether you're an expert yourself, or you want to be an expert, or you're going to hire an expert, these systems are incredibly powerful and incredibly useful. Thanks for the opportunity to give you some of my thinking on this holiday weekend. Have a great rest of the holiday, and we'll talk to you in about a week.