Episode Transcript
[00:00:00] Hey everybody. Today I want to talk about AI adoption or AI literacy.
[00:00:06] Now there's a massive discussion in the world about the literacy and skills needed to adopt AI. You can start with this from a traditional technology-training perspective and say we need to teach people how to use it, like we taught people to use Microsoft Office. But that's not really the way this works. This is not a tool like a hammer that you have to learn how to wield against a nail. This is a very, very flexible, powerful, non-deterministic thing that does lots and lots of stuff if you treat it, train it, coach it, and use it well. So it's more like learning to work with a living animal or a person than learning how to use a piece of software. A lot of the training initiatives going on are great, but when I look at them, they're actually very old-fashioned in their approach. Now, one of the reasons I decided to do this podcast is that we just got 30 or more incredible applications for our AI Pacesetters program, which we're going to launch at our conference in June. What different companies are doing to implement, adopt, and train people around AI is really astounding. You'll learn more about that at our conference. But there's a certain tenor to them that I want to discuss.
[00:01:30] The first is that understanding what AI is and the basics of how it works is fundamental. It's like a car: you have to know how your car works, you have to know how your computer works, and you have to know enough that you can put the key in the car, turn it on, and not run something over or hurt yourself. You need to know you put gas in it, you need to know how the steering wheel works, you need to know how to change the tires, or at least the fact that the tires and the oil need to be changed, and all of that stuff. You don't have to know how the internal combustion engine works, but if you don't know how the operating pieces work, you're going to have a suboptimal experience with your car or your computer or any device you have. So there's this basic stuff about AI: how do you prompt it, where do you put files, where do you put images, how do you create different types of output, and fundamentally, what is it doing?
[00:02:25] And the thing that's quite different about AI from the other computing tools we've used in the past is that you can't predict what it's going to do, because it is a probabilistic system. If you ask any given AI system the precise same question twice, you probably won't get the same answer, which means that you're kind of coaching it or cajoling it or teaching it to do what you want.
[00:02:50] So the second aspect of AI literacy is teaching individuals about this learning experience the AI is going through, how that works, and the way they prompt it or interact with it to get the most valuable activity out of it. For example, we just did a really fascinating case study with a bunch of clients these last two weeks in Vegas, where we went through a simulated company. It's a realistic simulation; it behaves like a real company. We gave the HR people in the room a bunch of problems to solve for this company, and we showed them a complex prompt designed to teach the AI about the company and its problems so that Galileo, our AI, could diagnose the problem and develop proposed solutions. The more clearly, articulately, and carefully we detailed the problem, the better the solution we got. So if you walked up to an AI and said, "I'm tired, what can I do to feel better?" it may not even know what that question means. But if you said, "I didn't get enough sleep last night, I had four glasses of wine, I was out jogging the day before and I twisted my ankle, and I'm having an argument with my brother; what do you think I should do to feel better today?" you're going to get a better answer. In other words, there's usage training in the context of your work or your job. And of course, if you use AI as a creator, a designer, an engineer, a salesperson, a line operations person, or a restaurant worker, there are different ways to ask questions and different ways to interact with it that will be useful for different things. All of that is part of the "how do I use it?" kind of stuff. Just like in the car example, there are courses on collision avoidance, high-speed driving, pursuit driving, et cetera, that are ways to use the car for special purposes. So there's all of this kind of education that has to happen.
Then there are the qualifications for building things on AI. This is extremely important, because we're all building things on AI. You can save a prompt or save a bunch of data, and you've just trained your AI to do something it didn't do before. Or you can program it, or give it a series of steps, a workflow, or a rubric. One of the interesting directions AI is going is this: since it is a non-deterministic, probabilistic technology, you can't tell it what to do in every single step, because that kind of defeats the purpose. But you can give it a rubric or a rule book and say: in my work or in my company, here are 50 things we do and don't do. Here are 50 rules, operating procedures, manuals, or safety activities we must abide by, and I want you to abide by them completely, with no variations at all. That teaches the AI, in a sense, the guardrails or the boundaries of its behavior. If you're in oil and gas or energy or some safety-related job, you could tell the AI that any operational procedure must be validated against our operating manual. And here's our operating manual: don't guess, don't make it up, don't estimate; use the manual and refer to the manual. You can tell AI the level of precision you want in answers. I do this interesting thing all the time: I'll have a really interesting query about the job market or a company or financials or something, and I will send the same question to ChatGPT, Gemini, and Claude. I get three different answers, because they don't all look at the same sources and they don't all have the same level of rigor. And frankly, OpenAI sometimes just seems to like to estimate things.
I'm not saying which one's right or wrong, but if I don't tell it precisely where to get the data, what specific data characteristics I'm looking for, and the parameters of variation I'm willing to tolerate, I'm going to get different answers. So there's that, and then there's this idea of a rubric: in our company, we do things this way; we use this philosophy; we use this approach.
[00:07:07] Here is our change model, here is our leadership model, here is our recruiting model, here's the way we interact with each other. We don't show up late; we have a five-minute break between meetings. Whatever it is, this agent, whatever it may be doing (it may be making decisions, it may be running procedures, et cetera), needs to know the boundaries of your company. And there's an interesting project going on at Microsoft where they are experimenting with these external documents, rubrics, rule books, et cetera, to figure out how to create an AI that learns more and more about your company. Because at the enterprise level, to me at least, one of the really miraculous things about this stuff is that it could learn everything it needs to know about your company. I keep bringing up our digital twin (I've just talked to two reporters about this), a digital twin that is storing and investigating the emails, meetings, and documents of all the people in our company. Now, the tactical use for this is to ask a person a question when they're not there, to try to figure out what's going on, to make a decision, or to work with a client. But the bigger opportunity is if a master AI, or a bigger agent we decided to build (we haven't done this), looked at all of the interactions in our company. I could ask it a question like: describe our culture; describe to me how we talk to clients; or, what is the most positive experience we've had in a meeting versus the most negative, and what can we learn from that? So these things are very powerful if we give them rubrics, rule books, and boundaries, to say nothing of data security boundaries of who's allowed to see what. But we need to train them. So all of that is part of the enablement of AI.
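One common way to hand a model this kind of rule book is to fold it into a system prompt. The sketch below assumes that pattern; the rule texts, the helper function, and the manual excerpt are all illustrative, not any vendor's actual format.

```python
# A sketch of packaging a company rubric and operating manual into a
# system prompt, so the model treats them as hard boundaries. The rules,
# the function, and the manual excerpt are illustrative assumptions.

RULES = [
    "Any operational procedure must be validated against the operating manual.",
    "If the manual does not cover a question, say so; never guess or estimate.",
    "We don't show up late; there is a five-minute break between meetings.",
]

def build_system_prompt(rules: list[str], manual_excerpt: str) -> str:
    """Turn a rule book plus a manual excerpt into one system prompt."""
    numbered = "\n".join(f"{i}. {rule}" for i, rule in enumerate(rules, start=1))
    return (
        "You must abide by these company rules completely, with no variations:\n"
        f"{numbered}\n\n"
        "Operating manual (the only authoritative source; do not estimate):\n"
        f"{manual_excerpt}"
    )

system_prompt = build_system_prompt(
    RULES, "Section 4.2: Valve checks are performed twice per shift."
)
```

A real deployment would pass this string as the system message of whichever model API is in use, alongside the user's actual question.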
Now, one of the companies that just blew my mind with a Pacesetter application is a technology company that has basically built a portal for a shared services group to build agents for other people in the company. You can go to this portal (we'll be telling you more about it as we go through the process here) and say: I want to build an AI that does XYZ abc. Here's the spec of what I'm trying to get it to do. Please help me assemble the data, help me decide what the business rules could be, help me scope the project. And their IT department literally works with the business user, the business user rather than an engineer, to build the agent that they want. Now, this isn't a huge company, but it's a great idea. It's almost like an IT department helping employees build financial macros for planning or budgeting or expense accounts, et cetera, without letting everybody do it on their own. So there are a lot of ways to enable us to use AI.
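The intake side of such a portal could be as simple as a structured request form. This sketch is a guess at what that form might capture; every field name and example value here is hypothetical, not taken from the company's actual portal.

```python
# A sketch of the structured "agent request" such a portal might collect
# before the IT team sits down with the business user. Every field name
# and example value is a guess at what the intake form could capture.

from dataclasses import dataclass, field

@dataclass
class AgentRequest:
    requested_by: str                                     # the business user
    goal: str                                             # what the agent should do
    data_needed: list[str] = field(default_factory=list)  # datasets to assemble
    business_rules: list[str] = field(default_factory=list)
    scope_notes: str = ""                                 # filled in with IT's help

# Purely hypothetical example intake:
request = AgentRequest(
    requested_by="expense analyst",
    goal="Flag expense reports that break travel policy",
    data_needed=["expense report exports", "travel policy document"],
    business_rules=["Meals over $75 need a receipt", "No first-class airfare"],
)
```

The value of a form like this is less the data structure than the conversation it forces: the business user has to state the goal, the data, and the rules before anyone builds anything.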
[00:09:53] Then there are the issues of trust, security, bias, and legal risk. I think a lot of you are aware that the AI will behave in the manner of the data it has access to. If you're asking the AI to recommend salaries, recommend a promotion, recommend a decision, or make a decision, it's going to use the historic data you've given it. If, in the past, decisions were made a certain way, or people were promoted based on certain criteria, or there seems to be a pattern of bias inherent in your company, it will institutionalize those decisions. You need to be aware of that. If the system inadvertently gets involved in hiring or pay or decisions about promotions or opportunities, it could be biased on that basis. Those are things we need to teach people about, and then we need to teach people about the legal risk. There are two lawsuits, one being filed against Workday and another against Eightfold, by different job candidates who claim the systems have inherent bias; they want full transparency as to why, in detail, the AI made certain decisions, so they can audit those decisions. I don't know if you really can audit a lot of the AI agents we're building today, but at some point we will be able to. And there's a likely chance that if you make a decision with AI that has ramifications for safety or hiring or other personal matters, somebody will force you to audit that decision and explain precisely why and how it was made, because it may have created some damaging outcome, especially if it's a safety decision. Anyway, there are a lot of things like this that we want to teach people about, as users and as developers. And by the way, this whole topic of vibe coding makes it sound like it's pretty easy to just whip stuff up. Well, it kind of is and it kind of isn't. It's easy to start, just like it's easy to build an Excel spreadsheet or a macro.
But as any of you who have built a complex spreadsheet know, once you start adding things to it, and adding things to it, and adding things to it, a few little glitches come up, then a few bugs, and then you can't find the bugs, and then you need a debugger to find the reason you can't find the bug. These are decision-making tools, and there are various tributaries of decisions and branches that come up that you haven't thought about. If it's an enterprise corporate system, you're going to be working on it for a while. So if you are facilitating and encouraging people to build agents, I think it makes a lot of sense to do what this technology company does and have a team or a person help the developer of agents build something that's reliable and manageable over time.
[00:12:39] So it doesn't turn into a rat's nest of instructions. I don't think vibe coding is as vibey as it may seem; that's my assessment, in the corporate world at least. The bottom line, though, is that this is all of our responsibility, as individuals and as HR people. If you're in the training department, you really should not think about this as a bunch of courses. It's much more than that: it's experiences, it's testing, it's case studies, it's examples, and it's a lot of hands-on experience.
[00:13:09] The upside of all this is that this is amazing technology. We get the chance to push Galileo to the limits; we're trying to push it as hard as we can, and I would say most of us have found it is much more capable than we imagined. A lot of that is because of the amazing data in there, the careful way we've built and optimized it and crafted the content, and also the consistency of the content. One of the things that makes AI unpredictable or unreliable is when there are variations in quality, and variations of topics that don't relate to each other; the AI may try to relate things that don't belong in the same decision-making tree. So I do think there's value in making more specialized AI agents, and then super agents that communicate with the specialized agents, as we talk about in our architecture, as opposed to trying to put everything into one single, integrated corpus. But we can talk about that another time. Anyway, hopefully this is interesting. The final thing I'll point out is that the Department of Labor just launched a mobile, fairly high-level AI course for American citizens to try to get people up to speed on what this is. It's a nice little collection of courses, developed by a company called Aerist, and I've been through a lot of it. I encourage you to take a look at it if you're still sort of scratching your head about what's going on here; I'll put a link to it in the podcast. That's it for now. You guys have a great weekend.