Episode Transcript
[00:00:00] Okay, everybody, today I'm going to do a little education for you on the Microsoft Copilot because of some announcements we are making with Microsoft to bring Galileo into the Copilot using what's called fine tuning. Now, the way large language models and AI agents work is they use neural networks with billions of parameters, which are basically numbers that define the relationships between tokens, or words, that create the intelligence of our conversations with them. And that network is built when the agent is trained or initially created. That's where a lot of the math and calculus and science goes into these models: creating these neural networks in a way that they behave accurately and helpfully. So when a system behaves with a certain personality, or it says things in a certain way, that was trained based on the content it was trained on, the language, and the information, of course, and then the way that the model was tweaked or tuned to use that information. And when you use ChatGPT or Claude or Perplexity or any AI or LLM, whatever it may be, it has a model behind it that was trained on a certain corpus of data. And that data is embedded in the model, as is the behavior. So the model uses the data to answer questions. It also uses the data to decide how to answer questions, so the performance and behavior of the model are data dependent, and the answers themselves are data dependent.

Now, the way the technology works is that most of these models get external data into them through what is called RAG, retrieval-augmented generation. In other words, you ask the model a question and it looks through its internal data as well as the external data it has access to through RAG, and then answers it. So when you upload a file into Galileo or any of the AIs and it uses that file to answer questions, that's RAG. And it works pretty well, because what the RAG system does is vectorize the data you give it and extend the embeddings, so that the answers it would have created normally are complemented by the answers it gets from the external information. This is how you put HR policies or up-to-date safety procedures or up-to-date pricing or whatever into your model. The RAG process allows you to keep updating the system, but the core model remains the same.

Well, there are situations where you want the core model to embed the intelligence of the content. If you're a law firm and you have certain legal language and legal policies that you use as a law firm, you want your entire agent to be representative of your firm. If you're a compliance engineering firm or a construction firm and you have policies and procedures and philosophies, you probably want that to be embedded in your model. So when somebody asks a question, they don't get a generic answer from ChatGPT with a little bit of data from one of your policies; they get the answer as if it was your company speaking. So Microsoft uniquely has created a feature that allows you to customize your version of the Microsoft Copilot to take on the personality and behaviors that you want for your company. It's called Microsoft Copilot Fine Tuning. They actually announced this five or six months ago. It's been available for a little while, and some companies have started to use it. And in our work with Microsoft over the last year, within HR, in Microsoft's own HR department, we've been testing the use of Galileo through RAG versus fine tuning.
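To make that comparison concrete, here is a minimal sketch of the retrieve-then-generate pattern I just described. The documents, the toy embedding function, and the prompt template are all hypothetical placeholders I made up for illustration, not the actual Galileo or Copilot pipeline; a real system would use a learned embedding model and a proper vector store.

```python
# Minimal retrieval-augmented generation (RAG) sketch.
# embed() and the documents below are hypothetical stand-ins, not the
# actual Copilot or Galileo implementation.
import numpy as np

# A tiny "vector store": company documents we want the model to ground on.
DOCUMENTS = [
    "Vacation policy: employees accrue 1.5 days of PTO per month.",
    "Benefits enrollment opens every November for the following year.",
    "Safety procedure: report incidents within 24 hours via the HR portal.",
]

VOCAB = sorted({w.lower().strip(".,:") for d in DOCUMENTS for w in d.split()})

def embed(text: str) -> np.ndarray:
    """Toy bag-of-words embedding; real systems use a learned embedding model."""
    words = {w.lower().strip(".,:") for w in text.split()}
    vec = np.array([1.0 if w in words else 0.0 for w in VOCAB])
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

DOC_VECTORS = np.stack([embed(d) for d in DOCUMENTS])

def retrieve(question: str, k: int = 2) -> list[str]:
    """Return the k documents most similar to the question (cosine similarity)."""
    scores = DOC_VECTORS @ embed(question)
    top = np.argsort(scores)[::-1][:k]
    return [DOCUMENTS[i] for i in top]

def build_prompt(question: str) -> str:
    """Augment the question with retrieved context before calling the LLM."""
    context = "\n".join(retrieve(question))
    return f"Answer using this company context:\n{context}\n\nQuestion: {question}"

if __name__ == "__main__":
    # The resulting prompt is sent to the base model; its weights never change.
    print(build_prompt("How many vacation days do I accrue each month?"))
```

The key point of the sketch is that in a RAG flow the model's weights stay untouched; the external content only rides along in the prompt. Fine tuning, by contrast, bakes that content into the weights themselves.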
And sure enough, the fine tuned model of Copilot performs even better than the RAG model, because it turns the Copilot into an HR expert, a management expert, a leadership coach, an expert on org design, an expert on pay, an expert on performance, all of those human capital things that we might potentially ask an agent at work, including lots of things about the latest policy for this and the latest policy for that, and how do I get my vacation balance, and what are my options for benefits, and can I take vacation on Friday, and all that stuff.
[00:04:29] So what we found with Microsoft, and they're demonstrating this this week at the Ignite conference, is that the fine tuned model of Copilot with Galileo is exceptionally good at all of these human capital questions and answers, including stuff for HR. Now, remember also that Galileo includes not only 30 years of research from us, hundreds and hundreds of examples, case studies, maturity models, our capability model, but also skills data by job, thousands of job titles, salary benchmarking data, benchmark data on turnover, and benchmark data on span of control. Unlike ChatGPT out of the box, which might have some of that data from blogs or articles but you can't really tell where it comes from (sometimes you can, sometimes you can't), you would know exactly where all of this came from, and you could trust it, and you wouldn't inadvertently be getting answers from a blogger or a consultant or somebody trying to sell something that might have appeared in ChatGPT. And I've been testing this, and sure enough, the fine tuned model is exceptionally good at answering all types of questions. Now, we have 400 or more built-in prompts in Galileo to handle all kinds of things that people ask or want to do in HR and training, performance management and goal setting and organization design and leadership. And you can ask Galileo things like, "I'm behind on my goals, what should I do?" and it will give you a recovery plan. It will explain how to have a meeting with your manager.
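To give a rough picture of what a library of built-in prompts can look like under the hood, here is a tiny hypothetical sketch. The template names and wording are mine for illustration only, not Galileo's actual prompts.

```python
# Hypothetical sketch of a built-in prompt library; the template names and
# wording are illustrative, not Galileo's actual 400+ prompts.
BUILT_IN_PROMPTS = {
    "goal_recovery": (
        "The employee says: '{user_input}'. Draft a recovery plan with concrete "
        "next steps and talking points for a meeting with their manager."
    ),
    "manager_coaching": (
        "A manager says: '{user_input}'. Suggest how to coach the employee, "
        "set expectations, and follow up."
    ),
}

def run_prompt(template_name: str, user_input: str) -> str:
    """Fill in a built-in template; the result is what gets sent to the model."""
    return BUILT_IN_PROMPTS[template_name].format(user_input=user_input)

print(run_prompt("goal_recovery", "I'm behind on my goals, what should I do?"))
```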
[00:06:04] If you're a manager and you have an employee who's behind on their goals, it'll give you the same advice in reverse. So what Microsoft is demonstrating today at the Ignite conference is the fine tuned version of the Copilot that has Galileo embedded. Now, we are really excited. You cannot fine tune, by the way, ChatGPT or Gemini or Claude or Perplexity on your own; you have to hire a computer science team to do that. Microsoft has made this a feature of the product. It takes a little bit of effort from IT, and we can show people how to do it, but you can do it. So companies are starting to embed intellectual property and business processes into their versions of the Copilot. And this may be one of Microsoft's biggest differentiators going forward, in my opinion, because I want my company, over time, to have an agent that is representative of my company's policies, procedures, values, behaviors, customers, et cetera. It doesn't matter if you're a law firm or a professional services firm or an HR department or whatever you are. Every company is different, with different management philosophies, target markets, value propositions, and so forth. So I actually think this fine tuning idea is a big, big differentiator for Microsoft.

Now, it's a little bit awkward in the sense that you can't fine tune the model every day. So you're still going to use RAG anyway, because as content changes, some of it's going to be accessed through RAG just to make policies easy to manage and find. But based on the research that we've done and the testing that Microsoft has done, this is an exceptionally interesting option.

So what we're doing this week is Microsoft is demonstrating this Galileo fine tuned model of the Copilot and describing how it works within Microsoft for HR. And I think it represents a major differentiator for Microsoft in the AI agent market, but also a really interesting way for you to think about your agent or your assistant as an even more strategic tool than ever before. It isn't just a place to put transactions and policies and training; it can actually manifest the behaviors and business practices and personality of your company. Now, you kind of have to be an IT person or an IT department to do this. It's not technically difficult in terms of software, but you have to play with it to get it to do what you want it to do. But what we'll do when we're ready to productize this, which will be early next year, is offer advice and a toolkit on creating a fine tuned model of the Copilot around Galileo, if any of you decide to do that. And we'll test, between now and when we launch this, the differences between the RAG model of Copilot with Galileo versus the fine tuned model of Copilot with Galileo, because we'll have both available.

I can't tell you how excited we are to be working with Microsoft. It's an incredible team of people we're working with, and they've been very, very supportive. And of course they're also using it within HR at Microsoft; they're testing it in the real world. And this is kind of the evolution of AI. I've been kind of disappointed over the years, especially the last month or two, at seeing lots and lots of news come out about ChatGPT being incorrect, creating misleading answers, not attributing sources to the right time or date, using out-of-date information. I mean, the way that system works, having collected data from all over the Internet, that doesn't surprise me. That's just the way it is.
So all of its weights are designed around that corpus of knowledge.
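Since fine tuning means reshaping those weights around your own corpus instead, most of the practical work is assembling good training examples. Here is a generic, hypothetical sketch of that kind of supervised fine-tuning data. The JSONL chat format below is a common convention, not necessarily the exact format or tooling Microsoft's Copilot fine tuning uses, and the company name and policies shown are made up.

```python
# Generic supervised fine-tuning data prep sketch.
# The JSONL chat format here is a common convention for fine-tuning jobs;
# the exact format and tooling for Copilot fine tuning may differ.
import json

SYSTEM_VOICE = (
    "You are the HR assistant for Acme Corp. Answer in line with Acme's "
    "policies and leadership philosophy."  # company name is a placeholder
)

# Each example pairs an employee question with the answer the company wants
# the tuned model to internalize, rather than retrieve at query time.
examples = [
    {
        "messages": [
            {"role": "system", "content": SYSTEM_VOICE},
            {"role": "user", "content": "Can I carry over unused vacation days?"},
            {"role": "assistant", "content": "Yes, up to 5 days carry over into Q1; beyond that, days are forfeited."},
        ]
    },
    {
        "messages": [
            {"role": "system", "content": SYSTEM_VOICE},
            {"role": "user", "content": "I'm behind on my quarterly goals. What should I do?"},
            {"role": "assistant", "content": "Start with a recovery plan: re-prioritize your top two goals, then schedule a check-in with your manager this week."},
        ]
    },
]

# Write one JSON object per line, the shape most fine-tuning pipelines expect.
with open("finetune_train.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")
```

Once a model has been tuned on enough examples like these, the answers come out in the company's own voice without having to retrieve the policy text on every question.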
[00:09:46] So this is a big deal. And I don't know when, but the other vendors, you know, Gemini and Anthropic and others, will have to figure out how to do this too. But I think this idea of embedding IP and business processes and behaviors and data into your core LLM and fine tuning the LLM is going to become a pretty big, interesting area of AI going forward. Okay, let me stop there. If you have any questions, call us. I'm going to link to the article about this, and then this weekend I'll post another podcast on some other interesting directions going on in the world of AI. Thank you.