Episode Transcript
[00:00:00] Happy New Year everybody. Big year ahead in 2026. Lots and lots of things to think about. What I want to talk about today is three potential challenges we're going to face around AI. And just to let you know, in January, in a couple weeks, we're going to launch a big body of research which we call the 2026 imperatives, which will help you understand all this in detail.
[00:00:24] But there are some things to consider that are maybe not as easy, simple, or positive as you may have hoped for in this new world that we've entered. And I think this is a year that's going to be marked as the year of enterprise AI. That's certainly the way I see it, and that's certainly where we're positioning it. It's a shift from AI as a fascinating tool, to AI as a personal assistant, to AI as a corporate enterprise platform and set of solutions to make your company operate better. So let me go through the three topics, and then we'll conclude and have a lot of fun in the year ahead.
[00:01:06] Number one, the issue of cost.
[00:01:09] Now, if you've followed this, the last year has witnessed around $2 trillion or more of capital, debt capital for the most part, invested in the build out of AI infrastructure.
[00:01:25] And AI infrastructure includes data centers, chips, power plants, electric energy, nuclear, mini nuclear sites, companies that build infrastructure around data centers, and companies that have spun up to build alternatives to chips. Nvidia just spent $2 billion to buy a small company to get chips that are optimized for data centers as opposed to small computers, and almost $20 billion has gone into data labeling, which I'll talk about in a minute. So the AI platform providers, the big ones, Microsoft, Google, OpenAI, Anthropic, Amazon, are spending a lot of money, and a lot of debt capital has gone into this. And if you look at the GDP numbers for Q3, a growth rate of 4.3%, which is high, a very high percentage of that growth was AI infrastructure.
[00:02:21] And you know as well as I do that that stuff's not paying for itself at all. We're not seeing the return on investment of that yet. Most of you are not spending that much money yet. So that money is being absorbed by the vendors or by the debt capital risk takers who are investing in this infrastructure. Now, there are a lot of arguments comparing this to the build out of the railroads, the build out of the Internet, the build out of other infrastructures during the industrial age, where we had to incur these costs in order to later see the applications and benefits. And that could be exactly the same situation that's happening here. But the stock market's at an all time high, and a lot of people are capitalizing and making a lot of money on this. And two weeks ago, Oracle stock dropped by 30% or more because investors were beginning to say, wait a minute, I thought Oracle was a software company; maybe they're a real estate company, maybe they're a hardware company, maybe there's big debt here, because they had to take out, I don't know the number, some massive amount of debt to build the data center infrastructure they're doing for OpenAI. So a lot of things are changing economically. You could argue that the software industry, which used to be a very capital-light industry where your only real cost was humans, is now a very capital-intensive industry.
[00:03:45] So even your favorite software stock is now an infrastructure company.
[00:03:51] Or the infrastructure companies tend to become independent companies owned by private equity or debt providers, which then compete for capital with other providers in the market. Bottom line is AI is really expensive.
[00:04:07] When you take a photo and turn it into a beautiful, enhanced image, according to the research I was reading this weekend, that can take as much as 25% of your phone's battery charge for one photo.
[00:04:23] So imagine all of your employees summarizing meetings, creating emails, doing all sorts of personal productivity stuff, which, by the way, is hard to measure the benefit of. Whether that stuff is really making your company more effective or actually wasting your time is a little hard to tell. And how much energy you're consuming, where that energy is coming from, and who's paying for it are creating situations in the political sphere, which I'll talk about in a minute, and general costs that are going to change the way we consume this stuff. And what's going to happen, as most of you are beginning to see, is we're going to be paying for it by the token, not by the month. So you're going to get a bill based on how many tokens you consumed, not an annual or monthly fee like we pay for general software, which doesn't have these kinds of costs.
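To make that concrete, here is a minimal sketch of what token-based billing looks like; every price and usage figure below is a made-up assumption for illustration, not any vendor's actual rate:

```python
# A minimal sketch of token-based billing with made-up numbers.
# The per-token prices and usage figures are hypothetical assumptions,
# not any vendor's actual rates.

PRICE_PER_1K_INPUT_TOKENS = 0.003   # assumed USD per 1,000 input tokens
PRICE_PER_1K_OUTPUT_TOKENS = 0.015  # assumed USD per 1,000 output tokens

def monthly_cost(employees, requests_per_day, input_tokens, output_tokens, workdays=21):
    """Estimate a monthly bill when you pay by the token instead of a flat fee."""
    requests = employees * requests_per_day * workdays
    cost_per_request = (input_tokens / 1000) * PRICE_PER_1K_INPUT_TOKENS \
                     + (output_tokens / 1000) * PRICE_PER_1K_OUTPUT_TOKENS
    return requests * cost_per_request

# Example: 5,000 employees, each summarizing a few meetings a day.
print(f"${monthly_cost(5000, 3, 8000, 1000):,.2f} per month")
```

The point is simply that the bill scales with consumption, so the same "free-feeling" personal productivity habits turn directly into line items on an invoice.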
[00:05:07] So anyway, this cost equation is coming home to roost. The stock market is beginning to pay attention to it, and investors are working very hard to figure out how to compute what the real profitability of these big high flyers is. And the pressure is on for them to monetize this technology.
[00:05:29] The consumer market will very likely be monetized by ads or application fees to buy things. But the business market will be monetized by you guys, by us, paying for this stuff. So we're buying a new form of electric power, in a way, and we're going to have to consume it in a high-volume, or rather a high-return, application area in order for it to pay for itself. So that's number one. And that's going to push us to build what we call super agents, which are high value applications on AI. And I'll explain a lot more about super agents on January 2nd. The second interesting issue is the build out of these data centers.
[00:06:07] Many of you may not know this, but these data centers are massively big and massively disruptive to the environment, to the cities, to the locations where they're placed. And there is political pressure building. They consume a lot of energy and they do not employ a lot of people. So when a mega campus data center is built in Arizona or Chile or Virginia or Texas or Wisconsin or wherever it may be, the local community gets tax revenue, but they don't get jobs, because once these are built there are very few jobs. So what you end up with is this big thing with barbed wire around it that no one really knows what it is, making a bunch of noise, raising energy costs, and consuming a lot of water. So we could end up in a situation where the political systems of the world fight back against the AI buildup. I don't know whether that's going to result in any impact on us as corporate buyers. But I think it gets back to this issue of sustainability. And for those of you that care about the environment and these bigger issues, you're probably going to want to ask your providers to be more transparent about their policies and practices for the sustainability of their platforms. Because the software companies are buying power plants, they're investing in huge facilities, and they're keeping them off their balance sheets by leasing the facilities instead of owning them. But in many ways they're still responsible for them. So that's number two. Keep your eye on that.
[00:07:42] And then number three is really the big issue of the general intelligence, or quality, of the pioneer models, the foundation models. And I'm talking about OpenAI, Gemini from Google, and Anthropic. Those are the main big ones, and there will be others.
[00:07:59] And by the way, I'm going to recommend a really, really good book for you to read called Empire of AI, about OpenAI and its history. It's really worth reading; you'll learn a lot. What you find out when you actually dig into this and look at the history of the last five years is that these foundation frontier models are really not application specific domain systems. They're very generic. And 60 to 70% of the content that's been ingested and trained on comes from sources like Reddit. So there's anecdotal personal information in there. It's not expert tagged, it's not expert validated. So you don't really know if the model is right or wrong. I've told you guys this many times. I do a lot of testing of this. I've been using Google Gemini for economic analysis and labor market analysis, and it does a beautiful job of analyzing data that's not correct, because it doesn't know what's correct. So what the vendors are doing is spending upwards of $20 to $25 billion of their infrastructure budgets on labeling. And labeling is, in a sense, expert tagging. So there are now dozens of companies, including Scale AI and others, that specialize in finding experts to label data. So if you're a scientist or a lawyer or a mathematician or some other specialist and you want to make another couple hundred bucks an hour, you can sit around and label data. What fun is that? So these AI systems are not intelligent in their own right. In some sense, the word intelligence is the wrong word. They're taking intelligence from us and turning it into platform intelligence. And what that means is that these massive corpuses of data are very spiky, very jagged. There are areas where they're very accurate and areas where they're very inaccurate. And the vendors, other than Google, are not that big, and they're constantly looking for more data experts to label data. Now, there are lots and lots of issues you can read about in the book about how this works at OpenAI. But it's tricky, because if humans are labeling the data to make it intelligent, then the system isn't really more intelligent than the humans. And in some sense we end up with a bit of a loop here where humans label it and then the system becomes more like the humans and less like the AI that we want. There's a funny story in the book about Bill Gates coming to OpenAI when they were launching GPT-4, and he was not impressed with it at all. You'll hear the story, and you'll realize when you read the book how immature this market is and how relatively weak the validation of all these models is. Anyway, he says, I'm not going to be happy with this model until it can pass the AP Biology test. And so he leaves. And then OpenAI scrambles around, and they go to Sal Khan at Khan Academy and ask him if they can get access to all of his courses on AP Biology.
[00:10:57] And they train GPT-4 on the Khan Academy AP Biology courses, and then it passes the test. So is that artificial intelligence? You tell me if you think that's what artificial intelligence is supposed to be. Because they've run out of data, basically; they've run out of information. They scraped YouTube, they scraped LinkedIn, they scraped everything, they scraped Reddit. And now they need experts to go deeper. And there are all sorts of techniques for this. If you go into Gemini, you can see it sometimes gives you two answers and asks you to rate which one you like better. I'm personally concerned whether we all really understand what AI is anymore, to be honest, because if it's just a collection of humans tagging stuff, I'm not as impressed as I thought I was. But that aside, the quality of the data at scale is a big issue for us in the corporate world. And my recommendation and my strategy, for us as a company and for us as HR people, is to be specifically vertically focused with these systems and build AI solutions that are domain specific. If you're an insurance company, your AI claims system should be based on your claims data, your policies, your pricing, your customers, your business processes, your legal entities.
[00:12:13] You couldn't care less if some guy from Reddit thinks he knows how to buy insurance. What does that have to do with your business?
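To illustrate what "based on your claims data" might look like in practice, here is a minimal sketch of grounding an assistant in your own records rather than generic web content; the data model, the crude keyword retrieval, and the prompt format are all illustrative assumptions, not a description of any particular product:

```python
# A minimal illustration of domain-specific grounding: answer questions from your
# own claims records instead of generic web content. The data model, the crude
# keyword retrieval, and the prompt format are illustrative assumptions only.

from dataclasses import dataclass

@dataclass
class Claim:
    claim_id: str
    policy: str
    status: str
    notes: str

# In practice this would come from your claims system of record, not a hard-coded list.
CLAIMS = [
    Claim("C-1001", "HO-3 homeowners", "open", "water damage in kitchen, adjuster assigned"),
    Claim("C-1002", "Auto collision", "closed", "rear end collision, paid under collision coverage"),
]

def retrieve(question: str, claims: list[Claim], k: int = 1) -> list[Claim]:
    """Crude keyword-overlap retrieval; a real system would use search or embeddings."""
    words = set(question.lower().split())
    return sorted(claims, key=lambda c: -len(words & set(c.notes.lower().split())))[:k]

def build_prompt(question: str) -> str:
    """Assemble a prompt grounded only in your own claims data."""
    context = "\n".join(
        f"{c.claim_id} ({c.policy}, {c.status}): {c.notes}"
        for c in retrieve(question, CLAIMS)
    )
    return f"Answer using only these claims records:\n{context}\n\nQuestion: {question}"

print(build_prompt("What is the status of the water damage claim?"))
```

In a real system the retrieval step would be a proper search or embedding index over your claims, policies, and pricing, but the design choice is the same: the model answers only from data you control and have validated.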
[00:12:20] Right, so we're going to be building vertically trained systems for the highest return on investment. The same thing goes for CRM, for marketing, for sales, for manufacturing, for supply chain, for financial analysis. Now, the way I see it, this is an opportunity, because I'm kind of a data guy, and we have done this with Galileo. We're proving to ourselves that Galileo is spectacularly smart about the domain that it's trained in, because we know what it's trained on, and it's trained on a very consistent, unique, well labeled data set. And we take good care of that data. So you're going to have to do that too. So that's number three. And that leads me to really the big story for 2026, which is enterprise applications. Now, just to wrap up a little bit, the AI space seems to have gone through three significant phases here. The first was the birth of GPT, where we actually had something that we could chat with, and it wasn't just a machine learning model, it was a human-like experience. And we all personified it and turned it into a chatbot and decided that all of a sudden we could use it for sales and marketing and coaching and development and all sorts of interesting human applications.
[00:13:37] Then we renamed it an assistant and we said, it's your personal assistant. It's going to do your emails, it's going to summarize your meetings, it's going to tell you what to do, it's going to keep track of your time, it's going to do all your personal stuff. Also very exciting. But neither of those has a high return on investment at this cost. If the thing is free, they're fine, but if the thing is not free, those are things you may not want to pay for.
[00:14:03] So the third phase is agents, where the system is smart enough to solve problems for you and inform you of things you didn't know, so that you can do things you could not do before. And then we get to what we call super agents, where the system becomes what we call level three or level four autonomous, and it makes decisions on your behalf and gives you very smart advice. You know what this is like in recruiting, of course, those of you that are in talent acquisition, but it's also taking place in learning and development, career, pay, job design, and many, many decisions that we have to make in businesses. So this evolution from the fascination of a chatbot to the application area of a business solution has taken three or four years, but we're almost there. And for those of you that are following us, we're going to show you how to do this, because we've been tracking this very carefully, and we're building out in Q1 a blueprint of how to do this in the human capital domain. And the reason I bring up these three issues of cost, physical and environmental impact, and data is that those are going to be issues that matter to your strategies. We're going to spend a lot of time on this in 2026. I think this is going to be one of the most important and interesting topics in the human resources area. Of course, the vendor market is also fascinatingly interesting. I get messages every day from startups that are reinventing parts of HR. Many of them are from college kids or very young people that have never worked in HR, but they see the opportunities. So we have a lot of new ideas and creativity coming into our domain, and we're going to keep up with that. The big vendors, Oracle, Workday, UKG, SAP, are doing really interesting things; we're going to show you what they're doing, and we're going to teach you a lot about what we're doing in Galileo. I think Galileo is going to be bigger and bigger this year, as you'll see. The learning and development and TA markets are going to be revolutionized by this, and you're going to get really excited about what we're doing in super agents.
[00:16:07] So that's my New Year intro. Think about those three issues, read the book that I'm recommending, and I'll give you a few other resources to look at. We're going to have a really exciting year. I'm going to be all over the world once again doing workshops and meeting with you guys, and Irresistible is going to be spectacular; we are already well along filling it up for the first week of June in LA. I look forward to kicking off the imperatives in a couple weeks. Have a great New Year's, and I'll talk to you guys soon.