Is Superintelligence Nonsense? Why The AI Market Stumbled.

August 23, 2025 00:20:23
The Josh Bersin Company

Show Notes

The AI market boom took a pause last week when GPT-5 and some new research from MIT appeared. As you’ll hear, the expectations for AI are extraordinarily high and that causes a lot of bubble effects. Today I explain this bubble and where it’s going.

I also discuss our latest experiences with AI transformation in business, and why “task automation” is not really the big opportunity ahead. You’ll see that the Superworker effect changes organization design, jobs, and reward systems. Lots of opportunities for productivity ahead.

Galileo, the world’s AI agent for HR, is now the Power Tool you need to stay ahead. Global research, professional development, data, vendor analysis, and hundreds of HR and management tools, all in one AI platform. Get Galileo now.

Additional Information

The MIT Report And Why It’s Misleading: AI Definitely Pays Off

From 2017: Why Superintelligence Is A Myth

Workday to Acquire Paradox. A Bigger Deal Than You Think.

Get Galileo, The World’s AI Assistant for HR



Episode Transcript

[00:00:00] Hey, everybody. I want to talk about where we are in the AI superintelligence journey today, after lots of activity this week with Workday and Paradox, in the last couple of weeks with SAP and SmartRecruiters, and new information about the job market. So on the job market, it is slowing. Companies are hiring, but not at the rate they were before. It's pretty clear to me from the Fed's data and the data I see that we're in a slowing economy, the tariffs are starting to stick, and so thank you to the US for forcing the rest of the world to put on the brakes. For those of you that are in the financial part of your companies, I would anticipate you're going to be going through some rightsizing or accelerated productivity projects, and that includes HR. And there's no reason we shouldn't be part of this. So if you're not looking at a transformation of learning and development, recruiting, employee service, HR business partners, and other parts of the COE structure, you should be. We're experts at this. So if you're in the middle of this or you're thinking about it, I really encourage you to call us. We have diagnostic tools, we have benchmarks, and we're very smart about going through this, because we've been through it with so many other companies and we understand systemic HR and where things are going. So that's sort of the business context of where we are. And I would not be surprised personally if we have a pretty messed-up economy next year, but it's a little hard to tell. On the AI side, the stock market took a bit of a tumble this week because there was a survey that some analysts at MIT did and published, which said that 95% of AI corporate projects do not have a positive ROI. Now, it's a little bit of a ridiculous survey, because I think 95% of most IT projects don't have a positive ROI, because nobody tries to measure it. I don't know what the positive ROI is of a $250 user license to Salesforce, but you still need it.
I don't know what the ROI is of Outlook, but you still need it. I don't know what the ROI is of your financial system, but you still need it. So that type of analysis is kind of primitive to me, but I understand what they did, and I think that data validates what we're finding with all the companies we meet, which is that this is very new, and the technology in some sense is way ahead of our organizations and our structures of jobs and roles and business processes. So there's going to be business process re-engineering around AI. But since the AI technology is changing so fast, it's not an insert-and-replace type of situation, it's an experimentation situation, where every one of us is going to buy some of these new platforms and start using them, start re-engineering around them, understanding what they're capable of. And I will continue to say this: I think a lot of the best re-engineering you're going to do is going to be done by yourselves, not by buying things off the shelf. But the off-the-shelf solutions are getting better. So that was a big report in the stock market, and the disappointment of GPT-5 led to a lot of discussion about whether the whole AI investment thesis is a bust. And sometimes these markets get way overinflated. I mean, I remember very, very well the 2000 dot-bomb, and there have been others where investors pile into some space, because investors are both greedy and frightened. They don't want to be left out. When you're an investor, by the way, the interesting thing is you're not really only interested in making money. You're interested in making money at a rate that's competitive with your peers, because if your investment fund is not keeping up, you don't get any more money. So it's a very dog-eat-dog sort of market of people chasing each other to get into opportunities. And when an opportunity looks big, there's a fairly irrational piling-on process that happens. And that's going on in AI right now.
[00:04:11] I'm not saying that all this technology won't pay off, and there will be some huge winners, but just as with the Internet in general, it's a little hard to predict who the huge winners will be. We know that the hyperscalers will be, but we don't know what Meta's play is going to be. We don't really know where Microsoft's going to end up. It's a little hard to tell what part of the market Google's going to dominate, but they're going to play in a lot of it. We don't really know what's going to happen to advertising revenue in the AI market. You know, we don't know if it's going to replace psychologists. We don't know if it's going to replace lawyers. We don't know if it's going to replace salesforce.com or Workday. I mean, all of those things are worth discussing. But that's the reason why that data came out looking like that. My experience with many of you, lots and lots of companies (and by the way, you're going to see a big survey from us pretty soon on AI maturity), is this: we've always found that there's about 6 to 8 to 10% of the companies we meet with that are way ahead of the curve, very innovative, very creative and willing to try new things, and they pioneer incredibly interesting solutions. Moderna has been written about a lot, and we've talked to a lot of other companies that have built fantastic employee chatbots and, you know, new implementations of recruiting and so forth that have been highly successful leveraging AI in very creative new ways. So that's going to happen. But for most of you and most of us, it's very much an experimentation process: getting the company aligned to the projects that seem to have the highest return, and then a lot of individual productivity, too. We're far enough along with our work on AI transformation now that we have a model, a deck, and lots of examples of how this works.
And so for those of you that are involved in or leading those kinds of projects, reach out to us. We're not going to give that stuff away, but you can get it through our membership or through Galileo. And we'll be publishing a lot more on this. Also, at the next HR Tech conference in September, we're going to publish our work on the revolution of Talent Acquisition. So we'll have a cookbook for the revolution of L&D and a cookbook for the revolution of Talent Acquisition for you, which I know are two of the biggest inefficiencies, big spending areas, and important strategic areas of HR. And then later in October, we're going to launch something else with the name super in front of it that I won't go into in too much detail yet. So what we did learn about GPT-5 this last couple of weeks is that it's pretty good. It's not that much different from what we had before, but it's superior in the sense that it can find and decide which model to use for you. By the way, Sana does that, so you already get that feature in Galileo. Galileo has had that feature of model routing for almost two years. So that isn't actually a new idea, it's just new for OpenAI. And then, you know, people are basically saying, what happened to superintelligence? Where did it go? How come we don't have it yet? Well, I would challenge any one of you to find a definition of what superintelligence is. I don't even know if there is a formal definition anywhere. There's a bunch of people making stuff up. But in my mind, human brains are multifaceted, complex learning systems, and we are all different. Some of us are mathematicians, some of us are artists, some of us are communicators, some of us are very shy and introverted. Some of us are meticulously careful, some of us are big, broad thinkers. But because of the way the brain works, we're superintelligent, because we do all of those things. We have very versatile learning systems in our biological brains.
[00:07:46] And computers are extremely good at things that we're not always good at: calculations, math, numerical things, if-this-then-that. That stuff's coded into the hardware and the operating system. And then there's creativity driven through algorithmic content development, which is basically what these generative systems do: they look at billions of vectors of content and build new content based on prior content. So they're very good at mimicking and creating and copying, and at learning from a source of content what kind of content you want, but they don't interpret and think and feel and sense the way humans do. So when you throw around the word superintelligence, it's a little bit of a weird concept. Of course they're super intelligent. They're very, very good at things that we are not good at, because they're mathematical models and machines. But are they more intelligent than humans? There's certainly, in my mind, an argument that that'll never happen. They might appear to be, but somebody programmed them, and they're based on silicon. And the silicon doesn't adapt like a genetic sequence. The silicon does what it was programmed to do, and that's all it does. So we, as a society, have overinflated this idea of superintelligence, and I think it's a bunch of marketing fluff from the vendors. You know, the most prominent, of course, is Mark Zuckerberg, who's trying to convince everybody that Meta is going to build some kind of a superintelligence agent. But what it's going to be is a really good advertising model, where the agent's going to talk to you, and it's going to know so much about you that it's going to convince you to buy something. [00:09:23] I mean, it won't feel like that, but that's probably where it's going to end up. You know, the other thing, of course, that's underlying this bubble of AI is the cost of energy.
You know, the funny thing to me about the energy industry is that this is where I started my career as an engineer. In 1978, when I graduated from Cornell, I was a mechanical engineer and I studied thermodynamics and energy. That's what I did. And so I was going to go work in alternative energy, but there really wasn't much of an industry at the time, so I went to work for an oil company. And what you find out when you work in the energy industry is that it's, just like every other industry, very, very mature and sophisticated at what it does. So oil companies don't just pump oil out of the ground. They're very, very good at hydrocarbon science and building fuel that is very dense and can be very quickly transferred to motors, engines, diesels, and other electricity-producing equipment. And that's been going on for a long, long time. And so we have a carbon economy that generates electrical energy for us. Now, with that massive demand for energy in data centers, and all of the building and real estate and political, environmental, and social issues around data centers in everybody's backyard, we need a lot of energy. So we're going back to nuclear, and we're looking at alternatives. Nuclear energy has been around since I was a young guy. I actually had a really interesting interview with the Department of the Navy and met Admiral Rickover when I was an engineer just out of college. So the nuclear industry has been around for a long time. It's not a new technology. It got squashed in the United States for political reasons, and it's coming back. But you know, a lot of the hype about AI is investors trying to invest in this nuclear stuff and other energy companies to build the electrical infrastructure to power all this AI. So again, this is another bubble where, you know, if you want to start a nuclear energy company right now, you'd probably get a billion-dollar market cap if you know what you're doing, and it'll settle out. My gut tells me that the algorithms will improve pretty fast.
And so the amount of data processing capacity we need will not go up at an exponential rate forever. It is right now, just because of the expansion of AI. But you know, we're building an AI-centric energy, distribution, and software economy now. [00:11:47] The final thing I want to talk about, because I'm on vacation this week and I have a little bit of time to think, is where this is taking us culturally. The idea of a superworker that we've been talking to everybody about lately is really about taking the industrial work model and transforming it into a highly empowered model for work, where individuals have massive amounts of information and processing power and intelligence and can do jobs at orders of magnitude higher levels of productivity than they could in the past. It's a conceptual idea that's pretty easy to understand. And then you have all the cultural issues around it. What does a manager do? What does a leader do? How do you organize the team? What happens to all of the pay issues, the promotion issues, the skills issues, the hiring issues around that? Well, you know that's going to happen whether you like it or not. There's a study that came out from KPMG this week where the KPMG folks did a survey of their young employees, their new hires, and 60 to 70% of them are firmly convinced that AI is going to be an easy-to-use, highly empowering productivity tool. They're not worried about it at all. So the more we get younger people into the workforce, the faster this is going to happen. Then there's this cultural issue of how do we build a culture that facilitates superworkers without constraining the superworkers and telling them what to do in the old model? And there's going to be a million ways to think about this. Ideally, to me, it gets down to job and organizational design. This comes from our org design research, by the way, and you can take our org design superclass in Galileo.
If you design the business processes around your company in a meaningful way for customers and products, then the roles and jobs of the individuals supporting them are clear, and they're defined by accountability, not job titles or tasks. [00:13:44] This is a big idea that Kathy and I worked on a lot three or four years ago in that research. What defines org design is not your job title or your span of control. It's what you're accountable for, specifically, exactly, in detail. If you know exactly what you're accountable for in a business process, I don't care how you do it. Use the tools we have, but just do a good job and we're all going to be happy. And if you're an AI guru, you can do it with AI. Fantastic. If you're not an AI guru and you don't want to use AI, but you can do it even better, fine, that's okay. We're not going to tell you what to do. We're not going to give you a long list of tasks and say, do this, do this, do this. We're not going to micro-design your job. We don't have to. We're expecting you to figure it out. And if you can't figure it out, then you need to be accountable for less. If you can figure it out, well, then you can be accountable for more. So in some ways, the definition of a superworker company is lots of clear accountability, and then honest, genuine, thoughtful, positive feedback on the accountability deficiencies or gaps or mistakes or errors or shortcomings that obviously come up. That's the way I think about my job as the CEO of our company. When I see something, you know, underperforming in my opinion, I think about my role in making that better, and I take responsibility for it and try to do whatever I can to improve whatever performance I'm responsible for in my role as the accountable leader. And if you're the head of marketing, or you're the head of sales, or you're an individual doing recruiting, or an individual doing data analysis, or whatever it may be, you take the same role. Now along comes AI.
If you refuse to use it, or you don't know how to use it, or you don't learn how to use it, or you don't like it, or you're afraid of it, or you're waiting for somebody to tell you how to use it, your level of accountability might go down and you might make less money. That's what happens economically to wages. We don't, you know, give people raises just because they've been around for a long time. We give them raises because they've delivered a lot of outcomes for all of us, and it's worth it to the company to pay these people more money. [00:15:59] And they can, you know, obviously take advantage of that in the outside market. So the superworker organization, and the superworker individual, and the super manager, which we'll talk about later, are about levels of accountability and empowerment. And there will be structure and there will be guardrails. We're not going to let everybody do whatever they want. There's going to be quality rules and safety rules and process rules and metrics around it, of course, but we're going to let go of the top-down "let me tell you what to do" model. If you go into a Starbucks or a coffee shop these days and you, you know, go to a barista or a young person who's making your coffee, they don't have somebody looking over their shoulder telling them what to do. They've been taught, they know, and they do it. And they do the best they can. And sometimes they do a great job and sometimes they don't, but they're accountable. And that's in some sense a mini version of a superworker situation, where the AI is the equipment that they use to make your coffee. Software engineers all have AI built into their toolsets, or will. Salespeople will too. I mean, it's much more complex in sales, because it's a much more complex job and much more ill-defined in some ways. Marketing, same thing. Supply chain. All of the jobs we have, especially in HR, are like this.
And when you go do an org design project, and you sit around with a group of people and you're trying to figure out how to redesign some business process for AI, you start with the process, you don't start with the jobs. I'm not a huge fan of just taking jobs apart and automating little tasks for people. They can do that on their own. We can give them tools to do this, and we can show them how to use the tools. But that's not what business re-engineering is about, and that's not what AI transformation is going to be about. There will be some of that, but that's not where this is going to go. You start with the business process, you optimize it around your go-to-market strategy, your customer needs, your competitiveness, your pricing strategy, et cetera. And then you decide, based on the platforms you have, what the human accountability rules will be. And we'll show you that. For those of you that are interested, just ask, and we'll get you involved in looking at this engineering process. We're not going to give it away this time, because some of this stuff is just too valuable and we spent too much time and money figuring it out. But we're happy to share it with those of you that are clients. So anyway, I think the superintelligence thing, and the stock market, and the bubble on energy stocks, and the bubble on AI stocks is just exactly what I would have expected. It's kind of the way things go in the economy that we live in. But the longer picture is we're all very fortunate to be in a world where at least our professional lives are going to be incredibly interesting and exciting in this new world. I won't talk about politics. I feel like the political world's going in the opposite direction, and I have some ideas as to why that is, but maybe I'll keep those to myself. We can talk about that personally if you'd like. And we're going to be at HR Tech in Vegas. We're going to be at Unleash.
I'm going to be at SAP Connect, and probably at Rising in Europe; the US one we will not be at, but Amy will be there, because we're doing an announcement with Workday. And you're going to see a whole flurry of announcements about Galileo in the fall. All the work we've been doing with Microsoft and these other vendors will come to pass, and you'll see that. By the way, you know, the way we go to market, for those of you that do business with us, is more and more around Galileo. Galileo is an integrated suite of agent and server and learning technology, all built together. And it is the best empowerment tool you're going to find in all aspects of HR, I can guarantee you that, because I know what's out there. So if you haven't tried it, just go get your hands on it, and call us for help if you'd like some online assistance. We have a whole success center and an ambassador program, and our conference is going to be focused on Galileo next spring also. [00:20:04] Okay, have a good weekend, everybody. [00:20:07] And congratulations to Workday and Paradox. Even though Workday had kind of a bad earnings announcement, the Paradox deal is really good for them and for Paradox. And I'm sure you're going to see some exciting things coming out of Workday in the coming quarters. Talk to you again soon. Bye.
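The "model routing" feature mentioned in the episode (a front end that decides which underlying model should handle each request) can be sketched as a simple dispatcher. This is a minimal illustration with made-up model names and a made-up keyword heuristic, not the actual routing logic of GPT-5, Sana, or Galileo:

```python
# Minimal sketch of model routing: inspect each prompt and pick a model tier.
# The model names and the heuristic below are hypothetical, for illustration only.

def route(prompt: str) -> str:
    """Return the name of the model tier suited to this prompt."""
    reasoning_markers = ("analyze", "prove", "step by step", "compare")
    text = prompt.lower()
    # Long or reasoning-heavy prompts go to a slower, stronger model;
    # everything else goes to a cheap, fast default.
    if len(prompt) > 2000 or any(marker in text for marker in reasoning_markers):
        return "large-reasoning-model"
    return "small-fast-model"

print(route("What time is it in Tokyo?"))        # small-fast-model
print(route("Analyze these two org designs"))    # large-reasoning-model
```

Production routers typically use learned classifiers and cost/latency budgets rather than keyword lists, but the dispatch pattern is the same.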

Other Episodes


September 26, 2022 00:22:05

Understanding The PowerSkills Economy And How Technology Makes Work Better

In this podcast I dig into the low unemployment rate and the new role technology plays at work. We're no longer in a world...


September 16, 2021 00:25:13

HR Technology: The Investor's Perspective. A Conversation w/Nari Ansari

In this episode, I talked with Nari Ansari, one of the general partners at TCV. We talked about lots of important topics in the...


November 08, 2024 00:18:36

Diversity, Inclusion, DEI, and Wellbeing In The Trump Era E197

This week I discuss the issue of DEI (must it DIE?), inclusion, and corporate culture in the Trump Era. We just completed a 1.5...
