Is AI About To Bite Us? Debunking The Three Fears About AI.

October 23, 2025 00:13:24
The Josh Bersin Company

Show Notes

This week I met with dozens of HR leaders in Europe, and there were many discussions about the risks of AI: AI is going to take all our jobs away. AI is going to get out of control and ruin our lives. AI is going to become smarter than humans and overtake us. AI is going to make us all more stupid because we won't have to think.

In this podcast I debunk these fears and try to explain what you can do to calm them. Yes, AI is new and somewhat unpredictable, but if we treat it well (as a society, as users, and as builders), none of these fears will come back to bite us.

Like this podcast? Rate us on Spotify, Apple, or YouTube.

Additional Information

Wakeup Call for HR: Employees Trust AI More Than They Trust You

What Happened To Our Sense Of Trust? (podcast)

The Rise of the Supermanager: People Management in the Age of AI (research)

Galileo for Managers: The World’s AI Assistant for Leaders at all Levels



Episode Transcript

[00:00:00] Speaker A: What if it turns out that all this wonderful AI turns around and bites us? I want to talk a little bit about this potential risk that we hear about all the time from the supposed experts. And it was interesting. This week I had a dinner with a bunch of HR leaders, and quite a few of them were insistent that the AI revolution is insidious and very, very dangerous.

[00:00:24] Speaker B: And here are their criticisms. Number one, several of them believe that AI is much smarter than human beings and therefore we can't control it, including one of the women from a very experienced software company in Ukraine. The second argument is that AI is going to wipe out the job market, there will be no more jobs, and all of us will be unemployed. And the third, of course, is that AI will go rampant and destroy us with cybersecurity risks, data breaches, and mistakes. So let me discuss these issues, because they're hot on my mind. First of all, on the issue of mistakes, I have to admit ChatGPT in particular makes a lot of mistakes. Now, I might be a little bit of a demanding user, but I use it for a lot of data analysis and a lot of research, and it makes mistakes constantly. If you ask it questions about the job market or employment or wages, any form of economic data, it seems to scrape articles, not sources. So it'll give you a number that has no source, and when you ask it for the source, it won't know where the number came from. Eventually you realize the source was somebody's blog post, where the author either pontificated or estimated something from somewhere else that has no source of its own. And it's so confident in its answers that you have a tendency to believe what it says. And then when you catch its mistakes and point them out, it apologizes and gets a little bit sheepish in its conversation.
So there's something in ChatGPT in particular, and definitely not in Claude, where it's trying to please you, even in research mode, and therefore giving you incorrect answers just to make you feel good rather than doing its homework. Now, I don't think Gemini has that problem. I don't think Claude has that problem. But ChatGPT does. So in our case, in Galileo, we have deprecated ChatGPT, and the standard engine is now Claude. What that means, of course, is that we can't trust these things out of the box. And so a lot of our jobs and work in HR is going to be data management and quality control to make sure that this doesn't happen. Is that a risk? Yes. Has that always been a risk in data systems? Yes, it's been a risk in data systems since I worked at Sybase in the early 2000s. But that's number one. Number two, we had a very heated debate about whether AI is going to eliminate all of our jobs and ruin our careers. And the argument that I was making, which I will mention right this minute, is that AI is a tool. It's an automation tool, just like every other automation tool we've ever seen. And it's miraculous. But the other ones were miraculous when they started, too. I'm old enough to remember when voicemail hit the market. Before voicemail, you used to get little pink slips from your secretary or the call center when somebody called you and you weren't there. Then there was email, then there was the PC, then there was Excel, then there was the Internet, then there was cloud computing, then there were mobile phones. And every one of these devices or technologies was miraculous at the time. I mean, completely miraculous. And every one threatened all sorts of job changes and job destruction. And yes, some jobs do go away. The jobs of caretaking for horses went away with the automobile. But look at all the new jobs that were created around the automobile industry.
And look at all the new jobs that were created in IT when we moved onto the web. Well, there will be many, many new jobs created around AI. Data management, of course, is the one I think will be the biggest. But there are going to be jobs training these things, jobs monitoring these things, analyzing the trends in your workforce with data you've never seen before, creating complex prompts, teaching other people how to prompt, and creating applications. I mean, you can build applications with AI without knowing anything about coding or software by using one of these vibe coding tools or just prompting. There are going to be a lot of new jobs created. And if you're a creator, like a web designer or a graphic designer or a writer, as I am, or a marketing person, these tools are going to make your job even more exciting, because your human creativity and ingenuity will build on top of the productivity advantages of having these things. Now, the reason this particular group of people was so concerned is that they were all recruiters. And what they were basically saying is that, as many of you know, many candidates applying for jobs now use AI to fake their resumes and fake their tests. And I said to one of them, who does a lot of recruiting in software engineering, that you should make your tests ten times harder. Rather than asking people to write code that solves a particular problem, which can be automated with ChatGPT or Claude, ask the candidates to build a whole application, a whole system, just to apply. Because if it's that easy to do, they should be able to do it using the tools. And you don't want to hire an AI engineer or a software engineer who doesn't know how to use the tools. As far as creative jobs go, I'm a creative person.
I've been able to use Sora and some of the graphics processing tools in ChatGPT, and various others, just to build simple graphics to make my marketing a little bit better. I use many of these tools just to get a better sense of what's going on in the marketplace, to get information on the job market, et cetera. I don't buy that all these jobs are going to go away. Now, there will be disruption. Lawyers that just look things up and give you a citation? Great. We don't need to pay somebody $700 an hour to do that. Doesn't bother me a bit. Those people can do the more analytic, high-value, negotiating kinds of activities that we need lawyers for. The same thing is true in data management. I think those of you who work in people analytics or other forms of data analytics are going to be a little bit scared. But you're going to learn how to use these tools, and you're going to be able to do much more sophisticated analytics than you ever could before. Software engineers, obviously, are going to be superpowered. I just don't buy this idea that it's going to wipe out the job market. As I've mentioned multiple times, and had to virtually argue with these people at the table, we as humans are extremely creative, innovative, curious animals. We add value on top of things that we discover. So when we discover a new tool like AI and use it, we're going to learn how to manage it, and we're going to learn how to make things from it. Look at the creativity on YouTube or TikTok. Not that it's always the most positive stuff, but that's just the beginning of what people are going to do with AI. So I just don't buy it. And if it does reduce the size of our HR departments or the number of recruiters we need, so be it. We're just going to have to deal with that; maybe we shouldn't have had so many staff people in the first place. Third issue: rampant, runaway AI wrecking the world.
You know, it's funny. I'm not a mathematician or an AI engineer, but I'm a relatively well-educated technology person. Elon Musk and Sam Altman have been discussing, and fearmongering about, this risk for a long time. And yes, every now and then there is a cybersecurity breach or some other form of risk that seems to arise from AI. I believe this will keep happening. But I also remember that we didn't have spam before email, and we didn't have clickbait before social media. Every technology that we eventually adopt attracts bad actors. Our mobile phones have viruses in them; our email has viruses in it. Somehow, people who just want to make a quick buck find a way to misuse these things and do bad things. And of course they will with AI. However, the providers of AI, the ecosystem around AI, the hardware providers and others, are going to have to build tools around it. And that's unfortunately just the way technology works. I would imagine that when the first automobile was invented, some miscreant probably started driving one of these cars around and banging into things when he was having a bad day. And we had to put bumpers on them, and then we had to have roads, and then guardrails, and then policemen and speeding tickets and all sorts of things that didn't exist before. The same thing is going to happen here. Now, I'm a little bit upset that the Trump administration and David Sacks seem to be totally against any form of regulation. But luckily, the state of California has already managed to pass some pretty good regulations on AI, holding some of the vendors accountable for these kinds of risks. And a lot of this comes down to liability: who is going to take responsibility for this issue of misuse and security? I think all of us who use AI systems are going to be very concerned, and there's going to be a lot of political pressure for controls, laws, and other accountabilities around these tools.
Will there be some nastiness to it? Absolutely. You can already see it with Sam Altman talking about putting porn or other forms of adult content on ChatGPT, and the potential risk to young people or others. But I just don't think we can stop that kind of activity. This is, in some sense, the dark side of humanity. And those of us, the 99% of us, who don't want to do these kinds of things will simply use the tools, or build the tools, or find the ways to avoid this. If AI were as dangerous as everybody says (I think particularly of someone like Elon Musk here), why has nothing truly catastrophic happened? I would say that if you look at the accidents that have taken place in driverless cars, most of them seem to be coming from Tesla. I haven't heard of that many from Waymo. So maybe the big risk of AI is that providers or engineering companies simply go too fast and don't spend enough time testing and securing their systems against human harm. And of course, we can debate that endlessly. Those companies are going to learn over time how careful they need to be. And we as consumers have to be aware of the limitations of this technology. So let me conclude with that. For those of you listening to this podcast, business users, HR professionals, IT people, leaders: you have to be aware of these limitations. We have to understand that this is a probabilistic technology. The quality of the data drives the behavior, the activities, and the decisions these things make. If we put bad data in, it's going to do things that we may not expect or may not want. So we have to be really careful about how we take care of, train, support, and fill these systems with quality data. We have to make sure that users know how to use them, know how to prompt them, understand how to ask good questions, and apply complex problem-solving methodologies and ideas, so we don't just believe what it says.
And we had a long debate, by the way, about the education system. I think in the case of education, or this problem of job candidates gaming the system, let's raise the bar on what we expect from people when we use these things, so that they can't make silly mistakes or trick us with answers that are incorrect. And then in the areas of misuse, cybersecurity, and other forms of privacy protection, I hope state governments regulate this (I don't have a lot of faith in the federal government at the moment). I hope we have liability laws so vendors are liable for the quality of their systems, which we did not have in the social media era. And I hope the engineers working on these things get rewarded and paid well for the security work that they do. Building secure systems on a probabilistic technology is probably a bigger challenge than any security problem we've ever had. There are smart people in the technology industry, and they will work on this as long as we reward them for their work. As far as jobs go, the more you learn, the more you will protect your role, your job, and your career. So let's become supermanagers and superworkers, and make sure that we are the ones controlling this technology. Because these are tools. They are not people, they are not humans, and they are not close to our level of capacity. They're just very, very good at what they do. Okay, I hope that's a little bit of interesting thinking for today. See you guys later.

Other Episodes


March 15, 2024 00:24:01

Why The 4-Day Week? Because 33 Year-Olds Now Run The World.

Why are we having a national discussion about the 4-day week? Because the workforce has radically changed, and 33 Year-olds now run the world....


July 04, 2020 00:17:26

Employee Engagement Is Skyrocketing. What's Going On?

Employee Engagement is at an all-time high - yet the Pandemic rages out of control. What's going on? Employers have taken a whole new...


May 20, 2023 00:21:48

Yes, We Have Entered The Post-Industrial Age. And That's Why Work And Life Are So Different.

In this podcast I describe our newest thinking about talent, leadership, and HR in the Post-Industrial Age. Today's economy, and all it brings to...
