Important Issues Of Leadership, Trust and Culture Behind Big AI Companies

The Josh Bersin Company
April 17, 2026 | 00:19:52

Show Notes

This week, as Ronan Farrow’s exposé on Sam Altman was published, I want to sensitize you to the fact that AI companies are run by humans. And this means that what we buy, and how it works, is highly dependent on the leadership, culture, values, ethics, and personal motivations of these young, ambitious executives.

Obviously this is nothing new, but in this case OpenAI and Anthropic are by far the fastest-growing businesses ever created on planet Earth. So their ability to steer, direct, and prioritize their investments makes a huge difference in how well they meet the needs we have in our companies.

I have learned over the years that great, long-lasting tech companies are among the most tumultuous businesses to lead. Not only are the personal economic payoffs huge (I live in a community with lots of Anthropic millionaires), but these businesses are brutally competitive, and the cost of a missed opportunity can sometimes be fatal.

To be clear, I admire the people in this space, but as the AI vendors play larger roles in our lives and careers, we have to think much harder about their leadership and culture. As you’ll hear, many others (analysts, the stock market, politicians) are also working on this, and I think we’re likely to see some of the most interesting business “drama” play out in the coming years.

As a consumer and buyer of AI, I encourage you to investigate the leadership, culture, and motivations of the vendors you do business with – it really matters.

Additional Information

New Yorker Expose on Sam Altman

Interview with Ronan Farrow, author

Irresistible: The Leadership Culture that Works

The Value of Values When Organizations Lose Trust

Get Galileo: All Our Research and Leadership Academy In AI

 


Episode Transcript

[00:00:00] Good morning everyone. I want to talk about a topic that all of you are going to be really interested in, and that is the human side of the companies building AI. We talk all the time about AI being human, but that actually doesn't matter quite as much as how the AI is built. And for all of you that are in HR and leadership and consulting and the things that I do, the culture and ethics and organization of the companies building it is a big topic, and it's all coming out right now. I'm going to give you a bunch of examples, and it's going to make you think twice about which tools you use, why you use them, and how you feel about using them.

So the first example is the fascinating story in the New Yorker about Sam Altman by Ronan Farrow. I'll link to it, and I really suggest you read it. It's really interesting. What you basically see is a leader who you probably wouldn't want running your company for a lot of reasons. I actually have met Sam Altman and I know his brother. He's a very entrepreneurial type, very wealthy, very focused on money and making money, very competitive. And you can sort of see from the story how his sense of truth and ethics and honesty is probably lacking. It makes you wonder what that implies about the OpenAI business and the OpenAI product. For example, we know that OpenAI has hoovered up a lot of information without paying for it, whereas Anthropic actually paid a lot of us for the content that we have. That's a tiny, tiny example.

[00:01:38] We also know that when Pete Hegseth came down on Anthropic and Dario Amodei at Anthropic negotiated with him, Sam was taking the business away from him behind his back with no ethics whatsoever. We also know that the Microsoft executives who read the article were completely kept out of the loop during the drama in the OpenAI board fight when Sam was fired in the beginning. So you have to say to yourself, as, say, an investor or an analyst or somebody who's thinking about the bigger picture here: is this company a well-run company? Will it survive? Will it adapt? Will it grow? Will we trust it? And you have to wonder if the answer might be no. We also know a lot of people have been leaving OpenAI, and they've had a big executive shakeup. One of the comments in the article that really bothered me was from Greg Brockman, who I think is the president, essentially saying he is doing this to make billions of dollars, which they're very capable of doing because they've managed to capture so much investment capital.

The second example of the human side of AI is Microsoft. I wrote a big article about Microsoft on Substack, and I'm going to move it over to the main blog this week. What Microsoft recently did is change the organization structure of how Copilot is developed. There are quite a few articles about it, not super detailed, but you can understand it pretty well. In the early days of the OpenAI relationship with Microsoft, Microsoft took the technology, hired a new head of AI internally, and started giving it to the product groups to build products. And so we ended up with a whole bunch of Copilots: Copilot in Dynamics, Copilot in Office, Copilot in Excel, Copilot in PowerPoint, Copilot Studio, and then a whole bunch of other pieces. That's the way Microsoft works. Microsoft is a very product-centric, innovative company, and these product groups are very savvy software people who build things that are very creative.
And when you visit Microsoft and see what they're doing internally, there are hundreds of things they've built that they haven't even released. So that very much follows their culture. Well, of course what happened was the volume of customers using Copilot went up. There are at least 15 million that they've talked about, probably more, and we have a lot of clients that have standardized on it. But the technology implementation isn't consistent, so it was hard for people to figure out what to do with it. So they made a very important, and I think very strategic, move: they consolidated the Microsoft Copilot product into one group with three or four main leaders watching over it, one of whom is Ryan Roslansky from LinkedIn, plus two product people. And they're now building a new vision of Copilot as a container for multiple models, which is great, because now they can stick Anthropic in it, and if OpenAI goes haywire they can take it out and do other things, because there are going to be many models. That is an organizational success that is all about people. It had nothing to do with the AI itself.

The third example, of course, is Anthropic. You all know about Dario Amodei and what he's done with Pete Hegseth and the US federal government. [00:04:51] Last week or the week before, they released a model called Mythos that's really good at identifying cyber vulnerabilities. [00:04:59] So if it gets in the hands of, you know, the average hacker, it's a great way to break into a bunch of banks and do whatever they want to do. So they deliberately decided to only give it to certain companies in a very limited way. And by the way, as soon as they did that, Altman announced something similar, which may or may not exist, called GPT crypto or something, to give you a sense of the way they think over at OpenAI. And I think that shows the sense of morality and ethics that they have over at Anthropic, where many of the engineers who are leaving OpenAI are going.

There are many, many more examples of this, and I talk to a lot of software companies about this. [00:05:41] I know what Workday is doing now that they've bought Sana; there are a lot of interesting changes in the human capital dynamics inside of that company. I know what UKG is doing. I just talked to the CEO of HiBob, which is an extremely successful company. I've been working with HiBob since they were founded; they started out as an HR, human capital, sort of HCM platform that's very integrated and easy to use. They're now a multi-hundred-million-dollar recurring revenue company (I won't give you the number). They grew 51% last year, and their AI is one of the most integrated systems I've seen. And the reason for that, I know this, is that the CEO is a very, very good leader who has brought together a lot of people that know how to work together, focusing on value, not just activity.

So the reason I bring this up is, in my experience as an analyst and a researcher and an advisor and a consultant all these years, what I've seen is that the culture and the leadership and the behavior of a company's people really are everything. If you have a great technology and you don't know how to use it, or people lose trust in it, or you overprice it, or you treat your customers poorly, your company doesn't survive over the long run.
Every turnaround that you look at, whether it's Nike and what they're going through, what Starbucks has been going through, the different airlines and the way they behave so differently at a leadership level, the fiasco at Wells Fargo that went on for many, many years and nobody stopped (we observed that a little bit in the people we talked to at Wells Fargo), or the Enron situation: they're all about people. They have nothing to do with the technology or the tools or the product itself. It's about how the company is run.

And the reason it's a really big issue here is that this technology is very malleable. I think at this point in time, three or four years into this, it's not that hard for a really bright kid out of college to get their hands on the source code of an LLM, because there are a bunch of open source ones, and do something with it. That's basically what Meta did.

And by the way, let's talk about Meta. I've met people that work at Meta, but I've never really met the executives. But you've seen their behavior over the years, going back to the Cambridge Analytica scandal. A lot of stories have come out over the years. Now they're building AI into the ad engine, they want to build it into the glasses, and they're building a consumer product. We know their behavior, and we know what just came out in the lawsuit about how addictive their tools are, and how they knew they were addictive yet didn't want to let anybody else know that they knew and pretended like they didn't know. These are big ethical, behavioral issues.

Now, you know, I'm not idealistic about this at all. A lot of companies have tough-minded, battle-oriented CEOs, and they do things that you wouldn't do in order to grow. Some of the reason they do that is personal, but a lot of it is that the pressures on a company are way, way bigger than you can imagine. If you're the CEO of any company that's either pre-IPO or publicly traded, you're going to do anything in your human power to avoid being embarrassed. I have examples of this in my own personal life, which I won't tell you about, where somebody who's a really good person sacrificed their ideals in the interest of a financial situation or a stock market situation or a public situation. It happens in politics all the time, obviously. And so at the top of these big AI companies, when there are people bending the rules or sacrificing their values or pushing the envelope on safety, bad things could happen, because this stuff is easy to change.

As you read in the New Yorker article, there are supposed to be safety groups that test the AI against situations like, will it teach you how to build a nuclear bomb, or whatever. And the story that came out, at least, was that OpenAI may have done away with all that. It appears to be true, but it's a little hard to tell. We know that Anthropic, from the stories that have come out about them, actually has a lot of that going on. They do a lot of various black-box types of tests against the model to make sure it's not doing things they don't want it to do. Anthropic has talked a lot about the system's constitution, and I think the behavior of the company in the public markets has been, to me, very honorable and ethical and warrants our trust. But with the others, it's hard to tell. Microsoft, of course, is a very ethical, honest, trustworthy company, as are big companies like IBM.
But there are a lot of small companies in this market too. So for those of you that are out there shopping around for tools and talking to vendors, I think you need to just be aware of these things. Because in the case of OpenAI, Microsoft is acting on our behalf: they're taking OpenAI tools and selling them to us under their name. So they'll do the best they can to keep them honest about what OpenAI is up to. But, you know, there could be others. [00:10:48] And I think there are lessons to be learned here, just like in every other business story you ever read in Fortune magazine: leadership and culture really, really matter.

Now, the final point I'll make is that for those of you that have worked in startups, that run startups, that do business with startups, the startup process of getting a company off the ground is by nature filled with risk and sometimes exaggeration and storytelling and future visioning. You can't get investors to give you a bunch of money unless you have a vision for the future. So you sort of have to make things up, or at least describe the future clearly. I mean, I do this a lot for you guys, and I try to do it in a very pragmatic way. So the leaders of any company that's relatively young have to be pushing the envelope. They have to take risks. They have to challenge their teams to do things that may seem unsafe. They sometimes have to ship products that are not ready. They have to oversell and over-market what the product does, otherwise they get taken out by a competitor. They sometimes have to talk about the problems of their competitors' products in a way that you might normally consider to be slightly unethical. They sometimes steal people or information from their competitors. You saw all the stuff that came out about Deel and the other companies in the global payroll business. [00:12:17] I don't think that's necessary, but it happens all the time.

I worked at a bunch of startups, and I would say I never worked for a company that was really unethical. I think Sybase was a very, very ethical company. IBM was; Digital, I think, was. I certainly feel strongly about this in my case, but I've seen it. And, you know, MicroStrategy: Michael Saylor, who's now a leader in crypto, was very much a salesman when he was running MicroStrategy, because we used to work with him at Sybase. I'm not saying he was lying, but he certainly exaggerated a lot. Larry Ellison has done this for years. So there's a behavior here that actually is very rewarded in the stock market. And obviously I didn't even talk about Elon Musk. It's very hard to interpret what Elon Musk really believes is true, because he exaggerates all the time. And I think people just get comfortable with that and think, well, great, what a visionary he is. He really believes that we're going to have 10 robots in our houses. Maybe he does, maybe he doesn't. I don't know. Maybe he's making it up just to get us excited about the stock.

So that behavior is normal, common, highly rewarded. And the way it gets validated is that the customers and consumers who use the products validate it, and then the stock market and the market analysts validate it. I occasionally talk to financial analysts, and a lot of times, not always, what they do is dig around about the culture and the leadership of the company.
They ask to talk to people, they look at stories on Glassdoor, they read the articles written by journalists, they talk to customers, and they try to get a sense of whether the public personas of the companies are real. Some of the financial analysts are very into this stuff. If I were an investment analyst, I would be, just because of what I do for a living. But I think for you, for us, when we decide to partner with a company, whether it be ServiceNow or Workday or Oracle or Microsoft or SAP or whoever, you should, as a leader, at least think about: who is this company? Who are these people? What is their driving motivation? Who's their leadership? Where are they trying to go? [00:14:27] Because whatever product you buy, you're going to have it for a while.

You know, there was a funny guy when I was at IBM, a software guy from Research. He used to laugh (IBM was a very, very savvy company back then) and say, software is hard, hardware is soft. He said it's really easy to switch out hardware, it's relatively trivial, but it's almost impossible to switch out software, because the software gets interconnected and embedded into your business. [00:14:54] So AI is going to be really embedded, because the AI engines or technologies that you buy are going to learn from you and they're going to become your AI. If you read our HR 2030 vision (by the way, we're going to put out a very detailed report around the HR 2030 article, so stay tuned for that, and we'll make it available to everybody), your AI or AI infrastructure is going to become your company. It's going to become you. In other words, it's going to learn how to behave from all the things that happen inside of your company. So if it's weird or spiky or for some reason untrustworthy, or it doesn't keep security intact, or I don't know what these things are going to do, your company is going to have that problem too.

So I'm not saying any of these guys are selling us anything that's going to destroy us yet. But I think these issues of who these companies are, how they're run, whether these leaders are up for this, and whether we trust them are going to reach Congress very, very soon. [00:15:59] Right now, in the federal government of the United States, there's this guy, David Sacks, who seems to be completely gung-ho and very political about this. And they don't seem to be debating, at least at the executive level, what we should do as a country or an economy about the AI companies. But I think Congress is going to get their hands on this over time. I think they felt like they got a little burned in the social media era, where they didn't really do much, and there's going to be more inspection of this. Nevertheless, setting that aside, because it's going to take some time, I think it's worth digging around and getting to know what these companies do. There will be more stories like the New Yorker one. Ronan Farrow is a very, very meticulous writer, and there's also a podcast with him that I'll link to. I think you'll learn a lot from reading it, and it'll look familiar; it'll remind you of things that you've read about many, many other companies too.

The other thing that I think is interesting about this particular cycle is that most of these companies are run by people that are very young. There are very few AI companies, other than Microsoft and maybe Google, run by people in their 50s and 60s and 70s like Jamie Dimon. Very few, maybe none.
I suppose ServiceNow is run by a very senior guy, but they're not building the underlying tech. So a lot of what we're observing is just youth: the immaturity of never having been through this before, and sitting on a rocket ship that's going so fast that they don't really know how to drive it either. So they're doing the best they can, and they're getting advice from other founders or other investors who are similarly inexperienced in anything that ever grew this fast. I don't think any of us have ever seen, at least to my knowledge, a company that has grown as fast as OpenAI and Anthropic, ever, ever, ever.

And let me just mention Nvidia too before I wrap up. Jensen Huang (I've never met him, but we've interviewed people at Nvidia) seems to be a very honest, ethical, savvy, long-range-thinking leader. Nvidia is an old company; it's been around a long time, 20 or 30 years, something like that. They were in the graphics and gaming business for a long time, and then they got into this new stuff. When you listen to him, read about what they're doing, and interpret where they're going, they are trying to build and behave in a safe, trusted, value-oriented way. They have competitors too: there's Intel, there's Google building chips, there's AMD. They have a lot of direct competitors. They're not a monopoly by any means, but they're not bending your mind to tell you something that's not true. At least I don't see that happening at all.

So I think we just need to be patient as this industry grows up and hope that the leadership themselves, the boards of directors (which, by the way, are also a lot of young people), the government, the stock market, and the rest of us steer this industry towards value and safety and innovation, and not into weird directions that end up hurting us. And you know, the whole business with Anthropic and the Maven project in the federal government and the Department of Defense, and Google's history of employees rising up against Google to not be part of Maven, that kind of stuff is going to come back again.

So I just wanted to sensitize you to this issue. I don't have an answer, but for all of us that are in the human capital space, it's a fascinating time to pay attention to it. And I think we're going to see a lot more about the management and leadership and culture of these companies, not just the technology. Okay, have a great weekend. I will be in Europe next week, for those of you that are over there, and I look forward to sharing a whole bunch of interesting things that I should be experiencing while I'm there. Talk to you again soon.

Other Episodes

July 30, 2023 | 00:23:40

From London: All About Talent Acquisition, And How AI Forces A Rethink Of User Experience

This week I'm podcasting from London and discuss my conversations with 50+ companies this week about talent acquisition and other important topics. We're also...

February 17, 2024 | 00:32:02

Interview With Joel Hellermark, CEO of Sana - AI-Powered Learning Arrives

In this podcast I interview Joel Hellermark, the founder and CEO of Sana, one of the most exciting new learning and AI platforms in...

August 02, 2025 | 00:09:40

SAP Acquires SmartRecruiters. Many Implications, Here They Are.

This week SAP SuccessFactors announced the acquisition of SmartRecruiters, one of the fastest-growing talent acquisition platforms. This is much more than a technology deal:...