Episode Transcript
[00:00:09] Hello everyone. This week I'm giving a presentation to the EU Banking Committee on the implications of AI in organizations, and I want to give you some thoughts, especially following the election results in the US last week. You know, the obvious environment we're in is rapid, rapid, rapid automation of what we used to consider to be white collar jobs. And even though most of us that are into AI know a lot about it, many people don't. Many people are not sure or aware of, or even comfortable with, what's going on in the AI community, and they might be reading, you know, inflammatory things in the press or other places, and there is a lot of uncertainty and fear. There was in fact a study by Adecco, which will be out soon and which I'll be writing about, that found that of the 60,000 or so people they surveyed, more than 70% of the workers said their number one concern is whether their employer will prepare them for AI and automation technology and for their careers in the future. So among a very large percentage of the workforce, which tends to be about a third of the entire population in most countries, there is fear about this technology, at least today.
[00:01:26] And you know, as someone who uses it, studies it, and talks to vendors about it, I can tell you this is very disruptive technology and it's changing extremely fast.
[00:01:37] Now, interestingly enough, the other technologies that I've been involved in, PCs, midrange and mid-sized computers, you know, Unix, the Internet, social media, tended to come in phases with roughly two to five years of maturity before major disruptions took place. So we had some time to think about them. This one is coming much faster. I mean, ChatGPT is two years old, and I would say in the last 12 months the capabilities and features have tripled or quadrupled, or increased tenfold, some people say. And so the number of use cases and the disruptive examples of what AI can do to an accountant, a lawyer, a salesperson, a marketing person, a finance person, a consultant, an HR person, are multiplying so fast that most of us are not even sure how it will be used. And as I've talked about multiple times, you can wait for vendors to deliver solutions, but I think a lot of the high value uses of AI are going to come much faster through experimentation. And there are going to be a lot of use cases and applications of AI inside our companies that are unique to each company. So even if you could use off the shelf tools, whether it be Copilot or Galileo, you're going to experiment with them to decide precisely how you use them. And there are a bunch of reasons for that. One is that the vendors are new to this, but the other is that every company is a little bit different, and we want these tools to fit into the tapestry of the other systems we have, the processes we have, and the speed at which we can re-engineer work. So this is a very fast moving space and we can't really predict where it's going to go for every job. And that's why some people are worried. Now, those of you listening to this podcast hopefully are not worried, because you're listening to me and keeping up on this stuff from other places. But just to let you know, even for me, someone who thinks about this nearly every day, reads a lot, and talks to a lot of people, I'm not sure I can keep up with it either.
[00:03:44] And I talked to the folks at Sana about that. It's very, very dynamic. And that, by the way, is why I'm finishing our new document on a hundred use cases for Galileo, because we want to show you the imagination that's possible with this technology now. You know, it's interesting. Over the weekend I was listening to Fareed Zakaria and reading some things about politics, and there is a big right-wing movement going on in most of the countries around the world. To some degree it has to do with immigration and the economy and other things, but I think one of the reasons is this sense of security, or lack thereof, that people have about their jobs. And so all of this is related. Now let me shift topics and talk about what we're going to do with AI and how to come to grips with it. This is the high level stuff I'm going to talk about with the EU. The first thing you have to understand is that some jobs are much more impacted by AI than others. We're just about to publish a report, authored by Drop, that shows that the number of jobs that are going to be radically changed or eliminated is significant. But the vast majority of jobs are going to be greatly enhanced, and there are some that won't be touched at all. And there are lots and lots of studies like this. I encourage you to read this one; those of you that are members are going to get your hands on it. What that means is that a lot of the things we have in companies, levels, salary, pay, job titles, are going to have to be adjusted. If you're a highly esteemed editor, for example, or a writer or a curator or a researcher or a social media publisher, and AI comes along and does that job automatically, or maybe automates 70% of the job, somebody's going to have to decide what your job becomes. Do you get promoted? Are you the AI version of yourself? Or do you decide that maybe that job wasn't so important, and somebody else should just manage the AI for you and you could do something different? These decisions are going to happen all over the place. Now, ideally every individual who senses there's an AI opportunity should be empowered by it, but that's not going to happen. Call center agents may go away; there may not be as many sales development reps. I mean, there are a lot of jobs where we just might not need as many of them. So in a big company, where we can't expect everybody to re-engineer their own work, there are going to be people who feel like they've been left behind. And again, you know, some of these jobs I mentioned earlier are being automated by bots, or their salaries may go down, or we might have to give those people career pathways to find new roles. And there are going to be new jobs created. So, you know, if you're the social media or editing expert or the legal analyst or the financial analyst or whatever it may be, and all of a sudden this big AI tool comes along that does a whole bunch of your work, you're going to possibly have a more interesting job managing it, learning how to use it, making sure that it's trusted, putting the right data into it, and so forth. It's very, very analogous to spreadsheets. When spreadsheets were first launched in the 1980s, I was there when Multiplan, one of the first ones, hit the market. You know, the people who were the most fascinated with spreadsheets took advantage of them quickly, and then later knowing them became a criterion for a job in those domains.
You had to know this stuff, but in the beginning it wasn't like that. So there are going to be new roles, new responsibilities, new job titles, new pay. You know, a lot of things like that are going to happen. The second big issue, a sort of organizational issue in AI, is privacy. Now, I was with some friends this weekend. One of them is an executive in the airline industry. He travels a lot, and he told me one of his peers was in China a few years ago and happened to be going through passport control. And because of the reflection of the window in the passport agent's room, he could see the screens the passport agent was looking at. And the passport agent was scrolling through 30 or 40 pages filled with dozens and dozens of photographs of this particular person all over China, because there are so many cameras in China and they have great facial recognition technology, as actually does the United States, so they had a tracking record of everything he did while he was there. And he was a little bit freaked out. Now, this was a couple of years ago, so now it's even better. And this type of technology is in Microsoft Copilot, to be honest, even though Microsoft has not necessarily enabled it. But it's there. All of these conversations that we're recording, all of these Zoom calls that we're recording, all of these videos we're recording, all of your emails, all of the voice activation that you use on your phone, it's all being stored somewhere.
[00:08:44] And before AI, there wasn't that much you could do with it. Now that we have AI, we can analyze it very, very quickly, and the AI can make sense of where you're going, what you're doing, what you're working on, what you're feeling, what you're talking about, perhaps. So we are going to have a big privacy topic to talk about in our companies. Now, most of the organizations I talk to are very aware of this; they're worried about it and they're working on it, and the IT folks and legal folks are creating standards for it. But in the broader geopolitical world, where, you know, governments do many, many things without telling people about it, there are lots of privacy violations that are likely to happen, and there's not much we can do to stop it. I mean, our phones are listening to us, our meetings are recording things, our emails are being stored. And, you know, that's going to be a big topic too, so I'm going to bring this up with them. It gets a little bit trickier because a lot of the AI systems that feel like information tools today become coaching tools, where they keep a history of your conversations or transactions with the system and then they try to help you. Even Galileo does this, and there are, you know, a lot of things going on in Galileo to do this. These are great aides or agents that can help you do your work, ask you a question like, "Yesterday you asked me about such and such. What's the status of that project? How can I help you?" If it was a human coach and some of that stuff was personal, they would not share it with the company; that's one of the ethical rules of coaching. But if it's a digital coach and the data is just stored somewhere, and it's only a security authorization that prevents someone from looking at it, it's going to be out there. So there are a lot of issues about how we're going to deal with privacy. The third big topic has to do with org design. Now, I'm going to write about this a lot in the predictions; this is going to be a big theme for next year. But I personally think, based on our experience with AI here, that one of the things AI does is create superhuman performance. You know, I can do things with AI in my job as an analyst in minutes that used to take me hours. So my productivity for developing a podcast, writing an article, giving a speech (I would say PowerPoint isn't that much easier yet, but hopefully it will be) has gone up very, very quickly. And this is early days. Now, I have a particularly specialized role, but the same is true for a lot of people in sales and marketing and service and HR. A lot of you, by the way, have jobs just like mine in different respects: in recruiting, where you're doing searching and you're trying to find pay information, or you're trying to create just the right job description and communicate correctly with candidates and assess candidates and all that stuff. I mean, these things are going to get way easier with AI. We have about 15 or 20 killer use cases for recruiting in Galileo, plus there are many other tools out there from other vendors. So we're not going to need as many people to do the same quantity of work that we do today. That leads to the topic of what I call talent density. Now, I wrote a big piece in Harvard Business Review this week about talent density, and we call it force multipliers.
A force multiplier is a term I read about many, many years ago in the tech industry: a person (or a thing, whatever the force multiplier may be, but generally think about it as a person) that makes everybody else more effective. And so what's going to happen with AI is we're going to have smaller teams, smaller work groups, smaller companies doing huge amounts of work that used to require many, many more people. And of course there are two scenarios for that. One is to reduce the number of people if we don't need them. The second is maybe we have a four day work week, maybe we do more work, maybe we do more creative things that we couldn't do before, and we expand the profitability space of our company, because the more routine stuff, which didn't used to be routine but is now routine, is getting done in a more automated way, giving us a chance to do value-add creation work in new areas. You know, the reason I get so fascinated with business in general is that what we're really doing in a company is constantly looking for ways to add more value in new and creative ways. Human beings are good at that. Machines aren't really that good at it. So as the machines automate away more of the stuff that used to be non-routine and is now routine, we can do more of that. Now, it's not as easy as you think, and let me tell you a funny thing that just happened Friday. Salesforce announced that they're going to hire a thousand new human salespeople to sell their AI agents. And I was kind of laughing, because that's essentially the exact opposite of what they're expecting their customers to do. I mean, they're selling something that's supposed to make it easier to do sales with automation and use fewer humans, yet they're going to hire a thousand humans to sell it. And you can see the problem: we have so much management experience running companies where more people generate more revenue. In other words, we have this philosophy I call "hire to grow," and we're not used to the opposite. We're not used to saying let's grow the company with the people we have. Let's not hire people unless we really, urgently need them. Let's focus on internal optimization, org design, role and skill redefinition, responsibilities, training, et cetera, instead of hiring. And you know, this is a big deal. You can call it job redesign, work redesign, task redesign; there are a lot of different words for it, but essentially it's learning how the technology works and finding ways to use it to get more things done with less human effort, saving time or energy so that people can do other things. Now, it might mean you do have fewer people, and it might mean you do have fewer hours of work or people leaving early. There have been some funny TikToks of people using AI to do their jobs and not telling their boss, and then leaving early and not telling their boss, because their boss thinks they're working really hard. But it also really is going to do other things. We're going to have people working on multiple projects. We're going to have flatter organizations. Small companies are going to outperform big ones, and small companies are going to be able to scale without hiring tens of thousands or hundreds of thousands of people. Not all companies, obviously; hospitals and trucking and distribution and retail firms and others still need a lot of staff, but they're going to need fewer staff for a lot of functions.
And for those of us with IP-based jobs, consulting firms, finance, accounting, even white collar work in construction and engineering, those teams are going to get, you know, smaller. You know, the construction industry, where we've had some really interesting clients over the last couple of months, is extremely into AI now, because so many of their jobs are clearly defined in well-architected roles. So they can look at tasks and specific responsibilities within jobs and automate them more quickly. This is true in some software firms. We have a client that's a big software firm where they have very, very detailed job design in granular detail, task by task, so they can automate those tasks more quickly. Defense contractors are often like this too; there's going to be a lot of opportunity to reduce the size of teams. Most companies don't have a great, detailed job architecture. They have sort of vague jobs, and people sort of figure out what they need to do. So there will be more iterative design and experimental design. And I think a lot of you in HR are going to have some really fun projects getting involved in task analysis: putting on the whiteboard everything that we do in a role and then figuring out where the AI can help. We can wait for vendors to do that, which is going to happen, but I think there will be a lot of this going on inside of companies. Now, there are also some economic issues in this, in that if you live in a country without a lot of capital investment, an underdeveloped country, there may not be enough money to fund smaller firms to start up and innovate in some of these areas. So this is an interesting global economy thing too: if you're in the US or France or the Nordics, where there's a vibrant VC and R&D investment industry, you're probably going to have plenty of startup companies building great things for you. If you're not, you won't have as many opportunities, and you'll have to do this yourself. So I think there could be a haves and have-nots effect with AI, just like there is in the beginning stages of every other technology. And by the way, if you work in a big company, you can do this too, if your management is enlightened enough to break you into smaller groups and do these kinds of experiments yourselves. Amazon does this. Walmart does this. McDonald's does this. Starbucks is working on it. You know, it's interesting: at Starbucks they're trying to go back to more of a human touch in their stores, because we don't really like getting our coffee from a machine and not having a relationship with anybody in the store. It's not a very enriching experience, and it's not a very loyalty-building experience. So there are a lot of things that are going to happen here. The final thing I want to talk a little bit about is leadership. There are millions of books on this, and we've done quite a bit of our own work on it. How do we design, develop, assess, and train leaders in this new model, where job titles, roles, responsibilities, and levels are changing so fast? It used to be that the leader was the boss, the leader was the supervisor. They decided who was going to do what, they set standards, held people accountable, maintained productivity and output, hired people, et cetera. But you know, I'm not sure every leader has any idea what AI is capable of doing. Some do, some don't. And so these definitions of leadership, and every company has its own, are going to change.
When things are changing as quickly as they are now, and this happened, by the way, in the early days of the digital revolution in the early 2000s, but that was more than 20 years ago, we need leaders to be more flexible. We need leaders to understand that people are worried about their jobs. We need leaders to be aware of the technology and automation that's possible. We need leaders to support innovation, creativity, experimentation, and redesign, and to not just hang on to their span of control as if it's their entitlement. We need leaders to train people. We need leaders to train themselves, to bring people together to look at what's working and what's not working, to think creatively, out of the box. These are not traditional organizational execution skills. I'm not saying that every leader is going to be innovating in every department of every company, but actually they probably are. I mean, if you just get a tool like Galileo or the Microsoft Copilot, you know, a sort of consumer-like tool, and you start using it, people are going to find cool things to do with it that maybe you didn't know were possible, and you're going to want to support that. So I think we're going to be entering a world of lots of discussions about what the role of leaders is and how they can facilitate the economic productivity, automation, and scale advantages that are available now. There's an interesting comment in one of the IMF reports I just read about AI that may or may not be true, but let me just mention it. They believe, and I didn't read the source, that older people do not adapt to technology as fast as young people. I don't think it has to do with age, because I'm a techie and I'm almost 70. But I think people who grew up with AI and used it from high school or earlier are much more facile with it and comfortable with it, and they're going to press it to do newer things. And so there is a generational issue. I think the big generational issue with AI is that, in my experience, AI is much, much, much more capable than you think. And so young people who are not intimidated will push it a lot harder. And those of us with more traditional technology backgrounds are going to have to learn how to expect more out of these things and, you know, be more creative and innovative in our thinking about the world. You know, Elon Musk calls it going back to first principles, and I think about this all the time as an engineer. Why are we doing it the way we're doing it? Do we have to do it this way? Can we reinvent it? That's sometimes a little bit scary to do, but I think we have to facilitate and encourage that kind of discussion at all levels. Now, I guess the final thing in leadership is what I would call systemic thinking. We talk a lot about systemic HR, and I love the word because it's the concept of seeing how a whole system fits together: data, processes, organizations, people. Well, you know, maybe one of the most interesting breakthroughs of AI is that you can throw a bunch of stuff into it, videos, audio, text, documents, emails, whatever, and it will make sense of it. And if you ask it the right questions, it will analyze vast amounts of heterogeneous data in a way that was really impossible to do before. I mean, literally impossible. You couldn't take three audio files, a video, and five research reports, throw them into a container, and ask questions of them before. It was impossible.
Well, this can now be done by machine, which means that the systems we use at work will be able to do things like this: if revenue dropped, or quality dropped, or production dropped, the system could probably tell you, if it has the right data, what factors are statistically correlated with that. Was it overtime? Was it younger people with lower productivity slowing down the older people? I mean, there are a lot of things it could surface that a manager would not necessarily think of themselves. And this information is going to force us to be more systemic and complete in our analytic thinking. By the way, this is what people analytics teams have been doing for years, and they have been surprising leaders for decades with findings that were not obvious about how the company works. We're all going to have that kind of capability. And so if you're a, you know, kind of heads-down manager and you just like to help get stuff done all the time, you may not naturally have a concept of this, and it's really going to expand your opportunities to add value and understand how your team works, how the organization works, and how to make everybody successful. There's also going to be a personification of this AI. I think there will be a lot of jobs created to manage these human-like digital agents, to curate them, to monitor them, to govern them. You know, think about an IT department prior to AI: if you went looking for somebody in the IT department who knows every system we have, well, nobody really does. Maybe there are a couple of people, but nobody knows every system we have; that's not really anybody's job. Well, with AI you actually have to keep track of these agents, and you have to take care of them and govern them and make sure that they're filling themselves with the right data, that they're secure, and that they're making the right decisions and answering questions correctly, and so forth. We have to watch these things because they're not deterministic like a regular piece of software. You can ask the same question twice and get different answers. So we're going to have some really interesting new roles created in governance and operations and data management, testing, corpus management, prompt engineering, and building user interfaces on top of these things, because they're going to be able to talk, they're going to be able to listen, they're going to be able to see, lots and lots of things like that. You know, another area that I just want to mention before I wrap up is compensation analysis. Compensation is a huge part of what we do in businesses, of course, and these systems are going to be used, I'm sure, for pay equity analysis, pay disparity analysis, geographic pay analysis, many, many things that go back to this issue of talent density that I think have been a problem. One of the issues in talent density is that we often pay people based on prior performance history, so the super high performers may not be paid what they're worth, because there are all sorts of pay bands and, you know, legacy balancing systems we have for pay in an organization. Where we have hyper-performers executing 10x more than the person sitting next to them, we need to pay them for that, and we need to understand the implications of that pay. It's happened in the past, and it happens now, for competitive market reasons. But that is another big area that's going to be very interesting to think about in the AI era.
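Just to make that last point concrete, here is a rough sketch, in Python, of the kind of pay equity check I'm describing: regress pay on performance and level, and ask whether a gap remains after you control for the legitimate factors. This is an illustration only, with synthetic data; the column names, the numbers, and the choice of the statsmodels library are my own assumptions for the sketch, not anybody's production method.

# A rough sketch of a pay equity check: regress log(pay) on performance,
# level, and gender to see whether a pay gap remains after controlling for
# legitimate factors. All data and column names here are synthetic and
# purely illustrative.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 500

# Hypothetical employee data (not from any real system).
df = pd.DataFrame({
    "performance": rng.normal(3.0, 0.8, n).clip(1, 5),  # 1-5 rating
    "level": rng.integers(1, 6, n),                      # job level 1-5
    "gender": rng.choice(["F", "M"], n),
})

# Simulate pay driven by level and performance, plus an injected, unexplained gap.
base = 50_000 + 15_000 * df["level"] + 8_000 * df["performance"]
gap = np.where(df["gender"] == "F", -3_000, 0)
df["pay"] = base + gap + rng.normal(0, 5_000, n)

# The gender coefficient estimates the pay difference that remains after
# accounting for level and performance: the "unexplained" part of the gap.
model = smf.ols("np.log(pay) ~ performance + C(level) + C(gender)", data=df).fit()
print(model.summary().tables[1])

In a real analysis you would of course use your own compensation data, many more controls, and proper statistical and legal review, but the shape of the exercise, and the kind of question these AI tools will be answering for us, is roughly this.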
Okay, so final thing, let me wrap up here. The political world we live in is a little bit strange. I don't think AI is the cause of it, but AI is going to be a part of it. And for those of you who are a little bit intimidated by this technology, or who feel like you're not keeping up, welcome to the club; we're all there. For those of you in HR, we're going to do everything we can to keep you up to speed. I would say, if you have that unsettling feeling, which most of us do, the thing to do is to get your hands dirty and play with this stuff. This is the reason we priced Galileo so low: so you could have a tool that's more powerful than ChatGPT, because it's actually a superset of ChatGPT with all this content and benchmarking and vendor and other data in it, and you can learn for yourself, personally, how powerful AI could be for your job. We're going to publish the hundred use cases fairly soon; it's not quite wrapped up yet. And I think the best insurance policy you have as a professional is to use these things and play with them. I know that may not sound like something you'd get a kick out of, but I don't think you have any choice. I do remember, when the PC was first launched, that the people who played with it were the ones who thrived. And after a while everybody played with it, because everybody realized they had to learn how to use it. So these changes are happening. Some of them are societal, some of them are organizational, some of them are political, some of them are cultural. But a lot of them have to do with harnessing and understanding and leveraging the technology, and building the organizational strength so that, as the technology evolves, we have experts who can keep up with it and teach us all what the opportunities are, and perhaps the risks, in the future. So I will fill you in more on what the EU bankers want to talk about, to the degree they want to share, and I look forward to talking to many of you in New York this week. Talk to you again soon. Bye.