Episode Transcript
[00:00:03] Speaker A: The moment we start empowering our employees, giving them that basic literacy training, giving them opportunities to skill up in those areas.
Now they're capable because they're embedded in the work of streamlining workflows with these AI companions, AI agents, et cetera, and partnering with a build team to build very focused, custom built, actual autonomous agents. That's where you're going to see major gains.
Hi, this is Josh Bersin. Welcome to the What Works podcast, where Josh Bersin Company analysts talk with innovative HR and business leaders about what's really working in talent, technology, and the future of work.
[00:00:40] Speaker B: Hi, I'm Kathi Enderes, Senior Vice President of Research and Global Industry Analyst at the Josh Bersin Company, and I'm thrilled to talk with Rob McAuslan, Vice President of Artificial Intelligence at Southern New Hampshire University, about what it means to approach AI with humans at the center. Rob, it's great to have you. Thanks for joining us.
[00:01:00] Speaker A: Thanks for having me.
[00:01:01] Speaker B: I can't wait to jump in. But before we do this, tell us about yourself and then about your role and a little bit about SNHU as well.
[00:01:09] Speaker A: Happy to. So currently I am the Vice President for Artificial Intelligence at Southern New Hampshire University.
I have served in this capacity since August of 2023.
I have spent a lot of time teaching, working, and volunteering in Africa, the Mediterranean, and East Asia, working with everything from K through 12 populations to graduate student populations to refugee populations. All of this fed into what I was doing as a professor at Southern New Hampshire University, teaching sociology. I have been at Southern New Hampshire University now for about 10 years. Going back all that time, I've been teaching an advanced course on society and technology that actually focused on artificial intelligence, and in particular how societies adapt to the introduction of new technologies, how the labor force adjusts over time, what the societal implications likely are, what labor displacement looks like, and how adoption takes place at different speeds in different environments for different populations.

I had been following ChatGPT, or OpenAI, basically since the Elon Musk days, and in November 2022 the announcement came out and everything was publicly available, or more user friendly anyway. I did not see it coming, just to be completely clear about this, but I pivoted very quickly. So I knew exactly how to adjust my teaching style, how to adjust access to these resources, and how to introduce this topic to my peers around the academic institution, in professional environments, and to my students. With the then president of the institution, Paul LeBlanc, and our Executive Vice President of Campus, Don Brzezinski, we started having conversations about what a vice president of AI position might look like. And it became clear they wanted a social scientist to be leading this, and in particular a social scientist with a background in this technology who understood artificial intelligence. That group tends to be very small. They tapped me for this role.
To talk a little bit about Southern New Hampshire University: we have a reputation for innovation. We are one of the largest online schools in the United States. We currently have between 215,000 and 230,000 learners.
And we overrepresent historically underserved groups. So from my perspective as a sociologist, there's a mission component here, focusing on equity, that is incredibly important to technology adoption, as well as providing solid educational products to those populations.
[00:03:26] Speaker B: So fascinating. So you used to be a professor at Southern New Hampshire University, too, so you really understand university life, the students, how you're trying to teach, how you're trying to prepare the workforce for the future, and all that. Wow, fantastic. What a background. And I think it's so relevant for our listeners, too. So I want to hear a little bit more: where do you want to apply AI at Southern New Hampshire University? How do you even get started with that kind of strategy?
[00:03:57] Speaker A: So instead, I'm going to turn that a little bit on its head, because there are too many people out there, I would say, who are absolutely AI evangelists. I am a practical skeptic. This technology exists. This technology is incredibly powerful. It has the potential to address structural inequities. It also has the potential to, and does, vastly exacerbate existing inequalities and inequities. And when it gets things wrong, if you're overly dependent, it can be catastrophic.

So from an organizational perspective, we started with a listening tour, and we went around the institution not to solve problems, just to listen: to hear what people were thinking about artificial intelligence, to get a feeling for what their level of literacy and understanding was, and then to start to understand the nature of the problems that they were dealing with. And what I'll say is, as we went around the institution, there was certainly a significant amount of deep concern about the ethics of how these systems were created and the bias inherent in these systems. But the truth is, the technology is now here and widely available.
And if we don't embrace this technology, not only do we run the risk institutionally of being left behind, we're no longer serving our learners. So we crafted our first workshops, where we just sat down with various teams and walked them through what the definitions were, how to understand the nature of this technology, and the caveats, giving them a basic level of literacy and an understanding of the pitfalls. Then gradually we developed a mandatory training. We started rolling out AI tools, we started giving people a little bit more access, and we started to see teams really taking agency over their AI usage.

Working side by side with our former CTO, Gerry Woodward, she and I created a proof-of-concept format which we still use now. Teams come forward, they have an executive sponsor, they have their business case stated clearly, and they say, we want to use this AI tool for these purposes, or, this platform that we already use has a new AI tool embedded in it and we want it vetted in these ways. So we created a team that is capable of doing the security, bias, and hallucination vetting, along with, once reasoning models became widely available, testing for agent alignment. Does the AI know it's being tested? Is the AI capable of jumping its guardrails? How do we test for that? became a very clear question.

Then we developed a framework that we have been using now for almost a year, and it's worked beautifully. Our proofs of concept run for a maximum of 90 days. There are very specific KPIs, and along the way we've found that about 25% of the proofs of concept don't pan out. When you've given people the agency to do this, they come back and say, actually, this tool doesn't work, or it doesn't save us any time. That's useful information. It's great, because then it gives us the ability to come back to vendors and say, look, we can build things in house that do what you do already. We need to make sure that we can actually maintain the integrity of our tech ecosystem.
We don't want to be introducing tools that are risky, more broadly speaking. And that has served us very well. It gave us credibility. We have transparency; people understand how our vetting system works. And we're at a point now where, industry-wise, we're in the upper right-hand quadrant when it comes to AI adoption in educational environments. And I would say we're the most AI-literate workforce in higher education. I want to be careful when I say that, because what I'm talking about here is our staff. I'm not talking about our faculty, not talking about curriculum. All of that is very important, and we are working in those directions. But on average, according to the Titan report and a few others that came out this summer, roughly 45% of employees at higher educational institutions are using AI on a weekly basis. And our surveying showed that we were up around 55 to 65 percent on a weekly basis.
[00:07:30] Speaker B: That's fantastic. This is such a great way of thinking about it, because you really went group by group to understand what's working and what's not working, right? And really letting each of the teams and each of the departments buy into it, rather than just saying, here's AI and you all need to use it. I love that approach.

I want to double down also on this maturity model. We have a four-stage framework that we share with many of the teams we're working with on AI adoption, and I want to walk you through it, see where you feel you're at, and then have maybe some examples around that. For us, organizations adopt AI in four stages. The first one is just using AI as an assistant: basically using something like ChatGPT or Copilot or Gemini, so it's just personal productivity. Stage two, we see as the agents. You use an agent, but a very narrow, short, small agent that does a specific task for you. For example, when you are a recruiter, maybe you have an agent that helps you with candidate outreach, or it helps you with interview scheduling, something like that. It gives you some automation, but it really doesn't fundamentally change how we're doing things. And then it's a big shift to stage three, where we are thinking about these multifunctional agents that combine different agents together. So the interview scheduling agent might talk with the sourcing agent to see which candidates you have sourced, and eventually you might have an onboarding agent that talks with a learning agent, all of those kinds of things, and then you really do jobs fundamentally differently: people will have to do different work. And then the last stage we have is stage four, which we call autonomous agents, where an agent can do entire processes or workflows end to end.
So I'd be curious, where do you see Southern New Hampshire University sit on this four stage model?
[00:09:19] Speaker A: What I would say is, it depends on the part of the institution that we're talking about, and it depends on whether we're talking about capability, as in we've done this as a proof of concept, we've tested it, we've used it for limited use cases, or whether we're talking about full scale in production. There's variation there. Number one, yes, we rolled out Copilot, we've done Copilot training for everybody, we have enterprise licenses with OpenAI, we have build teams, all of that. When it comes to stage two, now we're talking about Copilot agents, and also custom GPTs that are available for individual teams: very specific, task-focused AI implementation.

When we get to three, we have built some, and we use them for some purposes. So curriculum refinement, for example, where you have multiple agents that are actually combing through material and communicating back to another agent that has to take it in, refine it, and then put out basic deliverables. It has to do that on its own, but there is again some human interaction here and there.

But that fourth one is where there's significant pushback, and I would say rightly so. The technology is there, but it is not perfect, and that level of imperfection gives a lot of people really dramatic pause. There are good reasons for that, and also the human implications: if we're starting to explore that, it becomes a major concern for our workforce the moment they know that we're looking at it. It's not just a matter of, is this going to take my job? The answer that a lot of people give is, oh, no, somebody who knows how to use AI is going to take your job. Actually, the truth is more complicated now. And the canned response from a lot of institutions is, oh, your job is going to change, and then we ignore that. Okay, that might be true, but that person might not like the job anymore once it's changed. And so we have not begun on stage four; on stage three, we're at limited capacity.
We have done this. We've built agents in house. We have limited MCP, the Model Context Protocol; we're working out whether or not we want to build our own MCP server or partner with some of our partners. And then, of course, within some of these agents we have direct A2A protocols, so you have agent-to-agent communication, and then obviously you have that one master agent that is going to finalize the deliverable.
[00:11:27] Speaker B: Yeah. Wow, fantastic. And you are more advanced than, I would say, 95% of companies as well. I was doing a keynote with 120 HR and talent leaders in San Francisco; most of them sat at stage two as a company, and then of course in HR, too. But even as a company, most of them are in the earlier stages, because it's so new. You need to make sure that you are taking care of all the things that you talked about, too: making sure that it's ethical, that it's unbiased, that it doesn't go outside of where we want to have control. Or maybe we just feel we need a human to have that oversight. Just because you can do it doesn't mean you should do it. So that's a conversation that, I think, every organization has to have with themselves.
[00:12:12] Speaker A: And I think so many are at stage one. This mirrors what we saw in the MIT NANDA report that came out in September. In that case, at the individual level, AI usage is quite high, but when it comes to institutional usage at the enterprise level, it's quite different. And part of the issue here is that a lot of the AI tools being created to streamline workflows are brittle. They're designed for one particular purpose. They don't actually leverage the inherent flexibility of AI. The moment we start empowering our employees, giving them that basic literacy training, giving them opportunities to skill up in those areas, now they're capable, because they're embedded in the work, of streamlining workflows with these AI companions, AI agents, et cetera, and partnering with a build team to build very focused, custom-built, actual autonomous agents. That's where you're going to see major gains.

I think, though, that we have a problem with the focus on ROI. We actually have seen massive financial returns in one particular case, when it came to cutting down on subject matter expert time and outside consultants for curriculum review, by using AI, establishing prompt libraries, and actually streamlining our process for course development. We still use them, but we use them much more judiciously, and we've saved a lot of money.
But most productivity gains are like my dev teams: they save eight hours a week. And in those cases, those eight hours don't necessarily translate to money or other projects. It's actually quality of life, where people are happier because they get longer breaks and longer lunches and can reprioritize work. So they're focusing on the things they think are important, but it's not going to show up on your balance sheet.
[00:13:48] Speaker B: Yeah, I love that. So it's not necessarily cost savings; maybe it's getting product faster to your customers. We call this the superworker effect, where we're saying amplify everybody to be more productive, but then also, to your point, to have a more balanced, happier life, happier work, and more meaningful work, all of that as well. So do you have other examples of how you used it? I love the curriculum redesign, and I think people are always hungry for examples. So any other examples that we'd love to hear?
[00:14:16] Speaker A: We have a whole bunch in the education sector in particular, as you can imagine. Based upon the type of courses being set up and run, we actually see different approaches. In some technology courses, of course, we're teaching students to actually use the coding tools alongside what they'd be learning in their normal computer science curriculum. In business courses, we're using this to streamline how you actually put together, for example, a deck or a presentation, understanding how to have your business plan worked out before you step into that very first pitch meeting, and anticipating the responses, using AI as a thought partner. All of that is on the student side.

But think about some of the ways we're using it internally. The Copilot rollout for us across the institution is not just about email. And I'll say, yes, the search functionality works fairly well, but the truth is, the big time savings are the result of Copilot for Teams specifically. What happens in these cases? Not having a designated note taker, everyone can actually be present in the meeting, having the conversation. You get clearly identified deliverables. That saves a couple of hours a week in terms of my meeting time alone. So that's one of those areas where it's low-hanging fruit; it is one of those easy use cases. Yes, we've seen increases in productivity. Is it going to show up on your bottom line? Maybe in your quality of life, right?

But in marketing, they're using it all over the place. In communications, think about the advantages of setting up, for example, digital twins. This was one of the proofs of concept that we had: not just press releases and statements, but how one of our senior leaders might react in these circumstances, and being able, based on their writing and their media showings in the past, to anticipate what responses might look like for drafting.
And again, this came out of Communications; full credit to that team. Some amazing work has been happening there. We have our own build team, and there are the agents we've built in house for reviewing our curriculum. We've automated some of the curriculum review process by having an agent that can just be deployed: you say go, and it comes back with, for example, all of the responses that you would generate to these particular series of questions within these courses. Or, if you're looking for specific language as policies change over time, we build agents that are capable of combing through our curriculum to find things that might be particularly problematic or might be out of step with current policy. All of these things we're able to do by virtue of AI being widely available across the institution.

So we have a lot of different exciting things going on. The level of AI fluency at our institution is high. But when I'm looking toward the horizon, what I'm seeing is agents being deployed by our Cortex stack. So my concerns here largely surround the fact that there isn't really a set of standards regarding what information agents are taking in. I have a lot of concerns about how we govern these things.
[00:16:57] Speaker B: Wow. And you bring up many important points, but one of them I wanted to double down on is on the governance. How do you do all of this? Do you have a governance council and who is on it?
[00:17:07] Speaker A: So we're broadening this now, as we've brought in our chief AI officer. There are external bodies that are also going to be using some of the same AI resources that the core of Southern New Hampshire University uses, which means we're going to have more of a coalition when it comes to governance than what we've been capable of in the past. But the truth is, the reason it's a nightmare is that there aren't any platforms out there capable of actually tracking AI usage appropriately for the purposes that my team has. If we're thinking about data governance, there are platforms out there that are capable of handling data governance, and we do mean some of the same things when we say governance, whether it comes to data or AI. And yes, there's a policy aspect as well that comes into this, when we think about the changes happening at the federal level and how those ripple down through the state and municipal levels.

But then there's this whole other aspect, about the ethical usage and prompting necessary for good AI governance, and that requires an entirely new type of platform: we need to be able to monitor across all of the agents being deployed. This is an area where we don't have a good answer. It's possible Microsoft's Agent 365 will solve some of that problem. There are a couple of academic-facing tech companies that have developed tools that are good for governing, let's say, faculty and student usage and monitoring. But the truth is, there isn't one good answer yet. This is my focus, and it's what I'm increasingly pushing for, along with a number of my colleagues.
[00:18:27] Speaker B: How did you develop that AI capability? Were you partnering with your people team or how did that go?
[00:18:33] Speaker A: We just did it ourselves, and now, of course, we partner with people all around the institution based on what their needs are. We've moved a lot of our workshops into an asynchronous environment for the purposes of onboarding. And when we think about future focus, that's where all of our onboarding information is stored and kept with our People team. As for the content, we make sure that it is up to date and fully aligned with AI policy. Our partners on the People team have been great, working with us in terms of letting us know exactly what they need in order to make these things publicly available, as well as making sure that nobody's stepping on each other's toes, that anything we generate or develop isn't taking something away from the People team in doing so.
[00:19:13] Speaker B: Now let's double down a little bit more on the People team, because I know a lot of our listeners are in the HR and talent acquisition area. So any use cases for how you're using AI, specifically on the people and HR side?
[00:19:26] Speaker A: So think about role consistency in large organizations: what the skills might be for a position. A lot of that has been done manually in the past. You now have the ability to actually maintain consistency across the board through the implementation of artificial intelligence. Also, and this is a really exciting area, there's the potential here to actually use this for skills mapping, and artificial intelligence is capable of handling that at a scale that would be very difficult for humans. And again, we want human oversight every step of the way, but absolutely. And then the moment you get into a circumstance where you can have sentiment analysis, as we use for our customers. There are a lot of good things that you can do on the CX side.
You can do that also for employee responses: identifying key elements when you're looking at email correspondence, getting the things flagged that are most urgent for your team, even if it's things like one of your employees having a rough time, or some scheduling issues showing up. Having AI capable of assisting you in those cases is important, but we have to maintain compliance and, critically, transparency. Our people need to know how these AI tools are being deployed, and they need to know that the governance of this is not something opaque and behind the scenes, that it's been deliberate: this is how it is managed, and this is who you refer to the moment you think that something is off regarding the AI usage.
[00:20:44] Speaker B: Transparency is so important, and I think the way that you've set it up, meeting with each of the departments, each of the parts of the organization, is a really good way of doing this: rather than it being done to people, you did it with everybody. And I really want to give you credit for how you're thinking as a sociologist, basically, from that perspective and not just from a tech perspective. I think that's a really interesting approach, because a lot of times you might get somebody who is purely technology focused, and they say, oh, this is the coolest thing and we just need to do more of this. And more is not always better.
[00:21:17] Speaker A: Yeah. For my team in general, this is one of those areas where there are new developments we're going to be doing externally to the institution: more partnerships, more research, more building. And what I've told people inside the institution is that I still have my foot over the brake. You want people to understand that it is not just the evangelists driving the conversation, that there are people who are more skeptical and more practical who are equally present in those conversations.
[00:21:42] Speaker B: Yeah. Oh, fantastic. Well, this has been so great. What are some lessons learned? Maybe some things to watch out for, what to do or what not to do in this journey.
[00:21:51] Speaker A: What I would say right off the bat is: take the temperature of your employees and figure out where they're at regarding AI usage. Make sure that they are comfortable being honest with you about their feelings, so don't sell AI as an obligation that everybody needs to know how to use. Definitely listen, and lean into the difficult conversations. The way these models were trained initially, there are questionable ethics involved in that, and these models are by definition biased; lean into that. That leads you to the next round: what are you going to do about those things? On the ethics of the models' creation, there's not a lot we can do, but we can do more on ethical usage and responsible AI. And that's where you start getting into finding partners that have the same values that you do, or that have made a name for themselves by testing the accuracy, bias, and awareness of various language models. So those would be the first steps, but it really does depend on where an organization might be.

I've made plenty of missteps. One huge portion of our employee environment is the online adjunct population; you're talking about thousands of part-time employees. While we were crafting our policy work, I hadn't really thought about the online adjunct population, and certainly what we put in play for our policy guidelines really early on, in late 2023, was absolutely relevant to those populations, but their voice wasn't heard. So we put a pause on the release. We reached out to our adjunct pool representatives as well as our FTEs, and we had a conversation about what their specific concerns might be and how we could deal with them. And when you make a mistake, you have to be willing to stand up and say, hey, we totally missed this, and you deserve an apology. That's what we did. Nothing is going to be perfect, but you need to own it when you get it wrong.
[00:23:30] Speaker B: That's such a good point. Well, Rob, what's next for you? Where are you taking all of this next? Any big plays that you're entertaining?
[00:23:38] Speaker A: So what excites me most is completely reimagining asynchronous education in an AI-forward capacity. That's where the good work we're doing with the Learning Sciences team comes in. It's one of those key elements of the Provost's area that I'm super excited to be involved in.
[00:23:53] Speaker B: Wow. Thank you, Rob. This was such a great conversation. I really appreciate all your insights, and congrats to you for leading in the education space. Thank you, Rob.
[00:24:04] Speaker A: You bet, you bet.
[00:24:06] Speaker B: And that's it for our conversation with Rob McAuslan, Vice President, Artificial Intelligence at Southern New Hampshire University.
We covered so much ground from understanding his unique background to the opportunities of using AI in education.
His strategy started with listening, to understand the needs of various departments. We also addressed the important need to train employees to use AI responsibly and the ethical considerations of AI deployment. Rob also outlined the maturity model for AI adoption and shared insights on the future of AI in education. He highlighted the potential for AI to enhance learning experiences while also addressing structural inequities.
Thanks for listening to the What Works podcast. Until next time, keep exploring what works in your world.