Mervyn Dinnen 0:09
The HR Happy Hour Network is sponsored by Workhuman. Workplace recognition is more than feel-good moments. It's a powerful source of people data that transforms cultures and businesses. Workhuman unlocks that power with Human Intelligence: AI trained on millions of coworker "thank-yous" to uncover the hidden patterns shaping your culture. Who's actually driving projects forward? Where are the skills gaps? Which teams are thriving, and which need support? Say goodbye to the guesswork, and leverage the collective wisdom of your people for insights you can act on. And here's the best part: Workhuman guarantees measurable results. Real culture improvements that drive real business outcomes. Visit workhuman.com to turn your recognition data into a competitive advantage. Workhuman, a proud supporter of the HR Happy Hour Network. Thanks for joining us!

Mervyn Dinnen 1:09
Welcome to the HR Means Business podcast, which is part of the HR Happy Hour Network. I'm your host, Mervyn Dinnen, and today I'm diving into one of the most important conversations in HR and talent right now: how to really understand and use AI responsibly. There's a lot of noise around automation, bias and new tools, but what does it all mean in practice for HR and talent leaders? How can we balance innovation with ethics, efficiency with empathy, and make sure AI enhances rather than replaces the human side of work? To help unpack this, I'm joined by Martyn Redstone, an expert in AI governance, compliance and ethics in recruitment and HR. Martyn's been leading conversations on responsible AI use since long before it became a boardroom buzzword, so he's the perfect guest to help us separate fact from hype. Martyn, welcome to the HR Means Business podcast. It's great to have you here! Would you like to introduce yourself?

Martyn Redstone 2:02
Yeah, absolutely. Thanks for inviting me on. It's great to be here, and great to finally have this conversation with you. So my name is Martyn Redstone. As you said, I work in the area of AI and HR, but specifically good governance, good risk management and good compliance, all for the outcome of ethical, responsible AI implementation. And I've been doing that, like you said, for several years now; I launched my AI advisory about seven years or so ago, so pre the ChatGPT hype as well.

Mervyn Dinnen 2:38
Okay, so I suppose let's start with where we are now. Is HR making progress? Are they experimenting with AI? You've been talking about AI in HR, as you just said, for a number of years. So how would you describe where we really are today in terms of adoption and understanding?

Martyn Redstone 2:58
Yeah, I think we've gone through the experimentation phase. Now I'd say that we're actually in the accountability phase. That's where I see us at the minute. I think that we're done with testing, we're done with talking about it, and now we're being asked to prove outcomes, ethically, operationally, legally. Because there's so much going on around the world in litigation, legislation and regulation, I think it's really starting to worry HR leaders. All of the early part of the hype cycle that we've been through was about efficiency, and now most of the conversations I have are about literacy. So how do we make sure that people are up to speed?
And how do we make sure that we're assessing whether people are up to speed as well, both from an L&D and a TA perspective? But also, how do we make sure that we're not going to be liable for any mistakes? So yeah, I think there's a lot going on out there. I think that we're now starting to look at that critical 5% when it comes to AI: the bit that carries the risk, the bit that needs to be looked at from a more expert level. And I think that's because early in the hype, organizations invested heavily and went really fast, really quickly, but didn't think about the governance piece, and they're now paying that price. We're seeing litigation around bias, and bias exposure that comes from poor data controls and flawed procurement and audit. So yeah, for me, we're now definitely in the accountability phase.

Mervyn Dinnen 4:35
I mean, obviously, as you've said, there's been a lot of hype and everything around it. Looking at HR leaders in particular, what do you think they still misunderstand about what AI is, and what it can and can't do?

Martyn Redstone 4:53
So I think they're still confusing capability and competence. That's the key bit for me. Because we know AI can automate, but it can't reason, it can't justify, it can't defend; it has no world model. And I think sometimes, especially with a lot of the marketing hype that's out there, and a lot of the magic there is around things like large language models, HR leaders are still really confusing capability with competence, because many leaders think of AI just as pure software. For me, we need to think of AI as a decision engine, and anything that's making a decision needs oversight; it needs to be checked in on, to make sure the decisions that are being made are right, are ethical, are based on the correct data, et cetera. And so part of HR's misunderstanding around all of this magic, all of this capability versus competence, is that HR leaders, I think, believe that AI reduces bias automatically. I hear that a lot: "we use AI to reduce bias, because we're not using people." In reality, it's scaling whatever bias is already in the data. So again, that's the capability versus competence confusion that's going on. The second blind spot, the second misunderstanding that I tend to see, is this human-in-the-loop myth. What matters is not just having a human in the loop when it comes to the oversight of those decision-making processes; it's actually having an expert in the loop. Especially under some of the legislation that's come out recently over here in the UK and Europe, if you're checking in on a piece of automated decision-making technology, an AI algorithm or what have you, it's no good just saying we've had somebody junior check in on it. You have to have somebody that's competent enough to check it: an expert in the loop. So again, there's a misunderstanding around what type of person needs to be the oversight person when it comes to AI decision making. And actually, the last thing was quite interesting: EY recently released a piece of data, and it showed that only 12% of executives could match AI risks to the controls that they've put in place.
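To make the expert-in-the-loop distinction concrete, here is a minimal sketch, in Python, of what it can look like as a code-level control: sign-off on a high-risk decision is only accepted from a reviewer with the right competencies. Everything here (the fields, the risk tiers, the competency names) is invented for illustration; it is not drawn from any specific law, product or anything discussed in the interview.

```python
# Illustrative sketch: "expert in the loop" as a sign-off gate.
# All field names, tiers and competency labels are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Reviewer:
    name: str
    competencies: set = field(default_factory=set)  # e.g. {"bias-testing"}

@dataclass
class AIDecision:
    subject: str     # e.g. a candidate ID
    outcome: str     # e.g. "reject"
    risk_tier: str   # "high" for hiring/firing-type decisions

# Competencies a sign-off requires per risk tier (illustrative values).
REQUIRED = {
    "high": {"bias-testing", "employment-law"},
    "low": set(),
}

def sign_off(decision: AIDecision, reviewer: Reviewer) -> bool:
    """Accept a human sign-off only if the reviewer is competent for the tier.

    A junior rubber stamp fails this check; an expert in the loop passes it.
    """
    missing = REQUIRED[decision.risk_tier] - reviewer.competencies
    if missing:
        raise PermissionError(
            f"{reviewer.name} lacks required competencies: {sorted(missing)}"
        )
    return True

# Usage: a reviewer without the required competencies raises; an expert passes.
decision = AIDecision(subject="candidate-042", outcome="reject", risk_tier="high")
expert = Reviewer("Priya", {"bias-testing", "employment-law"})
assert sign_off(decision, expert)
```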
So for me, those leaders, both CHROs and the rest of the C-suite, are completely flying blind a lot of the time when it comes to AI risk. So the biggest misunderstandings are this bias piece, this human in the loop versus expert in the loop and, like I said, confusing capability with competence.

Mervyn Dinnen 7:46
And obviously you talk a lot about responsible use. So I'm thinking of HR and talent leaders: what are the non-negotiables? What do they have to have in place before they really implement, in terms of ethics, compliance and things like that?

Martyn Redstone 8:09
As a headline, these are my three governance pillars, ultimately. Before you actually start using AI, you must prove that you're in control of it. So for me, the non-negotiables are inventory, policy and evidence. When it comes to inventory: you can't govern what you can't see. You need to understand what AI you're actually using within all of your HR workflows, what models are out there, even going so far as understanding what shadow AI is being used across the organization, with people bringing their own AI into the workplace. The second is to create a clear responsible-AI policy: what's acceptable use and what's not. And the third is to build evidence trails, you know, documentation, bias testing, human validation logs, because all of that aligns to frameworks like the EU AI Act and ISO 42001, and they're all going to become quite standard for HR audits. So yeah, my key non-negotiables: get those controls in place before you start going big on AI. Inventory, policy, evidence.

Mervyn Dinnen 9:19
Of course, one of the other issues, I suppose, is that HR and talent leaders are fairly dependent upon vendors to explain to them what the tools are, what they are investing in, and how they can use it. They also rely on them to integrate it into their systems. And a number of times I hear people say, you know, what questions should we be asking? So what, to you, are the kind of questions that leaders should be asking tech providers to make sure it's safe to use, it's compliant, it's transparent, and they don't fall foul of any regulations?

Martyn Redstone 10:04
Yeah, look, the first thing I always try to get into people's minds is: you don't buy AI tools like you would standard consumer technology. You need to vet them like you would a regulated supplier, somebody who is providing financial systems or healthcare systems, things that have a massive impact on people's lives. You need to remember that the systems you're putting in place, especially in the world of HR, have an impact on people's lives, so you need to vet them with that level of responsibility. So things that I would ask of a vendor: How was the model trained? What data was used, and what testing has been done for bias? I always tell people they need to demand access to model cards. Model cards are the detail behind the AI model: how was it trained, how was it tested, where does the training data come from, et cetera. They are significant pieces of documentation, but that's the starting point that shows the vendor's taking it seriously. Have they run bias audits? Do they have bias reports? Do they have explainability statements? That, for me, is key.
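As one way to picture what that proof and evidence can look like in practice, here is a minimal sketch of a procurement checklist that records the artefacts named above (model cards, bias audits, explainability statements, internal policy alignment) and surfaces the red flags. The field names and the example vendor are invented for the sketch; this is not an industry-standard schema or any specific product.

```python
# Illustrative sketch: recording vendor due-diligence evidence.
# Field names and the example vendor are hypothetical.
from dataclasses import dataclass
from typing import Optional

@dataclass
class VendorDueDiligence:
    vendor: str
    model_card_provided: bool        # training, testing, data provenance
    bias_audit_date: Optional[str]   # None means never audited
    explainability_statement: bool   # can they explain how the model works?
    meets_internal_ai_policy: bool   # alignment with your responsible-AI policy

    def red_flags(self) -> list:
        """Return the evidence gaps that should block a purchase."""
        flags = []
        if not self.model_card_provided:
            flags.append("no model card")
        if self.bias_audit_date is None:
            flags.append("no bias audit")
        if not self.explainability_statement:
            flags.append("no explainability statement")
        if not self.meets_internal_ai_policy:
            flags.append("fails internal responsible-AI policy")
        return flags

# Usage: a vendor that cannot show a bias audit surfaces a blocking flag.
candidate_vendor = VendorDueDiligence(
    vendor="ExampleScreen AI", model_card_provided=True,
    bias_audit_date=None, explainability_statement=True,
    meets_internal_ai_policy=True,
)
print(candidate_vendor.red_flags())  # ['no bias audit']
```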
Can a vendor provide you with a statement on explainability, on how the algorithm or the model works? You also need to make sure you're checking alignment with your internal responsible-AI policy that I mentioned earlier. If a vendor can't comply with your internal policy, red flag, as far as I'm concerned. And there are probably a good 12 to 15 really deep-dive questions that you can ask from a vendor perspective, depending on the type of use case that you're putting into place with that vendor. But my key message when it comes to checking and questioning a vendor is: you don't just ask about what it can do, because that's their job, to pitch it to you and sell it to you, "here's what it can do." You need to ask what they can prove as well; it's all around proof and evidence. So, lots and lots of questions you can ask, but the key three: How was the model trained? What data was used, and what testing has been done for bias? Can you provide those explainability statements, and does it match up to your own internal AI policy? You need to think about it insofar as anything you're buying is affecting the life and livelihood of whoever is on the end of it.

Mervyn Dinnen 12:32
And obviously bias and fairness come into it a lot, and we often hear the expression that AI can reduce bias, or it can reinforce bias. So from your perspective, what are the biggest risks of having bias reinforced? We've talked about questions to ask the vendors, but what practical steps can organizations take, particularly around hiring? I'm thinking of bringing somebody in and onboarding them; what are the steps there that they really need to take?

Martyn Redstone 13:05
Yeah, so look, most bias that we see in AI, it's not malicious. It's inherited, ultimately. And we have this wonderful turn of phrase in the world of AI, which is "garbage in, garbage out." So, like I said, it's inherited: if the training data of that AI model reflects old hiring patterns, then your AI is going to replicate them. So when I'm talking to people and they say, "we trained our model on the last 10 years of hiring data," that's when the alarm bells start going off, because we know, ultimately, that most hiring data from the last 10 years and more is full of bias, whether it's unconscious or conscious. So we need to make sure that we're not entering into this danger zone, which is untested tools being used for selection and screening, and even, in the world of HR, things like performance evaluation. So bias testing needs to be part of the procurement process. We need to make sure there's clear methodology around whether the output is biased against specific protected characteristics, depending, again, on the regulation that you're covering off from a discrimination and bias perspective. That, for me, is very, very clear: if we're putting fairness into a process by using AI systems, it's not about trusting the output, it's about testing the input and the process as well.
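To make that testing concrete, here is a minimal sketch of the adverse-impact-ratio check that underpins many bias audits, including the impact ratios New York City Local Law 144 requires for automated employment decision tools. The group labels and counts below are invented for illustration; real audits segment by actual protected characteristics and use real outcome data.

```python
# Illustrative sketch: adverse impact ratios (the "four-fifths rule").
# Group names and counts are made up for this example.

def selection_rate(selected: int, applicants: int) -> float:
    return selected / applicants

def impact_ratios(outcomes: dict) -> dict:
    """Compare each group's selection rate against the highest-rate group.

    outcomes: {group: (selected, applicants)}
    Under the common four-fifths rule, a ratio below 0.8 signals possible
    adverse impact and warrants investigation.
    """
    rates = {g: selection_rate(s, a) for g, (s, a) in outcomes.items()}
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

# Usage with made-up screening results from an AI tool:
outcomes = {"group_a": (48, 100), "group_b": (30, 100), "group_c": (18, 100)}
for group, ratio in impact_ratios(outcomes).items():
    status = "OK" if ratio >= 0.8 else "POSSIBLE ADVERSE IMPACT"
    print(f"{group}: impact ratio {ratio:.2f} -> {status}")
# group_a passes at 1.00; group_b (0.62) and group_c (0.38) get flagged.
```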
So for me, the biggest bias risk sits in the data pipeline: unverified vendor models, or a vendor that's just wrapping itself around a commercial model like ChatGPT. That's where the risk is, in the data pipeline. Once we understand that, we can de-bias it, but we have to have continuous bias auditing and bias testing happening to make sure that there's no bias creeping back in, and that's really important as well. It's not just a one-and-done. And especially from a legislation perspective, New York City Local Law 144, the EU AI Act, et cetera, you have to have regular bias audits, potentially done by a third party as well. So yeah, it's all about the data pipeline. Like I said, the key to AI is garbage in, garbage out, so we need to make sure that what's coming in isn't full of bias, ultimately.

Mervyn Dinnen 15:29
I get that. Now, this is a whole new set of skills and competencies, really, I guess, for many HR people. So I suppose I've got two parts to what I'm thinking now. The first one is: who else within the organization can they turn to? Presumably, depending on the size of the organization, there might be an in-house legal team. Who are the kind of people they should be turning to, internally and externally, for help on some of the things you've been talking about? And then we'll move on to more of the kind of skills they need.

Martyn Redstone 16:03
So look, when you're implementing AI, it shouldn't be done in a silo anyway. I actually think that it needs to be a multidisciplinary initiative across the organization. Obviously that all depends, like you said, on organization size and what have you, but ultimately legal, procurement, IT all need to be involved in this: number one, to make sure you're doing things as per company policy and as per legislation, but also to make sure that it's aligned. One of the biggest challenges I've seen continuously over the last 20 years of being in the HR space is that we're often either forgotten about or we try and do things by ourselves. And I think when it comes to something as big as AI transformation, we need to make sure we're doing it as part of a wider organizational process. So look, if you are lucky enough to have an internal steering committee or an AI enablement lead or what have you, then absolutely you should be working with them. If you don't, then there are external resources that you can reach out to. Obviously, I'm one of them, but there are organizations that do automated bias testing, there are organizations that do wider AI governance, operations and systems, and other advisors such as myself who can help you no matter where you are in the world. So there's lots of people that can help, but again, make sure you're procuring those people and those organizations properly. Unfortunately, what we've seen over the last couple of years has been this explosion of AI experts, and it takes me back to the 2017-2018 pre-GDPR days, when all of a sudden everyone was a GDPR expert.
Every time we see this in the industry, we see lots of experts coming out of the woodwork, and unfortunately we're seeing the same now when it comes to AI. Somebody has a play around with ChatGPT and thinks that if they can wrangle ChatGPT, all of a sudden they're an AI expert. And this is why I now do what I do when it comes to governance, because they don't think about these things. So if you are engaging an external expert, ask things like: what are your thoughts on governance? Ask to see previous work. Ask if they've got any experience of building AI systems outside of ChatGPT. Make sure you're procuring those people properly as well.

Mervyn Dinnen 18:29
And I guess this is across an organization. Depending on the size of the organization, one of the areas where I often hear people say AI embeds itself first is things like customer services. So if you're in an organization with many different parts, many different divisions, and customer services says, "hold on a minute, you've invested in this, but I need something to do this, or we need something to do that," does HR have the capability, do you think, to be able to advise on that? Or is it something that somebody else within the organization would then need to investigate?

Martyn Redstone 19:12
It's a really good question. Like I said, I think all organizations need to set up some kind of working group or steering committee that brings in representatives from all over the organization, and HR needs to be a key part of that. And the reason why I say that is because it doesn't matter where the transformation is happening, it always impacts people. Think about customer service; think about some of the horror stories we've heard around Klarna, where they decided to get rid of 700 customer service members of the team and replace them with AI. That's a human transformation piece as well, so HR absolutely needs to be involved in that. I mean, we won't talk about the kind of mistakes they made there, and the fact that they had to rehire a lot of people, but again, a human-based challenge. So yeah, I think that HR actually needs to be involved across the organization with every AI transformation, because everything that you do when it comes to AI impacts the people in the organization.

Mervyn Dinnen 20:20
And AI literacy as a term is obviously now part of core HR capability, and of capabilities within the whole organization. Where do you find this? How do you build literacy competency? Is it external providers? Is it partly through the vendors? Is it something that they have to go and effectively source for themselves?

Martyn Redstone 20:49
It's a good question. Yeah, I think that this is now core to most organizations, and AI literacy is core from an HR perspective as well: not just how do we build literacy in HR, but also how does HR support the organization when it comes to literacy? And for me, and I do this with a piece of tech that I built called Genesis, which is all about assessing people's AI readiness across their literacy level as well, it's not just about how well you prompt at all.
It's also about how well you understand AI in terms of the basics, in terms of data acumen, in terms of ethics, in terms of how well you can question and validate the outputs, all those kinds of things. And actually, there was some research done recently by Pew which showed that most employees are actually more concerned about AI than excited about it. And I think what goes along with building literacy is that when you provide programs that give people literacy, understanding, confidence and excitement around AI, that builds up your AI readiness and your AI literacy across the organization, because people start feeling like they're coming on board. For me, HR needs to move from that kind of AI awareness to AI aptitude across L&D, across TA and recruitment, not only to understand how to assess for it, but also how to strategically embed it into the organization. Like I said, I built a piece of tech that helps assess where people are, but also brings about the opportunity for L&D interventions to increase that readiness and literacy. So for me, by next year, I think every HR team needs that kind of literacy framework, because, much like they have a D&I policy or a data privacy policy, how well their talent layer is ready for AI is going to be imperative to the future of an organization.

Mervyn Dinnen 23:00
I suppose we need, or we want, people to use AI to enhance, should we say, human decision making and human interactions, rather than replace them, which I suspect, you know, is still early days. I suspect some people think this is going to replace human decision making, but it needs to enhance it. So particularly where we're dealing with people, things like employee experience, employee relations, leadership development, skills, those kinds of areas, even candidate experience when we're hiring: what should organizations be doing to ensure that AI isn't just making the decisions, but is enhancing and supporting the decision making?

Martyn Redstone 23:46
So the key here, or the goal here, for me, is augmented judgment, not automated judgment. I think that's the key bit that I try to get people's mindset on. Because for me, AI should be supporting complex decisions, and what that means is keeping humans as that final validator, especially, like you said, in hiring, performance, employee relations and what have you. It's super important to make sure that we go back to that expert in the loop, and almost that human in command, rather than human in the loop. Because for me, it's absolutely imperative right now that we don't put all of our trust in AI. That will come over years, because, like any learning system, any machine learning system, it takes time not only to understand how to make those decisions, or how to support those decisions, but also for us to keep checking in, keep making sure it's going in the right direction, keep validating everything that happens around the augmentation of that judgment, the augmentation of the intelligence that comes to make that judgment. So how do we ensure it enhances decision making? We ensure it enhances it by augmenting and not replacing. I think that's really important. And actually, that's key to a lot of legislation that's out there right now.
Think about the EU AI Act: they've classed employment AI, so any AI relating to the hiring, the managing and the firing of people, as high risk, and there are a lot of things that you have to put in place to manage that high risk. They've done that because we don't want machines making full and final decisions on people's lives. So yeah, absolutely, for me it's about shifting into the mindset that what we want out of AI is augmented judgment, and that, for me, is the key to ensuring it enhances that decision making rather than replacing it.

Mervyn Dinnen 25:45
Okay, it's been a fascinating conversation, Martyn, and we're coming to the end. So, to date this conversation for people listening in the future, we're coming to the end of 2025, so I suppose it's timely for me to ask: if I came to you as a head of HR or a chief people officer saying, "2026 is around the corner, what's my people strategy for next year? What should I prioritize about AI and responsible use of AI, and how can we be more effective with it?", what would be your key two, three, four points?

Martyn Redstone 26:24
So I think there are four strategic priorities that chief people officers need to start thinking about. We talked about that visibility piece, that inventory piece: make sure you understand exactly what's going on in your world when it comes to AI, and make sure there's an inventory of it. Then ensure that you've got policy in place, so those AI usage policies, acceptable usage policies, et cetera. So inventory, policy, then provision: make sure that you are also providing AI systems to people. We talk about two different types of AI systems here. We've got discriminative AI, the things that help make those decisions and augment those judgments, but also the generative AI tooling that people are using in their daily activity to help make them more efficient and more productive. Make sure you are provisioning that to people, because otherwise they're going to start using their own, which is going to be in pure breach of things like GDPR and ethical guidelines and all those kinds of things. And then ensure that you've got that literacy program in place to make sure that everybody's up to scratch on the world of AI, so you can ensure that your whole organization is doing things ethically and responsibly. If you do that, you know, inventory, policy, provision and education, you've got yourself a really nice start to a governance structure. So: building and maintaining that live AI inventory across all people systems; providing policy around your organization's stance on AI, and policy also includes things like, in TA, what's your thought process around candidate use of AI; literacy programs, diagnosing literacy and putting in place L&D initiatives; and provisioning good tooling as well. Four key areas to create a really good governance structure. My final thought is to start treating AI like people treat cybersecurity. For me, it's a non-negotiable organizational capability as we come into 2026.

Mervyn Dinnen 28:41
Martyn, it's been an absolute pleasure to talk to you. If people want to get hold of you or connect with you, what's the best way?

Martyn Redstone 28:49
LinkedIn is always the best way to get in touch with me.
Martyn Redstone, that's Martyn with a Y. I'm always on LinkedIn; as we were saying in our chat offline beforehand, unfortunately the challenge of being in the world of HR and recruitment is that you always end up living on LinkedIn. So you can find me on there. But yeah, it's been an absolute pleasure, Mervyn. Thank you so much for the fantastic conversation and the great questions.

Mervyn Dinnen 29:13
Thank you for being a fantastic guest. Absolute pleasure.

Transcribed by https://otter.ai