Digital Screening for Smart Hiring in HR

Hosted by

Mervyn Dinnen

Analyst, Author, Commentator & Influencer

About this episode

Digital Screening for Smart Hiring in HR

Host: Mervyn Dinnen

Guest: Ben Mones, CEO & founder, Fama

In this episode Mervyn talks to Ben Mones (CEO & founder at Fama) about how AI-enabled online and social media background screening helps identify the right people, and the key findings from their State of Misconduct at Work report.

They discuss:

– What is online and social media background screening

– How can it identify potential pitfalls with future candidates

– The most common types of misconduct that screening uncovers

– Addressing emerging political situations when screening (e.g. the current situation in the Middle East, and upcoming elections in both the US and UK)

– Identifying online intolerance and harassment

– Incorporating emerging AI regulations and compliance in the EU

 

Thanks for listening! Remember to subscribe to all of the HR Happy Hour Media Network shows on your favorite podcast app!

Transcript follows:

Mervyn Dinnen 0:17
Hello, and welcome to the HR Means Business podcast, which is part of the HR Happy Hour Network. I’m your host, Mervyn Dinnen. Today I want to talk about a topic which I think is increasingly important, particularly when we come to hire people, and that is the concept of social background screening. My guest today is Ben Mones from Fama Technologies. Ben, welcome to HR Means Business. Would you like to tell people a little bit about yourself?

Ben Mones 0:48
Yeah, so thanks a lot for having me, Mervyn, and to the audience out there. My name is Ben Mones, I’m the CEO and founder of Fama. We are the world’s largest online screening company, and we’re thrilled to be on the podcast today. So thanks again for having me. We appreciate it.

Mervyn Dinnen 1:05
It’s a pleasure. What made you interested, I suppose, in starting a business in this area?

Ben Mones 1:12
Of course. So, at a very high level, my background is in software; I’ve been doing software startups for pretty much my entire career. At an early company, I had hired a guy who looked great on paper: his resume and references checked out, and we did all of the work in the interview process of asking the right questions, putting him through a series of assessments, flying him out, doing a social interview. All of the things that we do as hiring managers to essentially make a good bet on how this person is going to act when they join our company. The basic stuff: how they’re going to act with the people who work there, how they interact with customers. We went through what we thought was every step it’s possible to take in hiring this guy. Unfortunately, six months into that guy’s job, he ended up sexually harassing one of our top salespeople at the company. It was a horrible experience for the woman, for the victim, but it also set off a chain of events within our company that led to a material financial downturn in the business. After the fact, we saw on this guy’s social media all of this misogynistic, horrible, pejorative content about women. Had we seen it, we never would have hired this guy, never would have brought him on board. So really, it was that experience, experiencing the very pain that we now solve for, that led me to start Fama. And if it’s alright with you, Mervyn, I can just jump right into how I think about social media screening and online screening.

Mervyn Dinnen 2:52
Be my guest, Ben. Be my guest.

Ben Mones 2:55
So, at a very high level, as hiring managers, like I was saying, we think so much about how we can generate the most signal, the most predictive insight. How do we reduce the downside of the bet we’re making? We want to make sure this person is going to act with customers in line with our brand values, that they’re going to treat people fairly, that they’re going to represent our culture rather than detract from it. At the end of the day, if you reduce hiring down, it’s all about putting the steps, tools, and technology in place so that we as hiring teams can make the most informed decision about how this person is going to be once they come from the outside world into, you know, the beautiful garden that is our company. And online screening is really, I’d say, an evolution in the traditional screening industry, brought about by AI and technology, which allows us to enable end users and customers to tap into one of the world’s richest data sources. There is so much signal there about how people act around others, how people evidence their point of view on, for example, things related to harassment and intolerance. There’s a lot of great stuff on social media, a lot of dog pics, a lot of restaurant reviews, etc. But there’s also signal out there that hiring teams can use to make a more informed decision about how this person is going to act once they join their company. Not whether they’re qualified, not whether they have the skill set, but how they’re going to act with customers and other employees. It’s that simple. It’s one of the richest datasets out there in the world, and we built the tools that allow you to access it in a GDPR-compliant fashion, in line with all the latest privacy legislation around AI, specifically in the UK. So, really, taking that framework of, hey, there’s insight out there that’s relevant to making a good hiring decision, we provide the tools and technologies so you don’t invade a person’s privacy and see what you shouldn’t see. Long-winded answer to your question, but that’s sort of how I got here and what it is.

Mervyn Dinnen 4:54
No, that’s okay. I suppose the first question, if you talk to somebody about social media screening, is around context. Social media is a platform where people can be themselves, and I think in the vast, vast majority of cases around the world, in my experience, people are themselves. Some people want to portray a persona, though, particularly those who might not realize that this kind of screening could happen. So how do you differentiate between somebody who is maybe portraying an image, contextually, and the real person?

Ben Mones 5:38
Yeah, sure. So I think there’s a question initially around technology and usage, the first part being how we limit false positives. That means flagging something where, to your point, someone is maybe saying “my Premier League team is going to kill it this weekend” in the game that they’re playing, compared to “I’m going to kill my team member this weekend, because I’m really upset about the comment they made about my favorite Premier League team.” Very similar in concept, and the keywords are the same, but at the end of the day a very different statement in terms of tone and context. So part of what we do at Fama, and this is that evolution of artificial intelligence that allows us to access these insights, is technology that reads text and images just like a person can. It looks at everything from the keywords, of course, to sentiment analysis and concept clustering, a series of algorithmic techniques that allow us to tell the difference between, using my example, somebody promoting an intolerant ideology online or acting threateningly towards others, versus somebody making a comment about a sports team they follow. That’s one key piece: being able to pick out the needle in the haystack, if you will. There’s a lot of stuff out there that is, frankly, just who people are, just like in the real world. Most of it is good stuff, and most people act in the best interest of those around them. But sometimes you don’t see that. So that’s one piece of it: getting the data to the customer itself.
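To make the keyword-versus-context distinction concrete, here is a minimal, hypothetical sketch; it is not Fama’s actual pipeline. It assumes the Hugging Face transformers library and the facebook/bart-large-mnli zero-shot model are available, and it shows how a naive keyword filter flags both of Ben’s example sentences while a context-aware classifier can separate them.

```python
# A minimal sketch, not Fama's actual system: naive keyword matching
# flags both sentences, while a zero-shot classifier can use context.
# Assumes: pip install transformers torch
from transformers import pipeline

posts = [
    "My Premier League team is going to kill it this weekend!",
    "I'm going to kill my team member this weekend over that comment.",
]

# 1) Naive keyword screen: both posts contain "kill", so both are flagged.
KEYWORDS = {"kill"}
keyword_flags = [any(k in p.lower() for k in KEYWORDS) for p in posts]
print("keyword flags:", keyword_flags)  # [True, True] -- one false positive

# 2) Context-aware screen: classify tone and intent, not just words.
# Model choice and candidate labels are illustrative assumptions.
classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")
for post in posts:
    result = classifier(post,
                        candidate_labels=["threat of violence",
                                          "sports enthusiasm"])
    top_label = result["labels"][0]  # highest-scoring label
    print(f"{top_label!r:22} <- {post}")
```

A production system would layer image analysis, sentiment models, and human review on top of anything like this; the sketch only shows why context, not keywords, should drive the flag.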

Ben Mones 7:17
But the second piece is that companies put policies in place that reflect their brand and their culture, where they can look at things and say, alright, maybe one off-color joke is something we tolerate as part of our policy. It’s the same way you might adjudicate other hits in your candidate screen: you might have an adjudication policy around verifications, or an adjudication policy around right to work. If someone does or does not have the right to work, you’re going to adjudicate that person, as far as the hiring decision is concerned, in line with your own existing policy. So companies will apply things like the recency of a post and its frequency. If one person makes one comment five years ago, it’s not that big of a deal. If you’ve got somebody who’s posted 60 times in the past six months promoting antisemitic and intolerant content online, that’s altogether a different story. So it’s one of those things where, similar to the US Supreme Court ruling related to pornography, you know it when you see it, in a lot of ways. If we can reduce the noise for the client and give them the signal, oftentimes they have the policy, the intuition, and the years of experience and expertise to adjudicate and bring that into their own process.
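As an illustration of the recency-and-frequency idea, here is a small, self-contained sketch of a hypothetical adjudication policy. The thresholds (a lone flag disregarded after five years, escalation at a given count within six months) are invented for the example and are not Fama’s or any real employer’s policy.

```python
# Hypothetical adjudication policy sketch -- thresholds are invented
# for illustration, not taken from Fama or any real employer policy.
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class FlaggedPost:
    posted_at: datetime
    category: str  # e.g. "intolerance", "harassment"

def adjudicate(flags: list[FlaggedPost],
               now: datetime,
               stale_after_years: int = 5,
               recent_window_days: int = 180,
               escalation_count: int = 3) -> str:
    """Apply a recency/frequency policy to flagged posts."""
    recent = [f for f in flags
              if now - f.posted_at <= timedelta(days=recent_window_days)]
    fresh = [f for f in flags
             if now - f.posted_at <= timedelta(days=365 * stale_after_years)]
    if len(recent) >= escalation_count:
        return "escalate for review"   # e.g. 60 posts in six months
    if not fresh:
        return "disregard"             # one comment five years ago
    return "note for adjudicator"      # in between: human judgment

now = datetime(2023, 11, 1)
one_old_post = [FlaggedPost(datetime(2018, 3, 1), "intolerance")]
print(adjudicate(one_old_post, now))   # -> disregard
burst = [FlaggedPost(now - timedelta(days=i * 3), "intolerance")
         for i in range(60)]
print(adjudicate(burst, now))          # -> escalate for review
```

The design point is the one Ben makes: the tool reduces noise and surfaces signal, while the final call stays with the client’s written policy.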

Mervyn Dinnen 8:30
Okay. You touched slightly there on a couple of, I suppose, current affairs events, which I want to come to in a question shortly. I suppose the first question, or the question that might be in the minds of people listening, is about the kind of behaviors that can be picked up. What kind of behaviors do people look for when they approach you?

Ben Mones 8:55
Generally, we see things that are directly related to that initial question I posed, which are related to risk factors: what are the most important things for me to know, so I can limit the downside of my bet on how this person is going to act with employees and customers? So think of things that people might ask in the standard interview process, or try to glean through other methods of candidate screening. Companies will look at things like fraud, or a history of intolerance or harassing behavior towards others. If I have a multicultural workforce and a multi-ethnic customer base, for example, I need to ensure that people coming in are going to treat other people fairly, that what someone looks like isn’t going to change the way they treat those folks. So companies really tend to look at things like fraud, illegal activity, and illegal drug use, but you can get very, very specific. For example, there was a partnership we did with the National Health Service over in the UK in December of 2020, as part of the COVID-19 vaccine rollout campaign, which I’m sure folks still remember; Boris caught a bunch of heat about why it was taking so long to get jabs in arms. The NHS was interested in screening the people who were giving the jab, to make sure there was no one who was posting or interacting with anti-vaccine or vaccine conspiracy content online. So you can get very specific, tied to the unique risk of your business and the potential downsides that you really want to solve for and control for.

Mervyn Dinnen 10:37
That’s a very good example. And for people listening in the UK, and I guess Europe as well, that would be a very relevant example. I know in the US there were various anti-vaccine sentiments; there were some over here in the UK and Europe as well, possibly not as pronounced. When you are looking for this, or when a client approaches you and says, look, I want people effectively screened for these kinds of behaviors, what are the pitfalls to be aware of? What do you warn them could be the downsides of looking for this?

Ben Mones 11:15
Yeah, the most important thing, I would say, particularly for the UK listeners out there, is to identify the business purpose for which you are screening, and to ensure that the criteria you are screening for make sense. Not incontrovertibly, in a black-and-white way that anyone who looks at it would understand, but in a very reasonable business way that a non-technical user or non-HR user would understand, so that you’ve set the precedent, the reason why you’re screening for this sort of behavior. It might be that example I shared with you previously: we’ve got a diverse workforce and we want to ensure people coming in are going to contribute to that, as opposed to detract from it. It’s that simple. What I typically recommend to clients is to think about your North Star. What are the things that you care about? Back to that very simple analogy: we want to make our best bet, and we want the best signal, the best insight, for the decision we’re about to make to bring this person in. What are the vectors you want to screen for? Oftentimes it goes back to the criteria we just talked about, but really streamlined: what do we care about, and why are we doing this? Answer that question, put it in writing, and allow your team to react to it. It’s a tough conversation at times, and often it’s not a conversation that folks in HR teams typically have: where do we draw the line for our culture? How many misogynistic comments, is it one, is it three, is it five, before we say, hey, we’re not going to tolerate this? Different businesses differ; I’ve talked to sports gambling companies that have very different criteria compared to big financial services firms. You can imagine different cultures and different customer bases reflect different screening criteria, and solutions like Fama, as well as a wide range of other technologies out there, can be tailored and tuned to that use case.

Mervyn Dinnen 13:09
Okay. You’ve been involved in some research, the State of Misconduct at Work report. I’d be interested, and I think the listeners would be interested, to know some of the, I suppose, common types of misconduct that you uncovered in the research.

Ben Mones 13:26
Yeah, of course. So the State of Misconduct at Work is a report that essentially quantifies the scope of the signal that exists out there in our digital identities, if you consider digital identity to mean a person’s publicly available social media, their complete web presence, etc. What we did was anonymize and aggregate data from across our customer base. We looked at all the different types of misconduct that we’d flagged over the course of one year, organized by type and by industry. The most common types of misconduct we found, to answer your question directly, are harassment, sexual misconduct, and intolerance. Those are the three flags we found most across our entire book. The interesting piece was that of every candidate who did have a flag in that anonymized, aggregated data, about 20% had an intolerance flag. And we’ve actually seen that intolerance flag increase over the past year or so. So we’re tracking very closely; you mentioned current events. We’re tracking within our data right now how often we’re seeing changes in intolerance, methods of harassment, and other sorts of things online based on, for example, the war in Israel and, to some degree, the war in Ukraine as well.

Mervyn Dinnen 14:46
Okay, and by intolerance, you’re looking at how people interact with it, how they share it, maybe the comments they make?

Ben Mones 14:53
Exactly. Whether they post about it, like it, or repost it, for example. We’re looking at all sorts of user engagement around, again, the key topics that clients care about when it comes to making a decision on a candidate screen.

Mervyn Dinnen 15:09
Okay. Now, we’re having this conversation in the last few weeks of 2023, and as you’ve alluded, there is currently war in the Middle East. We’re not here to discuss that, but obviously it is something which is very personal to a lot of people online; we can see that every day. And there are increases, I suppose, being noted in online intolerance and harassment and things like that. How is this playing out? Over the next six to 12 months, how do you think organizations approaching you for screening might want to deal with this? Is it something that they’re going to want you to look for, obviously depending on their employee base and their client base? Or is it something that you think they won’t want to get involved with?

Ben Mones 16:05
Yeah, absolutely. And I agree, there’s probably a separate podcast to talk about the war in the Middle East. But certainly as it relates to our universe, to your point, we have seen an increase in antisemitic and Islamophobic content online. For example, there’s a company called CyberWell, which is a trusted partner to Meta and TikTok, that tracks trends in these sorts of behaviors and uses them to report content to those social media platforms, very much in line with a lot of the UK and EU regulation ensuring that platforms keep this sort of hate speech off their platforms. As you and probably some of your listeners know, companies can now be fined for not taking the appropriate steps to remove that content. CyberWell quotes a really interesting piece of data: they looked at the incidence of the hashtag “Hitler was right” on the platform X. We’re in the November timeframe now, but since October 7, that term soared 29,000% in Arabic and 1,600% in English following the attack. And companies are reacting to that. Companies are now seeing their people and their candidates interacting with that content, and asking themselves how they can identify it and have a conversation with the candidate: hey, these are not the sorts of things that make our company great. You mentioned policies, right? Hey, here’s our line. We’d love to hire you, but we just want to let you know that promoting this sort of horrible behavior online is not something we stand by at our company. Oftentimes, Mervyn, that is enough. That sort of point of intervention doesn’t always mean not hiring somebody; sometimes it’s simply to course-correct and let somebody know. So companies are coming, without a doubt, and saying: with the latest trends in what’s being posted online, with the data people are seeing, how do we ensure we’re taking the steps to align our organization with the values that we believe in, and how do we tap into social media presence and web data to do that? Companies come to Fama, and we work with many today, and we’ve even seen a pretty significant increase in the past couple of weeks of folks interested in the same thing. There’s much more we could talk about there, but it’s probably beyond the scope of this conversation.

Mervyn Dinnen 18:42
I would have thought so. I suppose one thing for listeners, because we’ve highlighted that type of content, and as we say it’s 2023 moving into 2024: one thing we do know is that certainly in the US, and almost certainly in the UK, there will be elections at almost the same time, which I think is quite rare; I’m not sure when that last happened. And I suspect that with everything going on in the world, you will find a lot of people online interacting with this, wanting to put their viewpoints across, maybe disagreeing with a lot of people. What would you advise companies if they are, I suppose, concerned about what’s coming from their people online, which might be seen by customers and prospective clients? With, as I say, probably two hotly contested elections coming up, what advice would you give companies about speaking to their people in terms of guidelines? Not so much ground rules, but guidelines and frameworks.

Ben Mones 19:56
You know, there is a distinction between the United States and Western Europe overall when it comes to this particular question, because there are rules throughout parts of Western Europe that govern a little more explicitly what people can say online. But I’ll give you the Fama philosophical answer, which is more US-centric but, I think, covers the spirit of a lot of the EU legislation that’s out there. It is very important to remember that you are not screening, to use American politicians as the example, for whether or not your candidate is a supporter of Joe Biden or Donald Trump. You are not screening for the concept of “does Palestine have a right to exist?” or “does Israel have a right to exist?” These are questions that in many ways are fundamental to people’s belief systems. It is when it crosses the line from political support, which is, again, a very difficult line to identify, especially these days, especially with what’s online, to what is by definition antisemitic, what is intolerant, versus what is just a political point of view. So a lot of the recommendations we provide are to ensure, if you have people doing this internally at your company, and we work with some businesses that have teams of analysts or investigators doing this work, that there is alignment across the board on what they’re screening for and what they’re filtering for, and that you have the policies in place to separate the wheat from the chaff. Essentially, to say: we need to make sure that we’re not using political affiliation, someone’s simple political belief, or a critique of a government, for example, and that what matters is where it crosses the line into this bigger concept of hate speech, which is a line many of us, I think, are still identifying and trying to figure out right now. So we try to capture that nuance at Fama and separate it out from what’s mainstream, just political support. Try to stay away from keywords and use technology, I would say, because otherwise you’re going to end up with a ton of false positives and see things you shouldn’t see as part of the hiring process.

Mervyn Dinnen 22:16
This leads me on, I suppose, to my next question, which is: what can companies do? What actions can they take, from the research you’ve done and the conversations you’ve had with clients, to combat this?

Ben Mones 22:29
Yeah, there are a wide range of tools you can use. I think it starts with answering that simple question of what great looks like at our company. What does the model employee look like? What are we trying to accomplish? What is our North Star? Then arrange everything in your talent funnel, including where you source talent, how you structure interview questions, and the focus, for example, on quantifiable skill sets versus more qualitative ones; really structure and ensure that your entire chain of candidate screening tools and technology is aligned against the core values of what great looks like within your company, and standardize as much as possible in what is going to increasingly become a blurry, gray area. Typically that relates to, and it sounds boring, and nobody likes to do it, enhancing internal policy as it pertains to what we screen for and how we adjudicate if a hit, as it were, comes back. So really, I would say: examine your entire funnel, map that talent acquisition funnel to key value criteria within your business, use tools and technology where appropriate, and establish methods of escalation and adjudication within that process, so that everyone’s treated fairly, everyone gets the exact same scope, and no one has an additional, pejorative level of screening for one reason or another.

Mervyn Dinnen 23:53
Okay. Looking to wrap up the conversation, in a way: what does the future look like, the future of AI in hiring, considering regulations in Europe and around the world? I think you’ve done some research, haven’t you, around the EU and the use of algorithmic decision-making?

Ben Mones 24:16
Yes, yes. And I’d encourage anyone who’s interested: we wrote a very EU-centric overview and ebook with a leading global law firm on the topic, which really digs into how we’re going to see more and more automated processing. What that means is that solely automated processing in methods of talent acquisition, or that uses HR data, has a lot of perils associated with it. It probably goes beyond me, and I can play a lawyer on a webinar for just long enough, so I’d encourage folks to read the white paper. But at a very high level, if I had to give one piece of advice on how these legislative frameworks are rolling out, it’s about the identification of bias within algorithms, and asking your vendor the simple question: have these algorithms been validated and audited for bias by somebody outside of the development or sales chain of command at the company selling you the product? Meaning, we want to make sure that the people selling us algorithms don’t have their thumb on the scale; that they aren’t saying, oh yeah, the software has been validated and audited for bias, when the reason they’re saying it is that they have a commercial or development interest in saying yes to that question. So there’s a wide range of new laws, and it’s changing really quickly. And I think if you look at the evolution of technology, some people are wondering why all this AI legislation is coming out. The very simple fact is that for thousands of years, humans have been biased. We as people are biased; it’s been part of who we are for as long as we’ve been walking this earth. Bias is implicit in all humans, and anybody who tells you otherwise is, I think, themselves biased. What’s interesting is that over those thousands of years, we’ve developed the tools, the ways we interact and engage with each other, the cultural systems and institutions, that enable us to reduce bias in our human-to-human interactions. We’ve tried to inject fairness, we’ve tried to invest in meritocracy, in all elements of what we do, or at least we try. It’s an imperfect process. But now you’re seeing this massive Cambrian explosion of artificial intelligence, and legislators are trying to condense thousands of years of de-biasing that humans have done into this kind of algorithmic framework. That’s why you’re seeing all this legislation roll out, all these rules and regs. It’s a little bit Wild West, but at the end of the day it’s still something we think is tremendously important. And the ebook gives a ton of detail as to what’s changing, how it’s changing, and how you as a user can stay ahead of those changes and stay active within your talent technology portfolio.
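One widely used first-pass check behind the “audited for bias” question Ben raises is the four-fifths (80%) rule from US employment guidance: compare each group’s selection rate against the most-selected group’s. The sketch below is a minimal, hypothetical illustration with made-up numbers, not a substitute for the independent audits or EU AI Act documentation he describes.

```python
# Minimal adverse-impact check using the US "four-fifths rule":
# a group's selection rate below 80% of the highest group's rate
# is a red flag worth investigating. All counts here are made up.
def adverse_impact(selected: dict[str, int],
                   applicants: dict[str, int],
                   threshold: float = 0.8) -> dict[str, float]:
    """Return each group's impact ratio vs. the best-selected group."""
    rates = {g: selected[g] / applicants[g] for g in applicants}
    best = max(rates.values())
    return {g: rate / best for g, rate in rates.items()}

applicants = {"group_a": 200, "group_b": 180}   # hypothetical counts
selected = {"group_a": 60, "group_b": 30}

for group, ratio in adverse_impact(selected, applicants).items():
    status = "OK" if ratio >= 0.8 else "POTENTIAL ADVERSE IMPACT"
    print(f"{group}: impact ratio {ratio:.2f} -> {status}")
```

A real audit goes much further (statistical significance tests, intersectional groups, and the documentation the EU AI Act anticipates), but a check like this is the kind of thing an outside auditor can verify independently of the vendor.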

Mervyn Dinnen 27:18
I suppose, as a kind of final question, because we’re all human, and we’ve been talking about prospective candidates and prospective employees and what they’re saying: what advice do you have for HR people and managers listening about what they’re doing themselves? How do you see, I suppose, the future of this kind of screening, and what should organizations, and the individuals within them, be doing to ensure that when candidates do their own version of screening, they don’t find things that are, let’s say, best left unsaid?

Ben Mones 27:59
I would say the best advice I can give at the moment is to remember, and I’m not going to do a whole sales pitch on my company, but whether it’s a tool like Fama or another algorithmic tool that companies are using, these are not job replacers. These are not replacing your intuition, your expertise, or the years of compounding experience that have made you the professional you are today. What AI is offering you is a power drill instead of a screwdriver. It’s taking the tools you used previously and dramatically increasing their capacity. So what we say is, when you’re seeking out technology, whether it’s candidate screening tools such as Fama’s or something else, try to identify the things that bring you to the precipice of action. Not what’s making a decision for you, but what’s informing the decisions you’re making: data and insights that you otherwise were not able to obtain through the old screwdriver you used to have, and maybe now you can with the power drill. That’s the high-level advice I would provide. But just know that no matter what tool or technology is out there, you as the end user, you as the professional, have the experience; you have the upper hand. And it will be the groups that can adopt the power tools into their workflows quickly, and quote a cheaper bid than their competitor with a shorter lead time because of the technology they have, that are going to win. That’s how I think the market is going to be affected over the next 10 years or so: the companies that really adopt technology into their existing workflows, and don’t look for a simple full replacement of a human, will come out ahead. Because, of course, famous last words, but I just think that ain’t going to happen anytime soon.

Mervyn Dinnen 29:58
Thank you, Ben. It’s been a fascinating conversation. For people listening who want to reach out to you and maybe continue the conversation or find out more, what’s the best way to reach you? Is it email? Is it Twitter? Is it LinkedIn?

Ben Mones 30:16
We’re pretty active on LinkedIn; we share a lot of our content there, and I’m a pretty active user. So connect with me on LinkedIn, my name is Ben Mones. I’d love to connect and stay engaged. And check out our website, where we have a bunch of other news and all the latest and greatest.

Mervyn Dinnen 30:33
Thank you, Ben. It’s been an absolute pleasure.

Ben Mones 30:35
Mervyn, thank you so much, and we’d love to be back on. So thanks again for having us. We appreciate it.

Transcribed by https://otter.ai
