How to avoid the most common pitfalls of AI in recruiting
Recently I was contacted by a recruiter who invited me to apply for jobs in the communications industry. First, I wondered why I was receiving communications job opportunities when my background clearly indicates that I work in HR. Second, all of these opportunities were New York-based, while I live in Italy.
I quickly responded to the email explaining that I was not looking for a job, and then thought about why I’d received it in the first place. Was it because the algorithm had misclassified me as a millennial living in Manhattan? Then I remembered that I had recently written about my holiday in New York on Twitter. The algorithm must have picked up on this conversation and targeted me inaccurately.
Identifying potential candidates has become more advanced than ever, thanks to artificial intelligence. Unlike traditional Applicant Tracking Systems (ATS), which could only search for certain keywords in job applications, AI can go far deeper when sourcing information. As a result, recruiting has become one of the areas where AI is used most extensively.
Recruiting tools based on artificial intelligence can help companies identify candidates who may not have been considered previously. Human recruiters can hold what is known as an unconscious bias towards candidates, meaning they judge candidates on factors such as gender, location or educational background without realising it. AI has the potential to eliminate this bias.
However, despite the potential of AI in recruiting, some pitfalls remain. Here are some examples and the ways in which we can avoid them.
1. Artificial Intelligence can be influenced, just like humans
Pitfall - As mentioned, AI-based recruiting tools claim to reduce bias and increase diversity because they can automatically detect patterns in the current workforce. But since AI learns from these patterns of past behaviour, hidden prejudices in earlier selection processes would inevitably be carried forward into its recommendations.
The Society for Human Resource Management warns that "if a company's highest performers historically have been identified as white males between 30 and 40 years old—because those individuals were frequently promoted into next-level jobs—that bias can inadvertently become built into algorithms that learn from talent management patterns."
Solution - Work with partners who can customise selection systems so that they do not attach weight to past prejudices, and who can diversify the pool of candidates regardless of historical models.
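To make the bias-inheritance mechanism concrete, here is a minimal, deliberately simplified sketch. It does not reflect any real vendor's product; the data and the scoring method are entirely fabricated for illustration. A "model" that scores candidates purely by the historical hire rate of an attribute (here, a hypothetical university) will reproduce whatever skew those past decisions contained.

```python
# Illustrative sketch only: a scorer "learned" from past hiring
# decisions inherits whatever bias those decisions contained.
# The schools and records below are fabricated examples.
from collections import defaultdict

# Fabricated historical data: (university, hired) pairs in which
# one school was favoured for reasons unrelated to merit.
history = [
    ("State U", True), ("State U", True), ("State U", True),
    ("City College", False), ("City College", False), ("City College", True),
]

def train(records):
    """Score each attribute value by its past hire rate."""
    hires, totals = defaultdict(int), defaultdict(int)
    for school, hired in records:
        totals[school] += 1
        hires[school] += hired
    return {school: hires[school] / totals[school] for school in totals}

model = train(history)
# Two equally qualified candidates now receive different scores
# purely because of where past hires happened to come from.
print(model["State U"])       # 1.0
print(model["City College"])  # roughly 0.33
```

Real recruiting models are far more complex, but the underlying risk is the same: if the training signal is past hiring decisions, the model optimises for resembling past hires, not for merit.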
2. AI is not always social-friendly
Pitfall - It’s possible for AI to misinterpret information collected from social media, as seen in my example at the beginning. This could cast candidates in a negative light, potentially damaging a person’s reputation.
Solution - Talk to the legal department. Some privacy laws prevent the use of social media information in recruiting. So it’s best to understand which social channels may legitimately be used in recruiting and which must be excluded from the process.
3. Risky business
Pitfall – AI selection tools rely on hidden proprietary algorithms that are not subject to any external oversight, leaving HR unaware of how the technology actually works. If a breach or failure occurs, companies risk damage to their reputation.
Solution – Employers must monitor all AI systems in operation and update them accordingly to comply with new rules and regulations.
Using AI for recruiting purposes can have many benefits, but it does not come without risks. It’s vital that HR managers review the risks that AI may introduce and, of course, do everything possible to mitigate them.