
How AI decreases bias in recruiting to find the best people for the job

January 10, 2023

With only around one quarter of job applications ever being seen by a human, it has become clear that human bias is significantly impacting recruitment, making it harder for employers to find the best people for the job. This is where AI and machine learning can help, writes Gergo Vari, CEO of AI-powered HR platform Lensa.

Modern technology is indispensable in today’s world, especially with the rise of applications that use artificial intelligence and machine learning. Despite the mysticism surrounding it, artificial intelligence is simply sophisticated software that uses algorithms and vast amounts of processing power to turn data into an outcome.

There is a key difference between artificial intelligence and machine learning. Machine learning is a subset of artificial intelligence: ML uses gathered data to make predictions, whereas AI is the umbrella term for this and related techniques that handle data with an intelligence similar to a human’s.

It can safely be said that what determines these predictions is the original source of information machine learning receives: the data.


Vast amounts of data are the primary driver of these programs – some would even argue that data is worth more than gold in today’s world. AI can be applied almost everywhere, including in AI-powered job platforms like Lensa.

Bias begins in data collection

Data is a collection of information that holds great value in different contexts. However, data can stretch back countless years, and along the way information can be altered, lost or become out of date, which can leave datasets riddled with bias and misinformation.

Bias can show up as human prejudice introduced during data collection, aggregation or selection, or even in the end product. It can also result from oversimplification, which is likewise a human error.

The way to mitigate some of these harmful influences is to nip them in the bud during the pre-processing phase. Pre-processing starts with data input, making it the most common entry point for bias. Machine learning depends on the data it is fed to reach its conclusions, making it vulnerable to bias, especially in fields where it is used heavily, such as the recruitment industry.
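To make that concrete, here is a minimal sketch of one such pre-processing step: dropping explicitly protected attributes from applicant data and flagging numeric columns that correlate strongly with them, since those can act as proxies for bias. This is illustrative only, not Lensa’s actual pipeline; the column names and threshold are assumptions made for the example.

```python
# Illustrative pre-processing sketch (not any real platform's pipeline):
# drop explicitly protected attributes and warn about possible proxy columns.
import pandas as pd

PROTECTED = ["gender", "ethnicity", "date_of_birth"]  # assumed column names

def preprocess(applicants: pd.DataFrame, proxy_threshold: float = 0.8) -> pd.DataFrame:
    """Remove protected attributes and flag columns that may act as proxies for them."""
    cleaned = applicants.drop(columns=[c for c in PROTECTED if c in applicants.columns])

    # Crude proxy check: numeric columns highly correlated with a protected attribute
    for protected in PROTECTED:
        if protected not in applicants.columns:
            continue
        encoded = applicants[protected].astype("category").cat.codes
        for col in cleaned.select_dtypes("number").columns:
            corr = abs(encoded.corr(cleaned[col]))
            if corr >= proxy_threshold:
                print(f"Warning: '{col}' correlates with '{protected}' (|r|={corr:.2f})")
    return cleaned

# Toy data, purely for demonstration
applicants = pd.DataFrame({
    "years_experience": [3, 7, 2, 10],
    "skills_matched": [5, 8, 4, 9],
    "gender": ["F", "M", "F", "M"],
})
print(preprocess(applicants))
```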


Most recruitment agencies nowadays use AI or ML to make their processes quicker, more efficient and more accurate. This wasn’t always the case.

It has been found time and again that a company’s recruitment software can start favouring one characteristic over others: race, gender, social background, ethnicity, and the list goes on. Amazon’s recruitment ML software, for example, was famously found to prefer men over women when ranking candidates.

However, there are ways of minimising bias to a fraction of what it would otherwise be.


How AI and machine learning help eliminate bias in recruiting

There are several types of bias that can occur in recruitment. The most prevalent is prejudicial bias.

Prejudicial bias discriminates on the basis of gender, religion, race, ethnicity or any other marker of social identity. This type of bias originates from the data trainers themselves, making it crucial for those inputting data to remain as unbiased as possible and to draw on a broad range of datasets. ML can mitigate the issue because it brings no subconscious assumptions to the way it processes and organises data.

Anchoring is another bias, which occurs when a recruiter has an ideal candidate in mind and constantly compares every new candidate against that picture. This creates a disconnect between potentially great employees and a recruiter trying to fit them into an unrealistic mould.

ML also removes confirmation bias from the recruitment process. Confirmation bias is the tendency to seek out evidence that confirms one’s own negative judgments of someone, or equally one’s positive impressions. A program cannot make preconceived judgements about candidates.

Beauty bias is subliminal yet powerful. ‘Pretty privilege’ is sadly a reality, but it can be countered by using machine learning in recruitment to remove any conscious or subconscious preference for attractive candidates, since the program doesn’t know what beauty even is.

Machine learning also gives recruiters efficient hiring pipelines, even with vast numbers of candidates. This matters for reducing recruitment bias because only around one quarter of candidates’ resumes are ever viewed by a human being. The reason is that recruiters are overburdened with manual resume screening, which is why ML is so valuable in automating the process and keeping recruiter fatigue at bay.
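As a purely illustrative sketch (not how Lensa or any other platform actually scores applicants), the example below ranks candidates by how many of a role’s required skills appear in their resume text. The point is that every application is assessed against the same criteria in the same way, rather than depending on which resumes a tired recruiter happens to open. The names and data are made up for the example.

```python
# Illustrative only: score candidates by overlap between required skills and
# the skills mentioned in their resume text.
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    resume_text: str

def skill_score(candidate: Candidate, required_skills: set[str]) -> float:
    """Fraction of required skills that appear in the resume text."""
    text = candidate.resume_text.lower()
    matched = {skill for skill in required_skills if skill.lower() in text}
    return len(matched) / len(required_skills) if required_skills else 0.0

required = {"python", "sql", "data analysis"}
candidates = [
    Candidate("A", "5 years of Python and SQL, strong data analysis background"),
    Candidate("B", "Java developer with some SQL exposure"),
]

# Every application is scored against the same criteria, in the same way.
for c in sorted(candidates, key=lambda c: skill_score(c, required), reverse=True):
    print(c.name, round(skill_score(c, required), 2))
```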

Machine learning in recruitment is essential for reducing bias: it removes subconscious human prejudices, analyses data more accurately, and makes decisions objectively on the basis of that data. Though it is impossible to remove human bias entirely, it has decreased significantly since the more manually taxing tasks were automated.

There are still concerns about the use of AI and ML when choosing candidates, especially when it comes to ethics. Thankfully, so far, the outcomes have been positive for recruiters and candidates alike.


