AI hiring bias: Everything you need to know (2023)

Bias and discrimination can creep into your hiring process in all sorts of places. Here are the ones to watch out for, and tips on mitigating their negative effects.

AI hiring tools can automate almost every step of the recruiting and hiring process, dramatically reducing the burden on HR teams.

But they can also amplify existing bias against protected groups that may be underrepresented in the company. Biased hiring can also shrink the pool of available candidates and put companies at risk of noncompliance with government regulations.

Amazon was an early pioneer in using AI to improve its hiring process. The company made a concerted effort to weed out information about protected groups but, despite its best efforts, terminated the program in 2018 after discovering bias in its hiring recommendations.

What is AI bias and how can it impact hiring and recruitment?

AI bias denotes any way that AI and data analytics tools can perpetuate or amplify bias. It can take many forms, the most common being selection rates skewed toward or against a particular sex, race, religion or sexual orientation. Another is differences in hiring offers that widen income disparities across groups. The most common source of bias in AI is the historical data used to train the algorithms: even when teams make an effort to ignore specific differences, AI inadvertently learns biases hidden in the data.

"Every system that involves human beings is biased in some way, because as a species we are inherently biased," said Donncha Carroll, a partner in the revenue growth practice of Axiom Consulting Partners who leads its data science center of excellence. "We make decisions based on what we have seen, what we have observed and how we perceive the world to be. Algorithms are trained on data and if human judgment is captured in the historical record, then there is a very real risk that bias will be enshrined in the algorithm itself."


Bias can also be introduced in how an algorithm is designed and in how people interpret its outputs. It can arise from including data elements, such as income, that correlate with biased outcomes, or from having insufficient data on successful members of protected groups.

Examples of AI bias in hiring

Bias can get introduced in the sourcing, screening, selection and offer phases of the hiring pipeline.

In the sourcing stage, hiring teams may use AI to decide where and how to post job announcements. Different sites cater to different groups, which can bias the pool of potential candidates. AI may also recommend wording for job postings that encourages or discourages particular groups. If a sourcing app notices that recruiters reach out to candidates from some platforms more than others, it may favor placing more ads on those sites, which could amplify recruiter bias. Teams may also inadvertently include coded language, like "ambitious" or "confident leader," that appeals more to privileged groups.
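
A lightweight way to catch such coded language is a keyword scan over the posting text before it goes out. Here is a minimal Python sketch of the idea; the word lists are illustrative placeholders, not a validated lexicon, and production tools use much larger, research-backed lists.

```python
# Toy scan for gender-coded wording in a job posting, in the spirit of
# "gender decoder" tools. These word lists are a tiny illustrative sample.
MASCULINE_CODED = {"ambitious", "confident", "dominant", "competitive"}
FEMININE_CODED = {"collaborative", "supportive", "nurturing", "dependable"}

def coded_terms(posting: str) -> dict:
    """Return the coded words found in a posting, by category."""
    words = {w.strip(".,;:!?\"'").lower() for w in posting.split()}
    return {
        "masculine": sorted(words & MASCULINE_CODED),
        "feminine": sorted(words & FEMININE_CODED),
    }

print(coded_terms("We want an ambitious, confident leader."))
# -> {'masculine': ['ambitious', 'confident'], 'feminine': []}
```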

In the screening phase, resume analytics or chat applications may weed out candidates using information that is indirectly associated with a protected class. They may look for characteristics the algorithm associates with productivity or success, such as gaps between jobs, sports more often played by particular sexes and races, or extracurricular activities more often undertaken by the wealthy. Other tools may analyze how candidates perform in a video interview, which may amplify bias against specific cultural groups.

In the selection phase, AI algorithms might recommend some candidates over others by using similarly biased metrics.

Once a candidate has been selected, the firm needs to make a hiring offer. AI algorithms can analyze a candidate's previous job roles to make an offer the candidate is likely to accept. These tools can amplify existing disparities in starting and career salaries across gender, racial and other differences.

Government and legal concerns

Government organizations like the U.S. Equal Employment Opportunity Commission (EEOC) have traditionally focused on penalizing deliberate bias under civil rights laws. Recent EEOC statistics suggest that the jobless rate is significantly higher for Black, Asian and Hispanic workers than for white workers, and that significantly more job seekers age 55 and over are long-term unemployed compared with younger ones.

In 2022, the EEOC and the U.S. Department of Justice published new guidance on steps companies can take to mitigate AI-based hiring bias. The guidance suggested enterprises look for ways to teach AI to disregard variables that have no bearing on job performance, while recognizing that AI can still create issues even when precautions are in place. These mitigation steps matter because enterprises face legal ramifications for using AI to make hiring decisions without oversight.

EEOC chair Charlotte Burrows recommends that enterprises challenge, audit and ask questions as part of the process of validating AI hiring software.
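
One concrete audit along those lines is a selection-rate check against the EEOC's long-standing four-fifths rule of thumb, under which a group's selection rate below 80% of the highest group's rate is treated as possible evidence of adverse impact. The sketch below shows the arithmetic; the group names and counts are hypothetical, and the rule is a screening heuristic, not a legal determination.

```python
# Adverse-impact check using the four-fifths rule of thumb: flag any group
# whose selection rate is below `threshold` times the highest group's rate.
def adverse_impact(applicants: dict, hires: dict, threshold: float = 0.8) -> dict:
    """Map each group name to True if its selection rate fails the check."""
    rates = {g: hires.get(g, 0) / n for g, n in applicants.items() if n}
    best = max(rates.values())
    return {g: rate / best < threshold for g, rate in rates.items()}

print(adverse_impact({"group_a": 200, "group_b": 150},
                     {"group_a": 40, "group_b": 15}))
# -> {'group_a': False, 'group_b': True}
# group_b's rate (0.10) is only half of group_a's (0.20), well below 80%.
```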

Four ways to mitigate AI bias in recruiting

Here are some of the key ways organizations can mitigate AI bias in their recruiting and hiring practices.

1. Keep humans in the loop

Pratik Agrawal, a partner in the analytics practice of Kearney, a global strategy and management consulting firm, recommended firms include humans in the loop rather than relying exclusively on an automated AI engine. For example, hiring teams may want to create a process to balance the different categories of resumes that are fed into the AI engine. It's also important to introduce a manual review of recommendations. "Constantly introduce the AI engine to more varied categories of resumes to ensure bias does not get introduced due to lack of oversight of the data being fed," Agrawal said.
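
As a sketch of what such a process could look like in code, the routine below gates AI screening recommendations so that nothing is auto-rejected and even confident recommendations are randomly sampled for human audit. The thresholds and function names are hypothetical, not drawn from Kearney's practice.

```python
# Minimal human-in-the-loop gate for an AI screening score in [0, 1].
import random

CONFIDENCE_CUTOFF = 0.9  # assumed cutoff; tune against validation data
AUDIT_RATE = 0.1         # share of confident decisions still reviewed

def route_candidate(score: float) -> str:
    """Return 'auto_advance' or 'human_review' for an AI screening score."""
    if score >= CONFIDENCE_CUTOFF:
        # Randomly sample confident recommendations for human audit.
        return "human_review" if random.random() < AUDIT_RATE else "auto_advance"
    # Uncertain scores, including would-be rejections, are never auto-decided.
    return "human_review"
```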

2. Leave out biased data elements

It is important to identify data elements that bring inherent bias into the model. "This is not a trivial task and should be carefully considered in selecting the data and features that will be included in the AI model," said Carroll. When considering adding a data point, ask whether that pattern tends to be more or less prominent in a protected class for reasons that have nothing to do with performance. Just because chess players make good programmers does not mean that non-chess players couldn't have equally valuable programming talent.
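
One rough way to operationalize that question is a proxy check: if a single feature predicts a protected attribute far better than the base rate, it is a candidate for exclusion. The sketch below uses scikit-learn for the test; the column names are hypothetical.

```python
# Proxy check: cross-validated accuracy of predicting a protected attribute
# from one candidate feature alone. High accuracy suggests a likely proxy.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def proxy_score(df: pd.DataFrame, feature: str, protected: str) -> float:
    """Mean CV accuracy of predicting `protected` from `feature` alone."""
    X = pd.get_dummies(df[[feature]])  # one-hot encode categorical values
    y = df[protected]
    return cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5).mean()

# Flag features that beat the majority-class base rate by a margin:
# base_rate = df["gender"].value_counts(normalize=True).max()
# flagged = [f for f in ["zip_code", "college_sport", "hobby"]
#            if proxy_score(df, f, "gender") > base_rate + 0.05]
```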

3. Emphasize protected groups

Make sure to weight the representation of protected groups that may currently be underrepresented on the staff. "This will help avoid encoding patterns in the AI model that institutionalizes what you have done before," Carroll said. He worked with one client who assumed a degree was an essential indicator of success. After removing this constraint, the client discovered that non-degreed employees not only performed better but stayed longer.
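
A common way to implement such weighting, offered here as a sketch rather than Carroll's actual method, is to weight each training example inversely to its group's frequency so every group contributes equal total weight during training.

```python
# Inverse-frequency sample weights: each group's examples sum to the same
# total weight, so underrepresented groups are not drowned out in training.
import numpy as np

def inverse_frequency_weights(groups: np.ndarray) -> np.ndarray:
    """Per-sample weights that equalize total weight across groups."""
    values, counts = np.unique(groups, return_counts=True)
    weight_by_group = {v: len(groups) / (len(values) * c)
                       for v, c in zip(values, counts)}
    return np.array([weight_by_group[g] for g in groups])

# Usage with any scikit-learn estimator that accepts sample_weight:
# weights = inverse_frequency_weights(train_df["group"].to_numpy())
# model.fit(X_train, y_train, sample_weight=weights)
```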

4. Quantify success

Take the time to discern what qualifies as success in each role and develop outcomes such as higher output or less rework that are less subject to human bias. A multivariate approach can help remove bias that might be ingrained in measures like performance ratings. This new lens can help to tease apart traits to look for when hiring new candidates.
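
As an illustration of such a multivariate, outcome-based measure, the sketch below z-scores two objective outcomes and averages them so that no single measure, and no subjective rating, dominates the definition of success. The column names are hypothetical.

```python
# Composite success score from objective outcomes: standardize each outcome
# and average, with rework counting against the score.
import pandas as pd

def success_score(df: pd.DataFrame) -> pd.Series:
    """Higher output and less rework both raise the score."""
    output_z = (df["units_output"] - df["units_output"].mean()) / df["units_output"].std()
    rework_z = (df["rework_hours"] - df["rework_hours"].mean()) / df["rework_hours"].std()
    return (output_z - rework_z) / 2
```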

FAQs

How can AI bias happen in HR?

AI bias can enter at every stage of the hiring pipeline. In sourcing, AI influences where jobs are posted and how they are worded; in screening, it can weed out candidates using proxies for protected classes; in selection, it can rank candidates on biased metrics; and in the offer stage, it can perpetuate existing pay disparities across groups.

Can Artificial Intelligence eliminate bias in hiring?

Although AI mimics and potentially amplifies human prejudice, when used correctly it can help reduce unconscious bias and support data-driven decisions. AI tools use data points to source and evaluate candidates, and well-designed models can evaluate candidates on job-related data rather than on human assumptions.

How does AI get biased?

It is relatively common knowledge that AI systems can exhibit biases that stem from their programming and data sources; for example, machine learning software could be trained on a dataset that underrepresents a particular gender or ethnic group.

Why is bias in AI a problem?

AI bias occurs because human beings choose the data that algorithms use, and also decide how the results of those algorithms will be applied. Without extensive testing and diverse teams, it is easy for unconscious biases to enter machine learning models. Then AI systems automate and perpetuate those biased models.

What are the three main sources of biases in AI?

The most common classification of bias in artificial intelligence takes the source of prejudice as the base criterion, putting AI biases into three categories — algorithmic, data, and human.

How do we eliminate AI bias?

5 Ways to Prevent AI Bias
  1. Understand the Potential for AI Bias. Supervised learning, one of the subsets of AI, operates on rote ingestion of data. ...
  2. Increase Transparency. AI remains challenged by the inscrutability of its processes. ...
  3. Institute Standards. ...
  4. Test Models Before and After Deployment. ...
  5. Use Synthetic Data.

What percentage of AI is biased?

USC researchers find bias in up to 38.6% of 'facts' used by AI.

How do you avoid bias and discrimination in AI?

Ways to prevent AI bias from creeping into your models include:
  1. Define and narrow the business problem you're solving. ...
  2. Structure data gathering that allows for different opinions. ...
  3. Understand your training data. ...
  4. Gather a diverse ML team that asks diverse questions. ...
  5. Think about all your end-users. ...
  6. Annotate with diversity.

Why is AI bias unethical?

But there are many ethical challenges. Lack of transparency: AI decisions are not always intelligible to humans. AI is not neutral: AI-based decisions are susceptible to inaccuracies and to discriminatory, embedded or inserted bias. And surveillance practices used for data gathering raise privacy concerns.

What are examples of AI bias?

AI bias examples, and ways to address them
  • Racism in the American healthcare system. ...
  • Depicting CEOs as purely male. ...
  • Amazon's hiring algorithm. ...
  • Testing algorithms in a real-life setting. ...
  • Accounting for so-called counterfactual fairness. ...
  • Consider Human-in-the-Loop systems. ...
  • Change the way people are educated about science and technology.

What is the AI fallacy?

One of the most common fallacies is that narrow intelligence is on a continuum with general intelligence. Narrow intelligence refers to a machine's ability to perform a single task extremely well. Advances made in narrow AI are often described as the first step towards general AI.

What is the AI paradox?

The AI effect is the paradox that what is AI soon isn't called AI: tools lose their AI label over time, usually on the grounds that they are not "real" intelligence, even though the technology behind them has not changed.

Who affects AI bias?

These biases usually reflect widespread societal biases about race, gender, biological sex, age, and culture. There are two types of bias in AI. One is algorithmic AI bias or “data bias,” where algorithms are trained using biased data. The other kind of bias in AI is societal AI bias.

Why should we care about bias in AI?

By not addressing the potential for systemic advantage or disadvantage, unwanted or unintentional bias could affect performance, potentially exacerbating societal inequities and eroding trust.

What is AI bias in simple words?

Machine learning bias, also sometimes called algorithm bias or AI bias, is a phenomenon that occurs when an algorithm produces results that are systemically prejudiced due to erroneous assumptions in the machine learning process.

What are the 4 types of bias?

4 leading types of bias in research and how to prevent them from impacting your survey
  • Asking the wrong questions. It's impossible to get the right answers if you ask the wrong questions. ...
  • Surveying the wrong people. ...
  • Using an exclusive collection method. ...
  • Misinterpreting your data results.

What are the 5 sources of bias?

We have set out the 5 most common types of bias:
  • Confirmation bias. Occurs when the person performing the data analysis wants to prove a predetermined assumption. ...
  • Selection bias. This occurs when data is selected subjectively. ...
  • Outliers. An outlier is an extreme data value. ...
  • Overfitting and underfitting. ...
  • Confounding variables.

What is Elon Musk's opinion on AI?

Musk has said he fears artificial intelligence could one day outsmart humans and endanger us, citing AI as the biggest threat to civilization. But he said that by building the Tesla robot, the company could ensure it would be safe.

Do people trust AI more than humans?

A study by researchers at Penn State University found that people are more likely to trust machines with their personal information than other humans. The findings seem to fly in the face of people's general distrust of computers and artificial intelligence.

Is bias inevitable in AI?

Because AI technologies are ultimately modelled, specified and overseen by people, with all their flaws, it is inevitable that we unconsciously carry our biases into the systems we create.

How can bias be reduced in hiring algorithms?

How to Reduce the Risk of Bias in Your Hiring AI
  1. Understand AI's limits. AI is an invaluable tool for streamlining HR departments, but it should not be the one and only solution. ...
  2. Create bias and fairness definitions. ...
  3. Job postings can influence AI. ...
  4. Review and refresh AI tools.

What are the 3 types of bias examples?

Confirmation bias, sampling bias, and brilliance bias are three examples that can affect our ability to critically engage with information.

What is an example of controversial AI?

Here are some of the biggest AI controversies in recent times. Google's LaMDA artificial intelligence (AI) model has been in the news because of an engineer in the company who believes that the program has become sentient.

What is Plato in AI?

At Uber AI, we developed the Plato Research Dialogue System, a platform for building, training, and deploying conversational AI agents that allows us to conduct state of the art research in conversational AI and quickly create prototypes and demonstration systems, as well as facilitate conversational data collection.

What are the 7 types of AI?

7 Important Types Of AI To Watch Out For In 2022
  • Reactive Machines.
  • Limited Memory.
  • Theory of Mind.
  • Self-Aware.
  • Artificial Narrow Intelligence (ANI)
  • Artificial General Intelligence.
  • Artificial Super Intelligence (ASI)

What is Roko's basilisk theory?

Roko's basilisk is a thought experiment which states that an otherwise benevolent artificial superintelligence (AI) in the future would be incentivized to create a virtual reality simulation to torture anyone who knew of its potential existence but did not directly contribute to its advancement or development.

Is Siri a true AI?

All of these are forms of artificial intelligence, but strictly speaking, Siri is a system that uses artificial intelligence, rather than being pure AI in itself.

Why was Stephen Hawking afraid of AI?

Hawking cautioned against an extreme form of AI, in which thinking machines would “take off” on their own, modifying themselves and independently designing and building ever more capable systems. Humans, bound by the slow pace of biological evolution, would be tragically outwitted.

What are the disadvantages of AI in HR?

The ugly: The cons of AI in HR
  • Introducing machine-generated errors. Computers aren't always the right choice for doing analysis. ...
  • Perpetuating biases in hiring. ...
  • Some decisions require human involvement. ...
  • Increased risks to cybersecurity.

What is AI bias? Explain with some examples.

For example, Amazon found that its AI recruiting algorithm was biased against women. The algorithm was trained on resumes submitted over the previous 10 years and on which candidates were hired; since most of those candidates were men, the algorithm learned to favor men over women.

How does AI violate human rights?

AI can in fact negatively affect a wide range of our human rights. The problem is compounded when decisions are made on the basis of these systems while there is no transparency, accountability or safeguards on how they are designed, how they work and how they may change over time.

What is an example of the use of AI in human resources?

One useful application of AI in HR is assessing employee referrals. By analyzing data from previous referrals, the system can identify the types of referrals employees provide and which employees provide the most successful ones.

What are the 5 examples of bias?

Reduce your unconscious bias by learning more about the five largest types of bias:
  • Similarity Bias. Similarity bias means that we often prefer things that are like us over things that are different than us. ...
  • Expedience Bias. ...
  • Experience Bias. ...
  • Distance Bias. ...
  • Safety Bias.

What are three ethical issues surrounding AI?

The legal and ethical issues that AI raises for society include privacy and surveillance, bias and discrimination, and, perhaps the deepest philosophical challenge, the role of human judgment.

Does AI invade your privacy?

AI processing makes use of vast amounts of data, some of which could be sensitive personal information, and some of which could be used for identification through analysis. There is also the risk that anonymized data could be deanonymized (possibly using AI) or was not sufficiently anonymized to begin with.

Is AI a potential threat to human employment?

According to PwC research, by the mid-2030s, one-third of all employment will be at risk of being automated. The workforce segment most likely to be affected will be individuals with a low level of education. Anxiety about employment losses induced by greater use of machines has existed for centuries.

How is AI used in recruitment?

AI-powered preselection software uses predictive analytics to calculate a candidate's likelihood to succeed in a role. This allows recruiters and hiring managers to make data-driven hiring decisions rather than decisions based on their gut feeling.

What companies are using AI in HR?

Startups and investments
Company   | HQ / year founded | Investors
Censia    | US / 2017         | Streamlined Ventures, Plug&Play, X Factor Ventures
Remesh    | US / 2014         | General Catalyst, others
Leena AI  | India / 2015      | Y Combinator, angels
Enboarder | Australia / 2012  | Greycroft, Next Coast Ventures, Stage 2 Capital, Thrive Global, Venmo, others
(36 more rows in the original table)
