Using AI to Improve Hiring Legally and Ethically

Artificial intelligence (AI) and the ability to predict outcomes based on analysis of patterns are helping advance almost every area of human society, ranging from autonomous vehicles to predictive medicine. The business world derives great value from AI-driven tools and leverages data in almost every function.

Perhaps most interesting is the recent proliferation of AI tools in the Human Resources field that address hiring, internal mobility, and promotion, and the possible effects deploying these technologies can have on the business overall. These tools can offer great value to HR professionals, as they aim to save time, lower recruiting costs, decrease manual labor, and collect vast amounts of data to inform decisions while helping avoid biases in human decision-making.

Companies must comply with strict legal and ethical requirements, and it’s incumbent upon HR leaders to understand how incorrectly designed and deployed AI tools can also be a liability.

The real challenge for HR leaders is that most AI-driven tools are “black box” technologies, meaning algorithm design and logic are not transparent. Without full insight into “the box,” it’s impossible for HR leaders to evaluate the degree to which such tools expose an employer to risk.

This article will briefly review some of the dangers of utilizing AI for people decisions; provide examples of how algorithms can be biased when they are trained to imitate human decisions; highlight the promise of AI for people-related decisions; and explore how AI can facilitate these decisions while addressing compliance, adverse impact, and diversity and inclusion concerns. 

The Dangers of AI-Driven People Decisions

“Black box” algorithm design. Algorithms that leverage machine learning can both make decisions and “learn” from previous decisions; their power and accuracy come from their ability to aggregate and analyze large amounts of data efficiently and make predictions on new data they receive.

However, the challenge of algorithm design is deciding which factors, variables, or elements should be given more “weight,” meaning which data points should be given relative priority when the algorithm makes a decision. For example, if not taken into careful consideration, factors such as gender, ethnicity, and area of residence can affect an algorithm’s output, biasing the decision and negatively affecting certain groups in the population.
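To make the weighting problem concrete, consider a minimal sketch of a linear scoring model. Everything here is invented for illustration (the feature names, weights, and candidates are not from any real product): the point is simply that giving any weight to a proxy variable such as area of residence produces different scores for otherwise identical candidates.

```python
# Illustrative only: a toy linear scoring model showing how one poorly chosen
# feature weight can bias candidate scores. All names and values are hypothetical.

WEIGHTS = {
    "years_experience": 0.6,      # job-related signal
    "skills_test_score": 0.8,     # job-related signal
    "zip_code_income_rank": 0.5,  # proxy for socioeconomic status or ethnicity
}

def score(candidate: dict) -> float:
    """Weighted sum of the candidate's features."""
    return sum(WEIGHTS[name] * candidate.get(name, 0.0) for name in WEIGHTS)

# Two candidates with identical job-related qualifications...
candidate_a = {"years_experience": 5, "skills_test_score": 0.9, "zip_code_income_rank": 0.9}
candidate_b = {"years_experience": 5, "skills_test_score": 0.9, "zip_code_income_rank": 0.2}

# ...receive different scores solely because of where they live.
print(score(candidate_a))  # 4.17
print(score(candidate_b))  # 3.82
```

In a real system the weights are learned rather than hand-set, which is exactly why a “black box” design makes it hard to know whether a proxy like this is quietly carrying weight.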

Recently, the Electronic Privacy Information Center (EPIC) filed a joint complaint with the Federal Trade Commission (FTC) claiming that a large HR-tech company providing AI-based analysis of video interviews (voice, facial movements, word selection, etc.) is using deceptive trade practices. EPIC claims that such a system can unfairly score candidates and, moreover, cannot be made fully transparent to candidates because even the vendor cannot clearly articulate how the algorithms work.

This company claims to collect “tens of thousands” of biometric data points from candidate video interviews and inputs these data points into secret “predictive algorithms” that allegedly evaluate the strength of the candidate. Because the company collects “intrusive” data and uses them in a manner that can cause “substantial and widespread harm” and cannot specifically articulate the algorithm’s mechanism, EPIC claims that such a system can “unfairly score someone based on prejudices” and cause harm. 

Mimicking, rather than improving, human decisions. In theory, algorithms should be free from unconscious biases that affect human decision-making in hiring and selection. However, some algorithms are designed to mimic human decisions. As a result, these algorithms may continue to perpetuate, and even exaggerate, the mistakes recruiters may make.

Training algorithms on actual employee performance (e.g., retention, sales, customer satisfaction, quotas) helps ensure the algorithms weigh job-related factors more heavily and that biased factors (ethnicity, age, gender, education, assumed socioeconomic status, etc.) are controlled.
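One common way to check whether an algorithm’s recommendations are in fact controlled for bias is an adverse impact audit based on the “four-fifths rule” used in U.S. selection-procedure guidance: the selection rate for any group should be at least 80% of the rate for the most-selected group. A minimal sketch of such an audit (the group labels and counts below are hypothetical) might look like this:

```python
# Minimal adverse impact check based on the four-fifths (80%) rule.
# Counts are placeholders; in practice they would come from the algorithm's
# actual recommendations over a hiring period.

selections = {
    # group label: (candidates recommended, candidates considered)
    "group_a": (48, 120),
    "group_b": (30, 110),
}

rates = {group: recommended / considered
         for group, (recommended, considered) in selections.items()}
highest_rate = max(rates.values())

for group, rate in rates.items():
    impact_ratio = rate / highest_rate
    flag = "OK" if impact_ratio >= 0.8 else "POTENTIAL ADVERSE IMPACT"
    print(f"{group}: selection rate {rate:.1%}, impact ratio {impact_ratio:.2f} -> {flag}")
```

This kind of check does not open the black box, but it does give HR leaders an outcome-level signal that something inside it may need scrutiny.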

For example, the data these algorithms learn from will sometimes reflect and perpetuate long-ingrained stereotypes and assumptions about gender and race. One study found that natural language processing (NLP) tools can learn to associate African-American names with negative sentiments and female names with domestic work rather than professional or technical occupations.
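Studies of this kind typically measure how close a name’s vector sits to “pleasant” versus “unpleasant” terms in the model’s embedding space. The sketch below shows the mechanism only; the three-dimensional vectors are fabricated for illustration, whereas a real audit would use vectors taken from the NLP model under review.

```python
# Sketch of an embedding association test. Vectors are made up for illustration.
import numpy as np

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

embeddings = {
    "name_1":     np.array([0.9, 0.1, 0.2]),
    "name_2":     np.array([0.2, 0.8, 0.3]),
    "pleasant":   np.array([0.8, 0.2, 0.1]),
    "unpleasant": np.array([0.1, 0.9, 0.4]),
}

for name in ("name_1", "name_2"):
    bias = (cosine(embeddings[name], embeddings["pleasant"])
            - cosine(embeddings[name], embeddings["unpleasant"]))
    # Positive values lean "pleasant", negative values lean "unpleasant".
    print(f"{name}: association score {bias:+.2f}")
```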

Onetime calibration. Most HR-tech companies that support hiring decisions using AI conduct an initial calibration, or training, of their models on their best-performing employees to identify the traits, characteristics, and features of top performers and then look for these same factors in candidates.

The rationale behind this process is valid, so long as the company’s measures of performance are neutral, job-related, and free from bias based on protected characteristics such as gender and ethnicity. However, performing this calibration only once is counterproductive to the long-term goal.

In today’s business context, in which companies are constantly evolving their strategy to address dynamic market conditions and competition, the key performance indicators (KPIs) used to measure employee success and the definition of roles change frequently. The top performers of today may not necessarily be the top performers of tomorrow, and algorithms must consider this and continuously readjust and learn from these changes…
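One way to operationalize continuous recalibration is to monitor how well the model’s hiring-time scores still track the current KPI for recent hires and to flag the model for retraining when that relationship decays. The sketch below is a hypothetical monitoring check, not a vendor feature; the threshold and data are placeholders.

```python
# Hypothetical recalibration trigger: if the model's scores no longer correlate
# with the *current* KPI for recent hires, flag the model for retraining.
import statistics

def correlation(xs, ys):
    """Pearson correlation between two equal-length sequences."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

RETRAIN_THRESHOLD = 0.3  # placeholder; set per business context

# Placeholder data: model scores at hire time vs. each hire's current KPI.
model_scores = [0.9, 0.7, 0.8, 0.4, 0.6, 0.3]
current_kpis = [0.5, 0.6, 0.4, 0.7, 0.5, 0.6]

r = correlation(model_scores, current_kpis)
if r < RETRAIN_THRESHOLD:
    print(f"Correlation {r:.2f} below threshold: recalibrate on current top performers")
else:
    print(f"Correlation {r:.2f}: model still tracks current performance definitions")
```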

Source: HR Daily Advisor
