Southeast Asia Update: The Dangers of Employee Recruitment on Autopilot: AI and Discriminatory Hiring Decisions
02/05/2023

Instead of the typical dystopian scene of flames, wastelands of shattered buildings, and robotic overlords policing the remaining humans, our actual dystopian future may be a workplace filled only with men named Jared who once played lacrosse in high school. This may sound far-fetched, but one resume-screening tool was found to be using an algorithm that concluded two factors were most determinative of job performance: the name Jared and a history of playing lacrosse in high school.
The frailties of artificial intelligence (AI) systems in recruitment and hiring could transform our workforces in unpredictable ways. If employers blindly follow AI outcomes without a deeper examination of how the algorithmic decision is reached, hiring outcomes may be not only ridiculous but also discriminatory.
Risks of AI-Reliant Hiring
Some employers have enthusiastically embraced AI as a way to reduce costs and eliminate human bias from the recruitment process. Human recruiters do not have a great track record; in France, for example, discrimination in recruitment has posed such a serious problem that the government submits fictitious job applications with ethnic-sounding names to identify and sanction employers that unreasonably reject qualified minority applicants. Unfortunately, AI is modeled on human thinking, so it may amplify our own prejudices and errant conclusions while giving the appearance of a fair and clean process.
AI typically learns inductively, training on examples and historical data. Factors such as the exclusion of certain groups from educational or career opportunities have often shaped this data, so AI decisions may perpetuate past prejudice. For instance, Amazon attempted to mechanize recruitment in 2014 but abandoned the effort after its AI tool favored a predominantly male pool of candidates. The AI learned by analyzing patterns in resumes submitted to the company over the preceding 10 years. Because men had submitted the majority of those resumes, the AI concluded that male candidates were preferable. In rating candidates, it downgraded resumes that included the word “women” (such as in mentions of women’s sports) and those of applicants who had attended women-only universities.
Besides illustrating how AI may rely on historical data without examining the underlying reasons for historical trends, Amazon’s failed attempt at automated recruitment also exemplifies AI’s flaw of confusing correlation with causation. Amazon’s recruitment AI concluded that the company had hired more men than women over the preceding 10 years because of a difference in skill level, when in reality factors such as gender stereotypes and discrimination may more accurately account for the imbalance. The resume-screening example mentioned at the outset of this article, in which the recruitment tool preferred former lacrosse players named Jared, also demonstrates AI’s inability to distinguish correlation from causation. The algorithm observed that many high-performing employees were named Jared and had played lacrosse in high school, and concluded that these factors caused the high performance rather than merely correlating with it.
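To make this mechanism concrete, the short Python sketch below shows how a screening model trained on biased historical hiring outcomes simply learns to reproduce that bias. The data, feature names, and model are entirely hypothetical illustrations, not a reconstruction of any vendor’s actual tool.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# Hypothetical historical data: past hiring decisions happened to favor
# candidates who played lacrosse, independent of skill, so that proxy
# trait is baked into the "hired" labels the model trains on.
skill = rng.normal(size=n)
played_lacrosse = rng.binomial(1, 0.3, size=n)
hired = (skill + 1.5 * played_lacrosse + rng.normal(scale=0.5, size=n) > 1).astype(int)

# A model fitted to these outcomes learns the historical preference and
# will favor lacrosse players on new applications, even though the trait
# has no causal link to job performance in this simulation.
model = LogisticRegression().fit(np.column_stack([skill, played_lacrosse]), hired)
for name, coef in zip(["skill", "played_lacrosse"], model.coef_[0]):
    print(f"{name}: {coef:+.2f}")

The large positive weight the model assigns to the irrelevant trait is precisely the Jared-and-lacrosse failure mode described above: the algorithm treats a pattern in past decisions as a cause of good performance.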
Rules programmed into the AI may also have unintended consequences. For example, one employer prepared STEM job advertisements to be gender-neutral, but the algorithm disproportionately displayed them to male candidates: the cost of displaying the advertisements to female candidates was higher, and the algorithm had been programmed to be cost-efficient. Facial and voice recognition software has also been shown to downgrade applicants of certain races or applicants with speech impediments, effectively discriminating on the basis of race or disability.
What Employers Can Do
To combat discriminatory and illogical hiring decisions, users of AI recruitment tools should ideally be able to explain how the algorithm reaches its decisions by deconstructing the AI’s decision-making process. However, as AI systems grow more complex, it is becoming increasingly difficult (or even impossible) to reverse-engineer algorithms based on machine learning.
Instead, the most feasible approach to determining whether an algorithm is biased appears to be running sample data sets through the tool before using it for recruitment. New York City recently passed a law (to be enforced starting in July 2023) that requires employers to conduct a bias audit of automated employment decision tools prior to their implementation, in addition to informing candidates and employees residing in the city about the AI tool and the job qualifications and characteristics it will take into account. The state of New Jersey is taking a similar approach, with a bill that would require sellers of automated employment decision tools to conduct a bias audit within one year of each sale and to include yearly bias audits within the sale price of the tool. This approach of requiring regular bias audits for AI recruitment tools may spread to other jurisdictions as lawmakers attempt to catch up to the realities of AI’s role in the hiring process and the social and legal implications of leaving it unchecked.
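As a simplified illustration of what such an audit can involve, the Python sketch below computes per-group selection rates and “impact ratios” (each group’s rate divided by the highest group’s rate) on a hypothetical sample of screening-tool output, flagging ratios below 0.8, a common benchmark drawn from the US EEOC’s “four-fifths” rule. The data and threshold are illustrative assumptions only, not the audit methodology mandated by any particular law.

from collections import defaultdict

# Hypothetical sample output from an AI screening tool:
# (applicant group, whether the tool advanced the applicant)
sample = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", False), ("group_b", True), ("group_b", False), ("group_b", False),
]

counts = defaultdict(lambda: [0, 0])  # group -> [advanced, total]
for group, advanced in sample:
    counts[group][1] += 1
    if advanced:
        counts[group][0] += 1

rates = {g: adv / total for g, (adv, total) in counts.items()}
best = max(rates.values())

for group, rate in rates.items():
    ratio = rate / best
    flag = "  <-- potential adverse impact" if ratio < 0.8 else ""
    print(f"{group}: selection rate {rate:.0%}, impact ratio {ratio:.2f}{flag}")

On this toy sample, the tool advances 75% of one group but only 25% of the other, the kind of disparity a pre-deployment audit is designed to surface before the tool is used on real applicants.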
In the meantime, employers would be well advised to include human oversight in the recruitment process and to scrutinize the outcomes of AI recruitment tools. Enlisting the aid of outside experts or neutral third parties can also help ensure compliance with employment regulations aimed at preventing bias and other unfair recruitment practices.
If one day you look around the office and find yourself surrounded by an army of “Jareds” with former lacrosse careers, it may be necessary to take your recruitment process off autopilot and have an actual human being review applications.
By Tilleke & Gibbins, Vietnam, a Transatlantic Law International Affiliated Firm.
For further information or for any assistance please contact vietnam@transatlanticlaw.com
Disclaimer: Transatlantic Law International Limited is a UK registered limited liability company providing international business and legal solutions through its own resources and the expertise of over 105 affiliated independent law firms in over 95 countries worldwide. This article is for background information only and provided in the context of the applicable law when published and does not constitute legal advice and cannot be relied on as such for any matter. Legal advice may be provided subject to the retention of Transatlantic Law International Limited’s services and its governing terms and conditions of service. Transatlantic Law International Limited, based at 42 Brook Street, London W1K 5DB, United Kingdom, is registered with Companies House, Reg Nr. 361484, with its registered address at 83 Cambridge Street, London SW1V 4PS, United Kingdom.