It has been argued that artificial intelligence can free the hiring process from bias and prejudice. In theory, a totally neutral system could review job candidates and choose the best one, regardless of their gender, race or other characteristics. (Inc.com)
It sounds good, but so far it has largely been a failure, some AI experts say. AI works only as well as the programmers who build it, and programmers are human beings with flaws.
Amazon’s AI System Discriminated Against Women
Amazon, which has plenty of money to spend on the problem, actually had to dump its AI recruiting tool because the system was discriminating against women. Machine learning specialists at the online retail giant discovered that their new recruiting engine was treating women candidates unfairly. (Reuters.com)
The machine learning team had been building computer programs since 2014 to review resumes, with the goal of automating the search for top talent. Automation has been vital to Amazon’s dominance of e-commerce, whether in its warehouses or in pricing decisions. The company’s hiring tool used AI to assign job candidates scores from one to five stars, much the way shoppers rate products on the website.
According to one of the specialists on the project, everyone involved wanted an AI engine that could take in 100 resumes, return the top five, and those people would be hired.
But by 2015, Amazon realized the new AI system was not rating applicants for technical jobs in a gender-neutral way. The computer models had been trained to vet applicants by finding patterns in resumes submitted to Amazon over the previous 10 years. Most of those resumes came from men, a reflection of how heavily men dominate the tech industry.
Amazon’s AI system learned that male candidates were more desirable. It penalized resumes that included the word ‘women’s,’ as in ‘women’s debate club captain.’ It also downgraded graduates of all-women’s colleges, according to some on the machine learning team.
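This dynamic is easy to reproduce in miniature. The sketch below is entirely hypothetical — synthetic data and a hand-rolled logistic regression, not Amazon’s actual model — but it shows how a bag-of-words classifier trained on a hiring history dominated by male hires ends up assigning a negative weight to a gendered token:

```python
# Toy illustration (NOT Amazon's system): a bag-of-words logistic
# regression trained on skewed historical hiring outcomes. Because
# most past hires were men, the token "womens" co-occurs with
# rejections, and the model learns a negative weight for it.
import math
import random

VOCAB = ["python", "java", "leadership", "womens"]

def featurize(resume):
    # One binary feature per vocabulary token.
    return [1.0 if tok in resume else 0.0 for tok in VOCAB]

# Synthetic history: candidates whose resumes mention "womens" were
# rarely hired, mirroring a male-dominated hiring record.
random.seed(0)
data = []
for _ in range(200):
    skills = random.sample(["python", "java", "leadership"], k=2)
    is_woman = random.random() < 0.3
    resume = skills + (["womens"] if is_woman else [])
    hired = 1 if (not is_woman or random.random() < 0.2) else 0
    data.append((featurize(resume), hired))

# Plain gradient-descent logistic regression.
w = [0.0] * len(VOCAB)
b = 0.0
lr = 0.1
for _ in range(300):
    for x, y in data:
        z = b + sum(wi * xi for wi, xi in zip(w, x))
        p = 1.0 / (1.0 + math.exp(-z))
        err = p - y
        b -= lr * err
        w = [wi - lr * err * xi for wi, xi in zip(w, x)]

weights = dict(zip(VOCAB, w))
print(weights)  # the "womens" token carries a clearly negative weight
```

The skill tokens are uncorrelated with the outcome here, so the model’s ranking ends up driven almost entirely by the one token that tracks gender — the learned weights simply encode the historical imbalance.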
Amazon edited the programs to make them neutral toward these particular terms. But that was no guarantee the system would not find some other way to sort candidates in a discriminatory manner.
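Why scrubbing explicit terms is not enough can also be shown in miniature. In this hypothetical sketch (again synthetic data, not Amazon’s system), the gendered token has been removed from the feature set, but a correlated token standing in for an all-women’s college remains, and the learned penalty simply migrates to that proxy:

```python
# Toy sketch of proxy discrimination (hypothetical data, NOT Amazon's
# system): the explicit token "womens" has been scrubbed from the
# vocabulary, but a correlated token remains, and the model shifts
# the penalty onto that proxy.
import math
import random

VOCAB = ["python", "java", "allwomens_college"]  # "womens" scrubbed

def featurize(resume):
    # "womens" still appears on resumes, but is ignored by the features.
    return [1.0 if tok in resume else 0.0 for tok in VOCAB]

random.seed(1)
data = []
for _ in range(200):
    skills = random.sample(["python", "java"], k=1)
    is_woman = random.random() < 0.3
    resume = skills + (["womens", "allwomens_college"] if is_woman else [])
    hired = 1 if (not is_woman or random.random() < 0.2) else 0
    data.append((featurize(resume), hired))

# Same plain gradient-descent logistic regression as before.
w, b, lr = [0.0] * len(VOCAB), 0.0, 0.1
for _ in range(300):
    for x, y in data:
        z = b + sum(wi * xi for wi, xi in zip(w, x))
        p = 1.0 / (1.0 + math.exp(-z))
        err = p - y
        b -= lr * err
        w = [wi - lr * err * xi for wi, xi in zip(w, x)]

weights = dict(zip(VOCAB, w))
print(weights)  # the penalty reappears on the proxy token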
Amazon eventually disbanded the machine learning team by the end of 2018 because executives had lost confidence in the project. Amazon’s recruiters did look at the recommendations the tool generated, but they never relied on those rankings alone.
Amazon will not say much more about the challenges of the technology, but it has said in media reports that the AI tool was never used by recruiters to evaluate job candidates.
Amazon Experiment Shows Limits of AI
This Amazon experiment shows some of the limits of machine learning, and it serves as a cautionary lesson for large companies that want to automate hiring. Approximately 55% of HR managers in the US expect AI to be a routine part of their work within the next five years, according to a 2017 CareerBuilder survey.
Employers have wanted for years to use technology to make hiring easier and reduce reliance on fallible human beings. But according to many computer scientists, much work remains before an AI system can be trusted to hire candidates on its own.
HireVue, meanwhile, is facing criticism from civil rights groups over its hiring systems. According to The Washington Post, the system uses video interviews to analyze hundreds of thousands of data points about how a person speaks, the words they use and their facial movements. It then produces a computer-generated estimate of the person’s skills and behaviors, including willingness to learn and personal stability.
Learn How to Beat the AI Bots
AI programs like these have encouraged several South Korean consultants to start businesses that teach people how to beat the AI bots in the hiring process.
People have tried to game the system for as long as they have applied for jobs. There are countless articles online telling you how to answer standard interview questions or how important a firm handshake is. The training these consultants provide is not much different, except that here you are trying to convince an AI-driven machine to hire you, not a human being.
That is what makes this type of training so valuable. While it is generally true that a firm handshake matters, you might run into an interviewer who prefers a dead-fish handshake; in that case the standard advice hurts rather than helps. But if two companies use the same software, the guidance from these South Korean consultants will help you no matter who the human hiring manager is.
The goal is to take human biases out of the interviewing process, but biases remain in the AI. The difference is that every candidate now faces the same preferences, which makes the system easier to beat. Once the consultants have figured out what the algorithms reward, they can train you to respond in ways that could get you consideration by the hiring manager.
This type of training can level the playing field, but those who can afford the consultants’ programs will fare best in job interviews. Interviewers are already known to discriminate by class, so the problem is not really solved at all.
Can AI eventually make hiring less discriminatory? Many believe it is possible. But as these South Korean consultants understand, any system made by human beings can be beaten. All humans are fallible, and everyone knows it. AI lets some people believe the process is free of bias, but that is not the case. It just makes the bias consistent.