Survey says that artificial intelligence may actually promote bias, rather than help to eliminate it

Image Credit: Pixabay.com (https://pixabay.com/en/artificial-intelligence-brain-think-3382507/)

Wait, what?

Last week, I participated in a Pennsylvania Bar Institute Continuing Legal Education program entitled, “Law’s New ABC’s: Artificial Intelligence, Blockchain & Cryptocurrency.” During that conference, I led panel discussions on how human resources can deploy artificial intelligence and on the legal implications of paying employees in cryptocurrency.

We’ll save the crypto recap for another day.

On AI, I shared some of the benefits of using AI to hire (efficiency, analytics, long-term cost savings), but warned that there are some pitfalls too. For example, there’s the news from October that Amazon scrapped its AI hiring program following concerns that the AI favored men over women.

Well, it seems that Amazon’s issues with AI may be more systemic.

Yesterday, I read this report from Upturn, which concluded that “without active measures to mitigate them, bias will arise in predictive hiring tools by default.”

Upturn further notes that “[p]redictive hiring tools can reflect institutional and systemic biases, and removing sensitive characteristics is not a solution. Predictions based on past hiring decisions and evaluations can both reveal and reproduce patterns of inequity at all stages of the hiring process, even when tools explicitly ignore race, gender, age, and other protected attributes.”

So, how can a company use AI to break the cycle? Here are some solutions from the report:

  • Vendors and employers must be dramatically more transparent about the predictive tools they build and use, and must allow independent auditing of those tools. Employers should disclose information about the vendors and predictive features that play a role in their hiring processes. Vendors should take active steps to detect and remove bias in their tools. They should also provide detailed explanations about these steps, and allow for independent evaluation.

  • Regulators, researchers, and industrial-organizational psychologists should revisit the meaning of “validation” in light of predictive hiring tools. In particular, the value of correlation as a signal of “validity” for antidiscrimination purposes should be vigorously debated.

  • Digital sourcing platforms must recognize their growing influence on the hiring process and actively seek to mitigate bias. Ad platforms and job boards that rely on dynamic, automated systems should be further scrutinized, both by the companies themselves and by outside stakeholders.

Upturn also suggested one way the law can begin to catch up with the technology: the EEOC could consider new regulations interpreting Title VII in light of predictive hiring tools, or at least issue a report on the subject.

What do you think? Are you using AI to enhance your hiring process? If so, are you concerned about possible bias?