If Amazon’s Tool Could Discriminate, Could Yours?

Yesterday, Reuters reported that Amazon created a recruiting engine using artificial intelligence. This isn't news. Amazon is a leader in automation, so it makes sense that the retail giant would apply automation to its own recruiting process to quickly find the "best" candidates. Yet Amazon's tool had a big problem: it didn't like women.

As the article describes: "Everyone wanted this holy grail," one of the people said. "They literally wanted it to be an engine where I'm going to give you 100 resumes, it will spit out the top five, and we'll hire those." Who doesn't want this? To make hiring faster and easier? There are currently hundreds of AI tools available to human resources teams, many of them in the recruiting space, that promise to do exactly these things for you. But if Amazon found problems, what about those tools?

Amazon's tool learned from a 10-year look-back of its existing employees, a population that was largely male-dominated. The tool could then rank applicants based on what it had learned makes a good Amazonian. From that data, the tool taught itself that male candidates were preferable, keying on signals in applications: words like "women's," certain kinds of experience, job requirements, and potentially other proxies for gender. While Amazon tried to solve the problem by making "women's" a neutral word so it did not reduce an applicant's rank, the tool's results still had a negative impact on women. So, in 2015, Amazon abandoned the tool. Good for Amazon. That was the right thing to do. But again, there are hundreds of other AI tools out there.
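To make that mechanism concrete, here is a deliberately tiny sketch in Python (using scikit-learn) of how a screening model trained on a skewed hiring history can absorb that history's bias, even when gender is never an explicit input. The resumes, labels, and token below are invented for illustration; this is a hypothetical toy, not Amazon's system.

    # Toy illustration only: a classifier trained on a skewed hiring
    # history learns to penalize the token "women" all on its own.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.linear_model import LogisticRegression

    # Hypothetical training data: past resumes and whether each person
    # was hired. The history skews male, so "women's" happens to
    # co-occur with rejections.
    resumes = [
        "software engineer chess club captain python",  # hired
        "developer rugby team java python",             # hired
        "engineer women's chess club captain python",   # not hired
        "developer women's coding society java",        # not hired
        "software engineer java python",                # hired
        "engineer women's hackathon winner python",     # not hired
    ]
    hired = [1, 1, 0, 0, 1, 0]

    vectorizer = CountVectorizer()  # tokenizes "women's" as "women"
    X = vectorizer.fit_transform(resumes)
    model = LogisticRegression().fit(X, hired)

    # The learned weight for "women" comes out negative: the model has
    # absorbed the historical bias without ever seeing a gender field.
    idx = vectorizer.vocabulary_["women"]
    print("weight for 'women':", model.coef_[0][idx])

And neutralizing that one token, as Amazon tried, doesn't cure the model: other correlated words (the proxies for gender mentioned above) can carry the same signal.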

At this year’s HR Tech Conference in Las Vegas, my friend Heather Bussing and I presented on this very topic.  We spoke about how AI can both amplify and reduce bias. Here are a few of the highlights:

  • We know that AI is biased because people are biased.
  • We know the sources of the bias include the data we use to teach the AI, the programming itself, the design of the tool, and the people who create it.
  • Employers have to be vigilant with their tools. We have to test them for bias, then retest and retest (and retest); a minimal sketch of one such test follows this list.
  • Employers, not the AI, are responsible for the results of the tool, because even when we follow its output, the employer is making the ultimate employment decision.
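
What does retesting look like in practice? One common starting point is the EEOC's four-fifths (80%) rule of thumb: compare selection rates across groups, and treat a ratio below 0.8 as a red flag. Here is a minimal Python sketch; the counts are hypothetical, invented for illustration.

    # Minimal sketch of the four-fifths (80%) rule of thumb.
    # All numbers below are hypothetical.
    def adverse_impact_ratio(selected_a, total_a, selected_b, total_b):
        """Ratio of group A's selection rate to group B's selection rate."""
        return (selected_a / total_a) / (selected_b / total_b)

    # Suppose an AI screening tool advanced 45 of 200 women and
    # 90 of 250 men in one quarter.
    ratio = adverse_impact_ratio(45, 200, 90, 250)
    print(f"selection-rate ratio: {ratio:.2f}")  # 0.62 here

    if ratio < 0.8:
        # Below the four-fifths threshold: not proof of discrimination,
        # but a signal that the tool's output needs a closer look.
        print("Potential adverse impact: investigate the tool.")

A failing ratio is not a legal conclusion by itself, but it is exactly the kind of result that should trigger the review and retesting described above.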

It is very possible, even probable, that the tools on the market have bias in them. Employers can't simply rely on a vendor salesperson's enthusiastic declarations that the tool eliminates bias. Instead, employers should assume bias plays a factor, look at their tools with a critical eye, and try to solve the problem themselves.

I applaud Amazon for doing the right thing here: testing its tool, reviewing the results, and abandoning the tool when it became clear that its bias played a part in the results. This isn't easy for every employer, and not every employer is going to have the resources to do it. That is why employers have to be vigilant and hold their vendors accountable for helping make sure bias isn't affecting employment decisions, even when an AI tool is in the mix. Because ultimately, the employer could be liable for the discrimination those tools aid.

 


Two Questions

There are two questions that can change how well our people perform, how we work as a team, how we manage, and how we stay compliant. Here they are:

  1. How are things going?
  2. What can I do to help you?

Definitely not rocket science, but think about these. If your manager came to you and genuinely asked, "How are things going?" how would you respond? Would you share some of your concerns or roadblocks? Would you say "my mom has been really sick" or "I'm having a hard time getting through to my assistant"? Or would you say "I completed this project!"? More likely than not, if you believed your manager really wanted to know, you'd share information about your or your team's work performance. You might also share information that affects that performance.

If your manager asked what she could do to help you, would you give an honest response?  “Janelle in Accounting is holding this up, could you please chat with the CFO?”  “I would like to go to this conference so I can learn more about XYZ.”  “I might need your help filling in for me while I get my mom to the doctor.”  Or, “James has been saying weird things to me, could you help me figure out how to handle the situation?” If you know your manager is willing to help, would you ask for it?  Wouldn’t this help you?

The Harvard Business Review published an important article about questions and how they build emotional intelligence and, most importantly, trust. If the research is correct that performance and engagement increase when employees trust their manager, why wouldn't we ask managers to ask questions that build trust? These two questions are business-related: they surface successes and concerns while offering help.

So, how does this tie to compliance? Well, that's an easy connection: when people trust us, they tell us when something isn't going quite right. They tell us when someone said something he shouldn't have, when they need a reasonable accommodation, or when they fear a co-worker might be breaking the law. If we want to foster communication from employees on these issues, we need them to trust us. So, let's ask them the two questions more often.

One other thing: it's easy to train managers to lead with these questions. The hard part is getting those managers to live these questions, turning them into real, information-seeking conversations. Look for the managers who do it well; keep them, train them, promote them.

 
