In today’s hiring market, a black candidate is 36% less likely to get a call-back for an interview than an equally qualified white counterpart. Reprogramming a human to avoid biases in hiring is unlikely to work, as studies have shown diversity training to be largely ineffective.
When it comes to hiring practices, algorithms can provide a cloak of objectivity – but that doesn’t make them infallible. Just like humans, algorithms may rely on stereotypes that reinforce discriminatory hiring practices.
Why is this? Because that’s what they’re designed to do! The backbone of many of these potentially discriminatory algorithms is something data scientists call “satisficing.” Satisficing is a decision-making strategy that aims for a satisfactory or adequate result rather than an optimal one.
So why does this lead to stereotyping? Stereotypes are really just shortcuts that help you draw conclusions more quickly, trading accuracy for speed.
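To make the idea concrete, here is a minimal sketch of satisficing versus optimizing in Python. The function names, candidate data, and threshold are all invented for illustration; real hiring algorithms are far more complex, but the core trade-off is the same: a satisficer stops at the first “good enough” answer, while an optimizer checks everyone.

```python
def satisfice(candidates, score, threshold):
    """Return the first candidate whose score meets the threshold,
    scanning in the order given -- a 'good enough' pick."""
    for c in candidates:
        if score(c) >= threshold:
            return c
    return None  # nobody cleared the bar

def optimize(candidates, score):
    """Return the single best-scoring candidate (exhaustive search)."""
    return max(candidates, key=score)

# Toy resume pool (illustrative only).
resumes = [
    {"name": "A", "skill": 6},
    {"name": "B", "skill": 9},
    {"name": "C", "skill": 7},
]

def skill(c):
    return c["skill"]

print(satisfice(resumes, skill, threshold=5)["name"])  # A: first "good enough"
print(optimize(resumes, skill)["name"])                # B: the actual best
```

Notice that the satisficer never even looks at candidate B. That speed is exactly why shortcuts are tempting, and exactly how qualified people get skipped.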
Algorithms are not inherently biased. They simply learn the stereotypes or shortcuts from the human data they train on and are designed to emulate.
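A toy example shows how this learning happens. Assume (the group names and numbers below are invented) a historical dataset where equally qualified members of one group were hired less often. A naive model that simply memorizes the majority outcome per group will faithfully reproduce that bias:

```python
from collections import Counter

# Invented "historical hiring" records: (group, was_hired).
# Group X was hired 80% of the time, group Y only 40% --
# despite equal qualifications.
history = (
    [("group_x", True)] * 80 + [("group_x", False)] * 20 +
    [("group_y", True)] * 40 + [("group_y", False)] * 60
)

def fit_majority(data):
    """Memorize the most common outcome for each group."""
    tallies = {}
    for group, hired in data:
        tallies.setdefault(group, Counter())[hired] += 1
    return {g: c.most_common(1)[0][0] for g, c in tallies.items()}

model = fit_majority(history)
print(model)  # {'group_x': True, 'group_y': False}
```

The model now rejects every member of group Y regardless of their qualifications. No malice was programmed in; the shortcut was simply learned from the data.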
Design algorithms thoughtfully, however, and you’ll have a potent weapon to fight discrimination. Research has shown that even relatively simple algorithms can outperform humans by roughly 50% when it comes to spotting genuinely qualified candidates.
Technological change is anxiety-provoking, and rightfully so. But the simple truth is this: the status quo is unacceptable and machine learning is our best shot at a scalable solution to overcome hiring bias.
Science isn’t just about innovation; it’s about responsible innovation.
Unlike people, not all algorithms are created equal. Choose carefully when selecting the data scientists who will craft your company’s future.
Read more on the Cangrade blog.