26 Debiasing recruitment
Recruitment is one of the most critical components of success for an organisation, yet also one of the activities with the most potential for bias. As a simple test, Bertrand and Mullainathan (2004) sent fictitious CVs in response to job ads, randomly assigning each CV an African American-sounding or White-sounding name. CVs with White-sounding names received 50% more callbacks for interviews.
There is also substantial evidence that unstructured interviews, a common interview approach in many organisations, have limited effectiveness in identifying people who will perform well on the job.
Some of the mechanisms we have already discussed in this module are useful in improving recruitment outcomes, including the use of mechanical predictors. There is also substantial evidence behind the use of work-related and cognitive tests (although all of these methods are at best weakly predictive).
But what of approaches to debias the hirers themselves? To date, diversity training has generally not been shown to have an impact. Lai et al. (2016) tested nine interventions designed to reduce implicit racial preferences, and found that none had any effect after a delay of several days. Similarly, Paluck and Green (2009) found that, despite hundreds of studies, little was known about the causal effects of interventions to reduce prejudice.
One experiment that did find some success involved a “nudge” that changed the way that candidates were assessed. Bohnet et al. (2016) argued that when evaluators assess candidates one by one, they are more likely to rely on group stereotypes than when candidates are assessed jointly. They ran an experiment in which candidates varied both in actual mathematical performance and in stereotyped group membership (gender, with men stereotypically seen as stronger at mathematics). Joint evaluation increased the selection of candidates with higher actual mathematical performance relative to separate evaluation, where gender was a major factor in selection. (See also Bohnet (2016).)
26.1 Backfires
There is some evidence that blind recruitment has increased the likelihood of women being chosen for certain roles, the classic example being the use of blind auditions for orchestras (see Goldin and Rouse (2000), although this finding is debated).
In 2016, Hiscox et al. (2017) ran a randomised controlled trial to test for gender and ethnic minority bias in shortlisting. They asked Australian Public Service staff to complete a fictitious shortlisting exercise with 16 CVs, to see whether removing a candidate’s name and personal information would change the way the CV was assessed.
They found that participants were 2.9% more likely to shortlist female applicants when they were identifiable than when they were de-identified. Minority males were 5.8% more likely to be shortlisted when identifiable, and minority females 8.6% more likely.
These results suggest that blind recruitment processes in the Australian Public Service may result in fewer women and minority candidates being shortlisted for interview.