The world is relying on a flawed psychological test to fight racism

There are more than a dozen versions of the IAT, each designed to evaluate unconscious social attitudes towards a particular characteristic, such as weight, age, gender, sexual orientation, or race. They work by measuring how quick you are to associate certain words with certain groups.

The test that has received the most attention, both within and outside psychology, is the black-white race IAT. It asks you to sort various items: good words (e.g., appealing, excellent, joyful), bad words (e.g., poison, horrible), African-American faces, and European-American faces. In one stage (the order of stages varies with each test), words flash by onscreen, and you have to identify them as “good” or “bad” as quickly as possible by pressing “i” on the keyboard for good words and “e” for bad words. In another stage, faces appear, one at a time, and you have to identify them as African American or European American by pressing “i” or “e,” respectively.

[Image: race IAT word- and face-sorting screen (race-IAT-1). Photo credit: Project Implicit]

Then the test shows you both words and faces (separately, one at a time, but within the same stage). You’re told to hit “e” any time you see a European-American face or a good word, and “i” for an African-American face or a bad word. In yet another stage, you must hit “e” for African-American faces or good words, and “i” for European-American faces or bad words.

[Image: race IAT combined sorting screen (race-IAT-2). Photo credit: Project Implicit]

The slower you are and the more mistakes you make when asked to categorize African-American faces and good words using the same key, the higher your level of anti-black implicit bias—according to the test.
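
How a response pattern becomes a bias score is easiest to see in code. The sketch below is a simplified Python illustration of the idea behind the widely used D-score (Greenwald, Nosek, and Banaji, 2003): compare mean response times between the two combined stages and scale the difference by the pooled variability. The function name and the reaction times here are invented for illustration; the published algorithm adds refinements (error penalties, discarding implausibly fast or slow trials, per-block computation) that are omitted in this sketch.

```python
from statistics import mean, stdev

def iat_d_score(congruent_rts, incongruent_rts):
    """Simplified IAT D-score sketch (not Project Implicit's exact code).

    Compares mean reaction times between the two combined stages and
    scales the difference by the standard deviation of all trials pooled.
    """
    pooled = list(congruent_rts) + list(incongruent_rts)
    return (mean(incongruent_rts) - mean(congruent_rts)) / stdev(pooled)

# Hypothetical reaction times in milliseconds for a single test-taker.
# Here "congruent" = European-American faces share a key with good words;
# "incongruent" = African-American faces share a key with good words.
congruent = [512, 680, 555, 701, 590, 640]
incongruent = [601, 788, 645, 790, 620, 710]

print(f"D-score: {iat_d_score(congruent, incongruent):.2f}")
```

A larger positive score means slower responses when African-American faces and good words share a key. Project Implicit’s feedback roughly bins scores at 0.15, 0.35, and 0.65 for a slight, moderate, or strong preference.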

“Implicit bias” became a buzzword largely thanks to claims that the IAT could measure unconscious prejudice. The IAT itself doesn’t purport to increase diversity or to stop managers from discriminating. But it has certainly been deployed that way, partly due to its creators’ outreach. In 2006, Scientific American praised Banaji for telling investment bankers, media executives, and lawyers how their “buried biases” can cause “mistakes.” “Part of Mahzarin [Banaji]’s genius was to see the IAT’s potential impact on real-world issues,” Princeton University social psychologist Susan Fiske said at the time.

“There’s the idea that we can decrease biases by slightly overhyping the findings,” says Edouard Machery, professor at the Center for Philosophy of Science at the University of Pittsburgh. “I’m not sure it was intentional. Every scientist must persuade other people that what they do is worth doing.”

HR departments quickly picked up the theory, and implicit-bias workshops are now relied on by companies hoping to create more egalitarian workplaces. Google, Facebook, and other Silicon Valley giants proudly crow about their implicit-bias trainings. The results are underwhelming, at best. Facebook has made just incremental improvements in diversity; Google insists it’s trying but can’t show real results; and Pinterest found that unconscious bias training simply didn’t make a difference. Implicit bias workshops certainly didn’t influence the behavior of then-Google employee James Damore, who complained about the training days and wrote a scientifically ill-informed rant arguing that his female colleagues were biologically less capable of working at the company.

Silicon Valley companies aren’t the only ones working on their “implicit bias” problem. Police forces, The New York Times, countless private companies, US public school districts, and universities such as Harvard have also turned to implicit-bias training to address institutional inequality.

There’s a typical format for workplace implicit-bias programs: Instructors first talk about how we all have unconscious prejudice. Then they run through related psychological studies, some of which, such as a commonly cited paper showing that resumes with white names get more callbacks than those with non-white names, show prejudice rather than unconscious prejudice. Next, they have participants take the IAT, which purports to reveal their hidden biases, and they conclude the program with discussions about how to be aware of and combat behavior driven by such biases.

The latest scientific research suggests there’s a very good reason why these well-meaning workshops have been so utterly ineffectual. A 2017 meta-analysis of 494 previous studies, conducted by several researchers including Nosek (the analysis is currently under peer review and has not yet been published in a journal), found that reducing implicit bias did not affect behavior. “Our findings suggest that changes in measured implicit bias are possible, but those changes do not necessarily translate into changes in explicit bias or behavior,” wrote the psychologists.

“I was pretty shocked that the meta-analysis found so little evidence of a change in behavior that corresponded with a change in implicit bias,” Patrick Forscher, psychology professor at the University of Arkansas and one of the co-authors of the meta-analysis, wrote in an email.

Forscher, who started graduate school believing that reducing implicit bias was a strong way of changing behavior and conducted research on how to do so, is now convinced that approach is misguided. “I currently believe that many (but not all) psychologists, in their desire to help solve social problems, have been way too overconfident in their interpretation of the evidence that they gather. I count myself in that number,” he wrote. “The impulse is understandable, but in the end it can do some harm by contributing to wasteful, and maybe even harmful policy.”

Read more at qz.com
