ICYMI: Study Says Artificial Intelligence Industry Must Confront ‘Diversity Disaster’

By Catherine Lizette Gonzalez Apr 19, 2019

The artificial intelligence (AI) industry—which is overwhelmingly populated by White people and men—is due for a reckoning with its diversity crisis, according to a new report released on Tuesday (April 16) by the AI Now Institute at New York University.

The authors of “Discriminating Systems: Gender, Race and Power in AI” call AI’s lack of diversity a “disaster.” Women make up just 18 percent of authors at AI conferences, 15 percent of research staff at Facebook and 10 percent at Google, according to the report. Black workers make up only 2.5 percent of Google employees and 4 percent at both Facebook and Microsoft. The study notes that much of the data treats gender as a binary, and that the “overwhelming” focus on women in diversity efforts privileges White women.

The AI industry largely frames this lack of diversity as a “pipeline problem,” attributing it to a shortage of qualified candidates entering the hiring pool. But the study says companies need to stop placing the burden of addressing the diversity crisis on those who experience discrimination and instead scrutinize the perpetrators of that discrimination.

Per the report, “pipeline” research has yet to lead to meaningful action:

A recent survey of 32 leading tech companies found that though many express a desire to improve diversity, only 5 percent of 2017 philanthropic giving was focused on correcting the gender imbalance in the industry, and less than 0.1 percent was directed at removing the barriers that keep women of color from careers in tech. This meant that out of $500 million in total philanthropic giving by these companies that year, only $335,000—across 32 tech companies—went to programs focused on outreach to women and girls of color.

The report argues that the AI sector must confront the racist underpinnings of systems designed for the classification, detection and prediction of race and gender, which harken back to histories of “race science.” It must also reconsider the production, selection and distribution of products that give power to those who benefit most from them.

The report cites several examples of products that work in favor of the powerful, perpetuate racism and benefit the carceral state: image recognition systems that miscategorize Black people, Uber’s facial recognition system that fails to identify trans drivers, chatbots that adopt racist and misogynistic language, and sentencing algorithms that discriminate against Black defendants.

“Systems that use physical appearance as a proxy for character or interior states are deeply suspect,” the report states. “Such systems are replicating patterns of racial and gender bias in ways that can deepen and justify historical inequality.”

The study goes on to highlight the significance of recent worker-led initiatives and actions that have pushed the tech industry to change. Among them is last year’s Google Walkout, in which hundreds of Google employees staged protests against what they described as a toxic and discriminatory work environment.

“As AI systems are embedded in more social domains, they are playing a powerful role in the most intimate aspects of our lives: our health, our safety, our education and our opportunities,” the report concludes. “It’s essential that we are able to see and assess the ways that these systems treat some people differently than others, because they already influence the lives of millions.”

Read the full report here.