The Disappearance of Black Women in Tech - DINT 151
A new study neglects to include Black women in its subject matter.
The Erasure of Black Women in Tech
As I read my LinkedIn feed, I notice posts from my peers: Black women in professional environments who experience daily attacks on our credibility, our beauty, our capacity to simply exist.
The feed has become one-note, and I feel trapped in an echo chamber, my own experiences reflected in the lives of others, staring back at me through what has become a mirror.
Looking at these issues takes strength, and I believe mine is waning. As I ventured to write another post about the intersection of tech, race, and gender, I ran into yet another example of the erasure of Black women in tech.
For the first time I found myself questioning whether to publish another story in this vein. It made me wonder why I do this at all. Then I remembered: I’m here to bear witness. I’m here to document. I’m here to shine light on what some would wish to keep in darkness.
With that said, here is this week’s story:
A study of 900 U.S.-based adults ignored Black women in its data set. The topic of the study? Perceptions of freelance writers who use AI. The study concentrated on three areas: perceptions based on gender, perceptions based on race, and perceptions based on nationality.
Each of the three sets of questions included photos, a key factor in forming perceptions. The study’s creators chose not to include a Black woman in either the gender or the race portion of the study. I would expect more diligence from Cornell University and University of Pennsylvania researchers.
The authors of the study, “Generative AI and Perceptual Harms: Who’s Suspected of Using LLMs?”, presented their findings at the Conference on Human Factors in Computing Systems (CHI) in Yokohama, Japan, on April 29.
During their 12-minute session the researchers put forward what they call a new category of AI harms centered on perception. Here is their theory in their own words:
“In this work, we define a new category of potential AI harms: perceptual harms,” the authors wrote. “Perceptual harms, we argue, occur when the appearance (or perception) of AI use—regardless of whether AI was actually used—results in differential treatment between social groups.”
They go on to say that perceptual harms aren’t caused by the outputs of AI models (such as large language models, or LLMs).
“Because perceptual harms are not caused by the model’s outputs, they can be categorized as a potential societal harm.”
Here are the images used in each experiment:
You’ll notice white men are the baseline of the study, appearing in all three experiments. Black women appeared in zero.
Were Black women eliminated from the study images because of the intersection of race and gender? Is there no room for intersectionality in tech? Is tech shutting out the possibility of being more than one thing at the same time?
DINT posed these questions to PhD candidate and study co-author Kowe Kadoma but had not heard back from her by the time of publication.
Kadoma told the Cornell Chronicle this about her motivation for the study:
“In casual conversations with friends and family, someone would mention that they suspected an email or message to be AI-generated. I then became curious about when people suspect AI writing and if some people are praised for using AI tools while others are penalized for it,” said Kadoma. “As more people adopt AI technologies, we need to consider who might benefit and who might be disadvantaged. The technology is changing rapidly, and so are the norms around its use.”
The study’s authors concluded that white men and East Asian people are seen as users of AI.
The implied bias is that women, Black women, Black men, Asian women, and everyone in between aren’t using AI, and aren’t tech-savvy enough for their trustworthiness to even be in question.
That said, this study stands out to me because it is co-authored by an African woman. Kadoma is a PhD candidate at Cornell University and has co-authored several papers on AI and human-computer interaction, often in collaboration with her colleagues Dr. Danae Metaxa and Dr. Mor Naaman.
Related Content:
More News at the Intersection of Tech, Race, and Gender
What Meta’s dispute in Nigeria means for its millions of users
Facing a penalty for data breaches, the social media giant is threatening to pull WhatsApp, Facebook and Instagram from Africa's most populous nation.
Source: Rest of World
Microsoft puts some ousted employees on a 2-year block list and counts them as 'good attrition,' internal document shows
Source: Business Insider
Well, Well, Well: Meta to Add Facial Recognition To Glasses After All
Meta previously lost its sh*t at 404 Media when we reported that someone had paired facial recognition tech with the company's smart glasses. Now Meta is building the invasive technology itself.
Source: 404 Media