This Week in Tech, Race, and Gender - DINT 144
Watch out! AI agents are coming, and they're bringing the 'coded gaze' of tech bias with them, affecting your job, housing, and health.
AI agents aren’t messing around, and by 2032 they’ll be running our lives, say experts
Anthropic, maker of the once highly respected AI system Claude, just removed the anti-discrimination page from its website, reports People of Color in Tech. Anthropic was one of the ‘good guys’, among the first to introduce a ‘constitution’ for self-correcting AI.
Just last month, Anthropic introduced a new way to stop its system from generating offensive responses, something Meta and Google still struggle to do, according to the Financial Times.
With this removal of anti-bias and anti-discrimination language from its site, just how ‘good’ are Anthropic and its chat system Claude?
We know there’s a broader rollback of DEI initiatives underway, but anti-discrimination in AI poses a different challenge. Here’s why:
AI systems will soon be employed to make decisions for us through something called autonomous AI agents.
The market for this technology is projected to reach $100 billion by 2032. To put that into context, it took about 25 years for cell phones to reach widespread adoption; AI agents are poised to grow at a 45% annual rate over just seven years. AI agents are pre-programmed systems designed to act on our behalf, doing things like making stock trades or canceling monthly subscriptions.
Whenever I think about AI agents, I think of this episode of Black Mirror.
The complicated history of AI and Black and Brown people should give people pause in light of the many ways AI agents can affect our lives.
Can you remember back to when cell phones really took off and were in every person’s pocket, especially the iPhone? One day few people had them, and the next, it seemed, no one could remember life without them.
AI agents are designed to make our lives easier by:
saving us money,
simplifying our daily schedules,
taking care of calls and texts that are usually time-consuming.
The downside for Black and Brown people is that these agents may not act in our best interests, not because they’re inherently evil, but because they’re built on today’s AI algorithms. And we know just how biased these systems can be when interacting with and representing Black and Brown people.
These systems negatively affect us most in the areas essential to any person’s survival:
Income
Biased hiring decisions made by AI algorithms continue to make a difficult job market even harder for women and other underrepresented groups.
Example: Workday, one of the largest companies in the human capital management space, uses AI to help employers sift through resumes. It is now being sued by Derek Mobley, a 42-year-old Black man who alleges he was eliminated from more than 100 roles he applied for through Workday because of his age, race, and disability status. He is seeking to make the case a class action that could include millions of others who have applied for jobs through the Workday platform. If successful, the case would set an important precedent, holding third-party vendors accountable for the discriminatory results of their systems.
Housing
Several cases are moving through U.S. courts, and they all concern rental systems that either deny housing opportunities or hike up rent prices based on race and gender.
Example: In November 2024, U.S. District Judge Angel Kelley ruled that SafeRent, a rental screening platform, must pay $2.3 million in damages to people who were shut out of housing opportunities because of their race and economic status. (Source: Law.com)
Healthcare
AI algorithms used in healthcare extend existing biases and compound the harm done to people who are women, Black, and/or Hispanic.
Case in point:
“Evaluators found that the algorithm underestimated the needs of Black patients; it assigned lower risk scores to Black patients, even when they were sicker than white patients who received the same score. While the algorithm explicitly eliminated race as an input, racial bias resulted from the decision to use the cost of care as a proxy for the severity of need. Since the algorithm drew upon past patient data, it reproduced historical patterns of racial bias, namely the tendency for the healthcare system to spend more money on treating white patients than their Black counterparts.”
(excerpted from “Exploring the Impact of AI on Black Americans: Considerations for the Congressional Black Caucus’s Policy Initiatives” by Stanford University’s Human-Centered Artificial Intelligence program)
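To make that mechanism concrete, here is a minimal, hypothetical Python simulation; it is my own illustration, not the actual algorithm the report evaluated. It assumes two groups with the same distribution of true medical need, but historically lower spending on Group B at the same level of sickness. A risk score built from cost then ranks equally sick Group B patients as lower risk.

```python
import random

random.seed(0)

# Hypothetical setup (illustration only, not the algorithm the report studied):
# need = a patient's true severity of illness, same distribution in both groups
# cost = historical spending, systematically lower for Group B at the same need,
#        mirroring the spending gap the report describes
patients = []
for group, spend_factor in [("A", 1.0), ("B", 0.7)]:
    for _ in range(10_000):
        need = random.uniform(0, 10)                       # true severity
        cost = need * spend_factor + random.gauss(0, 0.5)  # cost as the proxy label
        patients.append({"group": group, "need": need, "risk_score": cost})

# Compare true need among patients the score places in the same band (5 to 6).
band = [p for p in patients if 5 <= p["risk_score"] < 6]
for g in ("A", "B"):
    needs = [p["need"] for p in band if p["group"] == g]
    print(f"Group {g}: average true need at the same risk score = {sum(needs)/len(needs):.2f}")
```

Running this prints an average true need of roughly 5.5 for Group A and roughly 7.9 for Group B at the same risk score. Race never appears as an input; the skew enters entirely through the cost label, which is exactly the dynamic the excerpt describes.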
What’s sad is that calling out this kind of bias is often met with a kind of boomerang effect in which people who aren’t being directly accused feel the need to defend the offenders. These are the people who downplay bias of any kind, especially in algorithms. This approach to glaring problems in our society deepens injustice and begins to encode it into the very innovations that could unite us.
For some, unity means equity, and that just can’t be tolerated. Denying the existence of bias is like an ostrich hiding its head in the sand while its body remains in full view for all to see.
When we accept the discomfort of listening instead of disregarding the pain of others, we may see that approach extend into the very systems that now drive an even wider wedge between us.
Related coverage
Meta Reveals Human-Like AI Model That Creates Images to AI Researchers
Thank you