DINT 128 - Digital Colorism: How AI Headshot Generators Are Failing Dark-Skinned Users
In this guest post, tech insider Christelle Mombo-Zigah presents a well-researched case study of AI and its effects on people with darker skin.
Editor’s note: When I came across Christelle’s research on LinkedIn, I was blown away. Once I read her account of her experiences using AI-powered headshot generators, I immediately reached out for republishing rights. Here you’ll find a real-world, first-person case study of the effects of the tech industry’s “we don’t see color” approach to AI. No one should have to go to these lengths to use a publicly available service that works for them. [Our story on Microsoft’s Diversity Report is rescheduled for next week.]
By Christelle Mombo-Zigah
[Originally posted on LinkedIn - October 14, 2024]
If you know me, you know I’m passionate about #AI. Since diving into this technology in 2021, I’ve made it my mission to promote Responsible AI, addressing concerns around safety, security, fairness, transparency, and bias. But despite my love for this field, a troubling issue has surfaced: digital colorism in AI-generated headshots.
I recently tried several AI headshot generators—both free and paid—to test their accuracy for social media profile photos. To my disappointment, the very biases that marginalize dark-skinned women and girls in real life—colorism, featurism, texturism, and sizeism—were glaringly present in these digital tools.
I purposely selected pictures with my hair in braids, or showing my very deep-melanated, dark-chocolate-dipped, age-defying, smooth and glowing skin in different lighting, to see what the results would be.
The outputs fell short of my expectations: I could barely recognize myself in some of them, and in others my ethnicity was shifted entirely.
This is more than a technical glitch. These headshot generators aren’t just editing photos; they are altering identities. In one of my experiences, the AI drastically lightened my skin, changed my hair texture, and even slimmed my face—essentially erasing parts of who I am.
This is dangerous, especially for younger generations who are constantly bombarded by unrealistic beauty standards. When AI reinforces these biases, it can harm users’ self-esteem, leading to practices like skin bleaching, hair straightening, and even eating disorders—all in pursuit of an impossible ideal.
To combat this, I created a notation system to rate these AI tools based on their performance across a range of factors: skin tone representation, hair texture accuracy, feature preservation, and body positivity. The results are eye-opening and demonstrate just how much work is needed to ensure these technologies represent everyone fairly.
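For readers who want to run a similar audit on their own results, here is a minimal sketch of what a notation system along these lines could look like in Python. The four factors are the ones named above; the 1–5 scale, the equal weighting, the HeadshotRating class, and the example tool name are illustrative assumptions, not Christelle’s actual rubric.

```python
from dataclasses import dataclass


# Hypothetical rubric inspired by the factors named in the article.
# The 1-5 scale and equal weighting are assumptions, not the author's actual system.
@dataclass
class HeadshotRating:
    tool_name: str
    skin_tone_representation: int  # 1 = heavily lightened, 5 = faithful
    hair_texture_accuracy: int     # 1 = texture replaced, 5 = preserved
    feature_preservation: int      # 1 = features reshaped, 5 = unchanged
    body_positivity: int           # 1 = face/body slimmed, 5 = respected

    def overall(self) -> float:
        """Average the four factor scores into a single 1-5 rating."""
        scores = [
            self.skin_tone_representation,
            self.hair_texture_accuracy,
            self.feature_preservation,
            self.body_positivity,
        ]
        return sum(scores) / len(scores)


# Example: a tool that lightened skin and changed hair texture scores poorly.
rating = HeadshotRating(
    tool_name="ExampleHeadshotAI",  # placeholder name, not a real product
    skin_tone_representation=2,
    hair_texture_accuracy=1,
    feature_preservation=2,
    body_positivity=3,
)
print(f"{rating.tool_name}: {rating.overall():.1f} / 5")
```

Scoring each factor separately, rather than assigning one overall grade, makes it easier to see exactly where a given tool fails—for example, a generator might preserve skin tone while still replacing hair texture.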
AI companies must do better. By using more inclusive data and tuning their algorithms to recognize and respect all skin tones, hair textures, and features, they can create tools that empower, rather than erase, their users.
Digital colorism is not just a technical oversight; it reflects a deeper societal issue that AI now amplifies. As creators, users, and advocates of AI, we must demand better. The onus is on AI companies to ensure their algorithms reflect the full diversity of human beauty, without reinforcing harmful stereotypes or erasing identities.
But this is not just their fight—it’s ours. We all have a role to play in holding these systems accountable, whether it’s by sharing our experiences, pushing for more inclusive datasets, or championing fairness in AI development.
By working together, we can ensure that AI empowers everyone—regardless of skin tone, features, or body shape—and becomes a tool for representation, not erasure. The future of AI is bright, but only if it includes all of us.
The story doesn’t end here. To find out more about Christelle and her efforts to create human-centric AI, please visit DigitalColorism.com.
About the Author
Christelle Mombo-Zigah currently serves as the Global Renewals Strategy and Planning Leader for AppDynamics, a Cisco company.
Her superpower lies in helping international technology companies drive growth, transformation, and innovation.
Her commitment to human-centered experiences and sustainability led her to join the Responsible AI committee at Cisco, where she advocated for intersectionality and inclusiveness to fight bias.
Know someone with great insights who’d like to share what they know about tech, race, and gender? Send them my way.
News Bites
Keeping a close eye on this story about AI-generated police reports.
Exciting news in the world of venture capital funding for the underrepresented.
MacC raises $150 million in funding to help diverse founders secure proper investment for their business ideas.
Scared yet? New glasses scan faces and compile public data about the person.
Discover what information is available about you online and take control of where your data shows up. Find out more here.
Palate Cleanser
Look at the love guests get when they visit The Jennifer Hudson Show.