AI researcher Joy Buolamwini discusses bias in facial recognition technologies at Duke event

AI researcher, activist and artist Joy Buolamwini spoke about bias in facial recognition technologies (FRTs) and the state of AI policy at a Tuesday event.

Buolamwini shared insights from her book “Unmasking AI: My Mission to Protect What Is Human in a World of Machines,” which chronicles her career uncovering racial and gender bias in popular AI FRTs, beginning as a graduate student at the Massachusetts Institute of Technology. She also spoke about her advocacy work through the Algorithmic Justice League, an organization she founded to raise awareness about AI’s harms and biases.

The event was held in Penn Pavilion and moderated by Robyn Caplan, assistant professor in the Sanford School of Public Policy, as part of the David M. Rubenstein Distinguished Lecture Series.

The coded gaze

Buolamwini introduced the concept of the coded gaze, which she defined as “the power to shape the priorities, the preferences and also … the prejudices that are embedded into technology.” 

As a graduate student, Buolamwini said she first encountered the coded gaze while working on an art installation project called the Aspire Mirror, where one could look in a mirror and see the face of someone they admired. When using a facial recognition application, Buolamwini realized that while the software could recognize her lighter-skinned friend, it could not detect her own darker-skinned face.

“I literally had to put on a white mask to have my dark skin detected,” she said. The experience prompted her to research the issue herself.

Buolamwini’s research led her to discover that the existing benchmarks and datasets of faces for training FRTs did not represent the “global majority” since they contained “largely male or largely lighter-skinned individuals.”

To evaluate popular FRT software, she created her own benchmark, the Pilot Parliaments Benchmark, which draws on faces from nations with “gender parity in the national parliaments.” She tested AI services from IBM, Amazon, Face++ and Google, noting that existing literature suggested these models had a 97% accuracy rate.

But when she took a closer look, the claim unraveled.

“We actually have misleading measures of success,” Buolamwini said. “It turned out, yes, you could get 97% accuracy if most of the data was male and most of the data was lighter skin … Once you started actually representing what the world looked like, those accuracy numbers started to go down.”

According to Buolamwini, this discrepancy occurs because of “power shadows” resulting in skewed face datasets that “did not look like the rest of the world … but it did seem to look [like something that] holds power.”

She said that power shadows occur because of unequal representation of people of color in the media and in leadership.

“The past dwells in our data, right? And so does the present in terms of who holds power,” she said.

Buolamwini discovered that many AI-powered FRTs misgendered her image as male or were unable to detect her face at all. Testing other images of darker-skinned people revealed the same pattern.

She emphasized that biases in FRTs have led to harms including false arrests and nonconsensual deepfakes, creating the “X-coded,” those “condemned, convicted, exploited [and] otherwise harmed by AI systems.”

Her results prompted Google, IBM and Microsoft to take action and improve their facial recognition software. Buolamwini shared that in 2020, all U.S.-based companies she had audited ceased selling FRTs to law enforcement.

The state of AI policy

Buolamwini pointed to a need to “institutionalize protections” following President Donald Trump’s rescission of then-President Joe Biden’s 2023 executive order on AI safety and security. She emphasized that policy not backed by legislation can easily be reversed depending on the priorities of the administration in office.

Despite the recent crackdown on diversity, equity and inclusion, Buolamwini shared that she has “so much hope” due to a global environment of support for AI safety.

“This moment is a reminder that we have to put forward the vision of the world we want to see, and we have to continuously fight for it,” Buolamwini said. “It's not a given that it will stay.” 


Ishita Vaid | Senior Editor

Ishita Vaid is a Trinity junior and a senior editor of The Chronicle's 120th volume.
