Meet the scientist protecting women of color from the wrong side of AI!

Published: 2 months ago by MKE Community Journal in Tech Science

In 2023, computer scientist and artist Dr. Joy Buolamwini was named one of Time’s “100 Most Influential People in AI,” and for good reason: prejudice that’s often baked into this technology has victimized women and people of color.

At 34, computer scientist and poet Dr. Joy Buolamwini has already made her mark as a pioneer in the rapidly developing field of artificial intelligence.

She’s advised President Biden and Big Tech on the benefits and dangers of AI, was named one of Time’s “100 Most Influential People in AI,” has worked on documentaries about the subject, and recently released a book about her personal journey in the space: “Unmasking AI: My Mission to Protect What Is Human in a World of Machines.”

Her research focus as an AI scientist took shape during her time as a graduate student at MIT: addressing the shortcomings of machine learning (the building blocks of AI systems).

At the time, Dr. Buolamwini was working on face detection technology for an art installation she was building. She noticed the software was having trouble detecting her face because of her skin tone.

It wasn’t until she decided to place a white mask on her face that it finally started to work properly.

“It was this experience of literally coding in whiteface that made me think: ‘Is there something more here?’” she explained.

“I put on my scientist hat and started to conduct experiments that showed there was actually bias where these systems work better on some faces than others.”

Dr. Buolamwini — who founded the Algorithmic Justice League (AJL) in 2016 to study the social impact and potential harms of AI — shared lessons from her new book, where she explored the intersection of the technology’s development and the dangers of bias in its algorithmic systems.

Below is the conversation, which has been edited for brevity and clarity.

Know Your Value: In the book you write about a moment when you ran photos of Black women you admire through an AI system, and they were misclassified. How did that shape the significance of your work?

Dr. Buolamwini: I admired people like [former first lady] Michelle Obama, Oprah Winfrey, Serena Williams — women who have a lot of photos of themselves out there — and I started running their iconic images, and sometimes their faces weren’t seen, or sometimes they were labeled as male or other types of descriptions.

I remember one image of Michelle Obama as a child that was actually described as “toupee.” Looking at these women that I admire and hold in such esteem, either being misclassified or not even seen by machines, really made me pause.

What does it mean for some of the technology from some of the most powerful companies in the world to fail on these faces of people I admire so much?

And sometimes the failure wasn’t that they weren’t recognized, but that they were misclassified.

And it reminds me of this notion of the exclusion overhead: How much do you have to change yourself to get systems that weren’t made with you in mind to somehow work for you?

Know Your Value: Could you explain the touch points on a day-to-day basis where [AI systems] can generate these consequences?

Dr. Buolamwini: The work I do looks at the different ways computers analyze faces for particular purposes.

You have government agencies that actually adopt that type of facial recognition for access to government services such as the IRS.

When you log in, you might be asked to scan your face. If there are failures, either somebody could get into your account and commit fraud, or you can’t even access your own information. So that’s one way [AI systems] can enter people’s lives.

Schools are actually using facial recognition and face detection on everything from class attendance to e-proctoring, which became particularly popular during the pandemic as there was more remote learning.

And then there’s the law enforcement use of this technology. I think of a woman named Porcha Woodruff. She was eight months pregnant when she was falsely arrested due to facial recognition misidentification. I think that part is really important because you could have had nothing to do with a crime that has occurred, but your image can be picked up. And she was actually pregnant when she was in the holding cell.

She reported having contractions while in custody, and after she was released, Woodruff had to be rushed to the emergency room.

So those are different ways in which computers can analyze faces. If there are biases and discrimination, you can end up on the wrong side of the algorithm.

Know Your Value: Where does the responsibility lie in making these systems safer and less biased?

Dr. Buolamwini: We need legislation — at the federal level — because the legislation then puts in the guard rails for the tech companies. Currently, you have some tech companies that have done a little bit of self-regulation.

But all of the U.S. companies that we audited have stopped selling facial recognition to law enforcement following that work.

And also, we need to think about AI governance globally. I do think that all of our stories matter. When you share your experience with AI or your questions about it, you encourage other people to share their stories.

Know Your Value: What is your advice, for women of color especially, on embracing their dualities?

Dr. Buolamwini: I would say, start something small. Do an experiment. For me, that experiment was exploring the ‘AI, Ain’t I a Woman’ poem, and seeing how that was received. Then, [I tried] the documentary. It wasn’t everything all at once.

It’s also important to have that peer support group that you can share some of these things with … I wanted to focus on issues of bias and sexism and racism, and I was warned by my colleagues that it might pigeonhole my career. It was talking to other women in science, computer science, who encouraged me to do the research anyway.

And these were people I respect, very well-meaning people.

I do think sometimes you have to understand that not everybody sees your vision. That’s why you’re the visionary.

—Daniela Pierre-Bravo is a reporter for MSNBC’s “Morning Joe” and a Know Your Value contributor. She is the co-author of “Earn It” with Mika Brzezinski. Her solo book, “The Other: How to Own Your Power at Work as a Woman of Color,” is out now. Follow her on Twitter and Instagram @dpierrebravo.


