Show Notes
In this compelling episode, Michelle and Samah dive into the intersection of artificial intelligence and gender equality. We unpack a powerful article from UN Women, exploring how AI systems shaped largely by male-dominated industries can reinforce existing gender biases. From hiring algorithms that prefer men to voice assistants defaulting to submissive female personas, we examine why representation in AI development matters and how the data we feed these systems influences their fairness. We also explore tools and initiatives working to create more equitable AI.
Whether you’re in tech or just curious about how AI is shaping our world, this episode offers a critical, thoughtful, and, at times, humorous perspective on how we can build smarter and fairer AI systems for everyone.
Episode Transcript
[00:00:00] Speaker A: Welcome to the Underrepresented in Tech podcast, where we talk about issues in underrepresentation and have difficult conversations.
Underrepresented in Tech is a free database with a goal of helping people find new opportunities in WordPress and tech.
Hello, Samah.
[00:00:21] Speaker B: Hello, Michelle.
[00:00:23] Speaker A: I slept through my alarms this morning because it was a three-day weekend. And I got a little too into it being downtime. But that’s okay, I’m here.
[00:00:37] Speaker B: How was your weekend?
[00:00:38] Speaker A: I did nothing but play with Legos, watch TV, and sleep. So it was very, very good.
[00:00:46] Speaker B: That was the best weekend.
[00:00:47] Speaker A: Yeah, it was too hot here. We had storms and it’s. We have a heat advisory right now.
I woke up this morning, it’s already 80 degrees here and then it’s supposed to get up into the 90s, which is just especially for the north. We’re not used to it and it’s not a dry heat so it’s very. Impacts the air quality and asthma and things like that. So.
So just take it, take it slow, take it as cool. Be cool. How was your weekend?
[00:01:13] Speaker B: My weekend was good. I also spent it lazily. I am now into a sport called padel. I like it, I'm playing it, which is really nice. I'm trying to be more active. And Sunday, just lying down next to the swimming pool, enjoying life, you know. That's it. I think I deserve it, you know.
[00:01:32] Speaker A: I think you do too. Is that like pickleball? I know pickleball is huge here.
[00:01:36] Speaker B: It's like tennis in a glass room. It's somewhere between squash and tennis, you know, played with two people. My husband and I are very competitive people, so it was a really fun game. Nice.
[00:01:51] Speaker A: Like racquetball or something. I like it.
I’m glad you’re enjoying it. I would die.
[00:01:58] Speaker B: No, you're not. But one day we will find an activity for both of us. For me, it's fun to just sit and have a conversation, you know. That's my first weekend activity: just relax.
[00:02:10] Speaker A: Absolutely. Maybe do some crafts together or something. I don’t know.
[00:02:14] Speaker B: Yes, Lego. Let’s do some Lego together.
[00:02:16] Speaker A: Yes. This weekend I built the Sherlock Holmes Book, the new one this year.
[00:02:24] Speaker B: Nice.
[00:02:25] Speaker A: It actually opens up, and there are little hidden secrets, and Moriarty is hidden somewhere in it. So that was fun.
I’ve also been collecting the Chinese New Year ones. I don’t remember which year I just did one, but I did another one yesterday. So, it’s been a lot of fun.
[00:02:40] Speaker B: That’s awesome. That’s good.
[00:02:42] Speaker A: Yeah. So, tell us about the topic that you brought us today.
[00:02:46] Speaker B: Today we're going to talk about artificial intelligence and gender equality. I was reading an article from UN Women.
I was reading it, and I was a little bit, how can I say, emotional. Because, as you know, AI is becoming part of our daily lives. We are using those apps to chat or ask for information; I even use it now instead of Google if I want background information or more details about something. The topic is growing, and it's really important to ask ourselves: who is building it?
Who is it serving? And.
And right now, the tech industry, especially the AI, is still mostly designed and led by men.
That means the AI system can sometimes reflect and even amplify gender biases already present in our society.
For example, AI tools used in hiring have been found to prefer male candidates, and voice assistants often use female voices and show submissive behavior.
If we want AI to work for everyone, we need more women, more diverse voices, and more people from underrepresented groups involved in building it, not just using it. It's also about making it better: creating smarter technology that can serve all of us, and not only one specific group or gender.
The article, of course, explains a lot more, and we will share the link when we share the bio of the podcast. It explains what gender biases are; they did a lot of research, and they also give advice about how to avoid them. There was a report in 2023, and I'm pretty sure the report is a little bit dated, but the research and figures there, and more recent research, show that only 30% of the people working in AI are women. They didn't mention other underrepresented groups; it's only that 30% of the people working in AI, and in STEM to be honest, are women. And in leadership, in STEM and ICT education and careers, even fewer women are working. That's why we should really be emphasizing to the people who are leading AI, or building AI tools, that they should bring a more diverse group to work on it, and not only focus on men or on their own perspective. There can also be hidden biases, because people don't know; they're thinking from their own point of view. So it really will be more effective to add more diverse groups to this.
[00:06:05] Speaker A: Yeah.
Yeah, it's really interesting. So many thoughts. First of all, I was remembering talking to Zach Hendershot about his new plugin that answers or addresses questions with AI, within your plugin or within your website. For example, if your client says, can you make the logo bigger, you have automatic answers that you have taught the AI to use.
My question to him was: could you automate it so that, if that question was asked, it could just do it? Like the movie 2001: A Space Odyssey, where when they say, open the pod bay doors, HAL says, I'm sorry, Dave, I'm afraid I can't do that. This mission is too important for me to allow you to jeopardize it. It's like, wait a minute, you're not real, you're AI, right? Kind of thing.
And then I was also thinking: I did a really quick search while you were talking. I started to type in "movies that warn about AI" and it autofilled. So there are so many movies that have already been made.
The Terminator, Ex Machina, The Matrix, 2001, I, Robot, M3GAN, Blade Runner, Westworld, A.I. the movie, Her from 2013. Some of these go way, way back. The Terminator, I don't even remember what year that was. It was like 1984, maybe even before then. Right.
And even WALL-E has some level of understanding of what that's about and how AI can be really detrimental to society. Now, of course, the movies are going to take everything and run with it as far as they possibly can. But you have to ask yourself: what are we setting up within AI now? What are we allowing it to do that could cause major issues at some point? I think what you brought up today, that AI doesn't necessarily take gender into account or represent gender well, is one of those issues. I also remember hearing that facial recognition software often doesn't recognize non-white people, so that's another AI issue. I think we mentioned that last year sometime. So there's more than just women, of course; I think minorities are also affected by AI.
But when you think about it, so much of human history has been written by men. It really has, right? Maybe up until the last hundred years, where more and more women have had the opportunity to do those things. But it has been led, and STEM has been led, so much by men.
It seems a natural bias, but we need to counter it. The bias has to be undone, sorry, so that we don't have this gender bias going forward in AI.
It’s scary. It really is scary.
[00:09:33] Speaker B: Yeah, it is.
It is scary, because AI is trained on the data that we give to it.
But if the data lacks diversity, if it's mostly male, mostly Western information, the AI will reflect it. You know, I'm not saying AI itself is bad. I'm saying we should work on the data. Sometimes when I search in ChatGPT, it gives me what I feel is wrong information. And I say, no, this is not the right information, and this is the right link. I'll go use Google, grab the link, and give it to it.
And also, yes, most of the people who work in STEM or lead in STEM now are men. So it's really important, when you work on an AI tool for your plugin, for your company, for a hiring platform, for anything, to make sure you build a team from different backgrounds, and not only one similar background or one similar ethnicity. For example, including more women, people of color, and voices from different communities in AI research can really enhance the design, the testing, and the results, and it can overcome this bias. And as you said, facial recognition cannot recognize non-white people, because most of the people who invented it and work on it are white, so they were really focusing on one thing, including in the testing and the auditing. And I really believe the AI tools can learn and do much better. If a company finds out there's a bias, like a hiring assistant filtering out women, or wrong information, or something that can be a little bit biased, then they should fix it. They should really work on making it better.
And yeah, transparency. Because I really don't know; many AI systems, I feel, are like black boxes. You don't know how they make decisions. I know they're getting their information from online, from the data that we provided, that people wrote online.
And I think I would love to see more transparency. When people are building an AI system, they can explain how this AI is making decisions, or how they.
[00:12:22] Speaker A: Are.
[00:12:24] Speaker B: In very clear human terms. And also explain where they are getting the information, because that is really important. If we fix the source of the information, then we can fix how the AI brings us the information, instead of it being a black box where we don't know what's going on or where it's getting things from.
[00:12:42] Speaker A: Yeah, I think part of it, and you said this, is that sometimes we can correct it. Like, I've asked it to make pictures for me, right? Graphics for articles. Or I'll say, what are five reasons for x, y, z, right? And at the end, it's thumbs up or thumbs down, and I give a thumbs up or thumbs down. And that thumbs up or thumbs down is just the bare minimum of how we can train it.
If we actually were to provide feedback, if we were to type in what was wrong with something or why it wasn't quite true, we could actually help it learn better.
But we're so busy, and we have so many demands on us nowadays, that very few people will take the time to provide feedback.
Most of us won't even hit the thumbs up or thumbs down. We'll just abandon it and go Google it ourselves, as opposed to training the AI model to do better for us.
And I will ask my Amazon Echo for information.
Usually it's just the weather, I'm not gonna lie. Like every morning, before I even open my eyes: what's the weather today? And it tells me what the weather is.
But sometimes I’ll ask for other information, like what year an actor was born, or just things like that. And it’ll ask me, Did that answer your question?
I will say yes or no and provide the feedback because verbally it’s much easier to provide feedback than typing something out. But I usually will remember to do that and provide feedback if I’m not in a huge hurry, because I want it to do better.
By the way, on Siri, on my phone, I put a male voice because I want him to serve me and I want him to remember that I’m in charge and women are okay too.
My Siri has a male voice.
[00:14:46] Speaker B: I know why we are friends, because I have the same.
I love it. I don't want a woman's voice; I want a man's voice.
[00:14:54] Speaker A: Exactly.
You also make me wonder, and I haven't looked at all of your research yet, but I wonder if different language inputs get different outputs back, based on the language we're using. So in English, are we getting one answer? I would imagine the big languages, the more universal ones, Spanish, Italian, German, most of the European languages, would probably have similar returns.
But would some of the lesser-spoken languages, like Tagalog and other languages from Asia and the African continent, get the same response in ChatGPT, for example? If we all asked the same question in our own language, are we getting back the same basic answer, or is it serving us different information? I would love to know that as well.
[00:16:04] Speaker B: Same here. I'm gonna do that immediately after our recording, because I would.
[00:16:08] Speaker A: Because you speak seven languages.
[00:16:10] Speaker B: Yeah, I speak Arabic and Dutch, and I want to do the testing to see, to check it out.
It will be interesting because, how can I say, take Dutch, for example. I think only 21, 22 million people speak the language, but the Dutch community is really active online, creating information, even about technology. But if you take Swahili, I think there are more than 200 million people who speak it. But of course, it's a language used in Africa, and many African countries are not at the same level as European countries in technology, in the economy, in the cost of living, and things like that. So I would love to know. I'm just going to do some testing later on.
[00:17:04] Speaker A: Really, if the socioeconomic level of a country is higher or lower than whatever average might look like, are you more or less likely to get an answer at all, and then to get an answer that is equivalent to what you would get in higher-income countries?
So I would imagine that Japanese would serve well because, you know, the Nikkei and all of that. Japan has a very high ranking business-wise, let's say in the business world. So, is Swahili used globally in business? I would argue it probably is not. Is it used within its country, within its language-speaking area, for business? Yes. But does that translate outside of a geographic area? Probably not. Which is why, I would guess, so many global languages, the business languages, let's say, would be much more likely to be similar in their responses than those that are not.
Interesting hypothesis.
[00:18:10] Speaker B: Yeah. Me, I'm going to do the research and let you know. Now my whole brain is thinking about it, searching about it.
[00:18:17] Speaker A: Like, what question should we ask it? Yeah.
[00:18:21] Speaker B: And also, to be honest, to talk about the companies who are creating the tools or working on AI.
How can I say, they will not act unless the government obliges them by law to do something. I think that's when people need to push for laws and ethical standards that require fairness, non-discrimination, and accountability in AI development, because some companies don't care unless they have to. We have an example now: the Accessibility Act in Europe, especially the Netherlands, since 28 June. If your website is not accessible, you will get a fine. And now people are starting to say, oh, we need to work on our accessibility.
We could work on it beforehand, and not wait until it becomes a law or a rule. But at the same time, even if people push a little bit more, really, we should not have to ask for it. AI should not discriminate against people; it should not be biased. Companies should have that mindset: by hiring and having a diverse team, they can work on a tool that serves everyone, and not only the specific small group of people they are targeting.
[00:19:50] Speaker A: You came up with a list of tools and platforms that are trying to help with this.
I'll read the first two and you can read the second two, just to give an idea. IBM AI Fairness 360, which is an open-source toolkit that helps detect and reduce bias in machine learning models. I think that's pretty cool. Then there's Google's What-If Tool. It's a visual interface to test how machine learning models, why is that so hard to say, ML models, behave, helping identify fairness issues without needing to write code. That's pretty cool.
[00:20:26] Speaker B: Yeah.
Another tool is Fairlearn, by Microsoft: a toolkit to assess and improve the fairness of AI systems. It integrates well with Python and Jupyter.
Hugging Face, this is a cute name. Hugging Face, I want to hug your face.
[00:20:46] Speaker A: Yes.
[00:20:48] Speaker B: And their Datasets plus model cards. The model cards include information on potential biases and recommended uses for the AI model. I think this is really amazing. I like those tools. They are working to make sure that the AI tools are less biased, or less discriminatory
toward people, toward users. Yeah.
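The toolkits named above automate exactly this kind of group-fairness audit. As a rough illustration, here is a minimal, self-contained Python sketch of the metric behind "demographic parity", the selection-rate gap between groups; the data and function names are made up for this example and are not code from any of the toolkits:

```python
from collections import defaultdict

def selection_rates(decisions):
    """Compute the positive-outcome (e.g. hiring) rate per group.

    `decisions` is a list of (group, selected) pairs, where
    `selected` is True if the candidate was advanced.
    """
    totals = defaultdict(int)
    selected = defaultdict(int)
    for group, was_selected in decisions:
        totals[group] += 1
        if was_selected:
            selected[group] += 1
    return {g: selected[g] / totals[g] for g in totals}

def demographic_parity_gap(decisions):
    """Difference between the highest and lowest group selection rate.

    A gap near 0 means all groups are selected at similar rates;
    a large gap flags a potential bias worth investigating.
    """
    rates = selection_rates(decisions)
    return max(rates.values()) - min(rates.values())

# Hypothetical audit of a screening model's outputs (made-up data).
audit = [("men", True), ("men", True), ("men", False), ("men", True),
         ("women", True), ("women", False), ("women", False), ("women", False)]
rates = selection_rates(audit)       # {'men': 0.75, 'women': 0.25}
gap = demographic_parity_gap(audit)  # 0.5
```

Libraries like Fairlearn and AI Fairness 360 compute this and many related metrics (equalized odds, disparate impact) and also offer mitigation algorithms, but the core idea is this simple comparison of outcomes across groups.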
[00:21:18] Speaker A: And makes you wonder who’s behind them. Are men or women behind them?
[00:21:22] Speaker B: I’m pretty sure a woman.
Maybe. Maybe. You never know.
[00:21:26] Speaker A: You never know. Sorry. I’m sure that there’s both, but.
But it is interesting to wonder sometimes. And then there’s also organizations and programs that you have a list of here.
I think everybody knows that Samah does our research, so I'm just reading the lovely research that she provides. These are the organizations and programs working on this. This one is out of NYU, New York University.
AI Now Institute researches the social impacts of AI, focusing on inclusion bias and accountability.
And that’s ainowinstitute.org, which I think is a cool name too, by the way.
Data and Society is a think tank that explores how technology impacts society, including algorithmic bias.
You go ahead and take over your list.
[00:22:13] Speaker B: There's the Partnership on AI, a coalition of tech companies and civil society groups working on responsible AI development.
Members include Google, IBM, Mozilla, and Amnesty International.
Black in AI, Women in AI, Queer in AI: grassroots communities that support underrepresented voices in AI, especially at major conferences like NeurIPS. AI4ALL, a nonprofit focused on increasing diversity and inclusion in AI by providing education and mentorship to high school students from underrepresented groups.
Also, there are influential people and advocates, like, and I'm sorry if I pronounce your names wrong, Dr. Timnit Gebru and Deb Raji. Both of them are really working on highlighting those biases and on how to fix that discrimination.
[00:23:20] Speaker A: Yeah, I think that’s wonderful. I am embarrassed that I didn’t even think that it might be an issue.
So I’m very glad that there are people who recognize these kinds of issues that are happening. I’m just barely going about my little life, recognizing some things and not others. So it’s good to know that there are entire organizations that are watching out for these kinds of inequalities and trying to find solutions for them. And that some of them are big companies absolutely helps. That’s pretty cool. And I don’t know how you found this article, but I’m really grateful that you did.
[00:23:56] Speaker B: I was reading about it. Me, I'm obsessed with politics; politics is my jam, and United Nations reports too. I know we don't talk about politics, but I was reading about Rwanda and Congo signing the treaty, and I was finding it very interesting; my husband told me, like, this is really awkward, but anyway. And I was happy about it, hoping they're going to apply it. But that's how I found it. I open UN Women all the time, because they work all around the globe on women's development, and they talk about a lot of things we maybe cannot cover in the podcast because they're not related to technology. There's a lot of amazing work they do with education, with FGM, with a lot of things. And I should also highlight that the article is one year old. I was looking to see whether there was anything newer about it, but sadly, no. And I love that they talked about it one year ago, while a lot of people are only talking about it now. Sometimes it's really important to talk about things before we start.
Yeah, fixing things later is okay, and it's really great to fix them later. But from the beginning, it's really amazing to hear other people's voices when we create tools and plugins and start working on specific things. And yes, even after one year, the problem is still growing, and we should pay more attention to it, and people should work more and more to fix it. Yeah, absolutely.
[00:25:35] Speaker A: That's unwomen.org. I'm looking for a way to sign up; there must be a way to sign up for their newsletter. So I will find that and sign up as well. But I know you're going to put this in the show notes, so people can go back and read the article itself and find more on their website. It's just awesome.
I love seeing things like this, especially if I didn't already know about it. It makes me happy that they already exist; it's not brand new. They've been around for 15 years. That's wonderful.
[00:26:04] Speaker B: Yeah. And also, I love the logo: for ALL women and girls. You know, they are global champions of gender equality, and that is really amazing. There's a lot of organizing, a lot of education. Of course, we always try to talk about it and share it with anyone who would love to learn. But yeah, we need to do a lot of work to fix our world.
[00:26:30] Speaker A: I love, I love that the word all is bolded. It says for all women and girls. It doesn’t say, for CIS women and girls, which is nice too. So I haven’t read about them as much, but I’m going to assume that it includes trans women as well. I can’t imagine.
Yeah. So pretty cool. Thank you, Samah for such a great article and such a great topic.
Yeah, this is a great conversation, and I look forward to learning more now that you have opened my eyes to potential issues. But if you do test the different languages in ChatGPT, I'll be very interested to hear what your findings are.
[00:27:10] Speaker B: For sure. You're gonna hear from me; I'll send you a smiley face, or just a long message being angry about it.
[00:27:19] Speaker A: Exactly, exactly.
Oh, good times. Well, we will come up with, hopefully, just as interesting a topic for next week. Hopefully I won't oversleep, so I'll sound a little more coherent in our conversation.
[00:27:35] Speaker B: Or maybe I could push it one hour later, so you can wake up in your relaxing time. I can do that, you know; I don't have to make it super early for you.
[00:27:45] Speaker A: We can talk about that. I would like to actually, you know, be more productive.
Anyway, we’ll talk, we’ll talk. But we’ll see everybody next time on Underrepresented in Tech. Bye.
[00:27:57] Speaker B: Bye.
[00:28:00] Speaker A: If you’re interested in using our database, joining us as a guest for an episode or just want to say hi, go to underrepresented in tech dot com. See you next week.

Michelle Frechette
Host

Samah Nasr
Host