[Image: a woman teaching a high-tech class]

Artificial Intelligence, Machine Learning, and Women: Broken But Not Hopeless

March is Women’s History Month in the United States. March 8 is International Women’s Day. March has been a great time for an overwhelming number of #womenintech posts and memes!

IFM is run by two women who, between them, have over 50 years of experience in tech, but we don’t really want to focus on tooting our own ♀ horn. Instead, we want to examine some important aspects of technology that particularly (but not exclusively!) impact people who identify as female. Earlier this month, we wrote about digital personas and how they influence what you see (and what people learn about you) online. In this post, we’re going to take a different look at how a pair of specific and immensely powerful technologies influence everyone online, with particular implications for women and minorities: Artificial Intelligence (AI) and Machine Learning (ML).

AI and ML

But first, what are AI and ML? AI is short for artificial intelligence. It’s another way of saying “super-smart algorithms.” If I asked you to look at two sets of data that describe two personas, you could compare them in your head and draw conclusions. (That’s because you are a smart human, and that kind of analysis is literally what humans have been designed to do over eons of evolution.) However, if I put 3 million of those personas in front of you and asked for real-time analysis… yeah, not so much. That’s what computers are for! And that is where AI shines. AI can index and analyze all that data at lightning speed and then decide, based on the options in its code, what to do next.

But AI by its lonesome has limitations. If it finds some personas that don’t match anything in its algorithm, then we have a game of “stump the chump” with a computer. Which is fun, granted, but people spending Big Money on Big Data generally aren’t amused. That’s where Machine Learning comes in. Machine Learning takes the options in the code “under advisement,” looks at the data that doesn’t match, and says “awesomesauce! Let’s just adapt this code a bit, shall we?”

It sounds like magic, but honestly, it’s just really complicated decision trees. Does the data fit in A? No? Then go to B. Does it fit in B? Yes! Great, go to B+1. Say that 10 million times fast. 
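To make that concrete, here is a minimal sketch of the idea in Python. This is a hypothetical toy, not any real system: `classify` walks a chain of rules like the A/B decision tree above, and `classify_and_adapt` adds the “machine learning” twist, inventing a new bucket when nothing matches instead of playing stump-the-chump. All the names and rules here are made up for illustration.

```python
def classify(persona, rules):
    """Walk the rules in order; return the first label that matches."""
    for label, test in rules:
        if test(persona):
            return label
    return None  # "stump the chump": nothing in the tree fits


def classify_and_adapt(persona, rules):
    """Like classify(), but when no rule matches, add a new rule on the fly."""
    label = classify(persona, rules)
    if label is None:
        # "Adapt this code a bit": create a new bucket keyed to this persona.
        label = f"new-bucket-{len(rules)}"
        rules.append((label, lambda p, seen=persona: p == seen))
    return label


# Two toy rules: "Does the data fit in A? No? Then go to B."
rules = [
    ("A", lambda p: p.get("age", 0) < 30),
    ("B", lambda p: p.get("region") == "EU"),
]

print(classify({"age": 25}, rules))                            # fits bucket A
print(classify_and_adapt({"age": 50, "region": "US"}, rules))  # no match: adapts
print(len(rules))                                              # the tree grew
```

Real ML systems adapt statistical model weights rather than literally appending rules, but the shape of the loop is the same: check the branches, and when the data doesn’t fit, change the tree.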

AI, ML, and Other People – the Downsides

With the sheer amount of data online, we wouldn’t have a functioning Internet if it weren’t for the ongoing evolution of AI and ML. Think about it: how can search engines do their thing? How do big web commerce sites know to offer me particular recommendations? How do voice-recognition systems recognize so many different voices and accents? How can systems do facial recognition with so many faces out there?

AI and ML are absolutely critical, no two ways about it, but they share a particular problem: AI and ML and all their children start with a human. Statistically speaking, that human is probably male. Someone(s) had to write that code, and in writing that code, they cannot help but introduce some bias into the system they are designing. For example, the Gender Shades audit from the MIT Media Lab showed that commercial facial-recognition algorithms consistently had the lowest accuracy for dark-skinned women and the highest accuracy for light-skinned men. In an ideal world, the developer is a member of a highly diverse team where biases are quickly identified and dealt with in the code before anything goes live. I like that world. I want to live there.

Unfortunately, that’s not the world we currently live in. I can point to a gazillion articles that say women and minorities are not well represented in tech (keeping the focus on AI and ML, I really like this article from Wired). If biases aren’t dealt with in the development phase, we are going to see all sorts of problems. In fact, we DO see all sorts of problems, in healthcare, in criminal justice, in hiring systems… the list goes on. I can even point to how algorithms can drive individuals toward extremist views.

Mitigating the Bias in AI/ML

All that sounds pretty dire. Is the value of having speedy search engines enough to justify the societal costs of biased AI? That’s the question, isn’t it? If there were no way to address bias in AI and ML systems, then I’d probably say no, it’s not worth it. But, if it all worked as designed… Medical treatments could happen at the very earliest moments, when interventions would be most useful. People could get the support they need before they make life choices that will get them arrested. Hiring systems would actually be neutral and fair, and not solely dependent on human judgment (still a little dependent, though). Heck, AI could even start generating its own datasets!

It’s not like we don’t know that bias in these systems is a problem. There is quite a bit of literature out there on how to handle it, too. Do a web search on “how to prevent bias in AI”. I personally can’t decide which article to read first: the one that promises three ways, four ways, six ways, or seven ways to reduce or mitigate bias in AI systems. And this is just the popular content out there. Going to a site like Google Scholar will net you properly researched studies from real data scientists on the topic. 

Even the Organisation for Economic Co-operation and Development (more commonly known as the OECD; think high-powered, treaty-based, international organization) has guidance for how AI should be designed. No one is required to follow those guidelines, but they’re a good place to start. Microsoft, a company that does quite a bit with AI, has some pretty extensive guidelines and governance as well. So there is hope, and some established guidelines out there.

What it Means for You

So, what does this mean for women (and everyone else)? Well, since I’m not Queen of the Universe (if I were, I would wave my magic wand and get equal representation in the fields of automation, AI/ML, and all of tech for that matter), I’ll just say: be aware that this is a thing. Know that the data you put into a system, be it a streaming service, search engine, or social media site, will automatically influence what you’re going to see, in ways that might be really hard to stop. You (or your doctor) are going to be shown curated material based on what the super-smart algorithm and its buddy, machine learning, think you want (or need) to see. Maybe it’s right, maybe it’s wrong, but what you’re being shown is only a small slice of the information pie, and it’s based on what a computer thinks you want to see. Your medical care provider might not even realize that they are relying on AI to help them be more efficient in how they handle patients, and that information might be right (or not) as well.

Just because the computer told you so doesn’t mean it will always be correct. Feel free to question what you’re seeing, or to feed it more data so that the music selection on your streaming service is exactly what you want.

Posted by heather
