Skip to Content

Loading...

Conferences

Experts share how to mitigate bias in A.I.

November 08, 2021 00:00 AM UTC
- Updated November 08, 2021 23:12 PM UTC

An all-woman panel of experts from AI workplaces chime in to discuss.

Transcript
one of the key challenges in ai today is data on algorithmic bias. So can you give us a specific example from your work where bias can creep in and some of the ways you're mitigating that. Yeah, sure. So there's no such thing as a perfectly unbiased algorithms. So I always tell business units the fact that we're pushing you to conduct fairness assessments is not because we think that you've done something wrong or that your product in particular is really problematic instead of something we always want to check for. So whether it be something relatively innocuous, like an auto focus feature that um finds faces or find eyes and um focuses on that. Um anything from that too. You know when we're talking about robotics frequently computer vision is a major part of that. So if you have a self driving car then you need to think about being able to detect pedestrians and ensure that you can detect all sorts of pedestrians and not just um people that are represented um dominantly in your training or test set. And so these are just a couple examples um but it really spans the gamut in terms of the types of use cases we're seeing and the challenge, oh is is business units often understand when you explain to them um what the issue is broadly of bias and no one wants to produce products that are biased but it's actually quite difficult in practice to figure out the right benchmarks for testing for bias, the right techniques to mitigate bias and that's where research really plays a key role because this is a really new space. There's constantly new methods being developed and we really want to be on the cutting edge in terms of the techniques that were employing Margaret. How about you any examples where you've seen bias and in some of the algorithms, what you can talk about? Yeah, that's tricky. I've worked on face recognition and face detection and language models. Um, I suppose I won't speak to language that uh, I suppose I won't speak to computer vision. I'll do language models or something instead. Um, so we find that when we have these large language models training on tons and tons of data, um, a lot of it is sourced from the web. Most of it is sourced from the web where we see a lot of racism and sexism and able ism and ageism, one of the largest corporate used right now, is Largely sourced from Wikipedia, which is primarily written by men, white men between something like 20-30 or so. Um, and single and PhD higher level education, which means that the kind of topics that are covered that are then scraped in training the language models reflect those knowledge bases reflect those backgrounds. So for example, if you try and do a search for black history, you'll be redirected to african american history, which is american centric and not quite understandable about, you know what, you know, the whole history of people who are black. Um, and so it's uh, it's a really key issue when it comes to what the language models regurgitate as a function of the normal skews just in who is talking and who is being scraped on the web as well as the inherent racism and sexism et cetera that gets expressed. Um and you end up coming out and what's generated and what suggested. Yeah, I was looking recently at GPT Three and and there's uh I'm muslim by background and and um there was a lot of research on how D. 23 basically if you start writing the words muslim, it turns into this violence related narrative I thought is very interesting. There's a lot of evidence of that. Yeah.