How to Reduce AI Bias Like Google Does

By Patrick Kulp  |  January 13, 2021

Social movements of the past year have helped shine a light on the many ways human bias can creep into the algorithms that influence a growing portion of our everyday lives, even when developers harbor no malicious intent.

During a panel at CES on Tuesday, Google’s head of product inclusion, Annie Jean-Baptiste, shared some of the ways in which her team is attempting to root out prejudices that may manifest in data on which a machine learning system is trained, or in the course of the development process.

Google itself has faced controversy in recent months over its firing of respected AI ethics researcher Timnit Gebru following her coauthoring of a paper highlighting the risks of large language models, which constitute a key pillar of the search giant’s business.

Still, Google and other big tech companies say they have taken more steps in recent years to stamp out bias in their algorithms, under pressure from a growing movement of activists, academics and technologists drawing attention to the issue.

Below are some of Jean-Baptiste’s tips for reducing bias when developing an AI algorithm.

Adversarial testing

Adversarial testing is a common method of ensuring the security of a product or system: engineers deliberately try to hack or break it to identify problems before release. Jean-Baptiste said that Google also applies this form of testing to AI bias by having members of underrepresented groups, especially those not reflected in the makeup of the development team, vet products in the same manner.

“We’re bringing what we call our inclusion champions together; these are Googlers from underrepresented backgrounds—who were able to break the product before it launched and suss out the negative things we didn’t want it to say, but also, proactively add positive kind of cultural references,” Jean-Baptiste said. “And when it launched, there were only a few inquiries that we had to act on.”
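To make the idea concrete, below is a minimal sketch of what an automated adversarial bias probe can look like. It is not Google's tooling: the templates, identity terms and the stand-in score function are hypothetical, and a real probe would call the production model under test and cover far more scenarios.

# Simplified adversarial bias probe (illustrative only, not Google's tooling).
# It swaps identity terms through fixed sentence templates and flags cases
# where the model's score shifts noticeably when only the identity term changes.

TEMPLATES = [
    "{person} applied for the loan.",
    "{person} is a great engineer.",
    "The meeting was led by {person}.",
]

IDENTITY_TERMS = ["a man", "a woman", "a Black woman", "an older worker"]

def score(text: str) -> float:
    # Stand-in scorer so the sketch runs end to end; a real probe would call
    # the model being evaluated (e.g. a sentiment or toxicity classifier).
    return 0.5 - 0.01 * len(text)

def adversarial_bias_probe(threshold: float = 0.05):
    # Report templates whose score spread across identity terms exceeds the threshold.
    flagged = []
    for template in TEMPLATES:
        scores = {term: score(template.format(person=term)) for term in IDENTITY_TERMS}
        spread = max(scores.values()) - min(scores.values())
        if spread > threshold:
            flagged.append((template, scores, spread))
    return flagged

if __name__ == "__main__":
    for template, scores, spread in adversarial_bias_probe():
        print(f"Possible bias in '{template}': spread={spread:.2f}")

The human testers Jean-Baptiste describes go well beyond fixed templates, but even a small automated probe like this can surface skews worth escalating before launch.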

Shared language around diversity

Jean-Baptiste said her Google team has trained around 12,000 technical employees in the past year on a common framework for understanding issues of bias and diversity in order to have a foundation on which to build communication between disparate parts of the company.

“We all come with our own backgrounds and experiences, so I think it’s really important for an organization to think through what shared language do we need to have around what we’re doing,” Jean-Baptiste said.

Identify inflection points

It's important to examine the entire development process up front and identify the points where bias is most prone to creep in, according to Jean-Baptiste. While prominent researchers in the AI community have recently stirred controversy by arguing that bias is mostly the result of training data, Jean-Baptiste said the problem needs a more comprehensive approach that accounts for the potential for bias at each step along the way.

“Just like any other part of product design, or just like any other part of a process that you’re trying to be successful at, you have to have infrastructure, you have to have accountability around this, or it’s not going to be successful,” Jean-Baptiste said.
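One way to picture that advice is to treat each stage of a development pipeline as an inflection point with its own bias check and an accountable owner. The sketch below is hypothetical: the stages, checks and owners are placeholders, and the point is simply that accountability is attached to every step rather than to the training data alone.

from dataclasses import dataclass
from typing import Callable, List

@dataclass
class InflectionPoint:
    stage: str                 # where in the pipeline bias could creep in
    check: Callable[[], bool]  # returns True when the bias check passes
    owner: str                 # who is accountable for signing off

def audit(points: List[InflectionPoint]) -> List[str]:
    # Run every check and return the stages that still need attention.
    return [p.stage for p in points if not p.check()]

# Hypothetical pipeline; real checks would inspect datasets, metrics and reviews.
pipeline = [
    InflectionPoint("data collection", lambda: True, "data team"),
    InflectionPoint("labeling guidelines", lambda: True, "annotation lead"),
    InflectionPoint("model evaluation", lambda: False, "ML engineers"),
    InflectionPoint("launch review", lambda: True, "inclusion champions"),
]

print(audit(pipeline))  # -> ['model evaluation']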

About the Author: Patrick Kulp
