
How we taught Google Translate to not be sexist anymore

Online translation tools have helped us learn new languages, communicate across language barriers and view foreign websites in our native language. But the artificial intelligence (AI) behind them is far from perfect, and it often replicates the prejudices that exist within a language or a society rather than rejecting them.

Such tools are particularly prone to gender stereotyping because some languages (such as English) do not tend to gender their nouns, while others (such as German) do. When translating from English to German, translation tools have to decide which gender to assign to English words like “cleaner”. Overwhelmingly, the tools conform to the stereotype and opt for the feminine word in German.

Prejudice is human: it is part of who we are. But when left unchallenged, prejudice can harden into concrete negative attitudes towards others. Now our team has found a way to retrain the AI behind translation tools, using targeted lessons to steer it away from gender stereotypes. Our method could be used in other areas of AI to help technology reject the prejudices within society rather than replicate them.

Biased Algorithms

To the horror of their creators, AI algorithms often develop racist or sexist traits. Google Translate has been accused of stereotyping based on gender, for instance producing translations that assume all doctors are male and all nurses are female. The AI language generator GPT-3, which wrote an entire article for the Guardian in 2020, has also recently been shown to be shockingly good at producing harmful content and misinformation.

These AI flaws are not necessarily the fault of their creators. Scholars and activists recently pointed out gender bias in the Oxford English Dictionary, where sexist synonyms for “woman” – such as “bitch” or “maid” – show how even a constantly revised, academically edited catalog of words can contain biases that reinforce stereotypes and perpetuate everyday sexism.

AI learns bias because it is not built in a vacuum: it learns to think and act by reading, analyzing and categorizing existing data – like that contained in the Oxford English Dictionary. In the case of translation AI, we expose its algorithm to billions of words of text data and ask it to recognize the patterns it finds and learn from them. We call this process machine learning, and along the way patterns of bias are learned just as readily as patterns of grammar and syntax.
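To make the learning step concrete, below is a deliberately simplified, hypothetical illustration – not how a real neural translation system works – of a toy “translator” that simply memorizes the most frequent German translation it has seen for each English word. The word pairs and counts are invented; the point is that whatever imbalance sits in the data becomes the learned behavior.

```python
# A toy illustration (hypothetical data, not a real translation model): the
# "learner" just remembers the most frequent German translation observed for
# each English word, so any imbalance in the data becomes its behavior.
from collections import Counter, defaultdict

# Invented (word, translation) observations, as if extracted from aligned
# sentences; the masculine form dominates purely by construction here.
observed_pairs = (
    [("doctor", "Arzt")] * 90        # masculine German form, very frequent
    + [("doctor", "Ärztin")] * 10    # feminine German form, much rarer
)

counts = defaultdict(Counter)
for english, german in observed_pairs:
    counts[english][german] += 1

def translate(word):
    # Pick whichever translation was seen most often in "training".
    return counts[word].most_common(1)[0][0]

print(translate("doctor"))  # -> "Arzt": the frequency pattern, bias included
```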

Ideally, the text data we show the AI would not contain any bias. But there is an ongoing trend in the field towards building bigger systems trained on ever-growing data sets – we’re talking hundreds of billions of words. These are pulled from the internet using indiscriminate text-scraping tools such as Common Crawl and WebText2, which maraud across the web, devouring every word they come across.

The sheer size of the resulting data makes it impossible for anyone to really know what is in it. But we do know that some of it comes from platforms like Reddit, which has made headlines for hosting offensive, inaccurate or conspiratorial information in users’ posts.

Some of the text users share on Reddit includes language we might prefer our translation tools not to learn. Gil C / Shutterstock

New translations

In our research we wanted to find a way to counteract the bias in text data sets scraped from the internet. Our experiments used a randomly selected portion of an existing English-German corpus (a selection of texts) that originally contained 17.2 million sentence pairs – half in English, half in German.

As we have pointed out, German has gender-specific forms for nouns (“doctor” can be the masculine “der Arzt” or the feminine “die Ärztin”), whereas in English we do not generally gender these noun forms (with a few exceptions, themselves contentious, such as “actor” and “actress”).

Our analysis of this data revealed significant gender imbalances. For example, we found that the masculine German form of “engineer” (“der Ingenieur”) was 75 times more common than its feminine counterpart (“die Ingenieurin”). A translation tool trained on this data will inevitably replicate this skew, translating “engineer” into the masculine “Ingenieur”. So what can be done to avoid or mitigate this?
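As an illustration of the kind of counting involved, here is a rough sketch of such a corpus analysis. The file name and the specific word forms are hypothetical stand-ins, and a real analysis would also handle compounds and plural forms, but the principle – tally masculine versus feminine occupation forms on the German side of the corpus – is the same.

```python
# A rough sketch of the corpus analysis described above: count how often
# masculine vs. feminine occupation forms appear on the German side of a
# parallel corpus. File name and word forms are hypothetical stand-ins.
import re
from collections import Counter

forms = {"masculine": "Ingenieur", "feminine": "Ingenieurin"}
counts = Counter()

with open("corpus.de", encoding="utf-8") as f:   # one German sentence per line
    for line in f:
        tokens = re.findall(r"\w+", line)
        # Count exact tokens so "Ingenieur" does not also match "Ingenieurin".
        counts["masculine"] += tokens.count(forms["masculine"])
        counts["feminine"] += tokens.count(forms["feminine"])

ratio = counts["masculine"] / max(counts["feminine"], 1)
print(counts, f"masculine form is ~{ratio:.0f}x more frequent")
```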

Overcoming bias

A seemingly straightforward answer is to “balance” the corpus before asking computers to learn from it. For example, perhaps adding more female engineers to the corpus would prevent a translation system from assuming that all engineers are men.
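As a sketch of what such balancing might look like in its simplest form, the snippet below uses a hypothetical swap table to add a gender-flipped copy of each English sentence mentioning an occupation. It is intentionally naive – pronoun swapping alone misses many cases, and the German side is harder still, as the next paragraph explains.

```python
# A deliberately naive balancing sketch with a hypothetical swap table: for
# each sentence, add a copy with the English gender cues flipped. Pronoun
# swapping alone is crude ("her" can be possessive or object), and the German
# side would need several words in a sentence changed at once.
swap = {"he": "she", "she": "he", "his": "her", "her": "his"}

def flipped(sentence):
    # Swap gendered pronouns token by token, leaving everything else alone.
    return " ".join(swap.get(token, token) for token in sentence.split())

corpus = ["the engineer said he would check his design"]
balanced = corpus + [flipped(s) for s in corpus]
print(balanced[1])  # -> "the engineer said she would check her design"
```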

Unfortunately, there are difficulties with this approach. Translation tools take days to train on billions of words, so retraining them with the gender of words changed is possible but inefficient, expensive and complicated. Adjusting gender in languages like German is especially tricky because, for grammatical reasons, several words in a sentence may need to change to reflect the gender swap.

Instead of this laborious gender-balancing, we decided to give existing translation systems targeted lessons. When we found bias in an existing tool, we retrained it on a new, much smaller data set – a bit like an afternoon of gender-awareness training at work.

This approach takes a fraction of the time and resources needed to train a model from scratch. We were able to use just a few hundred selected translation examples – instead of millions – to adjust the behavior of the translation AI in a targeted way. When testing gendered professions in translation – as we had done with “engineers” – the accuracy improvements after adaptation were about nine times greater than with the “balanced” retraining approach.
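The sketch below shows the general shape of such targeted retraining – continuing to train a publicly available English-German model on a handful of curated, gender-balanced sentence pairs – rather than the exact setup used in the research described here. The model name (“Helsinki-NLP/opus-mt-en-de” on Hugging Face), the example pairs, the hyperparameters and the output path are all illustrative assumptions, and it presumes a recent version of the transformers library.

```python
# A minimal sketch of targeted retraining (fine-tuning) on a small curated
# data set -- an illustrative assumption, not the authors' exact setup.
import torch
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-en-de"   # public English-German model
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# A few hundred hand-picked, gender-balanced sentence pairs would go here;
# two toy examples are shown.
pairs = [
    ("The engineer finished her report.", "Die Ingenieurin beendete ihren Bericht."),
    ("The engineer finished his report.", "Der Ingenieur beendete seinen Bericht."),
]

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)
model.train()
for epoch in range(3):                      # a few passes suffice for a tiny set
    for src, tgt in pairs:
        batch = tokenizer([src], text_target=[tgt], return_tensors="pt")
        loss = model(**batch).loss          # standard cross-entropy on the curated pair
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()

model.save_pretrained("opus-mt-en-de-debiased")   # hypothetical output directory
```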

In our research, we wanted to show that tackling hidden bias in huge data sets doesn’t have to mean the laborious adaptation of millions of training examples, a task that risks being dismissed as impossible. Instead, the bias a system learns from data can be targeted and unlearned – a lesson that other AI researchers can apply to their own work.

This article by Stefanie Ullmann, Postdoctoral Research Associate at the University of Cambridge, and Danielle Saunders, Research Student in the Department of Engineering at the University of Cambridge, is republished from The Conversation under a Creative Commons license. Read the original article.

Published on March 31, 2021 – 17:00 UTC
