The BIAS of Speech

According to the Oxford Lexicon, bias is defined as “inclination or prejudice for or against one person or group, especially in a way considered to be unfair”, and it is a bigger problem than is often thought. Bias exists especially in modern applications based on Artificial Intelligence. Not every AI application is equally at risk, but those trained on human-generated data are particularly vulnerable to severe bias.

What is AI bias?

The website AI Multiple defines bias in modern AI as “an anomaly in the output of machine learning algorithms, due to the prejudiced assumptions made during the algorithm development process or prejudices in the training data”. Or in plain English: it is the assumption that the “data” generated by our relatively young, mostly male and Western-oriented software developers is the norm, and that it is interchangeable with the data generated by “others”.

Let’s focus on Human Language Technology with an “English-speaking” example: if the recogniser understands me, it understands everyone who speaks English. But we often forget that “our” data, norms and values are not simply valid or true for every English-speaking person, or for any other language for that matter. So, an algorithm trained with this kind of data can perform very well if the users are more or less from the same “group”, but the performance will drop if the users come from a different group. This shift in performance is called bias.
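To make that “shift in performance” concrete: a minimal sketch, assuming a hypothetical evaluation set where each recognition result is labelled with the speaker’s group and whether it was correct. The group names and numbers are invented for illustration.

```python
from collections import defaultdict

# Hypothetical evaluation results: (speaker_group, recognised_correctly)
results = [
    ("native_uk", True), ("native_uk", True), ("native_uk", True),
    ("non_native", True), ("non_native", False), ("non_native", False),
]

# Count totals and correct results per speaker group
totals, correct = defaultdict(int), defaultdict(int)
for group, ok in results:
    totals[group] += 1
    correct[group] += ok

accuracy = {group: correct[group] / totals[group] for group in totals}
print(accuracy)  # {'native_uk': 1.0, 'non_native': 0.33...}

# The bias discussed above is exactly this gap between groups
gap = max(accuracy.values()) - min(accuracy.values())
print(f"Performance gap between groups: {gap:.2f}")
```

The point is not the toy numbers but the habit: always report performance per group, because a single overall accuracy figure hides exactly the shift this article is about.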

Impact of data collection

Modern software development uses more and more AI-based routines in which the main algorithm is trained on “human generated” data. By “Human Generated Data” (HGD) we mean data that is produced by humans and is characteristic of those humans. Think of your face, your voice, the way you walk or sleep, or the books you read.

Often a project starts with a good idea and a (limited) amount of data; data that you often try to get from your own environment. And that is where the risk starts! The first clearly recognisable modern software bias occurred in the recognition of faces. The training and testing group consisted of pictures of young, highly educated (mostly) men. After extensive coding, training and testing, a pretty good result was achieved, and the software went to market! But it became clear that women were recognised less well than men. So, a database with young women was quickly added and the system was retrained. With version 2, both men and women could be recognised. But then it became clear that elderly people and/or people with other skin colours were recognised less well. So, new data were added, and this went on for a long time until the database was a non-discriminating, good representation of all kinds of people.
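A check like this can be done before launch rather than after. A minimal sketch that compares the composition of a training set against the population it is meant to serve; the group labels, counts, target shares and the 50% rule of thumb are all invented for illustration.

```python
from collections import Counter

# Hypothetical demographic labels of a face-recognition training set
training_labels = ["young_male"] * 700 + ["young_female"] * 200 + ["elderly"] * 100

# Hypothetical target shares for the intended user population
target_shares = {"young_male": 0.35, "young_female": 0.35, "elderly": 0.30}

counts = Counter(training_labels)
total = sum(counts.values())
for group, target in target_shares.items():
    actual = counts.get(group, 0) / total
    # Arbitrary rule of thumb: flag groups at less than half their target share
    flag = "UNDER-REPRESENTED" if actual < 0.5 * target else "ok"
    print(f"{group}: {actual:.0%} of data vs {target:.0%} target -> {flag}")
```

Had a check like this been run on the face database above, the missing groups would have shown up as a number before they showed up as complaints.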

Bias with Automatic Speech Recognition

Is there a bias in speech recognition? Unfortunately, yes! It is no different from other AI-based applications that use “Human Generated Data” (HGD). With Automatic Speech Recognition (ASR) and other speech-based projects, the “Bias Law” applies. We trained the recogniser on how and what WE say, and by “WE” we mean: our words, our tone of voice and, of course, our pronunciation. Once Speech Recognition left the laboratories, it started its market introduction as a user-specific application with which we could semi-automatically help certain groups to get something easier, faster and/or cheaper.

But Speech Recognition got better and better, it became popular, and it was used by a growing group of other people. And as the user group expanded, the original assumptions (you speak like me, you say this or that as I do) were increasingly compromised. Five to ten years ago we could still say that we could recognise the “correctly spoken English” of “native English speakers”. Although still true, this claim turns out to be less and less useful. English is the lingua franca of our time and is spoken by a huge variety of people who do not have English as their mother tongue. Of the approximately 1.5 billion people who speak English, fewer than 400 million use it as a first language. That means over 1 billion speak it as a second language, with their own, sometimes distinctive pronunciation.
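In ASR, this shift is usually quantified with the word error rate (WER): the number of word substitutions, insertions and deletions needed to turn the recognised text into the reference transcript, divided by the length of the reference. A minimal sketch that computes WER per accent group; the transcripts and group labels are invented for illustration.

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate via word-level Levenshtein distance."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edits needed to turn hyp[:j] into ref[:i]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,          # deletion
                           dp[i][j - 1] + 1,          # insertion
                           dp[i - 1][j - 1] + cost)   # substitution
    return dp[-1][-1] / max(len(ref), 1)

# Hypothetical transcripts: (accent_group, reference, recogniser output)
samples = [
    ("first_language", "turn the lights off in the hall", "turn the lights off in the hall"),
    ("second_language", "turn the lights off in the hall", "turn the light of in hall"),
]
for group, ref, hyp in samples:
    print(f"{group}: WER = {wer(ref, hyp):.2f}")
```

The same recogniser, the same sentence, a very different error rate: reporting one overall WER would average this difference away.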

Moreover, Speech Recognition is not something you build once and then leave as it is for the next 50 years. Languages always change: new generations pronounce existing words differently, the language itself changes under the influence of neighbouring languages, and through cultural exchanges and second-language speakers: the use of the language by groups who didn’t speak that language before.
Just listen to an interview with a non-native English speaker or to a broadcast from the 1930s. You can usually follow it, but to our ears it sounds different. To keep speech recognition up to date and able to recognise new, young, older, sick or otherwise different speakers of the many varieties of English, and to deliver what they ask for, the speech recogniser must be updated continuously. You need to gather conversations, retrain your modules and release the result. Always keep on training. And once that is done, are you ready? Not quite, because apart from the slowly disappearing bias, we need to focus on the next big step: “understanding what is meant”. But that will be discussed another time.
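What that continuous gather-retrain-release cycle might look like in outline: a toy simulation in which the “model” is reduced to a table of per-group accuracies and retraining simply improves the weakest group. Every function, number and group name here is invented; in reality the evaluate and retrain steps are your own test sets and training pipeline.

```python
# Toy simulation of the gather -> retrain -> release cycle described above.
# "model" is reduced to per-group accuracy; all numbers are invented.

def evaluate_per_group(model):
    return dict(model)  # in reality: run the recogniser on fresh, labelled test sets

def retrain(model, weakest_group):
    # In reality: fine-tune on newly gathered data from the weakest group
    model = dict(model)
    model[weakest_group] = min(1.0, model[weakest_group] + 0.05)
    return model

def update_cycle(model, max_gap=0.05):
    """Keep retraining until no group lags too far behind the best one."""
    while True:
        scores = evaluate_per_group(model)
        gap = max(scores.values()) - min(scores.values())
        print(f"gap = {gap:.2f}  scores = {scores}")
        if gap <= max_gap:
            return model  # bias within tolerance - for now; keep monitoring
        weakest = min(scores, key=scores.get)
        model = retrain(model, weakest)

model = {"first_language": 0.90, "second_language": 0.70, "elderly": 0.65}
update_cycle(model)
```

The loop never really ends in practice: as soon as a new group of users arrives, the gap opens again and the cycle starts over.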

Examples of innovative uses of AI that suffered from bias at launch

Amazon’s biased recruiting tool. With the dream of automating the recruiting process, Amazon started an AI project in 2014. The project was based solely on reviewing job applicants’ resumes and rating applicants using AI-powered algorithms, so that recruiters would not have to spend time on manual resume screening tasks. However, by 2015, Amazon realised that its new AI recruiting system was not rating candidates fairly: it showed bias against women.

In 2016, Microsoft launched “Tay”, a chatbot supposed to chat with teenagers on social media and to have “real” conversations on a lot of different topics. Tay was launched to study language understanding and, more specifically, the language of 19-year-old teenagers. But ill-intentioned people used the bias law to teach Tay inappropriate and inflammatory content, so that its replies became more and more offensive. This example shows how fast AI can switch from good to bad without objective supervision.

How to avoid bias?

Unlike many of my colleagues, I’m not really surprised by these results. After all, you must start with what is available: people of whom you have a profile, a face or a voice. And often these are people who are similar to you, from your direct environment. What goes wrong is the rush to market. Especially with the human generated data you use for training your algorithms, you know that you have to enlarge your dataset, because the data must be a good and honest representation of the people who will use the software. And with the rapid increase of AI-based software in our daily lives, this often means everyone. So, once you have proved that the principle works, you must continue to collect new data from people who belong to different groups and then start the training again.

To go further: Kriti Sharma is a leading global expert in AI and its impact on society and the future. In an inspiring TED Talk she speaks about AI bias, her personal experience, and what she did when she was not taken seriously as a woman in the AI world.
