
Are AI Vendors Shortchanging Black Lives?

Artificial intelligence often doesn’t work the same for Black people as it does for white people. Sometimes the difference is one of user experience, as when voice assistants struggle to understand Black voices. In other cases, such as when cancer detection systems don’t account for race, the issue is life and death.

So who is to blame?

If we deliberately set aside malicious uses of AI software – like facial recognition and crime prediction systems built for law enforcement – we can assume the problem is baked in from the start.

When we think about bias in AI, we’re usually reminded of incidents like the Google algorithm that mislabeled pictures of Black people as gorillas, or Amazon’s facial recognition system that misidentified several sitting Black members of the US Congress as criminals.

But bias isn’t just overtly racist ideas hidden in an algorithm. It usually manifests unintentionally. It’s safe to assume that, barring sabotage, the employees in Amazon’s AI division aren’t trying to build racist facial recognition software. Yet they did, and it took the company’s leadership far too long to admit it.

Amazon argues that its software performs the same for all faces when users set the correct confidence threshold. Unfortunately, the higher the threshold a face recognition system uses, the less likely it is to match a face captured in the wild to a face in its database.

Cops use these systems at a threshold low enough to get a hit when they scan a face, even if that means setting it below the parameters Amazon has verified as delivering a minimum acceptable level of accuracy.
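To make the trade-off concrete, here’s a minimal sketch of threshold-based face matching. Everything in it – the embeddings, the names, the 0.80 and 0.99 thresholds – is a hypothetical illustration, not Amazon’s actual system:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two face embeddings, in [-1, 1]."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def search_database(probe, database, threshold):
    """Return every enrolled identity whose embedding clears the threshold.

    Lowering `threshold` yields more hits -- and more false matches.
    """
    return [name for name, emb in database.items()
            if cosine_similarity(probe, emb) >= threshold]

# Toy data: a noisy street-camera capture of an innocent bystander.
rng = np.random.default_rng(0)
db = {
    "suspect_a": rng.normal(size=128),    # mugshot embedding (hypothetical)
    "bystander_b": rng.normal(size=128),  # driver's-license embedding
}
probe = db["bystander_b"] + rng.normal(scale=0.5, size=128)

print(search_database(probe, db, threshold=0.99))  # strict: typically []
print(search_database(probe, db, threshold=0.80))  # lax: ['bystander_b']
```

The point isn’t the specific numbers; it’s that the operator picks the threshold, and every notch downward buys more “hits” at the cost of more innocent matches.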

But we already knew facial recognition was inherently biased against Black faces. And we know that cops in the US and other countries are still using it, which means our governments are funding the research on the front end and buying the product on the back end.

This means that, under the current status quo, false arrests of Black people are treated in practice as an acceptable risk, as long as the system also produces some valid ones. That’s a shitty business model.

Basically, the rules of doing global business dictate that you cannot build a car that’s been proven less safe for Black people. Yet you can ship a car whose computer vision system detects Black pedestrians less reliably than white ones, and regulators will turn a blind eye.

The question is why? And the answer is simple: because it makes money.

Even when everyone in the loop has good intentions, bias can inexorably manifest in almost any AI project that deals with data about people.

Google and other companies have rolled out AI-powered mammography screening systems that don’t work as well on Black patients as they do on white ones. Think about that for a second.

The developers, doctors, and researchers who worked on these systems almost certainly did so in the best interests of their customers, patients, and the public. Suffice it to say, we all really hate cancer. But the technology still works better for white people.

And that’s because the threshold for commercialization in the artificial intelligence community is way too low. We need to invest heavily in cancer research, but we don’t need to commercialize biased AI: research and business are two different things.

The doctor using a cancer screening system has to trust the marketing and sales team of the company selling it. The sales and marketing team has to trust the management team. The management team has to take the development team’s word. The development team has to trust, in good faith, that the research team accounted for bias. And the research team has to assume that the company it bought its datasets from (or the public dataset it downloaded) drew on diverse sources.

And, thanks to the privacy issues that come with human data, no one can typically verify any of it.

There are exceptions. Occasionally you can trace a dataset back to real people and see exactly how diverse the training data really is. But here’s the problem: those verifiable datasets are almost always too small to train a system robust enough to recognize, say, the demographic nuances of cancer distribution or how to distinguish shadows from facial features on Black faces.
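For contrast, here’s a minimal sketch of the kind of demographic audit a fully traceable dataset would permit. The CSV layout and the self-reported `race` column are assumptions for illustration, not any real vendor’s format:

```python
from collections import Counter
import csv

def demographic_breakdown(path, column="race"):
    """Share of each demographic group in a labeled training manifest."""
    with open(path, newline="") as f:
        counts = Counter(row[column] for row in csv.DictReader(f))
    total = sum(counts.values())
    return {group: n / total for group, n in counts.items()}

# A result like {'white': 0.81, 'black': 0.06, ...} would flag
# under-representation long before the model reached a clinic or a squad car.
```

An audit this simple only works when the manifest exists and carries demographic labels – which, as noted above, it almost never does at the scale commercial systems are trained at.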

For example, when the FDA decides whether an AI system’s use is ethical, companies need only submit small-batch studies showing that the software works – not evidence of how diverse the data used to train it was.

Any AI team worth its salt can produce a demo showing its product working under ideal circumstances. Then all it has to do is back the demo with prior peer-reviewed results (where other researchers used the same dataset to arrive at the same conclusions). In many cases, the developers themselves have no idea what’s actually in the datasets beyond what they’ve been told – let alone the regulators.

In my experience as an AI journalist – someone who has covered tens of thousands of stories – the vast majority of commercial AI outfits claim to check for bias. Yet hardly an hour goes by without a social media company, big tech firm, or government having to admit that it somehow deployed racially biased algorithms and is working to fix the problem.

But they aren’t fixing it. Because the problem is that these companies have commercialized a product that works better for white people than for Black people.

From conception to production, everyone involved in bringing an AI product to life may well be focused on building something for the common good. But the moment someone decides to sell, buy, or use an AI system for non-research purposes knowing it works better for one race than another, they have decided there is an acceptable amount of racial bias. That is the definition of systemic racism born of racial privilege.

But what is the real harm?

When one class or race of people is treated better than others because of its privilege, an unjust economy emerges. In other words, when the bar for commercial adoption is “it works for white people, even if not for Black people,” and it is easier to build biased systems than unbiased ones, it becomes more lucrative to build systems that don’t work well for Black people than to build systems that work equally well for everyone. That is the current state of commercial artificial intelligence.

And it will stay that way as long as VCs, big tech, and governments keep the bar for commercialization that low. Until things change, they are effectively shortchanging Black lives by profiting from systems that work better for white people.

Published on February 11, 2021 – 19:14 UTC
