Big Data Could Be a Pillar For BLM (Black Lives Matter) Campaign

We all know about the recent George Floyd episode and his unfortunate death. It was depressing and saddening, to say the least. Of course, this is not the first incident rooted in racism. Such incidents have been happening for hundreds of years.

Photo by Chris Henry on Unsplash

But all we did was acclimatize ourselves to living with it. Yes, we have seen some movements here and there aimed at eradicating this deep-rooted issue, but there have been no concrete solutions to prevent it from happening again.

This Time, It Is Not Going to Be the Same

Do you think I am going to suggest a technological solution that completely obliterates the issue?

No!

In fact, it is the other way around.

Yes, you read that right. Artificial intelligence solutions are going to reinforce the issue through their bias. Don’t believe me? Here is one such example:

Yes, the soap dispenser detected a white hand but not a black hand. Why? Simple: the infrared sensor was never designed to sense darker skin. You could call it a technology problem, but wait, isn’t technology created by humans?

Remember, this did not happen somewhere obscure. It happened right at Facebook’s office, which we think of as one of the front-runners in adopting the latest technologies.

Here is what the person who was on the receiving end posted on his official Twitter account:

As you can see, the post received a lot of attention on social media: 8.8 million views, 2.7K comments, 170.3K shares, and 215.6K likes.

Some people wrongly said that the machine is ‘racist.’ The machine isn’t racist; it is doing exactly what it was built to do. After all, we created it.

If you think this is just a rare, one-off case, look at the following example:

Imagine having to wear a white mask just so a machine can recognize your face. It’s horrible, right?

Absolutely!

This is exactly what happened to Joy. Imagine her plight at the time. She was understandably disappointed, but she also pointed out that something is seriously wrong with current AI algorithms: they tend to reinforce bias rather than improve the situation.

This is just one example. There are many more that genuinely need to be addressed.

Okay, What Do We Need To Do?

Before we can eliminate this bias, we first need to understand what bias is.

Bias develops involuntarily, with no extra effort, everywhere. We usually know these biases as “stereotypes.” Even the best of us sometimes judge people based on stereotypes.

If you think it happens only against black people, then you are completely wrong. We have prejudices against black people, white people, women, men, religions, regions, ethnicities, short people, tall people, vegans, non-vegans, Asians, Westerners, Americans, Indians, Chinese, Brits, rural people, urban people, humans, animals, aliens, etc. I can go on and on and on, and it won’t end.

These biases are natural, and there is nothing inherently wrong with them because we are all different in different ways. But it becomes an issue when you accuse or judge someone based on the bias that you hold.

There is a definite need to combat this issue in society, but human bias is still not as harmful as bias in machine learning or AI, which repeats it automatically and at scale. What you have seen above are some examples of bias in machine learning and AI.

Okay, Why is This Happening?

Is big data biased? 

No!

Machine learning uses algorithms that receive inputs, organize data, and predict outputs within predetermined limits and ranges. Machine learning bias seeps into algorithms in both subtle and unsubtle ways, largely because much of the data on the internet is itself heavily biased. Let us take an example of this bias that was explained in this article.

How Pattern Recognition Affected the Differentiation of Tanks

The article explained how the army trained a machine to differentiate between American tanks and Russian tanks, and how it classified them with 100% accuracy. A closer examination revealed an interesting story behind that 100% accuracy, one that no one was expecting.

The examination disclosed an unbelievable detail about how the photographs had been taken: the American tanks were photographed on sunny days, while the Russian tanks were photographed on cloudy days. The machine was therefore distinguishing American tanks from Russian tanks based on the brightness level of the pictures, not the tanks themselves.

From this, we can see that machine learning latches onto whatever pattern it finds in the data, organizes the data around that pattern, and predicts the output accordingly.
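To make the tank story concrete, here is a minimal, self-contained sketch in Python (using NumPy and scikit-learn, with purely synthetic “photos”) of how a model can score perfectly by learning brightness rather than the subject. This is not the original army experiment; it only illustrates the mechanism.

```python
# A minimal sketch (not the original army experiment) of how a model can
# "succeed" by learning brightness instead of the subject of the photo.
# Assumes NumPy and scikit-learn are installed; all data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_photos(n, tank, sunny):
    """Fake 8x8 'photos': pixel values depend mostly on the weather (brightness)
    and only slightly on which tank is actually pictured."""
    base = 0.8 if sunny else 0.3                      # sunny photos are brighter
    tank_signal = 0.05 if tank == "american" else -0.05
    return base + tank_signal + 0.1 * rng.standard_normal((n, 64))

# Biased training set: American tanks only on sunny days, Russian tanks only on cloudy days.
X_train = np.vstack([make_photos(200, "american", sunny=True),
                     make_photos(200, "russian", sunny=False)])
y_train = np.array([1] * 200 + [0] * 200)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Test photos taken under the same biased conditions: accuracy looks perfect.
X_same = np.vstack([make_photos(100, "american", sunny=True),
                    make_photos(100, "russian", sunny=False)])
y_test = np.array([1] * 100 + [0] * 100)
print("Accuracy with the same weather bias:", clf.score(X_same, y_test))

# Flip the weather and the 'tank detector' collapses: it had learned to detect sunshine.
X_flipped = np.vstack([make_photos(100, "american", sunny=False),
                       make_photos(100, "russian", sunny=True)])
print("Accuracy when the weather is flipped:", clf.score(X_flipped, y_test))
```

The same model that looks flawless under the original conditions collapses the moment the spurious cue (sunshine) no longer lines up with the label.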

The tank story is just one of many crude results produced by machine learning. Coming to the crucial point: according to Olga Russakovsky, there are three root causes of bias in machine learning:

  • Bias in the data
  • Bias in the algorithms
  • Human bias

All three play a major role in reinforcing bias in machine learning. The bias may be unintentional, but it still affects the overall system and strengthens bias in society.

Okay, How Can We Overcome Bias in Machine Learning?

It is not a simple thing to overcome, and it will not be solved overnight. The data on the internet reflects human behavior, with all its biases. You will understand what I mean by this in the following parts of the article.

Diversified Data

Diversified data is one of the most important things to keep in mind when feeding data to the system. If the data contains only one section of people (for instance, white people), then there is a high chance that the machine will favor white people.

Therefore, it is important to diversify the data so that the machine is trained on a variety of skin tones, ethnicities, religions, sexes, and other attributes. That way, you can reduce the bias in the data sets, as the rough sketch below illustrates.
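Here is a small sketch of what auditing a data set for representation might look like, assuming pandas is available. The `skin_tone` column and the counts are hypothetical, used purely for illustration.

```python
# A rough sketch of a data-set "diversity audit", assuming pandas is available.
# The 'skin_tone' column and the counts below are hypothetical, for illustration only.
import pandas as pd

def group_shares(df: pd.DataFrame, column: str) -> pd.Series:
    """Share of the data set that each group contributes."""
    return df[column].value_counts(normalize=True)

def rebalance(df: pd.DataFrame, column: str, seed: int = 0) -> pd.DataFrame:
    """Naively oversample every group up to the size of the largest group."""
    target = df[column].value_counts().max()
    parts = [grp.sample(n=target, replace=True, random_state=seed)
             for _, grp in df.groupby(column)]
    return pd.concat(parts).sample(frac=1, random_state=seed)  # shuffle rows

# Made-up example: one group is heavily over-represented.
df = pd.DataFrame({
    "skin_tone": ["light"] * 900 + ["dark"] * 100,
    "label":     [1] * 450 + [0] * 450 + [1] * 50 + [0] * 50,
})
print(group_shares(df, "skin_tone"))                          # ~0.90 light, ~0.10 dark
print(group_shares(rebalance(df, "skin_tone"), "skin_tone"))  # ~0.50 each
```

Oversampling is only the crudest fix; actually collecting genuinely diverse data is better than duplicating the few examples you already have.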

Weakening the Bias in Algorithms

More often than not, bias in the output results from implicit bias in the algorithms. As I said in the section above, feeding large amounts of similar data lets bias seep into the algorithms.

Algorithms are also a reason biased output results. They tend to find the quickest pattern that achieves their objective, without looking at the larger picture.

This implicit bias in the algorithm needs to be addressed because, in the end, the data drawn from the world is not fair and always carries some sort of bias. There is a definite need to weaken the bias in algorithms, and the sketch below shows one simple way to start measuring it.
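One common, simple way to put a number on this kind of bias is to compare how often a model gives the favorable outcome to each group. The sketch below is a generic illustration, not any specific published method; the arrays and groups are made up.

```python
# A generic sketch of one simple fairness check and one simple mitigation.
# The arrays, groups, and helper names here are illustrative assumptions,
# not a specific published method.
import numpy as np

def demographic_parity_gap(y_pred: np.ndarray, group: np.ndarray) -> float:
    """Absolute difference in positive-prediction rates between two groups."""
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

def group_reweights(y: np.ndarray, group: np.ndarray) -> np.ndarray:
    """Per-sample weights that make every (group, label) cell count equally,
    a simple pre-processing step sometimes used to weaken learned bias."""
    weights = np.ones(len(y), dtype=float)
    cells = [(g, l) for g in np.unique(group) for l in np.unique(y)]
    for g, l in cells:
        mask = (group == g) & (y == l)
        if mask.any():
            weights[mask] = len(y) / (len(cells) * mask.sum())
    return weights

# Made-up predictions: group 1 almost never receives the favorable outcome.
y_pred = np.array([1, 1, 1, 0, 1, 0, 0, 0, 0, 0])
group  = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])
print("Demographic parity gap:", demographic_parity_gap(y_pred, group))  # 0.8

# Weights computed from the training labels can then be passed to a model's
# fit() call (e.g. via sample_weight=...) to dampen the imbalance.
y_labels = np.array([1, 1, 1, 0, 1, 1, 0, 0, 0, 0])
print("Sample weights:", group_reweights(y_labels, group))
```

A gap of zero does not prove a system is fair, but a large gap is a clear warning sign that the algorithm has picked up the imbalance in its data.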

Overcoming Human Bias

Finally, it is always the responsibility of humans to look after the bias problem. Machines are just human creations; they do not know what is right and what is wrong unless we instruct them.

Photo by Christina @ wocintechchat.com on Unsplash

If it weren’t for human bias, George Floyd wouldn’t have died. It is bias that has resulted in hundreds of years of discrimination against some sections of people. Discrimination, bias, partiality, and stereotypes are part of human nature, but they shouldn’t be imparted to our systems in the same way.

Last Words

I know it is impossible to have a fully unbiased human, and without unbiased humans it is impossible to design fully unbiased machine learning. But we can certainly do better than we are doing right now.

The primary goal of technology should be to ease the lives of humans, not to make them harder or to divide people based on bias. As responsible humans, we all share the responsibility to keep the power of machine learning and AI in check.

In doing so, we are leaving a better world for the next generation.

On that note, I hope we will see artificial intelligence development companies build better data sets, better algorithms, and better AI in the future, AI that serves everyone in the same manner and discriminates against no one.

Jayesh Chaubey

Hello there! I'm Jayesh Chaubey, a passionate and dedicated content writer at Infiniticube Services, with a flair for crafting compelling stories and engaging articles. Writing has always been my greatest passion, and I consider myself fortunate to be able to turn my passion into a rewarding career.
