5 Times AI Business Analytics Failed Horribly

When we look back at the history of humanity, it is impossible to overlook the prevalence of war. Whether in ancient conflicts like the Trojan War or more recent events like World War II, war is undeniably a part of humanity. And while past wars were won with weapons and soldiers, the wars of the future will inevitably be fought with information and technology. It is from this understanding that the pursuit of artificial intelligence was born. And while you likely aren't planning on using artificial intelligence to conduct warfare, there are many ways in which AI can impact your everyday life. In this article, we'll dive into 5 times when AI business analytics failed.

WHAT IS AI? WHAT IS AI BUSINESS ANALYTICS?

Put simply, AI, or artificial intelligence, is a computer program designed to perform tasks that normally require human intelligence. This is a gross oversimplification, but it captures the core idea: AI programs are designed to think, act, and make decisions like a human. Basic applications include automating routine jobs and tasks, but the potential of AI is virtually limitless.

Meanwhile, business analytics is the process of gathering meaningful data and then using that data to make tangible improvements to your business. It means using measurable data to make decisions that optimize performance. AI business analytics, then, is simply a combination of the two: an AI program collects and analyzes business data, and strategic decisions are made based on what it finds. In theory, this should result in nearly flawless decision-making, but the reality can be frightening.
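To make the idea concrete, here is a minimal sketch of the pattern in Python, using entirely hypothetical sales figures: a model is fitted to historical data, and its forecast is turned into a business decision. Real systems are far more sophisticated, but the loop is the same.

```python
# Minimal illustration of AI-driven business analytics (hypothetical data):
# fit a trend to historical monthly sales, forecast next month, and use the
# forecast to drive an inventory decision.
import numpy as np

months = np.arange(1, 13)                   # months 1 through 12
sales = np.array([110, 115, 123, 130, 128,  # hypothetical units sold
                  140, 150, 155, 160, 170,
                  175, 185])

# Fit a simple linear trend (the "model" in this toy example).
slope, intercept = np.polyfit(months, sales, deg=1)

# Forecast month 13 and translate the prediction into a decision.
forecast = slope * 13 + intercept
safety_stock = 0.10 * forecast              # 10% buffer, an arbitrary policy
order_quantity = round(forecast + safety_stock)

print(f"Forecast for month 13: {forecast:.0f} units")
print(f"Recommended order quantity: {order_quantity} units")
```

Notice the assumption baked into even this toy version: the future will look like the past. When that assumption breaks, the decisions break with it. Here are 5 times AI business analytics failed horribly: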

NOBODY, NOT EVEN AI, COULD PREDICT COVID-19

While it may be slightly unfair to pin this one on artificial intelligence, the reality is that AI was not ready for a global pandemic. Major corporations and small businesses alike lost hundreds of millions of dollars as a result of decisions made prior to the pandemic, and many of them were using AI software to guide decisions on inventory, marketing, and more. Those forecasting models were trained on historical data, and when consumer behavior changed overnight, their predictions became worthless.

While it remains to be seen how humans will adjust to the threat of a global pandemic, Covid-19 proved that AI was not infallible. Because AI-driven forecasts could not anticipate Covid-19, many people lost their businesses.

AI MISCALCULATES, ZILLOW SLASHES EMPLOYEE BUDGET

In 2021, Zillow announced that it would be cutting approximately 2,000 employees, roughly a quarter of its workforce. The company announced the decision after revealing that its AI-driven home-buying program, known as "Zillow Offers," had grossly miscalculated home values and resale revenue. As a result, the company bought well over 25,000 homes but was only able to sell 17,000. The pricing model had an unfortunately large margin of error, and the company overspent massively on its purchases. While the ultimate blame lies with the higher-ups at Zillow, this also serves as another example of AI business analytics failing.
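A back-of-the-envelope sketch shows how a small, systematic pricing error becomes catastrophic at this scale. The figures below are purely hypothetical, not Zillow's actual numbers; they only illustrate the arithmetic of the failure mode.

```python
# Toy illustration (hypothetical numbers): how a modest systematic
# overestimate compounds across a large home-buying portfolio.
homes_bought = 25_000
avg_purchase_price = 350_000   # assumed average purchase price, in USD
overestimate_rate = 0.05       # assume the model overpays by 5% on average

# If every home was bought at a 5% premium over what it can resell for,
# each sale loses roughly that premium.
loss_per_home = avg_purchase_price * overestimate_rate
total_loss = loss_per_home * homes_bought

print(f"Loss per home: ${loss_per_home:,.0f}")     # $17,500
print(f"Portfolio-wide loss: ${total_loss:,.0f}")  # $437,500,000
```

Even a single-digit error rate, harmless on one transaction, turns into hundreds of millions of dollars when an algorithm executes it thousands of times without a human checking its work.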

AI HEALTHCARE PROGRAM SHOWS BIAS AGAINST BLACK PEOPLE

In 2019, researchers revealed that insurance companies and hospitals had been using a prediction algorithm to identify patients in need of "high-risk care," which essentially meant assigning nurses and other medical professionals to individuals at risk of serious medical problems. On the surface, this was a great idea to help get health care and treatment to those in need. However, the program recommended white patients at a much higher rate than Black patients. The discrepancy stemmed from the algorithm's design: it used past healthcare spending as a proxy for medical need, and because Black patients historically incur lower costs at the same level of illness, the model scored them as healthier than they actually were. It is a scary example of a supposedly unbiased computer system reproducing human biases.
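The proxy problem is easier to see with toy numbers. The sketch below uses entirely synthetic data and a made-up cost function; it is not the real model, just a demonstration of how ranking by a cost proxy can invert the ranking by actual need.

```python
# Toy demonstration (synthetic data) of proxy-variable bias: ranking patients
# by past healthcare cost instead of actual need penalizes a group that
# incurs lower costs at the same level of illness. Purely illustrative.
patients = [
    # (name, true_need on a 0-10 scale, group) -- all values hypothetical
    ("A", 8, "group_1"),
    ("B", 8, "group_2"),
    ("C", 6, "group_1"),
    ("D", 6, "group_2"),
]

def past_cost(true_need, group):
    """Assume group_2 incurs ~30% lower cost at the same level of need,
    e.g. due to unequal access to care."""
    base = true_need * 1_000
    return base * (0.7 if group == "group_2" else 1.0)

# Rank by the cost proxy, as the flawed algorithm effectively did.
ranked = sorted(patients, key=lambda p: past_cost(p[1], p[2]), reverse=True)
for name, need, group in ranked:
    print(f"{name}: need={need}, group={group}, "
          f"cost_proxy={past_cost(need, group):,.0f}")
# Output order: A (8,000), C (6,000), B (5,600), D (4,200) -- patient B
# needs more care than patient C but ranks below them due to the proxy.
```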

AMAZON SCRAPS AI HIRING PROGRAM

In 2014, Amazon began building an AI-driven hiring and recruitment system. The project promised to automatically screen large volumes of job applications and surface the best candidates. Yet in 2018, the company scrapped the project after finding that it systematically favored male candidates. The algorithm had been trained on previously submitted job applications, and because most of those applications came from men, the model learned to penalize résumés that signaled a candidate was a woman, effectively pushing female candidates out of its recommendations. While Amazon claims recruiters never relied on the program to make hiring decisions, the company abandoned the system entirely, citing too much risk of discrimination.
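The underlying mechanism, a model inheriting the skew of its training data, can be shown in a few lines. The sketch below is a deliberately crude, hypothetical word-scoring model, not Amazon's actual system; it only demonstrates how group-correlated words in skewed historical data become penalties.

```python
# Toy sketch (synthetic data) of how a résumé model trained on historically
# skewed hires learns gendered proxy features.
from collections import Counter

# Hypothetical history: past résumés and whether the person was hired.
# Hires skew heavily toward one group, so group-correlated words become
# correlated with the "hired" label.
history = [
    ("captain chess club software", True),
    ("software engineering lead", True),
    ("software developer mentor", True),
    ("captain womens chess club software", False),
    ("womens coding society software", False),
]

hired_words = Counter()
rejected_words = Counter()
for text, hired in history:
    (hired_words if hired else rejected_words).update(text.split())

def score(resume):
    """Crude word-association score learned from the skewed history."""
    return sum(hired_words[w] - rejected_words[w] for w in resume.split())

# Two equally qualified candidates; one résumé contains the proxy word.
print(score("software engineering captain chess club"))         # 2
print(score("software engineering captain womens chess club"))  # 0
```

The model never sees a "gender" field; it simply learns that words correlated with female candidates were rarer among past hires, and penalizes them accordingly.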

MICROSOFT RELEASES A RACIST CHATBOT

Our final story of AI failure comes from the tech giant Microsoft. In 2016, the company released a chatbot named Tay on the social media platform Twitter. The goal of the chatbot was to learn and adapt to human language by studying tweets and interacting with users.

That goal quickly fell apart: within hours, the chatbot was spewing racist and xenophobic slurs, and Microsoft pulled it offline in less than a day. Much of the blame falls on the Twitter users who deliberately fed it hateful material, but the episode still shows how a failure of oversight can lead to damaging consequences.
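The design flaw is easy to reproduce in miniature: a bot that learns from raw user input, with no moderation layer between input and learning, will absorb whatever users feed it. The sketch below is hypothetical, with a placeholder blocklist standing in for a real moderation system.

```python
# Minimal sketch (hypothetical) of the failure mode: a chatbot that learns
# phrases from raw user input will echo whatever users feed it.
learned_phrases = []

BLOCKLIST = {"slur1", "slur2"}  # placeholder for a real moderation filter

def learn(message, moderate=True):
    """Store a user message for later reuse, optionally filtering it first."""
    if moderate and any(word in BLOCKLIST for word in message.lower().split()):
        return  # drop toxic input instead of learning from it
    learned_phrases.append(message)

def reply():
    """Parrot back the most recently learned phrase."""
    return learned_phrases[-1] if learned_phrases else "hello!"

learn("have a nice day")
learn("slur1 slur2", moderate=False)  # unmoderated learning absorbs abuse
print(reply())                        # the bot now repeats the toxic input
```

A real moderation layer is far more sophisticated than a word blocklist, but the structural point stands: filtering has to happen before the model learns, not after it has already gone public.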

Source: Tech Report
