10 Common Myths About AI and ML Debunked

“AI” and “ML” are hugely popular terms in the mainstream right now. Artificial intelligence and machine learning appear everywhere, from news headlines to the hype around new products and companies. This rising popularity has brought with it a lot of misunderstandings and myths, and the two terms are often, quite mistakenly, used interchangeably, even though machine learning is in fact a subset of artificial intelligence.

Let’s take a look at ten of the most common AI and ML myths and see if they hold up to scrutiny:

1. AI and ML can predict the future 

Yes, but with a major caveat: AI and ML can help you predict outcomes only by extrapolating patterns found in historical data, so the predictions hold up only as long as the future resembles the past.
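
To make this concrete, here is a minimal, purely illustrative sketch (synthetic data, with scikit-learn assumed available): the model “predicts the future” simply by assuming a past trend continues.

```python
# A toy illustration: the model "predicts the future" purely by assuming the
# trend it learned from past data will continue. All numbers are synthetic.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(42)

# The past: 24 months of sales following a roughly linear upward trend.
months = np.arange(24).reshape(-1, 1)
sales = 100 + 5 * months.ravel() + rng.normal(0, 3, size=24)

model = LinearRegression().fit(months, sales)

# The "future": month 25. This forecast is only valid if the historical
# pattern persists; a price shock or a new competitor invalidates it.
print(model.predict([[24]]))
```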

2. AI and ML train computers to think like humans

Artificial conscious thinking is a far-fetched idea, and for now it remains in the realm of sci-fi movies. The AI and ML we have at the moment can give you certain indicators for specific, narrowly scoped problem statements or scenarios.

Now, let’s reframe the statement with a few questions. Is there a way to automate decision-making using AI and ML? Technically, yes: you can write a program that acts on a model’s outputs automatically. Should you? No. So, is it possible to use AI and ML to make decisions as we fundamentally understand them as humans? Again, no.

What you can do is allow AI and ML to generate indicators, which can in turn help humans make better decisions. For example, a model can estimate the likelihood of employee churn, or the likelihood of sales volume changes in response to pricing fluctuations.
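
As a hedged sketch of what such an indicator looks like in code (the features, data, and numbers here are illustrative assumptions, not a recipe), a classifier can output a churn likelihood that a human then acts on:

```python
# A classifier produces a churn likelihood: an indicator. What to do with
# that likelihood remains a human decision. Data and features are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical features per employee: [tenure_years, overtime_hours_per_week]
X = rng.normal(loc=[4.0, 5.0], scale=[2.0, 3.0], size=(200, 2))
# Toy labels: less tenure plus more overtime loosely correlates with churn.
y = (X[:, 1] - X[:, 0] + rng.normal(0, 2, size=200) > 2).astype(int)

clf = LogisticRegression().fit(X, y)

# The model's output is an indicator for a human reviewer, not a decision:
churn_probability = clf.predict_proba([[1.5, 12.0]])[0, 1]
print(f"Estimated churn likelihood: {churn_probability:.0%}")
```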

3. ML models get better over time

Nothing is perpetual or self-sustaining, and the models we build erode over time: they actually get “dumber”, or less accurate, as time passes. For example, when the data distributions in your business processes change, a model trained on yesterday’s data no longer represents today’s reality, and its performance drops. This phenomenon is often called model drift, and it is why models need retraining from time to time.
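
A minimal sketch of what monitoring for this can look like, assuming you keep a labelled sample of recent production data and know the model’s accuracy at deployment time (the 5% tolerance is an illustrative assumption, not a recommendation):

```python
# A minimal drift check, assuming a recent labelled sample of production data
# and the accuracy measured at deployment time are both available.
from sklearn.metrics import accuracy_score

def needs_retraining(model, X_recent, y_recent, baseline_accuracy,
                     tolerance=0.05):
    """Flag the model for retraining when its accuracy on fresh data has
    eroded meaningfully below its accuracy at deployment time."""
    current = accuracy_score(y_recent, model.predict(X_recent))
    return (baseline_accuracy - current) > tolerance
```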

4. AI and ML are very objective

The objectivity of a machine learning model depends on that of its creators and of the data it is trained on. It’s like the age-old relationship between the tool and its maker: the tool is the same, and it is up to the maker whether to make or break something with it.

This is why we see so much discussion online about concepts like responsible AI and ethical AI. At the end of the day, this is not something that comes out of the box with a platform, software package, or program. The responsibility falls on the people who are actually developing, building, and training the models, because their biases will be infused into the models they build.

An essential and proven way for organisations to mitigate the possibility of bias in their machine learning models is to subject those models, and the data behind them, to peer review.
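
One concrete check such a review might run is sketched below, under the assumption that the evaluation data carries a group attribute (the names and toy values are hypothetical): compare the model’s accuracy across groups and investigate large gaps.

```python
# A hedged sketch of one bias check a peer review might run: does the model's
# accuracy differ markedly between groups? A real review would look at more
# than one metric and more than one attribute.
import pandas as pd
from sklearn.metrics import accuracy_score

def accuracy_by_group(y_true, y_pred, group):
    """Return the model's accuracy computed separately for each group value;
    large gaps between groups are a signal worth investigating."""
    df = pd.DataFrame({"y_true": y_true, "y_pred": y_pred, "group": group})
    return {g: accuracy_score(sub["y_true"], sub["y_pred"])
            for g, sub in df.groupby("group")}

# Example usage with toy data:
print(accuracy_by_group([1, 0, 1, 0], [1, 0, 0, 0], ["a", "a", "b", "b"]))
# -> {'a': 1.0, 'b': 0.5}
```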

5. AI and ML are very expensive

When we talk about implementing AI and ML, we’re talking about predicting what’s going to happen next based on the current operational structure of a business. The discussion is really about mitigating business risks.

Any new technology seems expensive at the beginning because of low adoption, but once it is assessed through the right lens, the value becomes obvious. At the end of the day, money saved by preventing losses is money earned, and that is where the value of AI and ML lies.

6. AI and ML have a modular implementation time

No. Every problem is different, so finding the pattern underneath it is a different process every time. Each use case or model needs its own number of tuning and configuration iterations, and its own processing methods.

We can make high-level estimates, though, depending on the kind of business scenario we are trying to handle.

7. More data means more accuracy

This is another thing we hear very often. Although there is a certain amount of truth to this statement, it has also been massively generalised and oversimplified. Let’s take a deeper look.

There is a difference between what is “sufficient” and what is “necessary”. By now, we can all agree that a Machine Learning model looks for patterns in a dataset.

If there is not enough data to reveal a pattern, adding more data helps, but only if the new data carries some variance that the existing data does not. This is the “necessity” aspect of quality data for an ML model.

Once the model has seen enough data to capture the patterns in the dataset, any amount of new data that carries no additional variance (i.e. no new information) is of no help. This is the “sufficiency” aspect of quality data for an ML model.

So we could say, in this case, that more data means more accuracy only if that data is “necessary” data, i.e. data that brings new information.
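
A small, hedged experiment sketch of this point (synthetic data, scikit-learn assumed): duplicating the training set doubles its size but adds no new information, so we would expect the held-out score to barely move.

```python
# Doubling the data without adding variance: the duplicated training set is
# twice as large yet carries exactly the same information, so we expect the
# held-out scores of the two models to be very close. Illustrative only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=400, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Baseline model on the original training data.
base = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# "More data" by pure duplication: no new variance, no new information.
X_dup = np.vstack([X_train, X_train])
y_dup = np.concatenate([y_train, y_train])
dup = LogisticRegression(max_iter=1000).fit(X_dup, y_dup)

print(f"original:   {base.score(X_test, y_test):.3f}")
print(f"duplicated: {dup.score(X_test, y_test):.3f}")
```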

8. “Citizen” data scientists

This term is another example of the oversimplification we see when we talk about AI and ML. An end-to-end data science process involves a lot of interpretation, and there is a degree of subjectivity that always needs to be considered; these subjective calls often make or break decision-making indicators. Enabling citizen data scientists in an organisation without the necessary training and a clear separation of responsibilities can therefore introduce risk and adverse effects.

To give you a very small example, an ETL engineer and an ML engineer both know, technically, how to find and remove records with null values from a table. But only careful consideration tells the ML engineer whether to remove those records or to impute them, and if they are to be imputed, how to impute them.
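
A hedged sketch of that judgement call (the table, columns, and values are hypothetical): both options below are one line of code, but choosing between them requires understanding why the values are missing.

```python
# Two technically easy options for the same nulls; picking the right one
# needs domain context. The table and its values are hypothetical.
import numpy as np
import pandas as pd
from sklearn.impute import SimpleImputer

df = pd.DataFrame({"age": [25, np.nan, 40, 31],
                   "salary": [50_000, 62_000, np.nan, 58_000]})

# Option 1: drop incomplete records. Simple, but it silently discards data
# and can bias the sample if values are not missing at random.
dropped = df.dropna()

# Option 2: impute. Keeps every record, but invents values; the strategy
# (median, mean, model-based, ...) is itself a modelling decision.
imputed = pd.DataFrame(SimpleImputer(strategy="median").fit_transform(df),
                       columns=df.columns)

print(dropped, imputed, sep="\n\n")
```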

This is just one small example among the many scenarios where people without the necessary skills and experience cannot make those calls.

9. Auto-ML is a panacea

Auto-ML is another oversimplified concept. It is a huge and growing area of research in ML engineering, but completely automated machine learning does not exist yet.

The “auto-ML” we have available now can shorten the turnaround time of your experiments: it performs certain repetitive steps automatically and so helps to increase productivity.
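
To ground this, here is a minimal sketch of the kind of repetitive step such tooling automates, using a plain scikit-learn hyperparameter search as a stand-in for dedicated auto-ML products:

```python
# Much of today's auto-ML value lies in automating repetitive search steps
# like this one; framing the problem, choosing features, and validating
# against the business need still fall to humans.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

X, y = load_iris(return_X_y=True)

# Automated search over a small hyperparameter grid saves experiment time.
search = GridSearchCV(RandomForestClassifier(random_state=0),
                      param_grid={"n_estimators": [50, 100],
                                  "max_depth": [3, 5, None]},
                      cv=5)
search.fit(X, y)
print(search.best_params_)
```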

10. Higher accuracy = Better model

Does higher accuracy mean that you have a better model? Not always.

Accuracy can be a mirage, because the business context always has to be kept in mind. Let’s say you are trying to predict fraudulent transactions, and let’s say, again, that only 1% of all transactions in a bank are fraudulent.

Even if you blindly predict that every transaction is non-fraudulent, the model is still 99% accurate. The problem with this is obvious: the business is far more interested in the missed 1% than in the correct 99%.
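
Here is that mirage made concrete with synthetic data: a “model” that blindly calls every transaction legitimate scores about 99% accuracy yet catches zero fraud.

```python
# The 99%-accuracy mirage from the fraud example: a "model" that predicts
# every transaction as legitimate. Synthetic data, purely illustrative.
import numpy as np
from sklearn.metrics import accuracy_score, recall_score

rng = np.random.default_rng(1)
y_true = (rng.random(10_000) < 0.01).astype(int)  # ~1% fraudulent
y_pred = np.zeros_like(y_true)                    # always predict "not fraud"

print(f"accuracy:     {accuracy_score(y_true, y_pred):.1%}")  # ~99%
print(f"fraud recall: {recall_score(y_true, y_pred):.1%}")    # 0%: useless
```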

Often, we also see that chasing the “best” model introduces bias into business processes. So, in the end, it is better to go with a robust, more general model than with one that has high but “blind” accuracy.

We would all do well to remember that “best” and “worst” are nothing but two extremes.

Conclusion

In short, artificial intelligence and machine learning are among the most powerful technologies in use today, but it is essential to understand how best to apply them in business scenarios to get a realistic return on investment. At ClearPeaks, we partner with major corporations to strategise and implement their AI and ML transformations.

Want to know more about how AI and ML can help your organisation? Contact us today at info@clearpeaks.com.

CoolThoughts is a series of articles to discuss evolving topics and spark inspiration.

Advanced Analytics Service

Dhiman D.
dhiman.dey@clearpeaks.com