
ML Can’t Solve Everything. Here Are 5 Challenges That It Still Faces

2018-09-18

Machine learning has made a remarkable impact on our lives, and there is little doubt that it will reshape many industries and job profiles. Yet for all its promise, there are inherent problems at the heart of ML and AI that hold these technologies back. While ML can solve a plethora of challenges, there are still tasks it cannot handle. We list five such problems in this article.

1. Reasoning Power

One area that ML has not yet mastered is reasoning, a distinctly human trait. Today's algorithms are built mainly for specific use cases and are narrow in their applicability. They cannot reflect on why a particular method works the way it does, or ‘introspect’ on their own outcomes.

For instance, an image recognition algorithm that identifies apples and oranges in a scene cannot say whether a particular fruit has gone bad, or explain what makes it an apple rather than an orange. We can describe the learning process mathematically, but neither the algorithm nor we can articulate the innate properties it has latched onto.
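The point can be made with a toy sketch. The classifier below (a nearest-centroid model with invented feature values, not any real fruit recognition system) returns a label and nothing more: it has no way to explain its answer or to judge anything outside its two classes.

```python
# Toy "fruit classifier": nearest centroid on two invented features,
# (redness, roundness). All numbers here are made up for illustration.

CENTROIDS = {
    "apple":  (0.9, 0.8),   # assumed average features for apples
    "orange": (0.5, 0.9),   # assumed average features for oranges
}

def classify(features):
    """Return the nearest class label -- and nothing else.

    The model can say *what* it sees, but carries no notion of
    *why* the fruit is an apple, or whether it has gone bad.
    """
    def dist(label):
        centroid = CENTROIDS[label]
        return sum((f - c) ** 2 for f, c in zip(features, centroid))
    return min(CENTROIDS, key=dist)

print(classify((0.85, 0.75)))  # -> apple
```

However the features are chosen, the output is always just one of the known labels; the "reasoning" is a distance computation the model cannot inspect or explain.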


In other words, ML algorithms lack the ability to reason beyond their intended application.

2. Contextual Limitation

Consider natural language processing (NLP), where algorithms learn languages from text and speech. They may learn letters, words, sentences or even syntax, but where they fall short is the context of the language: algorithms do not understand the context in which language is used. A classic illustration is the “Chinese room” argument by philosopher John Searle, which holds that computer programs merely manipulate ‘symbols’ without grasping the meaning behind them.
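A minimal sketch of this symbol-shuffling: a bag-of-words representation (a standard NLP baseline, simplified here) records only which words occur, so two sentences with opposite meanings collapse to the same representation.

```python
from collections import Counter

def bag_of_words(sentence):
    # Represent a sentence purely by word counts,
    # discarding order and therefore context.
    return Counter(sentence.lower().split())

a = bag_of_words("the dog bit the man")
b = bag_of_words("the man bit the dog")

print(a == b)  # -> True: identical symbols, opposite meanings
```

A model operating on such representations literally cannot distinguish who bit whom, which is the contextual gap the Chinese room argument points at.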

So ML has no overall grasp of a situation. It is limited to pattern-matching over symbols rather than understanding what is actually going on.

3. Scalability

Although ML systems are now deployed on a significant scale, their success depends on data and on how well a project scales with it. Data is growing at an enormous rate and in many forms, which greatly affects the scalability of an ML project. Algorithms cannot cope with this on their own; they must be updated constantly to handle the changing data. This is why ML still regularly requires human intervention as it scales, and the problem remains largely unsolved.

In addition, growing data has to be handled correctly when shared on an ML platform, which again calls for the kind of knowledge and intuition that current ML apparently lacks.

4. Regulatory Restriction For Data In ML

ML usually needs considerable (in fact, massive) amounts of data for stages such as training and cross-validation. That data sometimes mixes private and general information, and this is where things get complicated. Most tech companies keep their data private, and it is precisely this data that is most useful for ML applications. But with it comes the risk of misuse, especially in sensitive areas such as medical research and health insurance.

Even when data is anonymised, it can remain vulnerable to re-identification. This is why heavy regulatory rules are imposed on the use of private data.
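A toy illustration of that vulnerability (all names, fields and values below are invented): records stripped of names can sometimes be re-identified by joining quasi-identifiers such as ZIP code, birth year and sex against a separate public list.

```python
# Hypothetical data, invented for illustration only.
anonymized_records = [
    {"zip": "02138", "birth_year": 1965, "sex": "F", "diagnosis": "flu"},
]
public_roll = [
    {"name": "Alice", "zip": "02138", "birth_year": 1965, "sex": "F"},
    {"name": "Bob",   "zip": "02139", "birth_year": 1970, "sex": "M"},
]

QUASI_IDENTIFIERS = ("zip", "birth_year", "sex")

def reidentify(record, roll):
    # Link an "anonymous" record back to named people whose
    # quasi-identifiers match it exactly.
    return [person["name"] for person in roll
            if all(person[k] == record[k] for k in QUASI_IDENTIFIERS)]

print(reidentify(anonymized_records[0], public_roll))  # -> ['Alice']
```

When the quasi-identifier combination is rare enough, one match is all it takes, which is why regulators treat "anonymised" data with caution.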

5. Internal Working Of Deep Learning

This sub-field of ML is responsible for much of today's AI growth. What was once just theory has turned out to be the most powerful branch of ML. Deep learning (DL) now powers applications such as voice recognition and image recognition through artificial neural networks.

But the internal workings of DL are still poorly understood. Advanced DL models continue to baffle researchers as to how and why they work so well. The millions of neurons that make up a deep network add abstraction at every layer, which is extremely hard to interpret. This is why deep learning is dubbed a ‘black box’: we can see its inputs and outputs, but not how it arrives at them.
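Even a miniature network makes the point. In the sketch below (a toy 3-2-1 network with arbitrary, invented weights), everything the model "knows" is a handful of unlabeled numbers; nothing in them says why a given input maps to a given output, and real networks have millions of such numbers.

```python
import math

# Fixed, arbitrary weights for a toy 3-input, 2-hidden, 1-output network.
W1 = [[0.5, -0.3], [0.1, 0.8], [-0.6, 0.2]]   # input -> hidden
W2 = [0.7, -0.4]                              # hidden -> output

def forward(x):
    # Each layer is just weighted sums plus a nonlinearity. The network's
    # entire "understanding" lives in W1/W2 as raw numbers: inspecting
    # them tells us nothing about *why* an input yields an output.
    hidden = [math.tanh(sum(xi * w for xi, w in zip(x, column)))
              for column in zip(*W1)]
    return math.tanh(sum(h * w for h, w in zip(hidden, W2)))

score = forward([1.0, 0.5, -0.2])
print(score)  # a single unexplained number between -1 and 1
```

Interpretability research tries to open this box, but for large networks the layer-by-layer abstraction remains opaque.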

Conclusion

All of these problems remain very challenging for computer scientists and researchers, largely because of the uncertainty involved. If researchers focus on the foundations of ML rather than only on incremental improvements to the field, we may yet find answers to the unsolved problems listed here. After all, ML should be understood, not merely used.


Abhishek Sharma
I research and cover the latest happenings in data science. My fervent interests are in the latest technology and humor/comedy (an odd combination!). When I'm not busy reading on these subjects, you'll find me watching movies or playing badminton.
