ML Can’t Solve Everything. Here Are 5 Challenges That It Still Faces

Abhishek Sharma
2018-09-18

We are constantly amazed at the impact machine learning has had on our lives. There is no doubt that ML will reshape industries and job profiles alike. But while it promises much, there are inherent problems at the heart of ML and AI that hold these technologies back. ML can solve a plethora of challenges, yet there remain tasks it cannot handle. This article lists five such problems.

1. Reasoning Power

One area ML has not yet mastered is reasoning, a distinctly human trait. Today’s algorithms are mainly built for specific use cases and remain narrow in applicability. They cannot reason about why a particular method works the way it does, nor ‘introspect’ on their own outcomes.

For instance, an image recognition algorithm may identify apples and oranges in a scene, but it cannot say whether a fruit has gone bad, or explain what makes that fruit an apple rather than an orange. We can describe the learning process mathematically, but the algorithm cannot articulate the properties it has learned, and often neither can we.

In other words, ML algorithms lack the ability to reason beyond their intended application.
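The point can be made concrete with a toy classifier. The sketch below (not from the article; the feature values are invented for illustration) labels fruit by nearest neighbour: it outputs a label, and there is simply no mechanism in it for producing a reason.

```python
from math import dist

# Toy training data: (hue, roundness) feature pairs with labels.
# The feature values are made up purely for illustration.
TRAINING = [
    ((0.95, 0.80), "apple"),
    ((0.90, 0.85), "apple"),
    ((0.10, 0.95), "orange"),
    ((0.15, 0.90), "orange"),
]

def classify(features):
    """Nearest-neighbour classifier: returns a label and nothing else.
    There is no way to ask it *why* the label was chosen, or whether
    the fruit has gone bad -- only a distance computation happens."""
    return min(TRAINING, key=lambda ex: dist(ex[0], features))[1]

print(classify((0.92, 0.82)))  # -> apple, with no accompanying reason
```

However accurate such a model becomes, its output is still only a label; the “why” lives outside the algorithm.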

2. Contextual Limitation

Consider natural language processing (NLP): algorithms learn languages from text and speech data. They may master letters, words, sentences and even syntax, but where they fall short is the context of the language. A classic illustration is philosopher John Searle’s “Chinese room” argument, which holds that a computer program manipulates mere ‘symbols’ without grasping their meaning in context. (You can find the complete information on the Chinese room here).

So ML has no overall sense of the situation. It is limited to rote, symbolic interpretation rather than an understanding of what is actually going on.
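The symbol-only limitation is easy to demonstrate with a bag-of-words representation, which many classic NLP pipelines use. In this minimal sketch (an illustration, not a claim about any particular system), two sentences with opposite meanings become indistinguishable once word order, and with it the context, is discarded.

```python
from collections import Counter

def bag_of_words(sentence):
    """Represent a sentence purely by its symbols (word counts).
    Word order, and with it most of the context, is thrown away."""
    return Counter(sentence.lower().split())

a = bag_of_words("the dog bit the man")
b = bag_of_words("the man bit the dog")
print(a == b)  # True: opposite meanings, identical representations
```

Modern models retain more structure than this, but the underlying criticism stands: the representation is built from symbols, not from an understanding of the situation the sentence describes.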

3. Scalability

Although ML implementations are now deployed at significant scale, everything depends on data and how well it scales. Data is growing at an enormous rate and in many forms, which strongly affects the scalability of an ML project. Algorithms cannot cope with this unless they are constantly updated to handle the new data. Scalability is therefore an area where ML still requires regular human intervention and remains largely unsolved.

In addition, growing data must be handled correctly when shared across an ML platform, which again calls for the kind of knowledge and intuition that current ML apparently lacks.

4. Regulatory Restriction For Data In ML

ML usually needs considerable (in fact, massive) amounts of data for stages such as training and cross-validation. This data sometimes includes private as well as general information, and that is where it gets complicated. Most tech companies keep their data private, and it is precisely this data that is most useful for ML applications. With it comes the risk of misuse, especially in sensitive areas such as medical research and health insurance.

Even when data is anonymised, it can remain vulnerable to re-identification. This is why heavy regulatory rules are imposed on the use of private data.
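How an “anonymised” dataset can stay vulnerable is worth spelling out. The toy sketch below (all records invented) links a medical table with its name column stripped back to a public voter roll through quasi-identifiers such as ZIP code and birth year, the kind of linkage attack that motivates the regulatory rules mentioned above.

```python
# Toy re-identification: join an "anonymised" table to a public one
# on quasi-identifiers. All records here are invented for illustration.

medical = [  # name column stripped before release
    {"zip": "560001", "birth_year": 1985, "diagnosis": "diabetes"},
    {"zip": "560034", "birth_year": 1992, "diagnosis": "asthma"},
]

voter_roll = [  # publicly available, includes names
    {"name": "A. Kumar", "zip": "560001", "birth_year": 1985},
    {"name": "B. Rao",   "zip": "560034", "birth_year": 1992},
]

def reidentify(medical, voter_roll):
    """Match records across the two tables on (zip, birth_year)."""
    matches = {}
    for rec in medical:
        for voter in voter_roll:
            if (voter["zip"], voter["birth_year"]) == (rec["zip"], rec["birth_year"]):
                matches[voter["name"]] = rec["diagnosis"]
    return matches

print(reidentify(medical, voter_roll))
# When each (zip, birth_year) pair is unique, every "anonymous"
# medical record is tied back to a named individual.
```

Removing names alone is clearly not enough; whether the remaining columns can single people out is what regulators scrutinise.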

5. Internal Working Of Deep Learning

This sub-field of ML is largely responsible for today’s AI growth. What was once just a theory has turned out to be the most powerful aspect of ML: deep learning (DL) now powers applications such as voice recognition and image recognition through artificial neural networks.

But the internal workings of DL remain poorly understood. Advanced DL algorithms still baffle researchers in terms of how and why they work so well. The millions of neurons that make up a deep network add abstraction at every layer, making the whole extremely hard to interpret. This is why deep learning is dubbed a ‘black box’: its internal logic is unknown.
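Even a hand-sized network hints at why. In the sketch below (weights chosen by hand for illustration), a three-neuron network computes XOR, yet the weights themselves are just numbers: the comments naming each neuron’s role are ours, not the network’s. A trained network with millions of weights offers no such labels, which is the black box in miniature.

```python
def step(x):
    """Threshold activation: fires (1) when the weighted sum is positive."""
    return 1 if x > 0 else 0

def tiny_net(x1, x2):
    """A 2-2-1 network whose hand-picked weights compute XOR."""
    # Hidden layer: two neurons.
    h1 = step(1.0 * x1 + 1.0 * x2 - 0.5)   # in effect: OR
    h2 = step(1.0 * x1 + 1.0 * x2 - 1.5)   # in effect: AND
    # Output neuron: OR-but-not-AND, i.e. XOR.
    return step(1.0 * h1 - 1.0 * h2 - 0.5)

for inputs in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(inputs, "->", tiny_net(*inputs))
```

Here we can annotate each weight because we set it ourselves; interpreting weights that gradient descent found, at a scale millions of times larger, is the open problem the section describes.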

Conclusion

All of these problems are very challenging for computer scientists and researchers to solve, largely because of the uncertainty involved. If researchers devote more effort to the foundations of ML rather than only to incremental improvements, we may find answers to the unsolved problems listed here. After all, ML should be understood, not merely used.


Copyright Analytics India Magazine Pvt Ltd
