Artificial Intelligence And Its Potential Power Of Deconstructing Abstract Thoughts

Representational image: Portrait of Félix Fénéon by Paul Signac, 1890

As artificial intelligence approaches the heights of human capability, parallel studies are exploring the reasons behind it. Scientists and AI enthusiasts alike are digging into the uncharted territory of AI's actual intelligence by comparing it to the human mind.

As far as studies of the mind's abstract thought process go, most approaches take either a philosophical or a psychological perspective. However, these perspectives do not fully apply to AI systems, which lack two significant elements: emotions and a human developmental process. Abstract thought stems from both of these, and researchers are now trying to explain this phenomenon through AI. Areas of AI such as deep learning could hold the key to uncovering abstract thinking if their potential is fully realised.

Deep Learning And Abstraction

Dr Cameron Buckner, a cognitive scientist and faculty member at the University of Houston, posits in a recent paper that the exceptional performance of convolutional neural networks is rooted in hierarchical processing of sensory experience, in line with the theory of empiricism. He labels this entire flow 'transformational abstraction'.

“Transformational abstraction iteratively converts sensory-based representations of category exemplars into new formats that are increasingly tolerant to ‘nuisance variation’ in the input. Reflecting upon the way that DCNNs leverage a combination of linear and non-linear processing to efficiently accomplish this feat allows us to understand how the brain is capable of bi-directional travel between exemplars and abstractions, addressing longstanding problems in empiricist philosophy of mind.”
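The linear/non-linear pipeline described above can be sketched in miniature. The following toy 1-D example (with a hand-picked filter, invented here for illustration and not drawn from Dr Buckner's paper) shows how a convolution (linear), a ReLU (non-linear) and max-pooling together yield a representation that is tolerant to a shift in the input, one simple kind of 'nuisance variation':

```python
import numpy as np

def conv1d(x, w):
    """Valid 1-D convolution: the linear step."""
    n = len(x) - len(w) + 1
    return np.array([np.dot(x[i:i + len(w)], w) for i in range(n)])

def relu(x):
    """The non-linear step."""
    return np.maximum(x, 0.0)

def max_pool(x, size):
    """Pooling discards exact position, keeping only the strongest response."""
    return np.array([x[i:i + size].max() for i in range(0, len(x), size)])

# A simple "rising edge" detector and two inputs that differ only by a shift
w = np.array([-1.0, 1.0])
x1 = np.array([0, 0, 1, 1, 0, 0, 0, 0], dtype=float)
x2 = np.array([0, 0, 0, 1, 1, 0, 0, 0], dtype=float)  # same pattern, shifted

r1 = max_pool(relu(conv1d(x1, w)), size=4)
r2 = max_pool(relu(conv1d(x2, w)), size=4)
print(r1, r2)  # both inputs map to the same pooled representation
```

Stacking many such layers is what lets a deep network travel from concrete exemplars toward increasingly abstract, variation-tolerant formats.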

This powerful ability is what makes neural networks an ideal choice for recognising abstractions. In this example, empiricism is merged with the workings of a CNN to establish both a philosophical and a psychological perspective. However, Dr Buckner suggests that CNNs face setbacks along three factors:


  1. A weak unsupervised training model
  2. Cell type in neuroanatomy
  3. Adversarial examples

Unless these are minimised, abstract thought analysis with neural networks is difficult.
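The third setback can be made concrete with a minimal, hypothetical sketch in the style of the fast gradient sign method (the weights and input below are invented for illustration): a small, structured perturbation flips a linear classifier's decision even though the input barely changes.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical trained linear classifier: predict class 1 if sigmoid(w @ x) > 0.5
w = np.array([1.0, -2.0, 3.0, -1.0])
x = np.array([0.5, 0.1, 0.3, 0.4])

# For true label 1, the cross-entropy loss gradient w.r.t. x points along -w,
# so stepping by eps * sign(-w) maximally increases the loss per unit change.
eps = 0.3
x_adv = x - eps * np.sign(w)

print(sigmoid(w @ x))      # confident class 1 on the clean input
print(sigmoid(w @ x_adv))  # decision flipped by a small per-feature change
```

In image classifiers the same effect appears with perturbations far too small for a human to notice, which is precisely why adversarial examples cast doubt on how human-like a network's abstractions really are.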

Only further research can tell whether this area can scale to abstract thinking. Current deep learning has limitations, such as the massive data requirements of complex tasks. On top of this, how deep learning itself works remains poorly understood. Shedding light on this uncertainty might help with deciphering human thought.

A Mathematical Connection

The neural network example is just one side of the equation. What if abstract thoughts could be manipulated mathematically? Mathematicians have long held that the foundations of their subject were conceived largely through intuition. This interlink between mathematics and thinking could also be used to capture abstractions.

Thus, abstract thinking rests not only on a philosophical or psychological perspective but also on a mathematical one. This is part of why abstraction is so challenging in AI.

Experiential Knowledge Plays A Crucial Role

Earlier, we mentioned how abstract thought in humans also arises from emotions and experience in the overall developmental process. As we grow, we accumulate a wealth of experience and react with different emotions. Just as we learn new things, experience too counts as knowledge. This is exactly where AI hits a pitfall: decoding abstract thoughts can go wrong if AI does not take human experience into account alongside knowledge. That is what should differentiate it from a regular computer programme.

Sociologist Brent Cooper succinctly explains how abstraction, as understood in the computer world, should be set apart if AI is to be realistic.

“The whole point is that abstraction is ubiquitous and something that absolutely everyone on the planet does; it's part and parcel of thinking itself. But most people do it poorly, intuitively, and take shortcuts in thinking. Abstraction is explicitly a technique to zoom out to the big picture. Computer scientists, sociologists, and the public all still have a lot to learn from each other in this regard.”

All in all, AI faces many intuition-based challenges on top of its own structural ones. Be it the mathematical link, experiential knowledge or deep learning's opacity, these factors are just the tip of the iceberg of challenges in building a 'thinking' AI.

Abhishek Sharma