Myths And Realities In The Quest For Artificial Intelligence


The term Artificial Intelligence (AI) was coined in the mid-1950s. It was used formally for the first time in 1956, at a small workshop at Dartmouth College attended by mathematicians and computer scientists.

Since its inception, AI has had some success in enabling computers to perform, on a limited basis, tasks that are normally done by the human mind. It is only today, however, that the technological aspects of AI have become widely visible. Public interest in AI and its coverage by the media have increased tremendously in recent years, and more and more people see AI as an emerging technology with great potential and future social significance.

Much of this public interest and awareness dates from 1981, when Japan announced its national ten-year plan to develop what it called the “Fifth Generation” of computers. These computers were not only to base their operation on large-scale parallel processing but also to incorporate AI techniques into that processing. The stated goal of the plan was “the creation of artificially intelligent machines that can reason, draw conclusions, make judgments, and even understand the oral and written word.” Since then, governments and private industry in industrialized nations have made large investments in AI research.



The Origin of the Myths

Sensationalism feeds ignorance, and many of the descriptions of AI in the media and in popular science books are sensational in nature. Whether they proclaim the “wonders” or the “dangers” of AI, they are generally uninformative about reality, and highly misleading. They suggest spectacular advances that can or will be made in the immediate future, when many of those advances, if they are possible at all, may come only after decades of research.

An example of such claims is that made in 1958 by Herbert Simon and Allen Newell, both pioneering computer scientists and founders of AI as part of computer science. They wrote that:

“… there are now machines in the world that think, learn, and create. Furthermore, their ability to do these things will rapidly increase until – in the visible future – the range of problems they can handle will be coextensive with the range to which the human mind has been applied.”

Realities behind Artificial Intelligence

John von Neumann, the creator of the computer architecture that is still the international standard today, was one of the first (if not the first) to recognize that computer instructions are merely symbols that can be manipulated by computers in the same way as numbers or any other symbols. Although he is commonly cited as an “example” of a computer pioneer who developed the foundations of automated thinking and AI, in his last published work von Neumann stated that his approach to understanding the nervous system from a mathematical point of view simply had nothing to do with “computers displaying intelligence.”

In reality, von Neumann had a standard answer for anyone who asked him whether computers could think, or be intelligent. His answer was that if the questioner could present an accurate description of what he wanted the computer to do, someone could program the computer to behave in the required way. Whether von Neumann thought there were things within human experience that did not satisfy this criterion, we simply do not know. However, his position that every aspect of nature can be accurately described, and its corollary that all human knowledge can be stated in words, is the central creed that every true believer in the limitless possibilities of AI must hold.

Following this same line of thought, and setting the myths aside, some realities about AI are set out below, with a brief discussion of the areas that have been developed as part of AI research.

One of the most cultivated areas of AI research is “low-level” vision, based on techniques that use parallel hardware and cooperative processing. Such research builds on detailed studies of how images are formed from the three-dimensional characteristics of ambient light (such as the shape, depth, texture and orientation of surfaces), in order to extract high-level knowledge from a given scene. Part of this work is done in the context of human psychology and neurophysiology, and part in a more technological context. Massively parallel machines dedicated to this particular area have been designed, and even so, the greatest advances still depend on hardware.
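To make the idea of “low-level” vision concrete, here is a minimal sketch, in Python, of one of its most basic operations: estimating an edge map from local intensity gradients. The 5x5 image and the Sobel-style kernels are illustrative assumptions for this example, not any particular research system.

```python
# Minimal sketch of a "low-level" vision operation: extracting edges
# from a grayscale image with a Sobel-style convolution. The 5x5
# image below is a made-up example (a bright square on a dark field).
image = [
    [0, 0, 0, 0, 0],
    [0, 9, 9, 9, 0],
    [0, 9, 9, 9, 0],
    [0, 9, 9, 9, 0],
    [0, 0, 0, 0, 0],
]

# Sobel kernels approximate the horizontal and vertical intensity gradient.
GX = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
GY = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def convolve_at(img, kernel, r, c):
    """Apply a 3x3 kernel centred on pixel (r, c)."""
    return sum(
        kernel[i][j] * img[r - 1 + i][c - 1 + j]
        for i in range(3) for j in range(3)
    )

def edge_map(img):
    """Gradient magnitude at each interior pixel; borders are skipped."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            gx = convolve_at(img, GX, r, c)
            gy = convolve_at(img, GY, r, c)
            out[r][c] = abs(gx) + abs(gy)  # cheap magnitude estimate
    return out

edges = edge_map(image)
```

Each pixel of the edge map depends only on a small neighbourhood, which is exactly why this kind of computation maps so naturally onto the parallel hardware discussed above: every pixel could, in principle, be computed by its own processing unit at the same time.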

Another area in which we can expect significant progress is robotics. This includes problems of motion control, trajectory planning, and coordination between sensors and motors (using elements of the work in “low-level” vision). As in the case of vision, projects in this area rely on “artificial” means to ensure success in the proposed activity. For example, welding systems use stripes of light to recognize different types of joints, thus guiding the activity of the parts welder. This area also draws on psychophysiological theories of the motor control and coordination present in living organisms.

Some progress has been made in the natural language processing of sentences and texts. Key points in this research include syntactic parsing, the integration of syntax and semantics, and the understanding of connected text. Machine translation of texts can still benefit from advances in sentence parsing and text analysis.
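As a concrete, and deliberately toy, illustration of what syntactic parsing involves, the sketch below hand-codes a recursive-descent parser for a three-rule grammar. The grammar and lexicon are invented for this example and bear no relation to any real language system.

```python
# Toy illustration of syntactic parsing: a recursive-descent parser for a
# tiny grammar (S -> NP VP, NP -> Det N, VP -> V NP). The lexicon is a
# made-up example, not any real NLP system's vocabulary.
LEXICON = {
    "the": "Det", "a": "Det",
    "robot": "N", "scene": "N",
    "sees": "V", "welds": "V",
}

def parse(tokens):
    """Return a nested (label, ...) parse tree, or None if no full parse."""
    tree, rest = parse_s(tokens)
    return tree if tree and not rest else None

def parse_s(toks):
    # S -> NP VP
    np, rest = parse_np(toks)
    if np is None:
        return None, toks
    vp, rest = parse_vp(rest)
    if vp is None:
        return None, toks
    return ("S", np, vp), rest

def parse_np(toks):
    # NP -> Det N
    if len(toks) >= 2 and LEXICON.get(toks[0]) == "Det" and LEXICON.get(toks[1]) == "N":
        return ("NP", toks[0], toks[1]), toks[2:]
    return None, toks

def parse_vp(toks):
    # VP -> V NP
    if toks and LEXICON.get(toks[0]) == "V":
        np, rest = parse_np(toks[1:])
        if np:
            return ("VP", toks[0], np), rest
    return None, toks

tree = parse("the robot sees a scene".split())
```

A sentence the grammar covers yields a labelled tree; anything outside it yields no parse at all, which hints at why integrating syntax with semantics, rather than relying on rigid rules alone, is a key point of the research described above.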

An extremely important area, and one studied with increasing interest because of recent developments in hardware, deals with the computational properties of large parallel systems. So far, we understand very little about the potential and limitations of such systems. Some work suggests that cooperative processing can have some very surprising properties. The computational properties of parallel systems may not be well understood for some time yet, but experience with these systems in the immediate future will undoubtedly lead to considerable advances.
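The flavour of cooperative processing can be suggested with a small sketch: many simple units, each updating only from its immediate neighbours, nonetheless settle into a global consensus. The ring topology, the averaging rule, and the sequential simulation of the parallel step are all assumptions made for the example.

```python
# Sketch of cooperative processing: a ring of simple units, each of which
# repeatedly replaces its value with the average of its own value and its
# two neighbours'. No unit has a global view, yet the system as a whole
# converges toward consensus. (Real cooperative hardware would update all
# units simultaneously; here the synchronous step is simulated in software.)
def relax_step(values):
    """One synchronous update of every unit from its ring neighbours."""
    n = len(values)
    return [
        (values[(i - 1) % n] + values[i] + values[(i + 1) % n]) / 3
        for i in range(n)
    ]

def relax(values, steps):
    """Iterate the cooperative update a fixed number of times."""
    for _ in range(steps):
        values = relax_step(values)
    return values

# One unit starts with all the "signal"; after relaxation the units agree.
start = [0.0, 0.0, 12.0, 0.0, 0.0, 0.0]
settled = relax(start, 50)
```

The surprising property here is that a global quantity (the average, 2.0) is computed without any unit ever seeing more than its two neighbours, a small instance of the collective behaviour that makes large parallel systems both promising and hard to analyze.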

Impact on other areas

AI will influence other sciences both in its philosophical approach and in its specific theoretical content. It is true that psychology and (to a lesser degree) biology have already been affected by ideas from AI. Contrary to what many people assume, AI has had a humanizing effect on psychology. For example, the behaviorist approach to psychology rejected any reference to the “mind” and “mental processes,” dismissing them as unscientific and mysterious concepts. AI, however, because it is based on the concept of representation, has made these concepts theoretically respectable again.

The influence of AI will be felt especially in the psychology of vision and language, and, as mentioned above, robotics may become involved with the psychophysiology of movement. Research in psychology will, in turn, also influence AI. For example, as psychologists gain a better understanding of the organization of knowledge, their work can be useful for the design of computerized expert systems. Interdisciplinary and cooperative research should be encouraged: the institutional separation of psychology from AI and computing has hampered fruitful collaboration between these two groups.

Copyright Analytics India Magazine Pvt Ltd
