
When John von Neumann said, “Kurt Gödel’s achievement in modern logic is singular and monumental—indeed it is more than a monument, it is a landmark which will remain visible far in space and time. … The subject of logic has certainly completely changed its nature and possibilities with Gödel’s achievement,” little did he know that Gödel’s work would have implications for the development of general artificial intelligence.

A Gödel machine, developed by AI researcher Jürgen Schmidhuber, is inspired by Gödel’s observations on self-reference and the limitations of mathematical proofs.

Gödel’s incompleteness theorems are two theorems of mathematical logic that demonstrate the inherent limitations of every formal axiomatic system capable of modelling basic arithmetic. These results, published by Kurt Gödel in 1931, are important both in mathematical logic and in the philosophy of mathematics.

The Gödel machine was proposed by Jürgen Schmidhuber back in 2003 to make a case for superintelligence in machines, in what is now considered general AI.

The first self-change will be executed only if it is provably useful (in the sense of the present utility function) for all future self-changes (for which the present self-change is setting the stage). This is actually the main point of the whole self-referential set-up. Gödel machines can be implemented on traditional computers.

The Gödel machine implements a meta-learning behaviour, and no matter how many meta-levels there are in the process, they are collapsed into one. “Any proof of a target theorem automatically proves that the corresponding self-modification is a useful basis for all further self-modifications affected by the present one, in recursive fashion,” said Jürgen Schmidhuber in his paper.
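The accept-only-provably-useful-rewrites loop described above can be caricatured in a few lines of Python. This is only an illustrative toy, not Schmidhuber’s formalism: a real Gödel machine searches for formal proofs about its own code, whereas here the “proof” is replaced by a directly checkable utility comparison, and all names (`utility`, `godel_machine_step`) are invented for the sketch.

```python
# Toy sketch of the Gödel-machine control flow (illustrative only):
# commit to a self-rewrite only once it is shown to improve the
# machine's utility function; otherwise keep the current code.

def utility(policy, problems):
    """Toy utility: number of test problems the current policy solves."""
    return sum(1 for p in problems if policy(p))

def godel_machine_step(policy, candidate_rewrites, problems):
    """Accept the first candidate self-rewrite that demonstrably
    (standing in for 'provably') improves on the current policy."""
    current = utility(policy, problems)
    for rewrite in candidate_rewrites:
        new_policy = rewrite(policy)
        if utility(new_policy, problems) > current:  # stands in for a proof
            return new_policy  # commit to the self-modification
    return policy  # no provably useful rewrite found; keep current code

# Usage: a policy that recognises even numbers; a rewrite generalises it.
base = lambda n: n % 2 == 0
rewrites = [lambda pol: (lambda n: pol(n) or n % 3 == 0)]
improved = godel_machine_step(base, rewrites, problems=[2, 3, 4, 9])
```

Note that the rewrite is only adopted because it strictly increases the toy utility (it solves 4 of the test problems instead of 2), mirroring the machine’s rule that a self-change executes only if it is useful under the present utility function.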

Formal descriptions of non-computable objects do not at all present a fundamental problem for the Gödel machine – they may still allow for finding a strategy that provably maximises utility. If so, a Gödel machine can exploit this. If not, then humans will not have a fundamental advantage over Gödel machines.

As computers get faster, more and more important mathematical proofs depend heavily on automated proof search. Some proofs are indeed hard to find by any method, but here humans and Gödel machines face the same fundamental limitations.

### How The Gödel Machine Compares With Other Models

Methods like Hsearch and AIXI(t,l) are hardwired, non-self-referential, unmodifiable, brute-force meta-algorithms that cannot improve themselves. They suffer from constant slowdowns. But, according to the creator of the Gödel machine, there is nothing in principle that prevents the truly self-referential code of a Gödel machine from proving and exploiting drastic reductions of such constants.

The Global Optimality Theorem of the Gödel machine, however, is justified through a quite different type of reasoning which indeed exploits and crucially depends on the fact that there is no unmodifiable software at all, and that the proof searcher itself is readable and modifiable and can be improved.

Hsearch wastes time on finding programs that provably compute f(z) for all z in problem domain X even when the current f(x) (x in X) is the only object of interest. A Gödel machine, however, needs to prove only what is relevant to its goal formalised by its axiomatised utility function.

The Gödel machine is more flexible: it does not require an enumerable environmental distribution and is not restricted to O()-optimality.

Check the full paper on Ultimate Cognition here.

### Applications Of Gödel Machine

**Solving NP-hard Problems**

NP stands for “non-deterministic polynomial time”. A problem is NP-hard when no polynomial-time algorithm for it is known, so the time a computer needs to solve it is believed to grow exponentially with input size. When something like a Gödel machine is tasked with solving such a problem, it tries to conclude in the shortest time possible; being self-referential by nature, it can cut down on redundancies and might obtain faster solutions.
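To make the exponential blow-up concrete, here is a small Python illustration using subset sum, a classic NP-hard problem: the naive search examines up to 2**n subsets, so each extra input element roughly doubles the work — exactly the kind of redundancy a smarter, self-improving searcher would try to cut down. The function name is invented for illustration.

```python
from itertools import combinations

def subset_sum_bruteforce(numbers, target):
    """Return a subset of `numbers` summing to `target`, or None.
    Tries every subset, so the worst case checks 2**len(numbers) candidates."""
    for r in range(len(numbers) + 1):
        for combo in combinations(numbers, r):
            if sum(combo) == target:
                return list(combo)
    return None

# Usage: six elements means up to 2**6 = 64 subsets to examine.
result = subset_sum_bruteforce([3, 34, 4, 12, 5, 2], 9)
```

Adding a seventh element doubles the worst-case count to 128, an eighth to 256, and so on, which is why brute force becomes hopeless long before the input looks “big”.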

**Rewarding Models**

In the case of the widely popular reinforcement learning, a machine learning model is embedded with a reward function. This function determines the outputs of the model. The model, a robot in this case, is fed input variables such as terrain data from its sensors or onboard camera. These inputs are then processed, along with the changing environment, in a self-correcting manner. This enables the model to make decisions approaching human equivalence. A Gödel machine, as discussed above, works towards the betterment of the model, just as in a self-driving car or an attitude-adjusting rocket. This permits a computable strategy for making near-optimal predictions. One by-product of maximising expected reward is maximising expected lifetime.
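The reward-function idea can be sketched as a toy value-learning loop. This is an assumption-laden illustration, not any real robotics stack: the “robot” senses one of two terrains, tries one of two speeds, and keeps a running average of the reward each choice earns; the names `reward` and `train` and all the numbers are invented for the sketch.

```python
import random

def reward(terrain, speed):
    """Toy reward: fast driving pays off on smooth terrain,
    slow driving pays off on rough terrain."""
    return speed if terrain == "smooth" else 10 - speed

def train(episodes=2000, seed=0):
    rng = random.Random(seed)
    speeds = [2, 8]
    terrains = ["smooth", "rough"]
    # Running average reward estimate per (terrain, speed) pair.
    value = {(t, s): 0.0 for t in terrains for s in speeds}
    count = {k: 0 for k in value}
    for _ in range(episodes):
        terrain = rng.choice(terrains)   # sensor input from the environment
        speed = rng.choice(speeds)       # explore actions uniformly
        r = reward(terrain, speed)       # feedback from the reward function
        count[(terrain, speed)] += 1
        value[(terrain, speed)] += (r - value[(terrain, speed)]) / count[(terrain, speed)]
    # Greedy policy: for each terrain, pick the speed with the best estimate.
    return {t: max(speeds, key=lambda s: value[(t, s)]) for t in terrains}

policy = train()
```

After training, the learned policy drives fast on smooth terrain and slow on rough terrain purely because the reward function scored those choices higher — the self-correcting behaviour the paragraph above describes, in miniature.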

If “consciousness” is perceived as the ability to execute unlimited formal self-inspection and provably useful self-change (unlimited except for the limits of computability and provability), then the Gödel machine and its Global Optimality Theorem provide the first technical justification of consciousness in the context of general problem solving.


A technology journalist with a master's degree in Robotics. Likes to write about machine learning advancements. Email: ram.sagar@analyticsindiamag.com