The Fallacy of Artificial General Intelligence: Microsoft's Recognition of the Limits of LLMs

  Microsoft released a research paper last week [1] that claims GPT-4's capabilities can be viewed as an early version of Artificial General Intelligence. The authors state that "given the breadth and depth of GPT-4's capabilities, we believe that it could reasonably be viewed as an early (yet still incomplete) version of an artificial general intelligence (AGI) system." To reach this conclusion, the researchers adopted the following definition of human intelligence: "a very general mental capability that, among other things, involves the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly and learn from experience." According to the same paper, this definition was proposed in 1994 by a group of psychologists. Interestingly, the authors of the paper [1] acknowledge that this definition of human intelligence is somewhat restrictive. They also acknowledge that some components of this definition are currently missing in GPT-4.

Division by zero is not infinity, it is undefined

We all learn at school that dividing by zero gives infinity. This is, unfortunately, not true. To explain why, I'm going to use successive subtraction.


But before that, let's see why most of us believe that division by zero is infinite. This belief comes from the fact that division produces huge numbers when the divisor gets close to zero. For example, if m = a/b, then m takes larger and larger values as b converges to zero.
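As a quick numerical illustration, here is a minimal Python sketch of my own (the post itself contains no code, and the values of a and b are arbitrary) that shows m = a/b growing without bound as b shrinks toward zero:

    # Watch m = a/b blow up as the divisor b approaches zero.
    a = 1.0
    b = 0.1
    while b > 1e-10:
        print(f"b = {b:.0e}  ->  a/b = {a / b:.0e}")
        b /= 10  # shrink the divisor by a factor of 10 each step

Each step makes the quotient ten times bigger, which is exactly why the limit "feels" infinite.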


Typically, from a subtraction point of view, the result of any division operation (m) is a number that we can multiply by the divisor and then subtract from the dividend to obtain a number (called the remainder) that is less than the divisor. In arithmetic, we call this Euclidean division: a - m*b = r, where 0 ≤ r < b.
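To make this concrete, here is a minimal Python sketch of my own (not part of the original post) that computes m and r by successive subtraction, assuming a non-negative dividend and a positive divisor:

    def divide_by_subtraction(a, b):
        """Euclidean division of a by b via successive subtraction.

        Assumes a >= 0 and b > 0. Returns (m, r) with a == m*b + r and 0 <= r < b.
        """
        if b <= 0:
            raise ValueError("divisor must be positive")
        m, r = 0, a
        while r >= b:   # keep subtracting b until the remainder drops below the divisor
            r -= b
            m += 1
        return m, r

    print(divide_by_subtraction(17, 5))  # (3, 2), since 17 - 3*5 = 2 < 5

The guard on b is not incidental: with b = 0 the loop condition r >= 0 would always hold and the subtraction would never terminate, which is the heart of the argument below.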


Our objective is to find an m that produces a remainder r whose value is less than the divisor b. If we now want to divide by zero, the Euclidean division takes the following form: a - m*0 = r.

The question boils down to finding a number m that, multiplied by 0 and subtracted from a, reduces a to a remainder r smaller than the divisor.


If we don't believe in infinity, like Gauss and Bishop, we can say that m is always a (finite) number. Based on the hypothesis that zero multiplied by any number is zero, and the hypothesis that subtracting zero from any number leaves the number itself, we can conclude that there is no number m that can reduce a to such a remainder r.
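A small check (again a Python sketch of my own) makes the point: no matter how large m becomes, a - m*0 is always a, so the successive subtraction never brings a below the divisor:

    a = 17
    for m in (1, 10, 10**6, 10**100):
        print(m, a - m * 0)  # always prints 17: the remainder never shrinks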


If we believe in infinity, like Russell and Cantor, the result of the division is still not defined, because the operation 0×∞ is not defined. An interesting example that illustrates why 0×∞ is undefined is given here: https://brilliant.org/wiki/is-infinity-times-zero-zero/.


If you take two functions, f and g, that converge to infinity and to zero respectively, their product will not necessarily converge to zero or to infinity. The result depends on the nature of the functions; in their example, the product converges to 1.
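A small numerical sketch (in Python; these function pairs are standard textbook choices, not necessarily the ones used on the Brilliant page) shows three pairs where f tends to infinity and g tends to zero, yet their products behave completely differently:

    # Each f tends to infinity and each g tends to zero as x grows,
    # yet the product f(x)*g(x) can tend to 1, 0, or infinity.
    pairs = {
        "f=x,   g=1/x    (product -> 1)":   (lambda x: x,    lambda x: 1 / x),
        "f=x,   g=1/x^2  (product -> 0)":   (lambda x: x,    lambda x: 1 / x**2),
        "f=x^2, g=1/x    (product -> inf)": (lambda x: x**2, lambda x: 1 / x),
    }
    for label, (f, g) in pairs.items():
        x = 1e6  # a large x to approximate the limiting behavior
        print(label, "=>", f(x) * g(x))

Since the "value" of 0×∞ depends entirely on which functions produce the 0 and the ∞, the expression itself has no single defined value.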


So whether we believe in the existence of infinity or not, division by zero is always undefined. 


