Today, Weak AI has invaded our daily lives. What is missing on the road to Artificial General Intelligence (AGI)? We present here a novel hardware approach and some new theoretical concepts that, once stated, obviously fill a gap that has survived despite ample research since 1950.

 

SwissCognitive Guest Blogger: Pierre G. Gauthier – “Making Artificial Super Intelligence (ASI)”


 

Given the disastrous quality of the gazillions of articles enthusiastically published to promote AI, we will try to compare today’s instruments to those in use 30 years ago and introduce new concepts… of the kind so badly needed to make progress today.

Having followed the “AI” players for 43 years, I indeed have some insights to share… about a discipline that, a few months ago, was in a deep freeze, according to its own specialists.

That was before a new wave of hype erased this “perception”, with ChatGPT (a chatbot – the kind of thing called Eliza 60 years ago) becoming the only name in town… despite world experts having criticized ChatGPT in graphic terms:

“Given the amorality, faux science and linguistic incompetence of these systems, we can only laugh or cry at their popularity. ChatGPT exhibits something like the banality of evil: plagiarism and apathy and obviation.”
-Noam Chomsky, Ian Roberts, Jeffrey Watumull – The New York Times (March 8, 2023)

Today, it seems, the “AI Winter” curse has been warded off by a new wave of chatbot hype.


Yet, as The New York Times has emphasized, the hundreds of millions of dollars spent annually on marketing do not convince the most competent among us. We will explain in this article why – and how we, collectively, have managed to reach such a level of inconsistency.

I wrote an Eliza chatbot during my last year of high school… more than three decades ago. This program, running on a pocket computer (4 KB RAM, 768 kHz CPU), was so effective at ridiculing my philosophy teacher (who knew nothing about computers but nevertheless claimed, with erroneous arguments, that AI would never exist) that he permanently spared me the obligation to attend his atrociously boring classes (a win-win deal). As a French humorist said:

“We always think we have enough intelligence because that’s what we judge with.”
-Coluche (1944-1986)

My version of Eliza made an impact by exposing the fragility of a lazy and self-complacent human intelligence which, in reaction, excluded me from the discussion (the very essence of philosophy). Why?

“Violence is the last refuge of the incompetent.”
-Isaac Asimov (1920-1992)

And today, the AI challenge remains, despite the current battles pitting “deep learning” against “symbolic cognition”, with new expressions like “neuro-symbolism” surfacing to mask our ignorance (the exact same battle existed under a different name thirty years ago).

Thirty years ago, we already had commercial “Expert Systems” (collections of rules written by human experts of a given field, made available to non-experts), and these products were mostly useful as reminders for those who already knew the field quite well.

Why? Because, like an encyclopedia supposedly “containing all human knowledge”, an Expert System requires several university degrees to decipher the domain-specific jargon abundantly used by its authors… mostly to hide their ignorance: “Jargon is the last refuge of the incompetent” – a point made a couple of times in “Avatar: The Way of Water” (2022).

Since “Science” is a human organization, “knowledge is power” has quickly been transformed into “money is power” – explaining why (1) so many research documents written by public researchers (paid by the taxpayer) are not freely available to the public, and (2) so many researchers are scared of irritating their hierarchy and risking never being funded and/or published again:

“If you think that only social sciences, law, history and literature are hijacked, you did not pay attention. Think that mathematics and physics are free? Think again.”
-Sir Isaac Newton (1643-1727)

When money tops everything, the afflicted activity invariably becomes fraudulent: see how politics, justice, sports, media, universities, the arts, health care, philanthropy, etc. have diverged from their missions to enrich a very few entirely dedicated to marketing junk – at the expense of the many merely trying to advance the state of society (the famous “common good”).

Back to AI. At the time, this new field of research posed an interesting question: how would a computer program start to add new (and preferably better) rules on its own? And, beyond addressing a particular problem, how would it start to become self-conscious?

Today, “Weak AI” is a massive collection of human behaviors harvested by algorithms made by human experts. That’s why Europeans complain that they cannot compete with China – which has access to a much larger pool of human behaviors (because the Chinese are more numerous, and there’s no attempt to limit the mass collection of everything they do and say… in the presence of a nearby smartphone equipped with cameras, microphones, and dozens of other sensors absolutely useless to end-users).

So “Weak AI” is, in reality, closer to “Big Brother Is Watching You” than to anything resembling an artificial ability to think like humans. And its purpose seems limited to feeding the “social credit” machinery with enough data about everyone… for the “elites” to control the masses now that the legitimacy of the “authorities” (under the control of the “free and open markets” that have never existed) is crumbling.

The diversity of human behaviors sometimes gives superior results compared to the strict rules written by experts (still in use, but now as safety guards) – especially if the goal pursued is to make a turtle dance or talk in a convincing way.

How far away is Artificial General Intelligence (AGI)?

Conceptually, we have made no progress: “Weak AI” is an incremental extension of “Expert Systems” brought by search-engine technology (mere syntactic and semantic analysis – the contents are not understood at all).

This “intelligence” is still 100% human, and human behaviors, while often accomplishing tasks successfully, notoriously involve little logic (habits and accidents better define mankind).

To make progress, we must stop faking AI with things that are neither “artificial” nor even “intelligent”.

Insights are acquired not by “copy & paste” but by being involved in problem-solving.

“Artificial neural networks” (pattern matching) find their theoretical roots in the 1670–1920s and were first implemented on computers in the 1960s. They convert a dataset into a few floating-point numerical values, facilitating classification since a deviation from the canonical value is measurable.

Hashing functions also convert a dataset into a numeric output, but any single-bit modification of the input dataset is expected to change a large number of bits in the resulting hash (the goal is to identify each dataset uniquely without disclosing anything about the input).
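A minimal sketch of that avalanche property (plain Python, standard library only; the sample strings are invented for illustration):

```python
import hashlib

def bit_diff(a: bytes, b: bytes) -> int:
    """Count the bits that differ between the SHA-256 digests of two inputs."""
    ha = int.from_bytes(hashlib.sha256(a).digest(), "big")
    hb = int.from_bytes(hashlib.sha256(b).digest(), "big")
    return bin(ha ^ hb).count("1")

# Two near-identical inputs: a single character differs...
print(bit_diff(b"pay 100 dollars", b"pay 900 dollars"))
# ...yet roughly half of the 256 digest bits flip (the "avalanche effect").
```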

In contrast, neural networks provide a measure of likeness so that, as with the method of least squares, similar datasets will produce similar output values.
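Here is the same contrast from the other side: a toy single “neuron” (plain Python; the weights below are arbitrary stand-ins for trained values) maps similar inputs to nearby outputs – exactly what a hash is designed to prevent:

```python
# One artificial neuron: a weighted sum plus a bias. The weights are
# arbitrary placeholders - a trained network would have learned them.
weights = [0.4, -0.2, 0.7]
bias = 0.1

def neuron(x):
    return sum(w * v for w, v in zip(weights, x)) + bias

a = [1.00, 2.00, 3.00]
b = [1.01, 2.00, 2.99]          # almost identical to a
print(neuron(a), neuron(b))     # 2.2 vs ~2.197: nearby inputs, nearby outputs
```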

In the 1990s, I used “back-propagation” (using the measured errors to adapt the model) to perform OCR on bank checks processed in real time by motorized hand-scanners at supermarket cash registers. The remaining character-recognition errors (some checks were torn or tarnished) were corrected by checking for typos against a scanned yellow-pages database from which duplicate entries had been removed (to speed up lookups). I used the 1790s “method of least squares” (the mother of all artificial neural networks) to authenticate down-sampled scans of hand-written signatures (with pretty good results).
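As a reminder of what that 1790s method does (a minimal NumPy sketch with invented sample points – not the author’s signature-authentication code), least squares finds the line minimizing the squared deviations from a set of observations:

```python
import numpy as np

# Invented points lying roughly on y = 2x + 1, with some noise.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([1.1, 2.9, 5.2, 6.8, 9.1])

# Least squares: find slope m and intercept c minimizing the sum of
# squared deviations between m*x + c and the observed y values.
A = np.column_stack([x, np.ones_like(x)])
m, c = np.linalg.lstsq(A, y, rcond=None)[0]

print(f"fit: y = {m:.2f}x + {c:.2f}")   # close to y = 2x + 1
residuals = y - (m * x + c)             # small residuals = a good match
```

Comparing the residuals of a new sample against a stored reference is one (simplistic) way to score similarity in this spirit.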

“Deep learning” is a family of such machine-learning techniques (mostly multi-layered neural networks), together with their improvements and specialized variants over time. Here, “deep” means that multiple layers are involved in the network (yet another incremental enhancement). In image processing, lower layers process and identify edges, while higher layers attempt to identify characters or other objects.
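What “multiple layers” means in practice can be sketched in a few lines (NumPy; the layer sizes and random weights are arbitrary placeholders, not a trained model):

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy 2-layer network: 16 inputs -> 8 hidden units -> 3 class scores.
# The weights are random placeholders; back-propagation would adjust them
# using the measured errors, as described above.
W1, b1 = rng.normal(size=(16, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 3)), np.zeros(3)

def forward(x):
    h = np.maximum(0.0, x @ W1 + b1)   # lower layer: extracts simple features
    return h @ W2 + b2                 # higher layer: combines them into scores

x = rng.normal(size=16)                # stand-in for a flattened 4x4 image patch
print(forward(x))                      # three floating-point class scores
```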

As you can see, these techniques are not new – what is new is an ever-increasing complexity (hence an ever-growing energy consumption). But that’s not how biology works: in Nature, living creatures must be able to feed themselves while surviving long periods of famine, spending energy to find and catch food (which is often unwilling to please its predator).

So, we must go back to the drawing board.

What is intelligence?

For me, it is “the capacity to get away from reality – while staying relevant and preferably useful”.

For humans, going too far is called craziness. It might still involve intelligence, but its basis is no longer a shareable reality, so the behaviors are seen as (and often are) inadequate.

  • Robots can’t have intelligence: not strictly following orders is considered a malfunction.
  • Insects have an instinct (an expert system) and little capacity to evolve at the individual level.
  • Humans have an instinct that can be bypassed by their capacity to innovate, dream, experiment… and they can better share experiences (emulation, language, writings, video, education).

Search (Weak AI), heuristics, logic, inference – all of them carry weight in what we call imagination (which rarely explores new paths at random).

But intelligence, which depends on (and is often confused with) mere perception, is only a capacity. It can contribute to, but does not generate, a personality.

What is consciousness?

Consciousness is the self-made guide of the museum of your personal experience.

It starts when intelligence is embodied in a (finite, perishable) body and faces the challenge of interacting with a universe.

A paralytic newborn will enjoy fewer interactions than others, but a personality will nevertheless emerge – and some skills will develop in areas neglected by those enjoying wider mobility.

Unlike intelligence, consciousness feels, wants, hopes, doubts, misses and fears. And, remarkably, its balance depends on its depth:

The more conscious we are, the less we feel the need to destroy others to protect ourselves – because (a) our past successes make us confident of the outcomes, whatever happens, and (b) our past failures make us accept an inevitable, deadly failure in our never-ending quest to reach the best possible state of adequacy to the challenges of life.

We all become what we do.

That’s why long-term over-doers reach higher levels of perception, intelligence and consciousness than those forever trying to avoid facing reality – because reality is, for the impotent, an insurmountable obstacle to reaching their goals. Satisfying ambitions without capacities has to rely on plain lies (“narratives”) aimed at weakening people’s insights (“perception”) so that they can be misled and abused.

The fact that today’s powers rely on negating reality with “Newspeak” is not encouraging:

“Tricks and treachery are the practice of fools that don’t have brains enough to be honest.”
-Benjamin Franklin (1706-1790)

So… why don’t we yet have an AI that is intelligent and conscious?

Because today a computer is merely a fixed-size (and fixed-shape) abacus.

“AI” researchers have fooled themselves (or collectively chosen to be fooled, for the sake of personal interest?) into believing (or pretending) that mere arithmetic and/or derived symbolic layers can resolve everything – despite constant evidence to the contrary.

The main problem is not the self-complacency of the very few in charge – but rather the inability of the rest of us to stop them – despite ever-diminishing returns:

“The ultimate result of shielding men from the effects of folly is to fill the world with fools.”
-Herbert Spencer (1820-1903)

A 5-year-old human central nervous system needs a tiny fraction of the energy consumed by the best supercomputers – which remain unable to handle complex tasks like dealing with new problems.

The 36 trillion cells of the human body communicate to repair themselves (using the description of a healthy cell), share data (about threats), and collaborate (to maintain our body and mind in a functional state).

Doing the real thing requires a massively decentralized, parallelized and highly reconfigurable self-organization.

The form is the function. And, as Mother Nature has shown, this can only take place wirelessly. Rigid silicon boards are… inadequate.

“Deep learning” requires a lot of computing power, and therefore energy, to crunch large datasets. Graphics Processing Units (GPUs) perform better than CPUs because they enjoy many more tiny cores and plenty of dedicated, faster memory. Yet managing the large number of required GPUs is very expensive and scales poorly (bringing data to the GPUs is horribly slow).
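That last bottleneck is easy to observe (a sketch assuming PyTorch and a CUDA-capable GPU; exact timings will vary by machine): time the host-to-device copy against the computation itself:

```python
import time
import torch

if torch.cuda.is_available():
    x = torch.randn(4096, 4096)   # ~64 MB of float32 data, created on the CPU

    t0 = time.perf_counter()
    xg = x.to("cuda")             # host-to-device transfer (e.g. over PCIe)
    torch.cuda.synchronize()
    t1 = time.perf_counter()

    y = xg @ xg                   # one large matrix multiplication on the GPU
    torch.cuda.synchronize()
    t2 = time.perf_counter()

    print(f"transfer: {t1 - t0:.4f}s  compute: {t2 - t1:.4f}s")
else:
    print("No CUDA GPU available - nothing to measure.")
```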

But there is a second major mistake made in this context: pretending that the shadows we watch on the wall of the cave are alive.

Counting, by night and from a faraway hill, the lit windows of a tower block will not let you guess what the people inside are doing.

Yet, that’s what we pretend to do when we claim to have identified in our brains the areas and mechanisms involved in a given task.

Even worse, the progress made at degrading brain capacity (by interfering with it) has encouraged researchers to erroneously believe that they understand what they are doing.

We proudly conclude that cutting the legs off a flea makes it deaf to our commands to jump.
Correlation is not causation.

We must recognize the very nature of our own human capacities before building further on that (so far successful) basis.

Let’s consider things that are established but still not well explained:

  • How is it that, rarely but indisputably, we can remotely feel the exact moment we lose the ones we love?
  • How is it that groups synchronize? (the biological cycles of women living in religious communities converge automatically)
  • How is it that ideas spread all over the planet at the same time… even among animals? (the knowledge of isolated wild monkey communities spreads from one continent to another even without any communication)

Radio waves and microwaves (not involved in biology) can’t do that – they can’t even get through a mountain or bad weather. And they can only degrade the way our cells and brains work.

The real transmission waves involved in our biology are of another nature – they fear neither distance nor obstacles – and they are key to keeping our body and mind alive.

No real progress will take place until we responsibly learn how to use these powerful waves for our own good.

Then, we will be able to make an Artificial General Intelligence (AGI) that will inevitably turn into an Artificial Super Intelligence (ASI) IF AND ONLY IF we provide enough “brain” volume for it to develop that far.

At least, we will be able to control that part – up to the point where we will discover, at our own expense, the scale of our ignorance.

That’s what life is for: widening its reach. And if we must die while doing that, then this will have been a life worth living.


About the Author:

Pierre G. Gauthier, a Software Engineer, has exhibited remarkable professional growth since his start at SPC Corp in 1992. After serving as R&D Director and contractor at THALES, he founded TWD in 1998. His innovative solutions have since reached 138 countries, attesting to his global impact in the field.