The rapid progress of generative AI is affecting many human activities. In this article, we take a longer-term perspective and explore how the relationship between humans and AI may evolve.

SwissCognitive Guest Blogger: Christopher Ganz, Founder, C. Ganz Innovation Services – “Generative AI In Relation To Humans – A Long-Term Perspective”

Generative AI and large language models (LLMs) such as ChatGPT have recently shown progress that surprised many, and I have to admit: also myself. Unlike earlier variants of neural networks, which had to be trained on each use case with large amounts of data, these systems are trained a priori and draw their results from a data pool (the internet or any other accessible data source) that is not specifically related to the questions asked. The answers they generate are mostly indistinguishable from human-written text. While such systems are still far from artificial general intelligence, their broad applicability without use-case-specific training may well earn them the name general artificial intelligence (note the difference in word order).
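
To make "trained a priori" concrete, here is a minimal sketch using the open-source Hugging Face transformers library, with the small GPT-2 model standing in for far larger systems such as ChatGPT (my choice of model and prompt, not anything from the original article). The model was pretrained once on a broad corpus and can be prompted on an arbitrary subject without any use-case-specific training:

```python
# Minimal sketch: a model pretrained once on a broad web corpus is prompted
# on an arbitrary subject, with no task-specific training step. GPT-2 is
# used here only as a small, freely available stand-in for larger LLMs.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
result = generator(
    "The long-term relationship between humans and AI",
    max_new_tokens=40,        # length of the generated continuation
    num_return_sequences=1,
)
print(result[0]["generated_text"])
```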

People around the world are exploring their capabilities, at times with playful examples that try to fool them, but increasingly in applications that save their users significant time. Posts in which users confirm their usefulness are multiplying. Most state that although the results are not perfect, and sometimes wrong, a subject matter expert can catch these flaws and set them right without much difficulty. For someone not familiar with the area of the response, such inadequate output is hard to detect and may easily lead to confusion. In any case, users must be aware that the results may be wrong and need to be checked against other knowledge sources.

For the expert, these tools provide an easy way to draft an initial text built on the vast resources of the internet, without having to review the top ten Google hits on the subject in question. Furthermore, the text is well formulated and grammatically correct.

Recall writing a thesis for an academic degree: this is what the literature review demanded. We had to read a significant number of publications, extract the key findings, search for further references, and properly summarize the state of the art. Even though LLMs are at times not very good at referencing their sources, they probably reach the level of an acceptable literature review. But an LLM-generated literature summary does not cover the many details and explanations from the original papers that a human reader would have picked up in passing. Much of the learning that comes from working through the material does not take place.

By using these LLMs without expert guidance, we may skip learning steps that are needed to become an expert. Many of the tasks an LLM is capable of doing are not low-level tasks, but tasks that are typically given to the juniors in a team. By reading and compiling the material required to prepare a subject for use by others, the junior team member learns far more than what is written in the summary handed over to the expert.


Today's experts have gone through the ‘manual’ process of learning through lots of reading. In the future, that step may be significantly shortened. The risk is that by simply and easily asking an LLM, we lose the context and detail needed to become an expert who can properly use an LLM and interpret its responses.

If we project this toward an even more distant future, we may see fewer experts enhancing LLM-generated texts. These texts may no longer be just the literature review or the first draft; with only a few modifications, they may be published again. Since LLMs are still statistical models that sequence the words and phrases observed to have a high probability in the context of the question asked, feeding LLM-generated texts back to the internet, where they are used for further training, will reinforce those statistics. In a situation where LLM results are published with only minor changes, the LLM converges to an average understanding of a topic.
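
To make this convergence concrete, here is a toy simulation (purely illustrative: the "model" is just a word-frequency table, not an actual LLM, and all numbers are invented). Each generation is trained only on text sampled from the previous generation; rare words that happen not to be sampled are lost for good, and the corpus collapses toward its most common phrasing:

```python
# Toy sketch of the feedback loop: a "model" that is only a word-frequency
# table, retrained each generation on text the previous model produced.
import numpy as np

rng = np.random.default_rng(42)
vocab_size = 1_000

# Generation 0: a human-written corpus with a long tail of rare ideas.
probs = rng.dirichlet(np.full(vocab_size, 0.1))

for generation in range(11):
    alive = np.count_nonzero(probs)
    entropy = -(probs[probs > 0] * np.log2(probs[probs > 0])).sum()
    print(f"gen {generation:2d}: {alive:4d} distinct words, entropy {entropy:.2f} bits")
    # The model's published output becomes the next training corpus:
    # words that are never sampled drop to zero and never come back.
    sample = rng.choice(vocab_size, size=5_000, p=probs)
    counts = np.bincount(sample, minlength=vocab_size)
    probs = counts / counts.sum()
```

Because nothing new is ever added, the number of distinct words can only fall from one generation to the next, and the entropy of the distribution trends down with it.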

Innovation and progress mostly originate from unorthodox ideas or combinations thereof. Today, an educated reader spots such ideas when researching a topic and is capable of connecting the dots to turn an unorthodox idea into a successful innovation. Unorthodox ideas, however, have a low probability in the context of the orthodox state of the art, and an LLM would suppress them. And since less cumbersome background reading would leave us with fewer experts, generative AI and humans together may converge to a stable average on any topic.
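
One mechanism behind this suppression is easy to sketch: common decoding strategies such as top-k sampling explicitly discard low-probability continuations before any text is generated. The function below is a hypothetical, minimal version of such a filter; the probabilities are invented for illustration:

```python
import numpy as np

def top_k_filter(probs: np.ndarray, k: int) -> np.ndarray:
    """Keep only the k most probable tokens, renormalize, zero the rest."""
    filtered = np.zeros_like(probs)
    top = np.argsort(probs)[-k:]  # indices of the k likeliest tokens
    filtered[top] = probs[top]
    return filtered / filtered.sum()

# Toy next-token distribution; the tail entries stand in for the
# low-probability, "unorthodox" continuations discussed above.
probs = np.array([0.40, 0.30, 0.15, 0.10, 0.04, 0.01])
print(top_k_filter(probs, k=3))
# -> roughly [0.47 0.35 0.18 0.   0.   0.  ]: the unorthodox options now
#    have probability exactly zero and can never be generated.
```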

We risk that the unreflective use of the tools that help us today results in a tyranny of the average, with stagnating innovation.

We have to make careful use of these tools, invest in understanding their capabilities and shortcomings, and ensure they are transparent. Even though they may relieve us of cumbersome work, some of that work is still needed in education if we are to stay ahead of the bots and remain creative and innovative.


About the Author:

Christopher Ganz spent more than 25 years at ABB in various innovation roles. He helped craft ABB's digital strategy and prepared positioning proposals on a variety of digital topics (e.g. AI) for both the CTO and the CDO. In addition to running his own innovation consulting business, he teaches the CAS ‘Applied Technology: R&D and Innovation’ at ETH Zurich.