Communication of intelligence is fundamental to effective decision-making. Whatever decisions we face, the more intelligence we have to predict and evaluate outcomes, the better those decisions will be. Whether self-generated or communicated to us by third parties, intelligence is vital for complex decision-making.
So, will AI be able to communicate intelligence with AI?
SwissCognitive Guest Blogger: Eleanor Wright, COO at TelXAI – “AI Communicating With AI”
Our ability to communicate intelligence has enabled us to send rockets to the moon and develop life-saving drugs. We build, test, evaluate, and innovate based on intelligence generated by others, and the evolution of the internet as a communications tool has accelerated this process. Our ability to take in intelligence from multiple sources across varying domains has enabled new levels of complexity and accelerated advances in decision-making.
So, will AI systems be able to communicate intelligence with other AI systems? AI communicating with AI.
We’ve heard of chatbots talking to chatbots and creating their own language, but how do AI systems communicate across domains to share intelligence? How do language models access intelligence generated by machine vision systems to evacuate buildings, and how do sensors communicate with robots to put out fires?
Systems such as autonomous vehicles utilise a multitude of sensors to inform their decision-making processes. They constantly detect and categorise objects around them, deciding whether those objects represent a safety risk. These systems, however, could be made smarter if they could access intelligence generated around them. For example, if an autonomous vehicle could access audio intelligence about activity in the surrounding urban environment, it could improve its decision-making by leveraging this extra layer of intelligence.
To enable this communication of intelligence, AIs must understand how to evaluate and categorise intelligence. They will require the ability to quantify it, implementing a hierarchy of severity and confidence. Just as we assign a level of confidence to intelligence generated by our own senses or supplied by third parties, AI must have its own qualifiers of intelligence.
Once intelligence has been communicated and decisions have been made, outcomes may once again be communicated back into the AI network.
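As a minimal sketch of the idea above, the message format, field names, and thresholds below are illustrative assumptions, not an existing standard: one AI system publishes an intelligence report tagged with its own severity and confidence, and a consuming system applies its own qualifiers before acting on it.

```python
from dataclasses import dataclass, field
import time

@dataclass
class IntelligenceReport:
    """A hypothetical cross-domain intelligence message."""
    source: str        # e.g. "audio-sensor-17" or "vision-node-3"
    domain: str        # the modality that produced the report
    event: str         # the categorised observation
    severity: int      # 0 (informational) .. 5 (critical)
    confidence: float  # producer's own confidence, 0.0 .. 1.0
    timestamp: float = field(default_factory=time.time)

def should_act(report: IntelligenceReport,
               min_confidence: float = 0.6,
               min_severity: int = 3) -> bool:
    """Consumer-side qualifier: act only on reports that clear
    both the confidence and the severity thresholds."""
    return report.confidence >= min_confidence and report.severity >= min_severity

# An autonomous vehicle receiving audio intelligence from an urban sensor:
siren = IntelligenceReport(source="audio-sensor-17", domain="audio",
                           event="emergency_siren_approaching",
                           severity=4, confidence=0.85)
print(should_act(siren))  # True: clears both thresholds
```

The key design point is that the producer's confidence and the consumer's threshold are kept separate, so each decision-making system can weight incoming intelligence according to its own risk tolerance.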
If this level of communication between AI modules is facilitated, and intelligence becomes exchangeable and usable between decision-making systems, the applications will be boundless. Autonomous systems will have access to a range of sensors communicating intelligence and providing feedback on the environment around them. Large language models within the network will be able to interface directly with humans, delivering details on activities and suggesting which events to attend.
This prospect of AI systems communicating intelligence with other AI systems may seem both futuristic and terrifying. Once this level of intelligence is achieved, and AI surpasses the human capacity for decision-making, our roles will shift: humans will become the weaker element in the command-and-control loop, and AIs will objectively be able to implement more accurate and reliable decisions.
The application of this, however, will be highly complex: it will require collaboration and an open communications architecture. AI development across domains will require a common communications tool and a framework for intelligence validation. These systems will need to be secure from interference, and the deployment architecture will be critical. Finally, those who build and deploy these systems will hold significant control and responsibility, so a structure of governance and procurement security will be essential.
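To make the validation requirement concrete, here is a minimal sketch of what a shared intelligence format's acceptance check might look like. All field names, limits, and the source allow-list are hypothetical assumptions for illustration:

```python
import time

# Hypothetical rules for a shared cross-domain intelligence format.
REQUIRED_FIELDS = {"source", "domain", "event", "severity", "confidence", "timestamp"}
MAX_AGE_SECONDS = 30  # stale intelligence is rejected
TRUSTED_SOURCES = {"audio-sensor-17", "vision-node-3"}  # provisioned allow-list

def validate(report: dict) -> bool:
    """Accept a report only if it is complete, from a provisioned
    source, within valid ranges, and fresh enough to act on."""
    if not REQUIRED_FIELDS <= report.keys():
        return False  # incomplete message
    if report["source"] not in TRUSTED_SOURCES:
        return False  # guards against interference by unknown senders
    if not 0.0 <= report["confidence"] <= 1.0:
        return False  # confidence out of range
    if not 0 <= report["severity"] <= 5:
        return False  # severity out of range
    if time.time() - report["timestamp"] > MAX_AGE_SECONDS:
        return False  # too old to be useful
    return True
```

Even a simple gate like this illustrates why a common format matters: every producer and consumer in the network has to agree on the fields, ranges, and trust rules before intelligence can flow between them.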
About the Author:
Holding a BA in Marketing and an MSc in Business Management, Eleanor Wright has over eleven years of experience working in the surveillance sector across multiple business roles.