As AI rapidly matures, it’s becoming indispensable across industries, from unraveling cosmic mysteries to revolutionizing business operations. However, with the advent of Foundation Models and Generative AI, emerging risks are amplified. These technologies, while transformative, require vigilant, globally coordinated regulation that embeds ethical, unbiased, and explainable moral reasoning. To harness AI’s potential responsibly, we need to balance innovation with safety, ensuring that AI is not a black box but a transparent tool for positive transformation.

 

SwissCognitive Guest Blogger: Alessandro Curioni, IBM Fellow, VP Europe and Africa, Director IBM Research – Zurich. “AI is getting smarter. With foundation models, proper guardrails are crucial.”


 

Have you ever seen the night sky over Ticino, in southern Switzerland?

Look up.

Stars as if pinned onto black velvet, with the Milky Way stretching across the curvature of the sky. To truly capture and understand the data about the vastness of space, artificial intelligence has been indispensable. But while in astronomy AI helps us spot new supernovas and uncover the mysteries of dark matter, more down-to-Earth AI technology deals with people. And when it comes to people, AI, just like any other emerging technology, carries certain risks that need to be assessed and mitigated.

After all, AI is maturing at breakneck speed, helping humans across a multitude of industries and impacting our lives daily. At IBM Research, making sure that AI is used responsibly is of paramount importance. Policymakers and industry must ensure that as the technology matures further, it remains secure and trusted, governed by precise regulations such as those outlined in the European Commission’s draft Artificial Intelligence Act, but applied at a global level.

Especially today, with the advent of Foundation Models and Generative AI that enable machines to generate original content based on input data, the positive transformational power of AI for business and society is increasing enormously. At the same time, it is amplifying issues related to bias, reliability, explainability, data and intellectual property, issues that require a holistic and transparent approach to AI.



That’s exactly why we at IBM have just introduced WatsonX, a powerful platform for companies seeking to introduce AI into their business models, with a feature for AI-generated code and a huge library of thousands of AI models. WatsonX allows users to easily train, validate, tune, and deploy machine learning models and build AI business workflows, and crucially, to do so with the right governance end to end, with responsibility, transparency and explainability. We expect the new AI tools to be integrated much more easily, and in the most responsible way, into fields like cybersecurity, customer care, IT operations and the supply chain.
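
To make that lifecycle concrete, here is a minimal sketch of the train, validate, tune and deploy loop that such a platform manages end to end. It deliberately uses the generic open-source scikit-learn library rather than the WatsonX SDK, and the dataset and model choice are invented for illustration:

```python
# Generic train / validate / tune / "deploy" loop, the kind of workflow a
# governed AI platform automates end to end. Illustrative only: plain
# scikit-learn, NOT the WatsonX SDK; data and model are invented.
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.linear_model import LogisticRegression
import joblib

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_val, y_train, y_val = train_test_split(X, y, random_state=0)

# Tune: small hyperparameter search with cross-validation.
search = GridSearchCV(LogisticRegression(max_iter=1000),
                      {"C": [0.1, 1.0, 10.0]}, cv=3)
search.fit(X_train, y_train)

# Validate: hold-out accuracy as a simple acceptance gate before deployment.
print("Validation accuracy:", search.score(X_val, y_val))

# "Deploy": persist the tuned model. A governed platform would also record
# lineage, training data and metrics here, for transparency and auditability.
joblib.dump(search.best_estimator_, "model.joblib")
```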

Unlike the previous generation of AI systems, each aimed at a specific task, foundation models are trained on a broad set of unlabeled data. They rely on self-supervision techniques and can be used for a variety of tasks with minimal fine-tuning. They are called foundation models because a single model can serve as the foundation for many applications, applying what it has learnt in one situation to another with the help of self-supervised learning and transfer learning. They are now starting to be applied in a variety of areas, from the discovery of new materials to systems that can understand written and spoken language.
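
To make the transfer-learning idea concrete, here is a minimal sketch, assuming the open-source Hugging Face transformers library and a generic pretrained checkpoint; both are illustrative choices, not something this post prescribes. The broadly pretrained model becomes the foundation, and only a small new classification head plus light fine-tuning adapts it to a specific task:

```python
# Transfer-learning sketch: reuse a self-supervised pretrained model as the
# foundation for a new task. Assumes the open-source Hugging Face
# `transformers` library; the checkpoint and task are illustrative.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

checkpoint = "bert-base-uncased"  # a generic pretrained foundation
tokenizer = AutoTokenizer.from_pretrained(checkpoint)

# Only the small 2-class head is new; the pretrained weights transfer as-is.
model = AutoModelForSequenceClassification.from_pretrained(checkpoint,
                                                           num_labels=2)

# One fine-tuning step on a single labeled example (real fine-tuning would
# loop over a small task-specific dataset).
inputs = tokenizer("The night sky over Ticino is stunning.",
                   return_tensors="pt")
labels = torch.tensor([1])  # 1 = positive, a label the new task defines
loss = model(**inputs, labels=labels).loss
loss.backward()  # gradients nudge the pretrained weights toward the new task
```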

Take IBM’s CodeNet, our massive dataset covering many of the most popular coding languages, including legacy ones. A foundation model based on CodeNet could automate and modernize a huge number of business processes. Beyond languages, there is also chemistry. My colleagues at the Zurich lab have recently built a tool dubbed RoboRXN that synthesizes new molecules for materials that don’t yet exist, fully autonomously. This cutting-edge technology is poised to revolutionize the way we create new materials, from drugs to solar panels to better materials for safer and more efficient aircraft; the list goes on. IBM has also recently partnered with Moderna to use MoLFormer models to create better mRNA medicines. And our partnership with NASA is aimed at analyzing geospatial satellite data with the help of foundation models to help fight climate change.

And soon, quantum computers will join forces with ever-smarter AI. Then, the future for countless tasks we are struggling with today will be as bright as a supernova – including material discovery. The same goes for numerous other applications of AI, from voice recognition and computer vision to replicating the complexity of the human thought process.

But to ensure that AI continues to bring the world as many benefits as possible, we mustn’t forget the importance of regulation. We need to ensure that those designing, building, deploying and using AI do so responsibly. Given the huge advantages of foundation models, we need to ensure that the economy and society are protected from their potential risks. All the risks that come with other kinds of AI, like potential bias, apply to foundation models as well. But this new generation of AI can also amplify existing risks and pose new ones, so it’s important that policymakers assess the existing regulatory frameworks, carefully study emerging risks and mitigate them.

As our technology becomes ever more autonomous, it’s imperative to have moral reasoning ingrained in it from the get-go, and to have guardrails ensuring that even this ‘default’ moral reasoning is unbiased, fair, neutral, ethical and explainable. We want to be able to trust AI decisions. As amazing as AI could be, with neural networks ever better mimicking the brain, we mustn’t allow it to be a black box.
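
One concrete form such a guardrail can take is an automated bias check before a model or dataset goes into production. The sketch below uses IBM’s open-source AI Fairness 360 toolkit (my choice of example, not something this post names) on a tiny invented dataset with an invented protected attribute:

```python
# Bias-guardrail sketch using IBM's open-source AI Fairness 360 toolkit.
# The toy data and the protected attribute `sex` are invented for illustration.
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric

# Tiny invented dataset: `sex` is the protected attribute, `label` the outcome.
df = pd.DataFrame({
    "sex":   [0, 0, 0, 1, 1, 1],          # 0 = unprivileged, 1 = privileged
    "score": [0.2, 0.5, 0.9, 0.4, 0.7, 0.8],
    "label": [0, 0, 1, 1, 1, 1],          # 1 = favorable outcome
})

dataset = BinaryLabelDataset(df=df, label_names=["label"],
                             protected_attribute_names=["sex"])

metric = BinaryLabelDatasetMetric(dataset,
                                  unprivileged_groups=[{"sex": 0}],
                                  privileged_groups=[{"sex": 1}])

# A disparate-impact ratio near 1.0 suggests parity; values well below ~0.8
# are a common red flag that outcomes skew toward the privileged group.
print("Disparate impact:", metric.disparate_impact())
print("Statistical parity difference:", metric.statistical_parity_difference())
```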

To be certain that artificial intelligence and other emerging technologies truly help us make the world a better place, we have to properly regulate them now, together.

 


Dr. Alessandro Curioni, an IBM Fellow and Vice President of IBM Europe and Africa, is globally recognized for his contributions to high-performance computing and computational science. His innovative approaches have tackled complex challenges in sectors like healthcare and aerospace. He leads IBM’s corporate research in Europe and, globally, its research in Security and Future Computing. Twice awarded the prestigious Gordon Bell Prize, he now focuses his research on AI, Big Data, and cutting-edge compute paradigms like neuromorphic and quantum computing. A graduate of Scuola Normale Superiore in Pisa, Italy, he joined IBM Research – Zurich in 1998 and leads its Cognitive Computing department.