How to navigate the opportunities and challenges posed by a technology few can afford to ignore.


“What Does AI Mean For A Responsible Business?”


It was what many called an iPhone moment: the launch in late 2022 of OpenAI’s ChatGPT, an artificial intelligence tool with a humanlike ability to create content, answer personalised queries and even tell jokes. And it captured the public imagination. Suddenly, a foundation model — a machine learning model trained on massive data sets — thrust AI into the limelight.

But soon this latest chapter in AI’s story was generating something else: concerns about its ability to spread misinformation and “hallucinate” by producing false facts. In the hands of business, many critics said, AI technologies would precipitate everything from data breaches to bias in hiring and widespread job losses.

“That breakthrough in the foundation model has got the attention,” says Alexandra Reeve Givens, chief executive of the Center for Democracy & Technology, a Washington and Brussels-based digital rights advocacy group. “But we also have to focus on the wide range of use cases that businesses across the economy are grappling with.”

The message for the corporate sector is clear: any company claiming to be responsible must implement AI technologies without creating threats to society, or risks to the business itself and the people who depend on it.

Companies appear to be getting the message. In our survey of FT Moral Money readers, 52 per cent saw loss of consumer trust as the biggest risk arising from irresponsible use of AI, while 43 per cent cited legal challenges.



“CEOs have to ensure AI is trustworthy,” says Ken Chenault, former chief executive of American Express and co-chair of the Data & Trust Alliance, a non-profit consortium of large corporations that is developing standards and guidelines for responsible use of data and AI.[…]
