Artificial intelligence (AI) is one of the technologies that will dominate the business, consumer and public sector landscape over the next few years. Technologists predict that, in the not-too-distant future, we will be surrounded by internet-connected objects capable of tending to our every need.

While AI development is still in its early stages, the technology has already shown it can compete with human intelligence. From challenging humans at chess to writing computer code, it already outperforms people in a number of areas. Newer AI systems can even learn on the fly to solve complex problems more quickly and intuitively.

But while AI presents many exciting opportunities, there are also plenty of challenges. Doomsday scenarios predicting that smart machines will one day replace humans are scattered across the internet. Speaking to CNBC, respected Chinese venture capitalist Kai-Fu Lee said AI machines will take over 50% of jobs in the coming decade.

Although businesses are ploughing billions of dollars into this lucrative market, many of the world’s most prominent figures in innovation and science have called for regulation. Tesla founder Elon Musk and renowned physicist Stephen Hawking are among those who have voiced concerns over the rise of artificial intelligence.

How real these concerns will turn out to be remains to be seen, but even now AI used in business can pose risks not just to the companies that deploy it, but to the public at large.

While organisations at the cutting edge of AI development should spend at least some of their time preventing the rise of the machines, everyday organisations also have a role to play in protecting us all from artificial intelligence gone awry.

One solution doesn’t fit all

Automated technologies are incredibly diverse and span a range of use cases. As a result, it quickly becomes apparent that there is no single answer to ensuring the safety of AI. Matt Jones, lead analytics strategist at technology consultancy Tessella, says keeping AI safe comes down to the data a business possesses. “It’s important for businesses to remember that there is never a ‘one-size-fits-all’ solution. This all depends on the data at the company’s fingertips – this will influence the risk involved, and therefore how dangerous the wrong decision can be,” he says.

“For instance, using AI to spot when a plane engine might fail is a very different matter to trying to target consumers with an advert for shoes. If AI for the latter goes wrong, you may lose a few potential customers, but the damage isn’t long term. However, if the former goes wrong, it could lead to fatal consequences. […]

