Every debate over a contentious issue needs its ‘resident’ sceptic: someone to deflate the overinflated assumptions and puncture the hysterical hype peddled by proponents of the ‘Next Big Thing’.

 

SwissCognitive Guest Blogger: Health Tech World


 

When it comes to the impact of AI and big data on global healthcare, one of the most sceptical voices is John Taysom, non-executive director and co-founder of Privitar, a data privacy software specialist.

During a recent rigorous roundtable debate orchestrated by Health Tech World, Taysom threw down the gauntlet to the aficionados of AI and big data in one fell swoop, comparing the promise of artificial intelligence with the much-discredited Holy Roman Empire.

‘You may recall that it was neither holy, nor Roman nor an empire. Artificial intelligence is neither artificial nor intelligent!’

That set the tone for Taysom’s excoriating shredding of many an AI promise.

Unstable systems

‘Artificial intelligence is just an application of probability and it’s just statistics in a tuxedo. The minute people forget that, and imagine it has some kind of animus that is above and beyond that is a fatal point.’




 

To buttress his caustic assertion, Taysom referred to a quote from a professor at Cambridge University’s Department of Applied Mathematics and Theoretical Physics lambasting many AI systems as ‘unstable’. ‘This is a major liability, especially as they are increasingly used in high-risk areas such as disease diagnosis.’
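The instability the Cambridge professor describes can be pictured with a toy sketch (not from the roundtable; the weights and inputs below are hypothetical): a model operating near its decision boundary can flip its diagnosis under a tiny, clinically meaningless change to the input.

```python
# Toy sketch of an 'unstable' classifier: a linear model whose output
# flips when one feature is nudged by 0.01.

weights = [0.8, -0.5, 0.3]   # hypothetical learned weights
threshold = 0.0

def classify(features):
    """Return a diagnosis from a weighted sum of input features."""
    score = sum(w * x for w, x in zip(weights, features))
    return "disease" if score > threshold else "healthy"

patient = [0.10, 0.2, 0.05]  # score = 0.08 - 0.10 + 0.015 = -0.005
nudged  = [0.11, 0.2, 0.05]  # score = 0.088 - 0.10 + 0.015 = +0.003

print(classify(patient))  # healthy
print(classify(nudged))   # disease
```

A model whose verdict turns on such a small perturbation is exactly the liability flagged for high-risk uses such as disease diagnosis.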

Chaired by Alastair MacColl, ‘AI & Big Data – A Bright New Future in Diagnosis’ brought together global experts in the field to unpick the critical threads of the future application of these technologies.

Though not as sceptical as Taysom, fellow roundtable contributor Gil Bashe, Managing Partner and Chair Global Health at Finn Partners, queried the accuracy of the term ‘artificial intelligence’.

‘If it were called adjunctive intelligence or composite IQ we might have a better chance of embracing it both from a clinician and a consumer standpoint.’

Bashe pointed to the lack of what he called ‘composite wisdom’ in tackling the biggest killers of humanity: non-communicable diseases such as heart disease, diabetes, cancer, respiratory disease and mental health crises.

‘We don’t use the totality of wisdom available to us. We seem to approach each person who is at risk as an n of 1, a one-off. ‘Composite intelligence’ – what we’re terming in this conversation artificial intelligence – actually could give us the tools to engage people during our health planning visits and speak to them about the trajectory of their wellness, or their lack of wellness.’

Bashe made the bold claim that, in the US, the primary care medicine system has collapsed.

‘It’s become a walk-in clinic, and it’s because the consumer no longer feels value in walking in to see the primary care physician. They don’t understand what they’re getting [in terms of their] return on investment for their 40 minutes: waiting in the waiting room, and then waiting in a little office to see a doctor for nine minutes, as if that visit were worth something.’

To illustrate his point, Bashe revealed that he had contracted Covid-19 last September. ‘I was vaccinated. I was in Ukraine and I came back to the ’States and, lo and behold, I had Covid and I had to go to the hospital. My pulse oxygen level was about 88. I didn’t feel at risk, but I thought, no, I’d better have it checked out.

‘Three doctors in succession gave me three different answers. This is one area where artificial intelligence – which is not ‘artificial’, it’s organic intelligence – can be brought together to some extent. Best practice medicine will be guided in the future by data being joined together.’

According to Bashe, in the patient/doctor nexus, doctors simply have to give an answer. ‘It is a knee-jerk reaction. Artificial intelligence is not instead of doctors. It is actually almost like what we called ‘The Merck Manual’, which was published every four years and contained the compendium of medical wisdom. Then suddenly it had to be published almost every other year – and then it had to be re-published digitally almost every other day. That’s because we’re accumulating [data] faster and faster.’

Bashe went on to argue that many clinicians feel threatened by the prospect of AI, comparing it to a ‘collective IQ’ being dumped in the room with them. ‘Doctors are psychologically trained to give answers, and they even have a word for when they don’t have an answer if you have pain and they can’t explain it. It’s called ‘idiopathic’ pain. It’s a great diagnosis, right?’


John Taysom at “AI & Big Data – A Bright New Diagnosis?” Roundtable

Bashe echoed these sentiments in relation to surgeons’ innate belief that they are omniscient. ‘They don’t need to know anything. What they know, they know. That’s a problem psychologically. We need to deal with the doctors’ desire to make decisions, but actually learn to question their decisions. They need a little bit of Taysom embedded in their mindset!’

Bashe did, however, concede that certain medical organisations were already receptive to AI’s potential, given their reliance on real-time data. AI-enabled systems can, of course, read and synthesise massive amounts of data.

According to Bashe, the American College of Cardiology, the European Society of Cardiology and the American Heart Association have always been ‘obsessed’ with data and patterns.

‘They’re probably more adaptable to artificial intelligence. They’ve always been very obsessed with data and looking at data connections, even making new connections. So artificial intelligence is an accelerator of scientific change.’

Money matters

Simon Legge, Managing Director at Tyson & Blake, brought 25 years’ Wall Street banking experience to the roundtable. He raised the tantalising prospect of compelling returns on investment from the implementation of what he called ‘advanced analytics’.

Legge pointed to a McKinsey estimate that for the average global top 20 pharmaceutical company, the application of advanced analytics will deliver a value add of US$300m per annum.

‘Now that’s significant commercial value, and that’s what it comes down to. Just as in drug development, on average we’re talking about development costs in excess of a billion dollars per drug. If we are talking about approaches, if we’re talking about innovation, we’re talking about software, we’re talking about AI.

‘That adds value, that helps reduce cost and risk in the process. Well then it has significant commercial value. And the thing about capitalism that’s often misunderstood is that only if something has commercial value – only if its value is recognised – will it attract capital to support it.’

To bolster his argument, Legge highlighted the pioneering work performed at the Karolinska Institute in Stockholm developing algorithms that can identify the potential for adverse drug reactions in patients.

Said Legge: ‘If you look globally, millions of people are dying every year from adverse drug reactions – it’s among the top 10 causes of death. That’s completely avoidable. Again, it’s down to the question of how we get this software, these algorithms that have been developed and that have been proven, in a form that can be embedded and implemented into existing legacy systems.’

Terminator trepidation

The ‘rise of the robots’ conjures up dystopian images of hyper-intelligent machines out-thinking humans, then maliciously turning on their creators. Think Terminator meets I, Robot.

Sci-fi author Isaac Asimov’s Three Laws of Robotics mandate that robots must obey humans and do no harm. But fears persist that artificially intelligent devices pose a direct threat to humanity. And in healthcare, that threat can be framed as an existential one.

John Murray, Academic Dean, Sunderland University, gave the debate added impetus by exploring how we can reassure people that AI and machine learning can solve many of the world’s healthcare problems while remaining aware of the myriad ethical issues such applications raise.

‘With doctors, they talk about patients and problems and diseases. They’re going through thought processes and they’re explaining their thought processes with each other. They’re explaining how they come to a decision or how they come to a conclusion, or how and why they think a particular disease might have happened, or what the solution might be. With AI and machine learning, we put all our data into the box and we get an answer out – but none of what’s going on inside one of those [human] decisions.’

Murray claimed that teaching physicians about artificial intelligence and its benefits is a more compelling proposition than having them regard it as yet another system to learn. ‘With their feedback we can improve our systems and platforms. I think that’s very critical.’

The theoretical physicist Stephen Hawking was worried about the ‘singularity’, a hypothetical point in time at which technological growth becomes uncontrollable and irreversible, resulting in unforeseeable changes to human civilisation.

Murray counters this anxiety with down-to-earth healthcare needs. ‘You need AI and people and doctors – and they need to work in symbiosis together to help us understand the data. AI can manage and understand and interpret data at much faster rates than any human could ever do, and it helps us inform our decisions rather than potentially make those decisions for us. It’s not going to replace us, I think, anytime soon.’

The Sunderland academic added a worrying caveat, though: litigation.

‘There’s also the other side of the ethics as well. And that’s litigation, which I think is a real challenge of this problem. In recent years I’ve worked with some big UK medical organisations looking at AI solutions to problems. And the question I always get is from their legal team. And that is: what if it makes the wrong decision? What if it gives us advice that is defective? We’re still in the infancy of this technology in a lot of areas and it’s still stuff we have to work through and understand as a society.’

Throwing yet another spanner in the works, Privitar’s Taysom underscored the absence of time factors from machine learning’s processes. ‘Ask machine learning people how they deal with ‘autocorrelation’. Most of them look at you utterly mystified. They don’t know what you’re talking about.

‘But it’s a fact that time as a variable is dealt with broadly in machine learning as if it was any other variable. But how can you possibly expect to determine cause and effect unless you recognise that there is a time component to that expression?’
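Autocorrelation – the tendency of a time series to correlate with its own past values – is easy to demonstrate. The sketch below (hypothetical data, illustrative only) computes the lag-1 autocorrelation of two toy series; a model that shuffles its rows and treats time as ‘just another variable’ discards exactly this structure.

```python
# Illustrative sketch: sample autocorrelation of a series at a given lag.

def autocorrelation(series, lag=1):
    """Sample autocorrelation of `series` at the given lag."""
    n = len(series)
    mean = sum(series) / n
    var = sum((x - mean) ** 2 for x in series)
    cov = sum((series[i] - mean) * (series[i + lag] - mean)
              for i in range(n - lag))
    return cov / var

# A trending series: each value depends strongly on the previous one.
trend = [float(i) for i in range(20)]
print(round(autocorrelation(trend, lag=1), 2))   # 0.85: strong dependence

# An alternating series: adjacent values move in opposite directions.
zigzag = [(-1) ** i for i in range(20)]
print(round(autocorrelation(zigzag, lag=1), 2))  # -0.95
```

Both series would look identical to a model that ignores ordering, which is the cause-and-effect blind spot Taysom describes.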

Taysom did, however, concede a proven benefit to machine learning in healthcare. One of his UK firms had just secured FDA approval for the application of machine learning to the diagnosis of small node cancer of the lung. ‘So that’s an example where an application of machine learning can directly impact economics, both reducing the impact on the patient – and also on society.’

Just goes to show that even sceptics can have a change of heart.

Roundtable participants were:

Gil Bashe, Managing Partner and Chair Global Health at Finn Partners
Simon Legge, MD at Tyson & Blake
John Murray, Academic Dean, Sunderland University
Aman Bhatti, Senior Vice President, Global BioPharma, AliveCor
John Taysom, NED & Co-Founder, Privitar
Dr Ofer Sharon, CEO, OncoHost

The full debate can be found at:

AI & Big Data – A Bright New Diagnosis? Roundtable – YouTube