Practicing physicians are faced with the need to make decisions and recommendations constantly and quickly throughout the day. They assess clinical situations, try to identify a coherent picture of the case at hand, compare that picture to the pattern of similar cases from experience and didactics, and come up with a proposed treatment plan. Many, many times every day.
copyright by www.cio.com
With time, and under the pressure to perform, numerous mental shortcuts can develop, often unconsciously. Learned paradigms are used as shortcuts – information-processing rules referred to as heuristics – and they are helpful in moving quickly through cognitive work all day long. However, a number of cognitive biases can also emerge and lead clinicians to erroneous conclusions that are often recognized only in retrospect.
There are many kinds of cognitive biases that affect clinical decision-making, just as they affect other fields that require rationality and good judgement. A few are especially common in clinical medicine and are worth describing, in order to see how we might build supportive information systems that help overcome them:
1. Availability heuristic
Diagnosis of the current patient biased by experience with past cases. Example: a patient with crushing chest pain was incorrectly treated for a myocardial infarction, despite indications that an aortic dissection was present.
2. Anchoring heuristic
Relying on initial diagnostic impression, despite subsequent information to the contrary. Example: Repeated positive blood cultures with Corynebacterium were dismissed as contaminants; the patient was eventually diagnosed with Corynebacterium endocarditis.
AI systems can minimize these biases
Conceptually, an Artificial Intelligence (AI) system can overcome these cognitive biases and deliver personalized, evidence-based, rational recommendations in real time to clinicians (and patients) at the point of care. In order to do this, such a system would need to consider all the data about the patient – current complaints, physical findings, other co-morbid conditions present, medications being taken, allergies, and lab and imaging tests done over time. In short, the automated system would need to take into consideration all the things clinicians use to make a recommendation.
Once the data about the individual is gathered, it is compared with the experience derived from a large base of clinical data in order to match patterns and predict outcomes. A differential diagnosis (the different possible diagnoses at the time of observation, in order of probability) can be created, further testing to better distinguish between the possibilities can be suggested, and a treatment plan can be proposed – without the cognitive biases that currently dog healthcare delivery. […]
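To make the idea concrete, here is a minimal, purely illustrative sketch in Python of how a ranked differential diagnosis could be produced from a patient's findings. It assumes a naive-Bayes-style scoring over a hand-made knowledge base of priors and finding likelihoods; the numbers, the `KNOWLEDGE_BASE` table, and the `differential` function are all hypothetical, and a real system would learn such values from a large clinical data set and handle far richer inputs.

```python
# Illustrative sketch only (hypothetical numbers, not a clinical tool):
# rank candidate diagnoses by matching a patient's findings against
# per-diagnosis likelihoods, naive-Bayes style.
from math import log, exp

# Hypothetical prior probabilities and P(finding | diagnosis) values.
KNOWLEDGE_BASE = {
    "myocardial infarction": {
        "prior": 0.05,
        "findings": {"crushing chest pain": 0.8, "unequal arm blood pressure": 0.05},
    },
    "aortic dissection": {
        "prior": 0.005,
        "findings": {"crushing chest pain": 0.6, "unequal arm blood pressure": 0.5},
    },
}

def differential(patient_findings: list[str]) -> list[tuple[str, float]]:
    """Return diagnoses ranked by normalized probability-like score."""
    scores = {}
    for dx, info in KNOWLEDGE_BASE.items():
        log_score = log(info["prior"])
        for finding in patient_findings:
            # Findings missing from the table get a small default likelihood.
            log_score += log(info["findings"].get(finding, 0.01))
        scores[dx] = exp(log_score)
    total = sum(scores.values())
    return sorted(((dx, s / total) for dx, s in scores.items()),
                  key=lambda pair: pair[1], reverse=True)

if __name__ == "__main__":
    for dx, p in differential(["crushing chest pain", "unequal arm blood pressure"]):
        print(f"{dx}: {p:.2f}")
```

With those two findings, the sketch keeps aortic dissection prominently in the ranking rather than collapsing onto the single most familiar diagnosis – the kind of bias-resistant weighing of all the evidence described above.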
read more – copyright by www.cio.com
Thank you for reading this post, don't forget to subscribe to our AI NAVIGATOR!