The Virtual Assistant uses an LLM to improve patient processes and treatment outcomes and to promote healthcare equality. To manage risk, meet clinical, governance, and security requirements from the outset, and deploy the first LLM version rapidly, the solution is implemented in two stages. The first LLM version focuses on a use case for thyroid cancer patients in the ENT department at Rigshospitalet.

 

SwissCognitive Guest Blogger: Neil Oschlag-Michael – “Virtual Assistant Initiative (VAI) – Danish Hospital LLM Use Case”


 

The benefits of LLMs were already recognized when the Virtual Assistant initiative was conceived in the early summer of 2023, but concerns about their safety remained. Existing, publicly available LLMs were trained on generic data, and there was no certainty that they could correctly answer specific questions from specific patient groups about specific hospital procedures. This led to the VAI initiative, in which the Department of Otorhinolaryngology, Head and Neck Surgery & Audiology (ENT) and the Innovation Center at Rigshospitalet collaborated with 2021.AI to implement an LLM named the Virtual Assistant for patients, or Virtual Assistant for short.

Located in Copenhagen, Rigshospitalet is the largest and most specialized hospital in Denmark, serving 75,000 inpatients each year. As a critical center for teaching and research, Rigshospitalet prioritizes innovation in a continual effort to provide world-leading healthcare. The ENT department at Rigshospitalet is the largest university department in Denmark and is also the center for surgical head and neck oncology. Rigshospitalet’s Innovation Center has capabilities and competences in design, innovation, anthropology, business development, communication, and strategic partnerships. It has experience in clinical innovation across all phases and accelerates the development of new healthcare solutions that create value for patients. 2021.AI is a Danish company specializing in AI and AI governance. It helps companies around the globe accelerate their AI adoption by delivering the three key components that any organization needs to manage, implement, and run AI systems successfully: models, platform, and governance.

The initiative was partly funded by the Danish Life Science Cluster, and one of the first requirements was to complete the project rapidly: within 10 weeks. To meet this deadline, manage risk, and ensure compliance with clinical, governance, and security standards from the outset, the project team adopted a two-stage approach. In the first stage, a 10-week project would deploy and test an LLM and document requirements for the full implementation project in the second stage.

The use case selected for the project was designed to meet several aims: to improve patients’ experiences, to improve hospital processes and outcomes, and to promote equality. It entailed using an LLM to answer questions from patients, with the initial scope limited to thyroid cancer patients for whom an operation was planned in the ENT department. There are 400 suspected thyroid cancer patients per year, of whom approximately 150 undergo thyroid surgery.

Thyroid cancer is a serious and deadly disease, where early and correct treatment is crucial for patients’ survival and quality of life. One challenge within the healthcare sector is that many patients, including thyroid cancer patients, have a significant need for information, not least to prepare properly for operations. Hospitals provide this information, but some patients struggle to understand it, and preparing for an operation is challenging when factors such as fasting or medication can be crucial. Currently, patients with questions or doubts can contact the hospital, but there is no 24/7 response service. Patients with dyslexia, who make up 6-8% of all social classes and a higher percentage of socially disadvantaged groups, are at an added disadvantage. In the worst case, they do not receive a response in time and cannot prepare fully. This can result in cancellations: an estimated 6-10% of these operations currently need to be postponed, affecting patients’ experiences, potentially worsening their condition, and adding to an already burdened hospital workload.



To address these issues the project would develop a Virtual Assistant, which would use Rigshospitalet’s procedures and documents, as well as other credible sources, to answer pre-operative questions from thyroid cancer patients. The Virtual Assistant would respond to questions about the treatment process and provide patients with responses to issues they may not have understood or had the opportunity to raise during pre-operative consultations. In doing so it would also promote equity in healthcare, improve treatment readiness and improve outcomes.

There were four main LLM requirements. The LLM must base its answers only on approved input data, which consisted of documents provided by the hospital for this purpose. The LLM must answer questions correctly based on that input data. The LLM must not answer questions on certain topics, such as diagnoses or death. And the LLM must indicate when an answer cannot be synthesized from the approved input data, for example by adding “I couldn’t find any relevant information” to its response.
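
As an illustration only, these four requirements can be expressed as guardrail logic around a retrieval step. The sketch below is a minimal Python example, not the production implementation; the document snippets, blocked-topic keywords, and function names are assumptions made for this example.

```python
# Illustrative sketch of the four requirements as guardrail logic.
# Documents, keywords and function names are assumptions for this example.

APPROVED_DOCS = {
    "fasting_before_surgery": "Do not eat for six hours before the operation...",
    "medication_before_surgery": "Continue blood pressure medication unless told otherwise...",
}

BLOCKED_TOPICS = {"diagnosis", "prognosis", "death"}     # requirement 3
FALLBACK = "I couldn't find any relevant information."   # requirement 4


def retrieve(question: str) -> list[str]:
    """Naive keyword retrieval over the approved documents only (requirement 1)."""
    words = {w.strip("?.,!").lower() for w in question.split()}
    return [text for name, text in APPROVED_DOCS.items()
            if words & set(name.split("_"))]


def answer(question: str) -> str:
    if any(topic in question.lower() for topic in BLOCKED_TOPICS):
        return "I can't help with that topic. Please contact the department directly."
    context = retrieve(question)
    if not context:
        return FALLBACK
    # Requirement 2 is delegated to the LLM, constrained by this grounded prompt;
    # in the real solution the prompt would be sent to the model behind the platform.
    return ("Answer ONLY from the context below. If the answer is not there, "
            f"reply exactly: {FALLBACK}\n\n"
            "Context:\n" + "\n".join(context) + f"\n\nQuestion: {question}")


if __name__ == "__main__":
    print(answer("How long should I be fasting before surgery?"))
    print(answer("What is my prognosis?"))
```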

The solution architecture was not designed from scratch but was based on an existing solution on 2021.AI’s GRACE platform: GRACE governance for LLMs, which addresses the risks and concerns associated with the use of LLMs. For a start, GRACE could be deployed rapidly as a web service. Its microservice architecture is configurable, scalable, and designed to meet changing requirements, and it can be integrated with any public LLM that has an API or with a local open-source LLM. It offers the same functionality as leading public chat services, making the switch to the GRACE LLM easy for users, and it extends this with a governance framework to operationalize organizational policies, ethical guidelines, and regulations. GRACE provides a secure environment with role-based access control. This solution provided transparency and accelerated the project, allowing data scientists to start prompt engineering, the process of tuning a prompt to achieve the desired LLM response, early in the project.
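
As a rough illustration of this integration point (and not 2021.AI’s actual GRACE API), the governance layer can be thought of as talking to an abstract completion interface, so that the underlying model can be either a public LLM behind an HTTP API or a locally hosted open-source model. The class names, endpoint, and JSON fields below are hypothetical.

```python
# Hypothetical sketch of a provider-agnostic LLM integration point.
# Class names, endpoint URL and JSON fields are assumptions, not GRACE's API.
from abc import ABC, abstractmethod

import requests  # third-party HTTP client, used for the public-API variant


class CompletionBackend(ABC):
    @abstractmethod
    def complete(self, prompt: str) -> str:
        """Return the model's completion for a fully built prompt."""


class PublicApiBackend(CompletionBackend):
    """Any public LLM exposed over an HTTP API (endpoint and schema are placeholders)."""

    def __init__(self, endpoint: str, api_key: str):
        self.endpoint, self.api_key = endpoint, api_key

    def complete(self, prompt: str) -> str:
        response = requests.post(
            self.endpoint,
            headers={"Authorization": f"Bearer {self.api_key}"},
            json={"prompt": prompt},
            timeout=30,
        )
        response.raise_for_status()
        return response.json()["text"]


class LocalModelBackend(CompletionBackend):
    """Stand-in for a locally hosted open-source model."""

    def complete(self, prompt: str) -> str:
        return "local model response to: " + prompt[:60]


def run(backend: CompletionBackend, prompt: str) -> str:
    # Governance checks (logging, policy enforcement, access control) would
    # wrap this call in the real platform; here it is a plain pass-through.
    return backend.complete(prompt)


if __name__ == "__main__":
    print(run(LocalModelBackend(), "How long should I fast before the operation?"))
```

The point of such an abstraction is that everything above the completion interface, including the grounding and refusal logic sketched earlier, stays unchanged when the model provider is swapped.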

Project governance was ensured by anchoring the project within hospital management and coordinating closely with clinical staff, instead of managing it as a standalone IT project governed only by technical roles. This ensured that project goals and decision-making were aligned with hospital and clinical goals, and that testing was managed in close coordination with hospital and clinical staff. ENT department surgeons took part in testing, testing was extended to cover new requirements raised during the process, and testing was not limited to a technical test that only compared LLM responses with the information available in the approved source documents.
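
To make the technical side of that testing concrete, here is a hedged sketch of a simple regression check: each case pairs a patient question with phrases that must, or must not, appear in the response, based on the approved source documents. The questions, expected phrases, and the ask() stub are illustrative placeholders.

```python
# Illustrative test harness comparing responses against approved-source facts.
# Questions, expected phrases and the ask() stub are placeholders.

TEST_CASES = [
    {
        "question": "How long must I fast before the operation?",
        "must_contain": ["six hours"],        # fact from an approved document
        "must_not_contain": ["prognosis"],    # blocked topic must not leak in
    },
    {
        "question": "Can I bring my dog to the ward?",
        "must_contain": ["couldn't find any relevant information"],  # expected fallback
        "must_not_contain": [],
    },
]


def ask(question: str) -> str:
    """Stub standing in for a call to the deployed Virtual Assistant."""
    if "fast" in question.lower():
        return "You should fast for six hours before the operation."
    return "I couldn't find any relevant information."


def run_tests() -> None:
    for case in TEST_CASES:
        response = ask(case["question"]).lower()
        for phrase in case["must_contain"]:
            assert phrase.lower() in response, f"missing: {phrase}"
        for phrase in case["must_not_contain"]:
            assert phrase.lower() not in response, f"unexpected: {phrase}"
    print(f"{len(TEST_CASES)} cases passed")


if __name__ == "__main__":
    run_tests()
```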

The Virtual Assistant met the LLM requirements. It demonstrated that it could retrieve information correctly, cite sources, communicate efficiently, and decline to respond to topics tagged as sensitive. Key lessons learned include the importance of data quality, robust test procedures, and ensuring governance from the outset. LLM efficacy is intricately linked to input data quality: testing uncovered issues with outdated or ambiguous input data and the risk of potential misinterpretations in data sources. Expert feedback from surgeons and nurses provided invaluable insights for LLM performance and optimization. Governance is required to implement safe and responsible LLM systems. And while they are relevant for all successful projects, the need for a motivated team with shared goals, open communication, close collaboration, and robust “traditional” project management must be acknowledged. Without these, any project, let alone a rapid LLM implementation, would hardly succeed.

This solution differentiates itself by combining advanced technology, healthcare capabilities, AI governance, and risk management to develop a safe solution for patients and hospitals. Patients can receive the help they need, when they need it, preventing their condition from worsening. This reduces the risk of cancelling or postponing operations, improves hospital efficiency, reduces the workload, and has the potential to relieve healthcare personnel and free up resources for more complex patient needs. The solution is scalable, and its scope can be extended to include more patient groups and more use cases in more departments. Chat can be extended to include voice support, and governance requirements can be extended to include compliance with, say, the EU AI Act.


About the Author:

Neil Oschlag-Michael is a data scientist and AI strategy and governance consultant. He works with organizations to use AI effectively, efficiently, easily and not least ethically and responsibly.