Seven Takeaways from a New Guidance on AI Implementation in Radiology

Researchers discuss key parameters for the assessment, implementation and post-implementation monitoring of emerging artificial intelligence (AI) tools in radiology practices large and small.

Recognizing the potential benefits and daunting challenges of incorporating artificial intelligence (AI) models into radiology workflows, researchers have published a guidance that examines four key questions on evaluating and implementing AI-enabled tools in practice.

In the recently published Radiology article, the authors discuss the makeup of an AI governance and management structure, pertinent factors for assessing AI-powered imaging tools, parameters for implementation into practice, and post-implementation monitoring of AI tools for effectiveness.

Here are seven key takeaways from the guidance.

1. Radiology AI algorithms currently account for the largest share of AI models cleared by the Food and Drug Administration (FDA), according to the article authors. Not only are radiologists represented in organizational AI governance structures, the authors pointed out, but radiology groups are also leading the implementation of AI imaging at many health-care institutions.

2. In describing the makeup of an AI governing body, the authors said that those defining priorities, strategies, and the scope of evaluating and implementing AI models at larger institutions may include leadership from multiple imaging departments, electronic health record (EHR) managers, information technology (IT) management, a legal representative, AI experts, and institutional administration liaisons. However, the involvement of end users of the AI software is particularly critical, according to the article authors.

“Incorporating end users into the governance structure is of utmost importance to consider their needs and concerns about an (AI) algorithm and to include (them) into the decision-making process,” wrote study co-author Curtis Langlotz, M.D., Ph.D., the director of the Center for Artificial Intelligence in Medicine and Imaging at Stanford University, and colleagues.

(Editor’s note: For related video content, see “Assessing and Implementing Artificial Intelligence in Radiology” and “Essential Questions for Assessing Artificial Intelligence Vendors in Radiology.”)

3. Determining who pays for the installation and maintenance of AI models is an important consideration for radiologists in community hospital settings. While the AI models would be monitored by the radiology department, the authors maintained that radiologists need to emphasize how a given AI model benefits the overall health system as well as radiologists in order to prevent the costs from being fully absorbed by the radiology department.

“If a health system bears the financial burden for AI, radiologists must develop the value proposition for each model. If models are seen as only improving radiologist efficiency or accuracy, then the radiology group may be asked to bear some or all of the financial cost,” noted Langlotz and colleagues.

4. Does the data set used in the development of the AI model align with clinical use in your practice’s patient population? The article authors emphasize assessing the inclusion and exclusion criteria used in model development for potential bias. Another important consideration is whether there has been external validation of the AI model. To gauge the accuracy and reproducibility of the AI model, Langlotz and colleagues suggested that institutions test the model prior to incorporating it into daily use.

5. For risk assessment of AI tools, the study authors recommend a risk categorization framework issued by the International Medical Device Regulators Forum (IMDRF), which considers the nature of intended use cases as well as the potential impact of the software upon clinical decision-making. For lower-risk AI algorithms, such as those that aid in worklist prioritization, the IMDRF system can also be beneficial, according to Langlotz and colleagues.

6. Ensuring appropriate data security is a key aspect of implementing AI tools into practice. Noting the use of data maps to convey data location, flow, and encryption status for projects, the article authors emphasized appropriate data security and sharing practices when vetting AI tools. Langlotz and colleagues added that those assessing AI tools should be especially vigilant with cloud-based models and those developed internally at an institution for clinical use.

“This detailed analysis and accountability for data security and robust governance of data use are vital steps toward ensuring patient protection and public trust,” explained Langlotz and colleagues.

7. Acknowledging that post-implementation monitoring of AI can be challenging for smaller practices and those without dedicated informatics support, the article authors suggest using automatically populated registries to ease the monitoring burden and looking for AI vendors that provide monitoring data capture in their products.
