MedCity Influencers, Artificial Intelligence

AI/ML: Considerations of Healthcare’s New Frontier

Although there is uncertainty and risk, the implementation of AI with the right compliance framework and infrastructure offers an exciting opportunity to transform healthcare into a new frontier with improved patient outcomes and increased efficiency.

Artificial Intelligence (AI) and Machine Learning (ML) are bringing healthcare into a new frontier, with vast potential to improve clinical outcomes, manage resources, and support therapeutic development. They also raise ethical, legal, and operational conundrums that can, in turn, amplify risk.

Where do AI and ML stand today? Go, stop, go.

2023 has brought a rollercoaster of activity, marked by tremendous advancements and a reckoning with their implications, resulting in efforts to corral unchecked expansion. After seeing the warp-speed growth in AI technology, many industry leaders called for a pause in further development of at least six months, only to see others continue capitalizing on target-rich opportunities. This push-and-pull reflects the need for thoughtfulness in AI/ML investment and use.

Activity at the governmental level is also rapidly evolving. In late 2022, the White House released a “Blueprint for an AI Bill of Rights” that guides the design, deployment, and use of automated systems, prioritizing civil rights and democratic values. On April 3, 2023, the FDA issued draft guidance to develop the agency’s regulatory framework for AI/ML-enabled device software functions. This guidance proposes an approach to ensure the safety and efficacy of AI/ML that uses adaptive mechanisms to incorporate new data and improve in real time. Given the lack of comprehensive federal legislation on AI, states have been active in developing privacy legislation. Additionally, to align on patient-centric, health-related AI standards, the Coalition for Health AI released a “Blueprint For Trustworthy AI Implementation Guidance and Assurance for Healthcare” in early April.

These accelerated developments have resulted in calls to action internationally. Italy temporarily banned ChatGPT in April and began an investigation into the application’s suspected breach of the GDPR. Spain, Canada, and France have raised similar concerns and launched investigations of their own. EU lawmakers have called for an international summit and new AI rules, including amendments to the proposed AI Act. Consequently, oversight and accountability practices for AI/ML technology are increasingly becoming a regulatory priority.

Key areas of AI growth

Legal and industry considerations

Although the goal of AI/ML technology is to offer “smarter” care, to date, the patient-provider relationship remains crucial in ensuring patients receive proper healthcare. AI’s growth in healthcare and life sciences has also brought new legal and regulatory considerations, especially in the areas of:

  • FDA and SaMD: The use of AI algorithms in, or their assistance with, clinical decision-making may bring the technology within the purview of the FDA’s regulatory authority if it meets the definition of a “medical device.” The FDA has developed a framework to regulate AI/ML-enabled medical devices and AI/ML-based technologies that qualify as “Software as a Medical Device” (SaMD). As the technology evolves and public interest grows, the FDA remains active in issuing guidance on these topics.
  • Ethics and research: As AI applications expand into the scope of services traditionally performed by licensed practitioners, questions into the unlicensed practice of medicine may be raised. The use of patient data in developing and testing AI technologies may also require informed consent and trigger IRB oversight. The need for human oversight, or the lack thereof, is likely to remain a continuing concern as AI proliferates, especially to monitor AI’s ability to generate incorrect results and cause unnecessary or incorrect care. Additionally, the malicious and unintended applications of AI, such as in biohacking, bioweapons, and the weaponization of health information, mandate careful safeguarding and proactive vigilance by all to ensure proper oversight.
  • Intellectual property and data assets: Healthcare innovators in the AI/ML space face a different IP climate, as AI/ML systems may not receive the same protections as traditional output. Copyright and patents, for example, may not attach to output that is not the work of a human author or inventor. Rights in data assets, such as the raw data and derivative data that underlie AI algorithms, also require monitoring.
  • Privacy and data rights: Healthcare privacy laws and regulations may be implicated at both the federal and state levels. Patient information may be subject to protection under HIPAA and state privacy laws, and may need to be de-identified before such data can be shared and used to develop AI/ML products. Further, consumer privacy laws and private lawsuits related to data rights indicate a basis for individuals to monitor, and potentially object to, the use of their personal data in developing AI.
  • Reimbursement and coverage: The utilization and deployment of AI by healthcare providers and entities largely depends on financial incentives, including the rate of reimbursement for new AI iterations of an innovation and whether payers will cover AI-enabled services. As the industry moves toward value-based care, AI may offer additional tools and opportunities.
  • Potential biases and inaccuracies: Despite the groundbreaking and revolutionary potential of AI/ML technologies, AI algorithms may detect patterns using human-annotated data, which could be (1) based on outdated, homogenous, or incomplete datasets and (2) susceptible to reproducing and perpetuating racial, sex-based, and even age-based biases. As a result, there is an increased focus on diversifying and expanding medical datasets to identify and mitigate these potential biases.

A pivotal moment

The tension between the push forward in AI development and the calls to hit pause has brought AI/ML growth to a pivotal moment. As industry and governments reckon with the enormous potential and risks of AI, it is paramount to track developments closely to ensure innovation is implemented in a manner that accelerates societal benefit while mitigating unintentional harms.

Despite the uncertainty and risk, implementing AI with the right compliance framework and infrastructure offers an exciting opportunity to transform healthcare, improving patient outcomes and increasing efficiency along the way.

Photo: ipopba, Getty Images

Sara Shanti is a partner in the Corporate Practice Group in Sheppard Mullin’s Chicago office. Sara’s practice sits on the cutting edge of health-tech, providing practical counsel to clients building novel innovations in healthcare and advising clients on complex data privacy matters. Sara represents a broad range of clients, including providers, payors, and technology companies, in healthcare regulatory compliance matters related to next-gen technology, including artificial intelligence and machine learning (AI/ML), augmented and virtual reality (AR/VR) and metaverses, implantable and wearable devices, innovation centers, and telehealth. Prior to private practice, Sara worked for the U.S. Department of Health and Human Services, Office for Civil Rights.

Arushi Pandya is an associate in the Governmental Practice in Sheppard Mullin’s Washington, D.C. office, and a member of the Healthcare and FDA Regulatory industry teams. Arushi advises healthcare and life sciences clients on a variety of regulatory compliance matters, including data privacy, telemedicine and digital health, fraud and abuse, licensure, scope of practice, and pre- and post-market FDA requirements for drugs and devices. She interned at St. Jude Children’s Hospital, the American Health Law Association, and Decent, Inc. during her time in law school. Arushi received her J.D. as well as her B.S.A. in Biology and B.A. in Plan II Honors from the University of Texas at Austin.

Elizabeth Nevins is an associate in the Corporate Practice Group in Sheppard Mullin’s Dallas office. Elizabeth earned her J.D. from SMU Dedman School of Law in Dallas, Texas, where she graduated with honors. During law school, she served as an Articles Editor for the SMU Law Review, was named a Tsai Scholar for the Tsai Center for Law, Science and Innovation, volunteered for the COVID-19 legal helpline, and worked as a research assistant studying the intricacies of bioethics, healthcare regulation, and health inequities. She received a B.S. in Biomedical Sciences with a minor in Spanish from Texas A&M University, where she also graduated with honors. Prior to law school, Elizabeth worked at UT Southwestern Medical Center in the Biochemistry department as a research technician utilizing genetic modification techniques to help study metabolic processes and discover pro-neurogenic chemicals to tackle neurodegenerative diseases such as Alzheimer’s, Parkinson’s, and ALS.
