Artificial intelligence (AI) is nothing new to the healthcare industry; many organizations and clinicians have utilized such tools in some capacity for years. Imaging-related AI to support radiologists, to use one example, is not uncommon. More recently, however, there has been a marked increase in interest in the use of such tools in healthcare (and across all industry sectors), including generative AI—i.e., technology that creates new output based on existing data—and the range of uses continues to expand. AI can create potential efficiencies in care delivery and administrative activities and open new touchpoints for patient engagement. For instance, in addition to serving as a clinical decision support tool for practitioners, AI tools can act as virtual assistants for practice management and provide interactive symptom checkers for use by consumers. AI tools also have the potential to significantly improve healthcare outcomes, such as by enabling earlier detection of a disease or condition. More generally, it is likely that at least some individuals in every organization's workforce have tried ChatGPT since its launch in late 2022 for research or for drafting content as part of their responsibilities. All of this innovation makes for an exciting time in healthcare, but the opportunities it presents must be balanced with efforts to mitigate risk.
Key healthcare regulatory risk areas
In addition to the key risks lawmakers and the media tend to emphasize—including the risk of bias, compliance with general data privacy laws, and lack of transparency concerning the algorithms used—existing healthcare laws and regulations applicable in other contexts also apply to AI. A summary of some key risk areas follows:
- Patient privacy laws: Developing an AI solution requires significant amounts of data. Before using or disclosing any patient information for this purpose, a provider organization should identify the applicable laws that protect the information and confirm that its intended use or disclosure is permitted. Potentially applicable laws and regulations include, without limitation, HIPAA, 42 C.F.R. Part 2, and state patient privacy laws.
- Unlicensed practice of medicine: Any AI solution utilized to inform clinical care must maintain the practitioner's role as the ultimate decision-maker concerning diagnosis and treatment. Technological solutions can be tremendously useful aids to the practice of medicine by quickly analyzing data, identifying patterns, and providing potential recommendations. However, it is critical that practitioners using such solutions limit their application of the tools to support, not supplant, their independent clinical judgment.
- Medical malpractice and related risk: Relatedly, medical malpractice and other tort claims are also a risk where a patient experiences an adverse outcome and the practitioner utilized AI as part of the decision-making process. Allegations could arise that the practitioner deviated from the standard of care by relying on the AI tool. Of note, such allegations could also extend to the provider organization (e.g., a claim that an employer is vicariously liable or otherwise engaged in negligent or tortious conduct in deploying the tool for use by its workforce).
- Billing compliance: Governmental and private payers each have established requirements that must be met for a particular service to be reimbursable. Practitioners may determine that incorporating an AI tool into the process allows different categories of individuals to provide the service or reduces the level of physician supervision required; however, if current payer requirements impose more stringent standards, those requirements should still be followed.
- Anti-kickback laws: The Office of the National Coordinator for Health Information Technology (ONC) recently raised the concern that remuneration to a developer of AI could implicate the federal Anti-Kickback Statute.[1] An example would be a pharmaceutical manufacturer that provides remuneration to a developer to build the AI in a manner that recommends ordering a particular drug as part of a treatment care plan. The U.S. Department of Health and Human Services (HHS) Office of Inspector General has previously flagged similar risk with respect to electronic health record vendors, and its concerns could equally apply in the context of AI.[2]
- Addressing bias and unlawful discrimination: Various government agencies are particularly concerned about AI producing outcomes that result in unlawful discrimination under existing laws, as reflected in a Joint Statement issued in April 2023 affirming a commitment to enforce existing laws to promote "responsible innovation."[3] HHS, in notice-and-comment rulemaking in 2022 to amend the regulations implementing Section 1557 of the Affordable Care Act, proposed a new regulatory provision that would prohibit a covered entity from discriminating against an individual "on the basis of race, color, national origin, sex, age, or disability through the use of clinical algorithms in its decision-making."[4] Echoing themes addressed elsewhere in this article regarding the use of independent clinical judgment, HHS emphasized that, under the proposed rule, a covered entity could be held liable for decisions made in reliance on a clinical algorithm—even if the covered entity did not itself develop the algorithm.[5]
Additional regulatory oversight on the horizon
Efforts to adopt a regulatory framework specific to AI (and, in some cases, specific to AI in healthcare) are underway. According to the National Conference of State Legislatures, AI-related legislation is pending in multiple states, some of which would apply not just to developers of the technology but also to its deployers and users.[6] On the federal side, in April 2023, ONC released a proposed rule that would, among other things, establish standards applicable to predictive decision support interventions that must be met to obtain ONC Health IT Certification.[7] The proposed rule would implement standards intended to ensure "FAVES" (fair, appropriate, valid, effective, and safe) solutions.[8] Though it is only a proposed rule and, if finalized, would apply to a voluntary certification program, it could further the establishment of an industry standard and potentially lead other agencies to mandate that developers comply with these requirements. Given the exponential growth of interest in AI in the media and the focus by federal and state lawmakers alike, there will likely be additional developments in efforts to regulate AI in healthcare by the time this article is published.
Considerations to help mitigate risk
Provider organizations interested in utilizing AI should consider implementing safeguards to help mitigate compliance risk. A few suggested considerations follow, though whether and to what extent an organization takes each of these actions may vary depending on its operations and needs.
- Develop a workforce policy on the use of generative AI. Organizations might consider adopting a policy that sets clear expectations regarding the use of AI by workforce members. In addition to a general workforce policy, organizations might adopt clinical policies on the responsible use of AI. For example, an organization could adopt a broad policy emphasizing the practitioner's role, the need for independent clinical judgment, and the requirement that the practitioner document their involvement in clinical decision-making.
- Interdisciplinary team. Organizations might consider forming a committee of various stakeholders to set the organization's policy on the use of AI and otherwise determine an approach for oversight of the deployment of AI tools within the organization.
- Tracking regulatory developments. As referenced earlier, the coming months and years are expected to bring rapid developments in laws, regulations, and guidance at the federal and state levels as lawmakers and regulatory agencies work to keep pace with the technology. Organizations are well-advised to monitor such developments.
- Use of existing resources. The National Institute of Standards and Technology (NIST) offers a range of tools that can aid organizations in utilizing AI technology. The NIST AI Risk Management Framework, in particular, is a useful starting point for assessing risk. Trade associations often publish helpful guidance as well, such as the Consumer Technology Association's AI standards (e.g., CTA 2107-A, "The Use of Artificial Intelligence in Health Care: Managing, Characterizing, and Safeguarding Data") and the American Medical Association's "Augmented Intelligence in Medicine." In areas such as AI, where a regulatory framework is actively developing, materials from respected trade associations are particularly useful where they represent the industry's efforts to self-regulate.
- Contracting as a risk mitigation tool. Organizations should consider what contractual covenants, representations, and warranties they may want to seek from AI developers, in addition to the more standard negotiation of key terms such as insurance, indemnification, software updates, service-level commitments, and data use and restrictions. For instance, seeking representations that incorporate the "FAVES" principles may be helpful. In addition, organizations should engage in due diligence regarding the AI solution and consider piloting it on a trial basis before making a larger contractual commitment.
- Risk management via insurance coverage. Before utilizing a new AI solution, organizations should consider reviewing the scope of their existing insurance policies to understand the parameters of their coverage under each carrier's policy.
Conclusion
AI innovations in healthcare delivery prompt compliance changes that will permeate provider organizations' operations. Using this article's roadmap to think through potential compliance risk areas will help jumpstart provider organizations' preparations for the coming expansion of AI utilization.
Takeaways
- Opportunities for innovation involving artificial intelligence (AI) should be balanced against risks.
- Many healthcare laws with broader applications equally apply to AI.
- Risks should be assessed under such laws when considering a new AI tool.
- Efforts are underway to regulate AI in healthcare at federal and state levels.
- Organizations should consider adopting policies and implementing other safeguards.
*Amy Joseph and Jeremy Sherer, Partners at Hooper, Lundy & Bookman P.C., Boston, MA.
1 Health Data, Technology, and Interoperability: Certification Program Updates, Algorithm Transparency, and Information Sharing, 88 Fed. Reg. 23,746, 23,777 (Apr. 18, 2023), https://www.govinfo.gov/content/pkg/FR-2023-04-18/pdf/2023-07229.pdf.
2 U.S. Department of Health & Human Services, Office of Inspector General, “General Questions Regarding Certain Fraud and Abuse Authorities,” FAQ no. 6, https://oig.hhs.gov/faqs/general-questions-regarding-certain-fraud-and-abuse-authorities/.
3 U.S. Consumer Financial Protection Bureau et al., “Joint Statement on Enforcement Efforts Against Discrimination and Bias in Automated Systems,” accessed June 6, 2023, https://www.ftc.gov/system/files/ftc_gov/pdf/EEOC-CRT-FTC-CFPB-AI-Joint-Statement%28final%29.pdf.
4 Nondiscrimination in Health Programs and Activities, 87 Fed. Reg. 47,824, 47,880–47,884, 47,918 (Aug. 4, 2022), https://www.govinfo.gov/content/pkg/FR-2022-08-04/pdf/2022-16217.pdf.
5 Nondiscrimination in Health Programs and Activities, 87 Fed. Reg. at 47,880–47,884, 47,918.
6 National Conference of State Legislatures, “Artificial Intelligence 2023 Legislation,” updated July 20, 2023, https://www.ncsl.org/technology-and-communication/artificial-intelligence-2023-legislation.
7 Health Data, Technology, and Interoperability: Certification Program Updates, Algorithm Transparency, and Information Sharing, 88 Fed. Reg. at 23,777.
8 Health Data, Technology, and Interoperability: Certification Program Updates, Algorithm Transparency, and Information Sharing, 88 Fed. Reg. at 23,780.