President Biden recently signed an executive order establishing new safety standards and tests for AI. The action reflects concerns that, if left unchecked and unregulated, AI could pose significant risks to national security, the economy, privacy, and public health. AI is now being used in virtually every major industry, including banking and financial services, law, entertainment, marketing, and health care.
Use of AI in health care promises to improve efficiency and diagnostic accuracy, but valid concerns remain regarding patient privacy and data security, bias, and equitable accessibility. Thus, there is a need to balance innovation with safety to protect patients and promote trust.
AMA Code of Medical Ethics Opinion 1.2.11, “Ethical Innovation in Medical Practice,” calls on physicians who design and deploy medical innovations to ensure that they “act in accord with professional responsibilities to advance medical knowledge, improve quality of care, and promote the well-being of individual patients and the larger community.”
Opinion 1.2.11 recognizes that ensuring ethical practice in the design and introduction of medical innovations does not rest with physicians alone, however, and that health care institutions and the profession as a whole have significant responsibilities to uphold medicine’s defining commitment to patients. These include responsibilities to “[p]rovide meaningful professional oversight of innovation in patient care” and “[e]ncourage physician-innovators to collect and share information about the resources needed to implement their innovations safely, effectively, and equitably.”