If you have paid attention to medical news lately, you have almost certainly heard about some emerging uses of artificial intelligence (AI) in medicine, including oncology. Labroots has recently covered research showing the benefits of AI for diagnosing schizophrenia, diabetes, and breast cancer. We also discussed a recent study suggesting AI could predict patients at high risk of developing breast cancer.
Machine learning has enabled technology that can identify disease patterns and predict treatment outcomes in various types of cancer, encouraging the rapid influx of AI into healthcare. The inclusion of new AI-driven components in cancer care has inevitably impacted patients. In particular, AI has affected the practices of pathology and radiology, integral pieces of healthcare for most cancer patients.
The advent of so many new uses for AI technology in medicine brings new ethical concerns for doctors and healthcare professionals. A team of medical and bioethical experts from Harvard Medical School and Dana-Farber Cancer Institute recently wrote in JCO Oncology Practice, urging medical societies and government leaders to collaborate with patients, clinicians, and researchers. The authors emphasize the importance of involving all stakeholders in developing policies and guidelines that ensure ethics and equity while maintaining a patient focus.
The piece details how patient-facing AI (PF-AI) technologies, machine learning-driven tools that interact directly with patients, interface with cancer care. PF-AI technologies can provide health and wellness advice, educate patients about their disease or treatment, or offer psychological support. Other examples of PF-AI include telehealth, remote patient monitoring, and virtual health coaching.
The authors note that while PF-AI holds significant potential to benefit patients, the field currently has few, if any, safeguards in place to protect them. The piece urges stakeholders to consider various factors, including health equity, justice, and patient autonomy.
Further, the authors suggest that rules and regulations developed to safeguard PF-AI require that all policies pay respect to human dignity by “recognizing the intrinsic worth of each individual and the respect owed to them, while emphasizing the essential role of empathy—rooted in the authentic understanding and sharing of others’ feelings.”
Additionally, the authors warn that an overreliance on AI could exacerbate the notable social stigma that already accompanies a cancer diagnosis. Because AI lacks characteristics like empathy, compassion, and recognition of cultural sensitivities, eliminating human involvement in cancer care could result in impersonal care, further diminishing patients’ dignity.
We live in an exciting time when technology and ingenuity constantly interplay with everyday life. AI and other computational technologies will inevitably continue to enhance many aspects of healthcare. Thus, stakeholders must stay ahead of the rapid implementation of AI in cancer care to ensure sufficient policies and procedures are in place to protect patients.
Sources: JCO Oncol Pract, Br J Haematol, NEJM