Use of artificial intelligence (AI) in the aging field continues to bring both promise and concern. New integrations aim to ease staffing woes, and thought leaders outline ways that AI can succeed. Meanwhile, other experts pinpoint aspects of AI to approach with caution. This roundup shares several recent AI-related news items highlighted in McKnight’s Senior Living.
AI to Ease Documentation and Relieve Health Care Workers
To offer health care workers relief from processing electronic paperwork, and ultimately to help address the workforce crisis, Teladoc Health is creating integrations that automate clinical documentation for virtual and in-person exams. Microsoft Azure OpenAI Service, Azure Cognitive Services, and the Nuance Dragon Ambient eXperience™ (DAX™) will be integrated into the Teladoc Health Solo™ platform. The technology will capture notes during patient appointments and is intended to increase the amount of time health care providers can spend with patients, according to a Teladoc statement.
“We are focused on using AI to reassert and build the doctor-patient relationship at a time when technology frequently does the opposite,” said Dr. Vidya Raman-Tangella, chief medical officer at Teladoc Health. “Teladoc Health and Microsoft AI team up for automated documentation” shares more details.
Measuring AI Success
A Harvard economics professor has outlined five benchmarks for measuring the success of AI in the health care field. A recent article in JAMA Health Forum addresses AI’s role in the following:
- Taking on administrative tasks;
- Complementing rather than supplanting clinicians who can read patients’ cues;
- Reducing health care costs related to monitoring, diagnosis, and staffing;
- Rising above human error and biases; and
- Recognizing the limits of machine learning.
See more in “New analysis identifies way AI can be a tool, not a crutch, for physicians.”
Using AI Carefully
As experts continue to urge caution in using AI applications, the Association for Computing Machinery has announced guidelines for AI development that are relevant to the care of older adults, says “Latest AI guidelines aimed at averting ‘catastrophic harm.’” The guidelines include actions that protect older adults, such as allowing a person to opt out so that their personal data is not used to train a system. The guidelines also call for strengthened security and privacy protections.
These concerns come as new research, profiled in a recent Tech Time article, finds that older adults are hesitant about using AI in primary care and therapy.
Underscoring these concerns, a recent study published on the JAMA Network found that nearly 20% of medical devices marketed as artificial intelligence- or machine learning-enabled were not approved for such use by the U.S. Food and Drug Administration (FDA). Researchers manually reviewed 119 FDA 510(k) applications and the devices’ marketing materials, identifying 12.6% as discrepant and 6.7% as contentious. “Some AI tools touting uses not approved for by FDA, study finds” has more details.