The use of artificial intelligence (AI) in health care and aging services presents significant opportunities. At the same time, because AI systems can cause harm when implemented, it is critical to keep watch and engage on AI’s use and oversight—points included in the Technology, Telehealth, and AI Advocacy Goals section of our 2026 Policy Platform.
That’s why, following the Department of Health and Human Services (HHS) December 2025 publication of a request for information (RFI) on accelerating the adoption and use of AI as part of clinical care, LeadingAge submitted feedback.
Our February 23, 2026, comments, authored by Scott Code, Center for Aging Services Technologies (CAST) vice president, and Nicole Fallon, LeadingAge vice president of integrated services and managed care, outline our principles for assessing AI policies, offer suggestions for limited appropriate uses for AI, and specify guardrails we feel should be adopted to protect beneficiaries and providers from AI abuses.
More specifically, we suggest approaches that level the playing field for aging services providers with limited resources by offering education and financial support to providers introducing AI. Regarding the critical topic of payment, we recommend creating incentives and payment models that account for how AI delivers value in aging services, as well as establishing clear federal guidelines on the use of AI. Our comments also emphasize the importance of setting guardrails that bring clarity, responsibility, and accountability to AI use—including requiring human health care professionals to review AI outputs.
From LeadingAge’s perspective, AI holds meaningful promise in clinical care when it is:
- Supportive, not determinative.
- Person-centered, not average-based.
- Transparent, explainable, and appealable.
- Focused on efficiency, access, and quality—not cost avoidance.
LeadingAge also notes that involving aging services providers in AI innovation for Medicare beneficiaries is essential, as these providers can contribute insights from their interactions with older adults as well as critical longitudinal data.
Barriers to AI Innovation in Aging Services
LeadingAge presented several adoption barriers that can slow innovation and reduce implementation opportunities in aging services.
These include the lack of standardized data and interoperability across payers, electronic health record vendors, and care settings. Federal direction governing AI use in health care and clinical decision-making is uncertain, and federal entities’ oversight responsibilities are unclear.
Due in part to the historical exclusion of long-term care providers from federal health IT incentives under the HITECH Act, providers face limited staff and financial resources to integrate AI tools. In addition, reimbursement and incentive structures are misaligned with how AI delivers value in aging services.
Recommendations to Reduce Barriers
To address these barriers and enable responsible AI innovation and adoption, LeadingAge recommends HHS take these actions:
- Establish federal standards and guardrails for AI use in health care.
- Clarify oversight roles across federal agencies.
- Update provider regulations and licensing to establish appropriate AI use, giving priority to human clinical judgment and patient-centered decision-making.
- Provide financial support that enables aging services providers to invest in health IT infrastructure, interoperability, and AI tools.
- Phase in compliance with interoperability and data standards to reflect providers’ readiness levels.
- Support provider education and training.
Incentives to Encourage Effective Use of AI in Clinical Care
LeadingAge encourages HHS to establish the following incentives to encourage effective use of AI in clinical care:
- Prioritize regulatory and programmatic changes that address the foundational readiness gap facing nursing homes and other aging services providers.
- Consider creating a new certification or qualification program tailored to long-term care and senior living, paired with upfront financial support, to help providers adopt AI technologies.
- Prioritize demonstration programs and payment models that better align with how AI delivers value in aging services, with targeted financial incentives, programmatic support, and technical assistance.
- Establish clear guidance for the responsible use of non-medical device AI in clinical care, clarifying expectations related to human oversight, accountability, transparency, and documentation, while avoiding requirements that could stifle innovation or exclude smaller providers.
Additional Opportunities
The comments also outline opportunities for AI in clinical care:
- Supporting clinical decision-making and reducing administrative burden by automating routine administrative tasks and providing valuable decision support to clinicians that supplements, rather than replaces, their expertise.
- Expediting prior authorization and coverage decisions, without denying, reducing, or terminating medically necessary care.
- Supporting person-centered care planning.
- Improving care transitions and post-acute coordination.
- Providing beneficiary-facing AI decision-support tools.
- Supporting clinical oversight and compliance by the Centers for Medicare & Medicaid Services.
Recommended Guardrails
Throughout the comments, LeadingAge reiterated the importance of guardrails for AI use. Our recommendations include:
- AI tools should be tested, validated, transparent, and accurate at least 95% of the time.
- Human review of AI-influenced decisions must be part of the workflow.
- AI-influenced decisions must be appealable, and those who develop and/or deploy AI tools must be accountable for errors.