
By Liza Hoffman, Sr. Program Manager, and Emily Haynes, Policy Manager
Artificial Intelligence (AI) and Machine Learning (ML) applications are woven into many parts of our daily lives, including health care. For years, AI-powered tools have supported behind-the-scenes work like billing, claims processing, prior authorization, appointment scheduling, and even patient engagement. Tools coming on the market have the potential to reduce administrative burden for health centers and improve quality performance, both of which matter deeply to health centers, where every resource counts and effective care coordination is central to success. At C3, we are encouraged by the potential that AI has to make primary care better, but we also recognize the challenges that make it important to move forward together.
AI tools are becoming available outside of large academic medical settings. Today, health centers are exploring solutions that can lighten the load for staff and improve patient-provider dynamics, such as AI scribes and in-basket management tools. Health centers are also interested in ways to optimize reimbursement, accurately capture patients’ risk profiles, and prevent claims denials using AI-powered prior authorization and coding tools. However, AI is only as effective as the human judgment behind it. Without careful oversight, well-intentioned tools can produce unintended consequences, including the introduction of bias into processes and clinical data. Having a “human in the loop,” meaning that a person must be part of the decision process for a tool’s AI-generated outputs, remains essential. Manufacturers regularly state that tools like AI scribes are only about 90% accurate, leaving significant room for patient harm without appropriate human oversight.
AI and ML tools intended for clinical use are regulated by the FDA, which requires evidence of safety and efficacy, and CMS has issued guidance regarding claims and prior authorization applications. Federal oversight has shifted, but new partnerships are aiming to promote more equitable distribution of AI-powered tools to providers with limited resources to acquire and scale expensive and complex solutions.
As health centers evaluate AI options, costs, ethics, and the impact of these tools on patient care are important considerations. Many of our member health centers have shown a clear desire to invest in AI solutions and are even using grant funding from C3 to pilot and expand the use of AI tools.
What is vital is that these tools truly improve patient care and reduce administrative work without creating new costs and burdens. The price of AI tools is decreasing as the market becomes more competitive, but the costs associated with vendor evaluation, implementation, and monitoring remain substantial. By working together, health centers can leverage collective knowledge when creating AI policies, selecting vendors, and implementing AI governance infrastructure, helping them avoid potentially costly mistakes.
For this reason, we are pleased to announce that C3 has joined the Health AI Partnership (HAIP) as Corps Members. HAIP is a multi-stakeholder initiative composed of 35+ healthcare delivery organizations, ecosystem partners, and federal agencies across the U.S. whose mission is empowering healthcare professionals to use AI effectively, safely, and ethically through community-informed, up-to-date standards. Our partnership with HAIP will connect us with a network of thought leaders, researchers, and community advocates to ensure health centers’ voices are elevated in national AI conversations. Together, we’ll shape policy and industry best practices and bring resources that help health centers build capacity and adopt AI solutions responsibly.
At C3, we believe the future of AI in health care should be equitable, ethical, and collaborative, and that health centers deserve a seat at the table as this revolution unfolds.