At this year’s Medicaid Enterprise Systems Conference (MESC) in Milwaukee, the dominant theme wasn’t just that AI is coming to healthcare. It’s that AI is already here, and it’s becoming the backbone of Medicaid modernization efforts.
Our CEO, Dr. Raj Lakhanpal, attended MESC and shared a clear and insightful summary of his experience. His post highlighted six foundational considerations for deploying Generative AI (GenAI) responsibly within healthcare environments: governance, use case clarity, data integrity, human oversight, end-user training, and workflow integration.
If you haven’t read it yet, we highly recommend checking out Dr. Raj’s post on LinkedIn. It’s an excellent high-level overview of where the conversation is heading.
Today, we’re taking that conversation a step further, looking closely at what these ideas mean in practice and how they apply to busy health executives tasked with leading complex, cross-functional AI and value-based care (VBC) transformations.
The clearest takeaway from MESC was that AI needs structure before scale. Plans must establish the right governance model before deployment begins. This includes defining access roles for PHI, building comprehensive audit logging, and ensuring that data use aligns with HIPAA and other compliance frameworks. Legal, IT, clinical, and analytics teams should all play a role in shaping the framework.
A robust governance foundation not only minimizes risk but also builds confidence across internal teams and partner organizations.
Why it matters: Governance ensures your AI initiatives meet compliance standards, avoid costly errors, and build long-term trust across stakeholders.
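To make the governance pieces above concrete, here is a minimal Python sketch of how role-based PHI access and audit logging might fit together. The role names, permitted fields, and the `AuditLog` structure are illustrative assumptions, not a prescribed design; a production system would sit on top of your identity provider and a tamper-evident log store.

```python
# Minimal sketch: role-based access to PHI fields with audit logging.
# Roles, field names, and log structure are illustrative assumptions.
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Fields each role may read; anything not listed is denied.
ROLE_PERMISSIONS = {
    "clinical": {"member_id", "diagnoses", "medications"},
    "analytics": {"member_id", "diagnoses"},   # narrower subset
    "billing": {"member_id", "claim_totals"},
}

@dataclass
class AuditLog:
    entries: list = field(default_factory=list)

    def record(self, user, role, fields, allowed):
        # Every access attempt is logged, whether it succeeded or not.
        self.entries.append({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "user": user,
            "role": role,
            "fields": sorted(fields),
            "allowed": allowed,
        })

def access_phi(record, user, role, requested_fields, audit):
    """Return only permitted fields; log every attempt, allowed or denied."""
    permitted = ROLE_PERMISSIONS.get(role, set())
    allowed = set(requested_fields) <= permitted
    audit.record(user, role, requested_fields, allowed)
    if not allowed:
        raise PermissionError(
            f"role '{role}' may not read {set(requested_fields) - permitted}"
        )
    return {f: record[f] for f in requested_fields}
```

The design choice worth noting is that denied attempts are logged before the exception is raised, so the audit trail captures misuse as well as use.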
One of the most common missteps in adopting AI is starting with the technology instead of the problem. The organizations seeing the most value at MESC were those that had defined specific, measurable use cases before launching any AI tools.
Some of the most promising applications involved streamlining Medicaid redetermination workflows, identifying performance gaps in value-based contracts, reducing preventable emergency department (ED) visits, and supporting faster contract settlement. The key was always the same: start with a contract- or outcome-driven objective, then build the technology to support it.
Why it matters: Focusing on targeted use cases maximizes ROI, accelerates implementation, and ties innovation directly to performance and contract success.
Every AI model is only as good as the data that feeds it. This was emphasized repeatedly throughout the conference. Plans dealing with siloed, inconsistent, or incomplete data often find their AI tools producing noisy, unreliable, or biased outputs.
To get reliable insights, plans must integrate and standardize data across claims, quality, utilization, risk, and social drivers of health. A strong foundation in data governance and cleansing enables better modeling, forecasting, and decision-making. Without it, even the most advanced tools will struggle to deliver value.
Why it matters: Clean, consistent data improves AI accuracy, powers better decision-making, and prevents performance blind spots across contracts and populations.
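The standardization step described above can be sketched in a few lines: each source system gets a normalizer that maps its quirks onto one common schema, and records missing required fields are dropped rather than fed to a model. The field names, date formats, and code-cleaning rules below are illustrative assumptions, not a reference schema.

```python
# Minimal sketch: normalizing claims records from two hypothetical source
# systems into one common schema before analytics.
COMMON_FIELDS = ("member_id", "service_date", "diagnosis_code", "paid_amount")

def normalize_system_a(row):
    # Assumed quirks of "System A": MM/DD/YYYY dates, amounts in cents.
    mm, dd, yyyy = row["svc_dt"].split("/")
    return {
        "member_id": row["mbr_id"].strip().upper(),
        "service_date": f"{yyyy}-{mm}-{dd}",            # ISO 8601
        "diagnosis_code": row["dx"].replace(".", ""),   # undotted ICD-10
        "paid_amount": row["paid_cents"] / 100.0,
    }

def normalize_system_b(row):
    # Assumed quirks of "System B": already close to the target schema.
    return {
        "member_id": row["member"].strip().upper(),
        "service_date": row["date"],                    # already ISO 8601
        "diagnosis_code": row["icd10"].replace(".", ""),
        "paid_amount": float(row["amount"]),
    }

def standardize(rows_a, rows_b):
    """Combine both feeds; drop records missing any required field."""
    combined = [normalize_system_a(r) for r in rows_a] + \
               [normalize_system_b(r) for r in rows_b]
    return [r for r in combined
            if all(r.get(f) not in (None, "") for f in COMMON_FIELDS)]
```

Dropping incomplete records at this stage is a deliberate trade-off: a smaller, trustworthy dataset beats a larger one that quietly feeds gaps into downstream models.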
Across both technical and operational sessions, there was strong agreement that human oversight is essential in the early stages of AI implementation. Models can identify trends or suggest actions, but domain experts need to interpret results, validate predictions, and refine workflows.
The most successful implementations are those that treat AI as an assistive tool rather than a decision-maker. This human-in-the-loop approach not only mitigates risks but also ensures that teams stay engaged and accountable.
Why it matters: Human oversight strengthens decision quality, keeps teams aligned, and mitigates the risk of errors or overreliance on unverified insights.
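The human-in-the-loop pattern above often reduces to a simple routing rule: a model suggestion is applied automatically only when it clears a confidence threshold and carries no risk flag; everything else lands in a reviewer queue. The threshold value, field names, and queue labels below are illustrative assumptions, not a prescribed policy.

```python
# Minimal sketch: route AI suggestions to human review unless they are
# high-confidence and low-risk. Threshold and fields are assumptions.
REVIEW_THRESHOLD = 0.90

def route_suggestion(suggestion):
    """Send low-confidence or risk-flagged suggestions to human review."""
    needs_review = (
        suggestion["confidence"] < REVIEW_THRESHOLD
        or suggestion.get("high_risk", False)
    )
    return "human_review" if needs_review else "auto_apply"

suggestions = [
    {"id": 1, "confidence": 0.97, "high_risk": False},
    {"id": 2, "confidence": 0.97, "high_risk": True},   # risk flag wins
    {"id": 3, "confidence": 0.62, "high_risk": False},  # low confidence
]
routed = {s["id"]: route_suggestion(s) for s in suggestions}
```

Note that the risk flag overrides confidence: even a near-certain suggestion goes to a reviewer if it touches a flagged category, which keeps domain experts accountable for the decisions that matter most.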
Deploying AI isn’t just about infrastructure—it’s about adoption. Many promising technologies fail because end users don’t understand how to interact with them or don’t trust the outputs.
Clear training, intuitive interfaces, and role-specific workflows can help bridge that gap. When insights are presented in a way that matches how users already think and work, adoption rates rise and results improve. Transparency into what the AI is doing, and what it isn’t, further reinforces trust and responsible use.
Why it matters: Training builds internal trust in AI, accelerates adoption, and ensures that the tools you invest in are actually used to drive impact.
Many attendees at MESC stressed that AI tools must integrate with day-to-day operations, not disrupt them. Teams won’t adopt platforms that force them to jump between systems or work in unfamiliar ways.
AI-powered insights should appear where decisions are already being made—whether that’s in a provider portal, a quality dashboard, or a contract modeling tool. While full EHR integration remains a longer-term goal for many organizations, aligning with existing tools and committee structures, such as Joint Operating Committees (JOCs), is a practical and powerful first step.
Why it matters: Workflow integration removes friction, improves adoption, and ensures AI outputs lead directly to action rather than sitting idle.
The shift toward AI and value-based care is accelerating. CMS has set a goal of having all traditional (fee-for-service) Medicare beneficiaries in accountable care relationships by 2030. State Medicaid RFPs are beginning to require AI-enabled capabilities. Commercial payers and employers are demanding more accountability, insight, and performance from care delivery.
Health plans and health systems that begin integrating AI into their infrastructure today will be better positioned to lead tomorrow. The opportunity isn’t just to adopt a new technology—it’s to reimagine how value-based care is managed, measured, and scaled across the entire ecosystem.
At SpectraMedix, we are actively exploring how AI can best support the value-based care ecosystem, particularly in areas where administrative cost, time, and complexity continue to create friction.
Our platform already supports many of the foundational elements needed to enable intelligent automation and insights, including contract modeling, provider performance analytics, gap tracking, and aligned provider engagement workflows. We are building on that foundation with AI-driven capabilities focused on reducing attribution delays, streamlining contract operations, and surfacing actionable opportunities more efficiently.
Our goal is not to add AI for the sake of innovation but to integrate it in ways that meaningfully enhance the value-based contracting experience. This enables organizations to spend less time and fewer resources managing complexity, and more time delivering measurable results.