New policies are opening the door for AI in healthcare that could meaningfully change the way we approach Medication Therapy Management.
By Golda Manuel, PharmD, MS
Earlier this year, Congress introduced the Healthy Technology Act (H.R. 238), a bill that could let AI systems prescribe medications, as long as they qualify as medical devices and are approved by individual states. The bill is still under consideration and doesn't currently specify which drugs or clinical contexts are included.1
More recently, the U.S. Senate struck a pivotal amendment from the One Big Beautiful Bill Act (OBBBA), removing a proposed ten-year ban on state-level AI regulations and preserving states' rights to shape their own AI oversight frameworks.2 At the same time, the administration announced a new policy, “America’s AI Action Plan,” which rolled back earlier federal AI-related regulations, eased requirements seen as hindering AI deployment, encouraged rapid private-sector adoption of AI, and endorsed open-source and open-weight AI models.3
The signs point to a meaningful shift toward fewer federal constraints and a growing opportunity for states to lead the way in exploring bold, responsible uses of AI in healthcare.
AI‑powered MTM platforms can now lean on clearer federal support to integrate AI into medication reviews and prescribing support.3 Timely patient identification remains the linchpin of proactive MTM, a process now poised for improvement by AI that can recommend optimal treatment plans and, in the future, initiate prescriptions when authorized.
The Healthy Technology Act (H.R. 238) would make it possible for AI tools to help with prescribing if they’re approved as medical devices and permitted by state law.1 That could be a big deal for MTM, where medication therapy review is a major, day-to-day part of supporting patients. But it also raises some tricky questions. If an AI tool suggests adjusting a medication or starting a new one, who takes responsibility for that decision? As AI takes on a bigger role in MTM, making sure there’s clear oversight and accountability will matter more than ever.
This innovation carries a blend of promise and risk. Expanding AI’s prescriptive authority could help reduce medication errors, enhance treatment personalization, and ease the burden on providers and pharmacists, particularly in high-volume care settings. Without thoughtful design and clear guardrails, however, it could introduce risks such as algorithmic bias or diminish the role of clinical judgment. These risks often come down to how the AI is developed and implemented, highlighting the need for well-built, responsibly governed AI tools.
The federal government is pushing for rapid testing and adoption of AI, especially in fields like healthcare, to encourage innovation and improve outcomes. But some critics worry this fast pace might come at the cost of transparency, making it harder to understand how AI-driven decisions are made. Meanwhile, states like New Jersey and Colorado have taken steps to protect consumers and patients by passing laws that guard against bias and discrimination in AI tools.4, 5 MTM platforms that use AI to identify high-risk patients may be among the tools most affected by these policy changes.
Inconsistent state-level rules around AI and healthcare data could disrupt interoperability, making it harder to share and integrate patient information across systems. When states take different approaches to privacy, bias, and transparency, it creates a fragmented landscape, especially tough for AI tools used for MTM. That kind of patchwork can raise compliance hurdles, slow down data flow, and get in the way of coordinated care. While states are right to tackle AI’s ethical risks, without alignment we risk losing the bigger goal of a healthcare system that’s connected, efficient, and fair for everyone.
The rise of state-level AI regulations brings new complexity to HIPAA compliance. While America’s AI Action Plan doesn't amend HIPAA itself, it opens the door for faster AI adoption, raising questions about how privacy, consent, and data-sharing standards will be enforced across jurisdictions.
Health data shared with AI platforms remains subject to HIPAA and must adhere to consent, breach notification, and minimum‑necessary principles. As states begin to introduce their own rules around patient notification, algorithmic transparency, and auditability, healthcare platforms must ensure their AI systems uphold HIPAA’s core principles. Providers should adopt robust governance, audit logs, and explainability standards.
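To make the governance practices above concrete, here is a minimal, hypothetical sketch of an audit-log entry for an AI-generated MTM recommendation. The `AIAuditRecord` schema, its field names, and the `log_ai_recommendation` helper are illustrative assumptions, not a real compliance API; note how the record carries an opaque patient reference rather than raw identifiers, in the spirit of the minimum-necessary principle, plus a plain-language rationale to support explainability and clinician review.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
from typing import Optional

@dataclass
class AIAuditRecord:
    """One auditable entry for an AI-generated MTM recommendation (hypothetical schema)."""
    patient_ref: str        # opaque reference, not raw PHI (minimum-necessary principle)
    model_version: str      # which model/version produced the recommendation
    recommendation: str     # what the AI suggested
    rationale: str          # plain-language explanation for human reviewers
    reviewed_by: Optional[str] = None  # clinician who accepted or rejected it
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def log_ai_recommendation(log: list, record: AIAuditRecord) -> dict:
    """Snapshot the record as a plain dict and append it to the audit log."""
    entry = asdict(record)
    log.append(entry)
    return entry

# Example: logging one AI recommendation for later review.
audit_log: list = []
log_ai_recommendation(audit_log, AIAuditRecord(
    patient_ref="pt-0042",
    model_version="mtm-model-1.3",
    recommendation="Flag possible duplicate statin therapy",
    rationale="Active orders for two statins overlap in the medication list",
))
```

In practice such a log would be append-only and access-controlled; the point of the sketch is simply that every AI-driven recommendation leaves a reviewable trace of who, what, when, and why.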
As AI reshapes how medication therapy management is delivered, both industry leaders and policymakers face critical decisions. The challenge isn’t whether to adopt these technologies. It’s how to do so responsibly and effectively. From health plans to government agencies, the path forward requires thoughtful action to ensure that innovation serves patients, safeguards privacy, and strengthens care.
These are among the most important priorities we see emerging, each deserving careful consideration.
Anticipate Policy Shifts: Work closely with legal and compliance teams to monitor emerging state AI legislation and plan for adaptive responses.

Choose Responsible AI: Whether built in-house or through a partner, ensure AI platforms are transparent, explainable, and adaptable to state regulations.

Measure What Matters: Track outcomes (e.g., medication adherence, hospitalizations, patient experience) to demonstrate the value of AI MTM tools and align with CMS quality measures.

Innovate with Care: Encourage AI innovation and adoption while preserving authority to address bias, privacy, and algorithmic fairness in healthcare.

Clarify AI Prescribing: Align AI medical device approvals with clear federal and state-level guidance, and define roles for pharmacist and provider practice and oversight.

Keep Patients Informed: Ensure patients get clear disclosures, opt-out options, and support when AI tools guide clinical recommendations.
Today, coordinating a medication review might take weeks and involve multiple phone calls, emails, and delays. But that’s starting to change. With new regulations and smarter tech, AI-powered MTM is becoming quicker and more personal. AI platforms can identify medication gaps, schedule virtual consultations, offer tailored education, and follow up through a conversational chatbot. Patients are alerted instantly to potential drug interactions, helped to avoid duplicate medications, and given guidance that aligns with their personal goals and motivation.
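The duplicate-medication and interaction checks described above can be sketched in a few lines. This is a toy illustration, not a clinical rules engine: the `DRUG_CLASSES` mapping and the `INTERACTING_CLASSES` pair are invented examples, and a real system would draw on a curated drug-interaction database.

```python
# Illustrative drug-to-class mapping (an assumption, not a clinical reference).
DRUG_CLASSES = {
    "atorvastatin": "statin",
    "simvastatin": "statin",
    "lisinopril": "ace_inhibitor",
    "warfarin": "anticoagulant",
}

# Hypothetical class-level interaction pairs, for illustration only.
INTERACTING_CLASSES = {frozenset({"anticoagulant", "statin"})}

def review_medications(active_meds):
    """Return alerts for duplicate drug classes and flagged class-level interactions."""
    alerts = []
    classes_seen = {}  # drug class -> first medication seen in that class
    for med in active_meds:
        cls = DRUG_CLASSES.get(med)
        if cls is None:
            continue  # unknown drug: a real system would escalate, not skip
        if cls in classes_seen:
            alerts.append(f"duplicate {cls}: {classes_seen[cls]} and {med}")
        else:
            classes_seen[cls] = med
    for pair in INTERACTING_CLASSES:
        if pair.issubset(classes_seen):
            alerts.append(
                f"potential interaction between {' and '.join(sorted(pair))} therapies"
            )
    return alerts

# Example: a medication list with a duplicate statin and a flagged class pair.
alerts = review_medications(["atorvastatin", "simvastatin", "warfarin"])
```

Even this toy version shows why the earlier governance questions matter: each alert is a recommendation a clinician must still review, not an autonomous prescribing decision.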
Regulations are opening the door for AI in healthcare and MTM, but the real work is still ahead. Building useful, trustworthy tools will take deep collaboration across the healthcare community, especially with patients. If we get it right, AI won’t just speed things up. It can help make healthcare feel more intuitive, more personal, and less overwhelming.