Upskilling and Reskilling Pharmacists in the Age of AI

Pharmacists’ roles are at a turning point, where reskilling and upskilling mean pairing clinical judgment with fluency in AI, analytics, and regulation to strengthen medication management.


Pharmacy faces a turning point. Jobs have become harder to secure: in 2025, 345 pharmacy residency positions went unfilled, compared with just 43 five years ago.1 This gap reflects not only limited program capacity but also growing mismatches in pay and location preferences among new pharmacy graduates. The broader labor market tells a similar story of stagnation, with the Bureau of Labor Statistics reporting in July 2025 that job openings have slowed and competition for roles has intensified.2

Filled and Unfilled Pharmacy Residency Positions in 2025

In the 2025 Phase II Match, over one-third of residency positions remained vacant, with the largest gaps occurring in PGY-2 programs. Filled: 641 of 986 (65.0%); unfilled: 345 of 986 (35.0%).

While health plans and health systems are busy weaving data and automation into medication management, most hospitals haven’t caught up: fewer than 6% use advanced analytics or AI in pharmacy today. It's a telling gap between the work pharmacists are trained for and the work the future will demand. That gap will be filled by pharmacists ready to move the field forward with fresh practices and technology.

Pharmacists Are Suited for AI but Still Have Work to Do

Pharmacists have always practiced at the intersection of precision and risk. Every prescription is a puzzle that balances dose, interactions, physiology, behavior, and cost. AI, impressive as it may be, is ultimately just a pattern machine. It can sift through millions of data points, but it has no sense of which signals truly matter without someone to guide it. That’s the role pharmacists have always played. We have, by training and practice, shaped care within those boundaries for decades.

Pharmacists are uniquely positioned to guide the safe and effective use of AI, but realizing that potential will require building fluency in data, technology, and outcomes analytics.

Many organizations already see pharmacists as natural leaders in this space. The American Society of Health-System Pharmacists (ASHP), for instance, recently adopted policy 2413, which calls on pharmacists to guide AI adoption, from selecting tools to overseeing their use across the medication-use process.3 Similarly, the World Health Organization (WHO) has issued guidance urging frontline clinicians to take an active role in shaping how generative AI is applied in health,4 with pharmacists well positioned to lead on safety and ethics. Yet while we excel in risk and governance, the profession has underinvested in the language of technology and data. Few pharmacists are fluent in standards like NCPDP SCRIPT, RxNorm, or the NIST AI Risk Management Framework. Outcomes analytics, critical to value-based care, is rarely emphasized in pharmacy education. These are the very tools pharmacists will need to lead in an AI-driven system, using data to measure results, identify high-risk patients, prioritize outreach, and direct interventions where they matter most.

Five Steps Every Pharmacist Can Take to Gain Advanced AI Skills

Because AI in healthcare is so new, even experts are learning as they go. Now is the perfect time for pharmacists to start experimenting because many tools are free and the best progress comes from slow, deliberate practice over time.

1. Know what AI can and can't do well

Pharmacists learning AI need a grounded sense of where these tools help and where they can hurt. Large language models are powerful assistants, but they are not clinical decision-makers. Their strengths are in the kinds of tasks that save time without replacing judgment: summarizing long encounter notes, drafting documentation, or spotting patterns in complex health data.

The path to safe AI use in medication management is simple but firm: ground every claim in evidence, audit every draft, and never mistake fluent text for clinical truth.

But it’s just as important to recognize their limits. For now, these models aren't experts in pharmacotherapy; they predict words based on patterns they observed during training. Left unchecked, they will occasionally hallucinate, inventing drug interactions or citing a guideline that doesn’t exist. These are reasons they should never be treated as final arbiters of patient care. Instead, think of AI as a drafting partner or a pattern finder that always requires your clinical oversight. Used this way, it lets pharmacists gain real efficiency without giving up the safety net of human judgment.

2. Learn how to ask AI clear questions

The way you frame a prompt can make or break the usefulness of an AI output. Done well, prompting reduces errors, creates consistency, and speeds up documentation without sacrificing accuracy. The key is to be explicit: give the AI a role, a task, clear constraints, and defined sources. For example, you might write, “You are a clinical pharmacist. Draft a note for the patient summary below. Only use the sources provided. Flag any uncertainties. Cite each statement with a reference.”

Adding guardrails matters too. You can require the AI to stop guessing by telling it, “If you lack data to support a claim, say ‘insufficient evidence’ and list the missing fields.” With practice, you’ll build a set of reliable prompts that cut down errors and save time, helping you bring AI into medication therapy management more quickly and confidently. For a deeper dive, peer-reviewed primers on prompt design in medicine share best practices and case examples worth studying. One such study, published in npj Digital Medicine, found that, among the models tested, gpt-4-Web responses aligned most consistently with the American Academy of Orthopaedic Surgeons (AAOS) clinical practice guidelines, particularly for recommendations supported by strong evidence.5

Prompting Styles Shape LLM Response Consistency with AAOS Guidelines

Overall, among nine models tested at the Strong evidence level, gpt-4-Web gave the most consistent answers, while other models lagged behind.
Abbreviations: IO=Input-Output, 0-COT=Zero-shot Chain-of-Thought, P-COT=Partial Chain-of-Thought, ROT=Return-to-Thoughts
Data source: Wang, L., Chen, X., Deng, X. et al. Prompt engineering in consistency and reliability with the evidence-based guideline for LLMs. npj Digital Medicine 7, 41 (2024).
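For pharmacists comfortable with a bit of scripting, the same role-task-constraints-sources pattern can live in a small, reusable script. The sketch below is a minimal illustration, assuming the OpenAI Python SDK; the model name, source snippets, and patient summary are placeholders, not real data or an endorsed tool.

```python
# A minimal sketch of the explicit, guarded prompt structure described above,
# assuming the OpenAI Python SDK. The model name, source snippets, and patient
# summary are illustrative placeholders only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You are a clinical pharmacist. Draft a note for the patient summary below. "
    "Only use the sources provided. Flag any uncertainties. Cite each statement "
    "with the ID of the source that supports it. If you lack data to support a "
    "claim, say 'insufficient evidence' and list the missing fields."
)

# Illustrative source snippets keyed by ID so the model can cite them.
sources = {
    "S1": "Metformin label excerpt: contraindicated when eGFR is below 30 mL/min/1.73 m2.",
    "S2": "ADA Standards of Care excerpt: first-line therapy recommendations for type 2 diabetes.",
}
patient_summary = "68-year-old with type 2 diabetes, eGFR 42, on metformin 1000 mg twice daily."

user_prompt = (
    "SOURCES:\n"
    + "\n".join(f"[{sid}] {text}" for sid, text in sources.items())
    + f"\n\nPATIENT SUMMARY:\n{patient_summary}"
)

response = client.chat.completions.create(
    model="gpt-4o",  # placeholder model name; substitute your organization's approved model
    temperature=0,   # conservative, repeatable drafts
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": user_prompt},
    ],
)
print(response.choices[0].message.content)  # still reviewed by a pharmacist before use
```

Keeping the role, constraints, and guardrails in a fixed system prompt makes drafts more consistent from one request to the next.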

3. Beware of hallucinations

LLMs can sound sure of themselves while being wrong,6 which is exactly what makes them risky in medication management. The fix is to make every AI draft earn its claims. Require grounding up front by asking the model to give you references like a drug label or a relevant guideline, and to stop if a source isn’t available. Then run a deliberate two-pass workflow: first a quick draft, then a verification pass that cross-checks drug, indication, dose, and contraindications against those trusted sources.
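As a concrete illustration, here is a minimal sketch of that two-pass workflow, again assuming the OpenAI Python SDK; the model name, source excerpts, and patient summary are placeholders rather than a production pipeline.

```python
# A minimal draft-then-verify sketch, assuming the OpenAI Python SDK.
# Sources and the patient summary are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask(system: str, user: str) -> str:
    """Send one system + user exchange and return the model's reply text."""
    resp = client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        temperature=0,   # conservative, repeatable outputs
        messages=[
            {"role": "system", "content": system},
            {"role": "user", "content": user},
        ],
    )
    return resp.choices[0].message.content

sources = "Lisinopril label excerpt...\nKDIGO guideline excerpt..."          # trusted references
summary = "72-year-old with stage 3 CKD and hypertension; potassium 5.3 mEq/L."  # illustrative

# Pass 1: a quick draft grounded only in the provided sources.
draft = ask(
    "You are a clinical pharmacist. Draft a medication review using only the sources "
    "provided. Cite the source for each claim and flag any uncertainties.",
    f"SOURCES:\n{sources}\n\nPATIENT:\n{summary}",
)

# Pass 2: a verification pass that cross-checks drug, indication, dose, and
# contraindications against the same sources and marks unsupported claims.
audit = ask(
    "You are auditing a pharmacist's draft against the sources. For each claim, check "
    "drug, indication, dose, and contraindications; mark anything unsupported as "
    "'insufficient evidence'.",
    f"SOURCES:\n{sources}\n\nDRAFT:\n{draft}",
)

print(audit)  # a pharmacist still reviews the output before anything reaches the chart
```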

Whenever possible, use retrieval-augmented generation (RAG), so the model draws directly from your formulary, drug labels, and care guidelines rather than guessing from memory or scraping the open web. RAG consistently improves factuality by anchoring outputs in real evidence. The safest workflow is to ground the model in authoritative drug and patient data, follow with a verification pass, and prompt it to audit its own claims, because confident prose is not the same as clinical truth.
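To make the idea concrete, the sketch below shows retrieval-augmented generation in miniature: a toy in-memory "formulary," a deliberately simple keyword retriever, and a prompt that anchors the model in whatever was retrieved. The snippets and the retrieval scoring are illustrative; a real deployment would index your own formulary, labels, and guidelines, typically with embeddings.

```python
# A toy RAG sketch: retrieve trusted snippets first, then constrain the model to them.
# The formulary snippets and keyword scoring are illustrative stand-ins for a real index.
import re
from typing import List, Set

FORMULARY = [
    "Apixaban: reduce dose to 2.5 mg BID if two of: age >= 80, weight <= 60 kg, SCr >= 1.5 mg/dL.",
    "Metformin: contraindicated when eGFR is below 30 mL/min/1.73 m2.",
    "Warfarin: target INR 2-3 for nonvalvular atrial fibrillation.",
]

def tokenize(text: str) -> Set[str]:
    """Lowercase word tokens; a real system would use embedding similarity instead."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def retrieve(question: str, docs: List[str], k: int = 2) -> List[str]:
    """Rank documents by simple keyword overlap with the question and keep the top k."""
    q = tokenize(question)
    ranked = sorted(docs, key=lambda d: len(q & tokenize(d)), reverse=True)
    return ranked[:k]

def build_prompt(question: str) -> str:
    """Anchor the model in retrieved evidence instead of letting it answer from memory."""
    context = "\n".join(f"- {snippet}" for snippet in retrieve(question, FORMULARY))
    return (
        "Answer using only the evidence below. If the evidence is insufficient, "
        f"say 'insufficient evidence'.\n\nEVIDENCE:\n{context}\n\nQUESTION:\n{question}"
    )

print(build_prompt("When should apixaban be dose-reduced?"))
# The resulting prompt is then sent to whatever LLM your organization has approved.
```

The retrieval step here is deliberately crude; the point is the shape of the workflow: gather trusted evidence first, then constrain the model to it.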

4. Experiment

The safest way to start building comfort with AI is to try it in low-stakes settings, where a patient’s health isn't on the line. Open up a tool like ChatGPT, Claude, or Perplexity and just play. Ask it what you should have for lunch, to draft a packing list for a weekend trip, or to explain a TV show in the style of Shakespeare. Notice how it follows your instructions and where it wanders. Then push a little closer to your professional world without touching patient data. For example, have it summarize a journal abstract, rephrase drug information for a sixth-grade reading level, or draft a generic SOAP note template.
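If you want a repeatable version of that kind of practice, the short snippet below captures one such non-clinical exercise as a reusable prompt; the package-insert excerpt is an illustrative placeholder, and the resulting text can be pasted into any chat tool.

```python
# One reusable, non-clinical practice prompt. The excerpt is an illustrative
# placeholder, not patient data; paste the printed prompt into any chat tool.
PACKAGE_INSERT_EXCERPT = (
    "Atorvastatin may cause myopathy and rhabdomyolysis; advise patients to report "
    "unexplained muscle pain, tenderness, or weakness."
)

practice_prompt = (
    "Rewrite the following drug information at a sixth-grade reading level, "
    "in three short sentences, without adding any new medical claims:\n\n"
    + PACKAGE_INSERT_EXCERPT
)
print(practice_prompt)
```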

By staying current on AI regulations and focusing on transparency and measurable outcomes, pharmacists can ensure these tools serve patients safely and effectively.

The point isn’t to trust the answers, but to see how the machine handles structure, tone, and guardrails. That practice in safe, everyday scenarios builds the intuition you’ll need when the questions get more serious.

5. Stay current

Staying up to date on AI in healthcare isn’t about chasing the latest tech buzz. It’s about protecting patients and staying credible in a field that’s changing fast. Regulations and standards are evolving quickly, from the new CMS Healthcare Technology Initiative, to the 2025 AI Bill that mandates transparency and provider oversight when AI is used in clinical care, to the FDA’s guidance on clinical decision support. These guardrails shape how AI tools can and should be used in medication management, documentation, and patient safety.

By keeping pace with these shifts, pharmacists not only safeguard their practice but also position themselves as trusted voices when their organization, payers, or vendors ask, “Can we use this tool?” In many ways, learning the regulatory language is as important as learning how to prompt the model. It ensures pharmacists remain the ones setting the boundaries for safe and effective use.

When evaluating AI for clinical use and medication therapy management, the key is to cut through the hype and focus on what matters: intended use, trustworthy data, real-world performance, strong safety nets, and measurable outcomes. Tools that are grounded in evidence, transparent about their limits, and tied to improvements in quality or cost are the ones worth your time.

FAQs

Common questions this article helps answer

What AI skills should pharmacists learn first to stay competitive?
Pharmacists should begin by building AI literacy (knowing what large language models can and cannot do) along with outcomes analytics and awareness of the health policy landscape. These skills are rarely taught in pharmacy school but are essential for demonstrating value in today’s workforce.
How can pharmacists use large language models safely for documentation without risking hallucinations?
Treat AI as a drafting partner, not a decision-maker. Always ground outputs in authoritative sources like drug labels or guidelines, and run a two-pass workflow: generate a quick draft, then verify drug, dose, and contraindications manually. Retrieval-augmented generation (RAG) tools that pull from formularies or internal databases improve factual accuracy.
What does good AI prompting look like for medication therapy management?
Effective prompts are explicit and role-based. For example: “You are a clinical pharmacist. Draft a patient note using only the sources provided. Flag uncertainties. Cite each claim to source IDs.” Adding guardrails like “If insufficient data, state ‘insufficient evidence’” prevents guessing and reduces errors.
Which low-risk, high-impact AI use cases should a pharmacy team pilot first?
Start with administrative or workflow accelerators: drafting documents and notes, summarizing encounter histories, or rephrasing drug information for patient education. These areas save time without touching final prescribing decisions, making them safe places to build comfort.
What habits keep pharmacists current on AI (tools, standards, policy) without getting lost in hype?
Focus on credible sources, not tech buzz. Track updates from FDA, CMS, ASHP, and NIST. Build a habit of experimenting safely with AI in non-clinical tasks, while following regulatory changes. Staying fluent in standards and outcomes ensures pharmacists remain the ones setting boundaries for safe AI use.