Picture this: You’re sitting in a meeting room, explaining to your leadership team why your cancer care program’s AI initiative is six months behind schedule and over budget, while competitors are already seeing results.
The PowerPoint slides feel heavy in your hands. The questions from board members sting more than they should.
“What’s taking so long?” “When will we see ROI?” “Are we falling behind?”
You’re not alone. Even the most well-funded cancer centers are struggling with the same implementation challenges that keep you awake at 2 AM, wondering if you’re missing something obvious.
Earlier this week, I reported live from the NCCN Policy Summit on The Evolving Artificial Intelligence Landscape in Cancer Care in Washington DC, where cancer care leaders from across the healthcare spectrum gathered to tackle these exact frustrations. What I discovered wasn’t another list of AI tools or technical specifications. Instead, I found 7 counterintuitive secrets that successful healthcare leaders are using to fast-track their AI implementations while others remain stuck in analysis paralysis.
Ready to discover what they know that you don’t? Let’s begin.
Secret #1: Stop Chasing the Perfect AI Tool – Start with Strategic Guardrail Frameworks
Here’s what caught everyone off guard at the summit: The most successful AI implementations aren’t using the “best” tools. They’re using strategic guardrail frameworks that prevent bottlenecks while maintaining safety.
The Maryland approach, effective October 1, 2025, shows exactly how this works for healthcare payers. Instead of banning AI or requiring perfect validation, Maryland created specific framework requirements for health insurance carriers, pharmacy benefit managers, and private review agents:
Maryland’s Payer Framework:
- AI decisions must be based on individual patient data (not just population averages)
- Prohibits using AI to deny, delay, or modify care – requiring human provider confirmation at these critical decision points
- Quarterly performance reviews and adjustments required
- Open audit and inspection processes
- Cannot replace physician oversight entirely
The Clinical World Application: Similarly, healthcare organizations could apply this same strategic guardrail principle to clinical AI tools, as discussed at the summit.
Real Summit Example: AI Case Prioritization
- No bottleneck: AI automatically prioritizes critical findings for immediate review
- Strategic guardrail: Radiologist must confirm before any urgent clinical communication goes out
- Result: Faster critical case identification without compromising safety
The Framework Principle: Both Maryland’s payer approach and the clinical prioritization systems follow the same logic: Let AI operate efficiently in low-risk areas, but require human confirmation when decisions could significantly impact patient care.
Your Action Item: Identify the “deny, delay, or modify care” equivalent decision points in your AI workflow. Place guardrails there, not everywhere. This maintains safety without creating artificial bottlenecks that defeat AI’s efficiency benefits.
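To make the guardrail-placement idea concrete, here's a minimal sketch of the routing logic. The decision types and mappings are hypothetical illustrations, not from the summit or Maryland's actual rules: low-risk AI actions proceed automatically, while anything that could deny, delay, or modify care is gated behind human confirmation.

```python
from enum import Enum

class Action(Enum):
    AUTO_PROCEED = "auto_proceed"    # low-risk: AI acts without a gate
    HUMAN_CONFIRM = "human_confirm"  # guardrail: clinician must sign off

# Hypothetical guardrail map: only care-impacting decisions are gated
GUARDRAILS = {
    "prioritize_worklist": Action.AUTO_PROCEED,   # reordering reads is low-risk
    "draft_patient_letter": Action.AUTO_PROCEED,  # draft stays internal
    "send_urgent_alert": Action.HUMAN_CONFIRM,    # urgent communication is gated
    "modify_care_plan": Action.HUMAN_CONFIRM,     # deny/delay/modify is gated
}

def route(decision_type: str) -> Action:
    """Default to a human gate for any decision type not explicitly mapped."""
    return GUARDRAILS.get(decision_type, Action.HUMAN_CONFIRM)
```

The default is the design choice worth noticing: any decision type you haven't explicitly classified falls back to human confirmation, so new AI capabilities start safe by default rather than fast by default.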
Secret #2: Embrace Biological Drift Instead of Fighting It
Many healthcare leaders at the summit expressed concern about the same issue: “What happens when our AI stops working?”
But here’s the counterintuitive insight that emerged: Successful implementations plan for biological drift from day one.
Here’s the key distinction discussed at the summit: The AI model itself doesn’t change or “drift.” The model remains exactly the same code and algorithms you deployed.
What changes is the biological reality around it. Patient populations evolve. Disease presentations shift. Demographics change. Treatment responses vary over time.
So while your AI model stays identical, its performance can degrade because the world it’s analyzing has moved away from the training data it learned from.
The Summit Solution: Instead of trying to build “perfect” models that somehow anticipate future biological changes, summit experts discussed the concept of building self-aware AI systems that can detect when their performance drops below acceptable thresholds.
One panelist noted: “We need AI that can say ‘I’m not performing well enough anymore’ and stop giving predictions.”
Why This Approach Often Works Better: Rather than spending months trying to predict possible biological changes, you build monitoring systems that detect when performance degrades and alert human oversight.
This approach can beat trying to build perfect models because:
- You get to market faster with “good enough” models
- You build performance monitoring into your system architecture from day one
- You create sustainable long-term solutions that adapt to biological reality
Your Action Item: Build AI performance monitoring into your procurement requirements. Don’t ask vendors if their AI is perfect. Ask how their system detects and handles performance degradation due to changing patient populations.
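One way to picture the "self-aware" behavior the panelist described: wrap the unchanging model in a monitor that tracks agreement with confirmed outcomes and withholds predictions when performance degrades. This is an illustrative sketch under stated assumptions: the threshold, window size, and simple rolling accuracy are placeholders, and a production system would use calibrated statistical drift tests.

```python
from collections import deque

class DriftAwareModel:
    """Wraps a fixed model with a rolling performance monitor.

    The model itself never changes; if its agreement with confirmed
    outcomes drops below a threshold, the wrapper stops emitting
    predictions and routes cases to human review instead.
    """

    def __init__(self, model, threshold=0.85, window=200):
        self.model = model
        self.threshold = threshold
        self.outcomes = deque(maxlen=window)  # 1 = confirmed correct, 0 = not

    def record_outcome(self, correct: bool):
        """Feed back ground truth as clinicians confirm or refute predictions."""
        self.outcomes.append(1 if correct else 0)

    @property
    def rolling_accuracy(self) -> float:
        if not self.outcomes:
            return 1.0  # no evidence of degradation yet
        return sum(self.outcomes) / len(self.outcomes)

    def predict(self, case):
        if self.rolling_accuracy < self.threshold:
            return {"prediction": None, "status": "degraded: route to human review"}
        return {"prediction": self.model(case), "status": "ok"}
```

The point of the sketch is procurement-relevant: the degradation check lives in the system architecture around the model, which is exactly what you can require from vendors even when the model is a black box.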
Secret #3: Target Underutilized Care Areas, Not Your Strongest Programs
The intuitive approach for most healthcare leaders is to implement AI in their best-performing departments first – areas where they have strong workflows, experienced staff, and proven processes.
The summit revealed a different strategy that may be more beneficial: Target areas where standard care is already underutilized or difficult to implement consistently.
Dr. Travis Osterman from Vanderbilt highlighted this challenge: “We don’t even do molecular testing consistently. Let’s not dream of AI until we fix the basics.”
But here’s the counterintuitive insight: What if AI could help you skip the traditional implementation barriers entirely?
Consider molecular testing prediction from pathology images. Instead of requiring expensive next-generation sequencing on every sample, AI can:
- Predict molecular mutations from standard histology
- Triage cases for targeted testing
- Reduce costs while improving access
The cancer care continuum shown at the summit – Prevention → Diagnosis → Treatment → Survivorship → End-of-life care – reveals multiple underutilized areas where AI can make immediate impact.
Your Action Item: Audit your cancer care pathway. Find the bottlenecks where standard care is failing. Those are your AI implementation sweet spots.
Secret #4: Design for Patients Receiving Care, Not Just Providers Giving It
This insight emerged as a crucial blind spot at the summit.
One panelist, who had been a cancer patient herself, shared this story: After chemotherapy, she received a multi-page list of potential side effects. When she asked her doctor, “Which ones will I get?” the response was, “We don’t know. You’ll tell us.”
That’s when it became clear to the room: We’ve been focusing heavily on how AI can make providers better at giving care, but we’ve given less attention to how patients experience receiving that care.
Both Perspectives Are Critical: The most effective AI implementations consider both sides of the care equation:
Provider-Giving Perspective (Essential):
- AI that helps doctors diagnose faster and more accurately
- AI that optimizes treatment protocols and schedules, and reduces errors
- AI that streamlines clinical workflows
Patient-Receiving Perspective (Equally Essential):
- AI that helps patients understand their diagnosis in accessible terms
- AI that personalizes patient education based on individual risk factors
- AI that prioritizes the most relevant information instead of overwhelming patients with every possibility
Dr. Goodman emphasized: “Patients are going to want to benefit from AI that’s also personalized and patient-focused and brings across the priority information in an intelligible way.”
Your Action Item: For every AI initiative, ask both questions: “How does this improve our providers’ ability to give excellent care?” AND “How does this improve the patient’s experience of receiving that care?” Both perspectives should inform your AI design decisions.
Secret #5: Follow the Pioneers, Don’t Reinvent from Scratch
The summit revealed something that could save you 12-18 months of development time: Successful healthcare leaders aren’t starting from scratch.
They’re following the pioneers.
The CMS AI initiatives were repeatedly referenced as examples to learn from, not compete with. One panelist noted: “Follow the pioneers and build on top of what they did. Don’t start from scratch if you don’t have to.”
This approach works because:
- You learn from their mistakes without making them yourself
- You reduce development risk
- You reduce time to implementation
- You avoid regulatory pitfalls they’ve already navigated
The summit presentations showed multiple successful implementations already in progress:
- Ambient scribing solutions
- Surgical planning optimization
- Infusion scheduling algorithms
- Radiology critical alerts
Instead of viewing these as competition, smart leaders are studying them as roadmaps.
Your Action Item: Identify three pioneer AI implementations in cancer care you would like to model. Study their approaches, learn from their challenges, and adapt their frameworks to your environment.
Secret #6: Build Flexible Systems for Evolving Regulatory Pathways
Here’s what no one wants to admit: Regulatory pathways for AI remain complex and evolving, even for successful companies.
But here’s the counterintuitive insight: This complexity is actually your competitive advantage if you know how to navigate it.
The Artera Success Story: Artera, which received FDA De Novo authorization in August 2025 for their AI digital pathology software for prostate cancer, illustrates this perfectly. They became the first AI-powered software authorized to prognosticate long-term outcomes for patients with non-metastatic prostate cancer.
The Remarkable Gap: The FDA approved 950 AI medical devices between 1995 and 2024, yet Artera's is currently the only AI cancer test included in NCCN guidelines.
This gap reveals a key insight: FDA approval and clinical guideline inclusion are entirely different processes with different requirements.
Artera’s Strategic Advantage: Their De Novo authorization included a Predetermined Change Control Plan, allowing software updates without further 510(k) submissions. This regulatory flexibility enables rapid iteration.
Your Action Item: Build implementation flexibility into your AI strategy. Study successful regulatory pathways like Artera’s, and create systems that can accommodate multiple regulatory approaches rather than betting on one pathway.
Secret #7: Stop Setting Human-Level Performance as Your AI Goal
A surprising debate emerged at the summit about AI performance expectations that revealed how we might be limiting ourselves.
The traditional approach sets human-level performance as the gold standard for AI validation. But summit experts questioned whether this benchmark actually makes sense.
The Performance Expectation Problem: When we require AI to “only be as good as doctors,” we’re essentially building technology to replicate human limitations rather than exceed them.
The Perfect Example: TROP2-QCS Biomarker
The Roche/AstraZeneca TROP2 test, which received FDA Breakthrough Device Designation in 2025, illustrates this perfectly. This AI-powered diagnostic measures the ratio of TROP2 protein expression between tumor cell membranes and cytoplasm – a calculation that provides “a level of diagnostic precision not possible with traditional manual scoring methods.”
There literally is no human equivalent to compare it to. Setting a “human-level performance” benchmark for this test would be impossible because humans cannot manually perform these quantitative ratio calculations across thousands of cells with the same precision.
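To see why no human benchmark exists, consider a deliberately simplified sketch of this kind of quantitation. This is a hypothetical illustration of the membrane-vs-cytoplasm ratio concept, not Roche/AstraZeneca's actual algorithm: the AI scores a per-cell intensity ratio and aggregates it across every cell in the sample, something no pathologist can do manually at scale.

```python
def membrane_cytoplasm_ratio(cells):
    """Mean ratio of membrane to cytoplasm staining intensity across cells.

    Each cell is a dict of measured intensities, e.g.
    {"membrane": 2.0, "cytoplasm": 1.0}. Cells with no measurable
    cytoplasm signal are skipped. Illustrative simplification only.
    """
    ratios = [
        cell["membrane"] / cell["cytoplasm"]
        for cell in cells
        if cell["cytoplasm"] > 0
    ]
    return sum(ratios) / len(ratios) if ratios else 0.0
```

Even this toy version makes the scaling argument: computing a consistent quantitative ratio across thousands of cells per slide is trivial for software and impossible for manual scoring.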
The Broader Principle: Like calculators didn’t make us worse at math – they freed us to solve more complex problems – AI tools can exceed human capabilities in specific domains while humans focus on higher-level decision making.
The Tiered Approach Discussed at Summit:
- Low-risk applications: Focus on workflow improvement rather than perfect accuracy
- Medium-risk applications: Require human-level performance with efficiency gains
- High-risk applications: Demand superhuman performance when technically feasible
Finding the Balance: While we must guard against over-dependency (as a 2025 Lancet study showed endoscopists’ polyp detection skills declined after regular AI exposure), we shouldn’t limit AI to human-level performance when it can demonstrably do better.
Your Action Item: Instead of asking “Is our AI as good as our doctors?”, ask “What level of performance does this specific use case require, and what level is technically achievable?” Set evidence standards based on the intended use, potential impact, and the technology’s actual capabilities – not arbitrary human-level benchmarks.
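The tiered approach above can be expressed as a simple policy table. The benchmark categories come from the summit discussion; the specific accuracy figures are placeholder assumptions for illustration, and real thresholds would be set per use case with clinical input.

```python
# Hypothetical mapping of risk tier to required evidence standard.
# Accuracy figures are placeholders, not summit recommendations.
TIER_BENCHMARKS = {
    "low": {"benchmark": "workflow_improvement", "min_accuracy": None},
    "medium": {"benchmark": "human_level", "min_accuracy": 0.90},
    "high": {"benchmark": "superhuman", "min_accuracy": 0.97},
}

def required_standard(risk_tier: str) -> dict:
    """Unknown or unclassified tiers default to the strictest standard."""
    return TIER_BENCHMARKS.get(risk_tier, TIER_BENCHMARKS["high"])
```

As with the guardrail sketch earlier, the conservative default matters: an application nobody has risk-classified yet is held to the highest bar until someone decides otherwise.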
The Implementation Reality Check
What the summit revealed about 2025 wasn’t what most healthcare leaders expected.
The AI reliance vs clinical decision support data showed something striking: Healthcare providers are already using AI tools, but not consistently and not systematically.
The gap isn’t in AI capability – it’s in implementation strategy.
But here’s what emerged from the discussions: successful healthcare leaders aren’t trying to solve every AI challenge at once. They’re taking a strategic, multi-track approach that acknowledges different risk levels require different evidence standards.
Three Implementation Tracks Emerging:
Track 1: Strategic Framework Building – Focusing on guardrail placement rather than perfect tools, learning from pioneers like Maryland’s payer requirements and Artera’s regulatory pathway
Track 2: Patient-Centric Design – Building AI for both care-giving AND care-receiving experiences, addressing the information overload problems highlighted by patient advocates at the summit
Track 3: Capability-Appropriate Standards – Setting evidence requirements based on what the technology can actually achieve, not arbitrary human-level benchmarks, as demonstrated by breakthrough tools like the TROP2-QCS biomarker
The healthcare leaders succeeding in 2025 aren’t trying to implement everything at once. They’re executing all three tracks simultaneously while building systems that can adapt as regulatory clarity emerges.
Your Next 90 Days
I know how it feels, standing in your office, frustrated by the slow pace of AI adoption while reading about breakthrough implementations at other cancer care institutions.
The pressure from your leadership team. The questions from your clinical staff. The nagging fear that you’re falling behind while technology races ahead.
But here’s what I learned from cancer care leaders from across the healthcare spectrum in Washington DC: You’re not behind. You’re positioned.
The AI revolution in cancer care isn’t coming – it’s here. But it’s not the revolution we expected. It’s not about perfect AI tools or flawless implementations.
It’s about healthcare leaders who are brave enough to start with strategic frameworks instead of waiting for perfection. Leaders who embrace regulatory uncertainty as opportunity. Leaders who design for both patients and providers.
Picture this: Six months from now, you’re presenting to your leadership team again. This time, you’re showing patient outcome improvements, operational efficiencies, and competitive advantages from your AI implementations.
Your clinical staff is asking for more AI tools, not questioning why they need them. Your patients are experiencing faster, more personalized care. Your organization is the one others are studying and following.
The difference between this vision and your current reality isn’t technology. It’s strategy.
The question isn’t whether you’ll implement AI in cancer care – your competitors are already making that choice for you.
The question is whether you’ll lead or follow.
Your patients are counting on you to choose leadership.
The AI revolution is here. Your moment is now.
Go forth and transform cancer care.