01 · Context & Background
A Fortune 500 pharmaceutical and life sciences company sought to build an AI-powered radiology assistant that could help clinicians interpret complex medical imaging. At the start of this engagement, there was no product, no prototype, and no defined user — only a broad ambition to apply AI to radiology workflows.
As the Product Owner embedded on this engagement, I was responsible for the entire product lifecycle, from initial discovery through clinical validation and go-to-market (GTM) execution.
02 · The Problem
The Clinical Reality
- Radiologists spend significant time analyzing scans manually, creating cognitive fatigue and risk of oversight on high-volume days.
- No standardized tool existed to assist radiologists with preliminary analysis across multi-tenant hospital environments.
- Sparse imaging data across specialties made training robust models extremely difficult.
The Organizational Challenge
- Stakeholder alignment was fragmented across 15+ hospital systems, each with different workflows and compliance requirements.
- HIPAA restrictions made accessing real patient imaging data for model training a multi-month process.
- Engineering teams initially pushed back on scope, and requirements evolved frequently as clinical needs became clearer.
03 · Discovery & The Pivotal Insight
I led 20+ clinical interviews and stakeholder workshops across hospital systems, targeting four user groups: radiologists reading scans, clinicians ordering reports, hospital IT teams managing integrations, and department heads overseeing workflow efficiency. Early in discovery, the team assumed the core value proposition was speed: faster AI analysis meant faster reports. The data told a different story.
"Every clinician I interviewed could tell me they wanted faster reports. But when I dug into why adoption had stalled for similar tools at their institutions, the answer was always trust. They didn't know why the AI was saying what it was saying."
This single pattern, surfacing consistently across institutions, triggered our most important product decision: pivot from a speed-first to an interpretability-first MVP.
04 · The Pivotal Decision: Interpretability-First
The original roadmap prioritized accuracy and turnaround time. After discovery, I restructured the MVP around one core thesis: clinicians will not act on AI output they cannot explain to a patient, a colleague, or a regulator.
What Changed in the Product Definition:
- Every AI prediction now included highlighted regions of concern in the image, showing the radiologist exactly where the model was attending.
- Confidence scores were surfaced alongside predictions, enabling radiologists to calibrate how much weight to give each suggestion.
- Audit-ready explanations were baked into the output format, directly addressing SaMD regulatory requirements for clinical decision support tools.
- A human-in-the-loop review step was built into the workflow, ensuring no AI output went to a clinical decision without radiologist sign-off.
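The interpretability-first output format above can be sketched as a simple data contract. Everything here (`Prediction`, `ready_for_clinical_use`, the field names) is a hypothetical illustration of the pattern, not the actual product schema:

```python
from dataclasses import dataclass

@dataclass
class Prediction:
    """Hypothetical sketch of an interpretability-first AI output."""
    finding: str                   # e.g. "possible nodule"
    confidence: float              # surfaced so radiologists can calibrate trust
    attention_region: tuple        # (x, y, w, h) highlight on the source image
    explanation: str               # audit-ready rationale for regulators
    radiologist_signed_off: bool = False  # human-in-the-loop gate

def ready_for_clinical_use(p: Prediction) -> bool:
    # No AI output reaches a clinical decision without radiologist sign-off.
    return p.radiologist_signed_off

p = Prediction("possible nodule", 0.91, (120, 80, 40, 40),
               "Model attended to the highlighted upper-lobe region.")
assert not ready_for_clinical_use(p)   # blocked until a radiologist reviews
p.radiologist_signed_off = True
assert ready_for_clinical_use(p)
```

The design point is that the explanation and the sign-off flag travel with the prediction itself, so downstream systems cannot consume a bare score without its supporting context.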
This pivot reduced mid-sprint scope churn by 25%, because we stopped building the features users said they wanted and started building what they actually needed in order to adopt the product.
05 · Execution: Navigating the Hard Problems
Getting from discovery insight to a working clinical product required solving three significant execution challenges simultaneously:
HIPAA restrictions meant imaging data was extremely limited for model training. I worked with clinical partners to define synthetic data augmentation strategies and prioritized specialties where we had sufficient data density first.
Initial model accuracy sat at 68% — too low for clinical deployment. I ran weekly UAT sessions with clinical users, translated their feedback into specific retraining tickets for the Data Science team, and established a validation benchmark that defined what 'good enough to ship' looked like.
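A validation benchmark like the one described, a shared definition of 'good enough to ship', can be sketched as a simple gate. The 88% threshold comes from the retrospective in this case study; the function name and metric are illustrative assumptions:

```python
# Hypothetical sketch of a ship/no-ship validation benchmark gate.
SHIP_THRESHOLD = 0.88  # accuracy bar agreed with clinical users

def good_enough_to_ship(correct: int, total: int) -> bool:
    """Return True only if validation accuracy clears the agreed bar."""
    return total > 0 and correct / total >= SHIP_THRESHOLD

# The initial model (68% accuracy) would not have shipped:
assert not good_enough_to_ship(68, 100)
# After retraining cycles driven by weekly UAT feedback:
assert good_enough_to_ship(90, 100)
```

Codifying the threshold up front, rather than debating it per release, is what turns UAT feedback into unambiguous retraining targets.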
I embedded HIPAA, GDPR, and FDA SaMD constraints directly into Jira acceptance criteria, ensuring compliance was reviewed at every sprint rather than bolted on before release.
06 · Go-To-Market Strategy & Impact
Rather than launching across all radiology specialties simultaneously, I defined a precision GTM strategy: launch first in the specialties where our model accuracy was highest, use those wins to build clinical credibility, and expand from there.
"Featured in the client's Q3 Product Review for accelerating the 510(k) regulatory pathway through audit-ready explainability design."
What I Would Do Differently
- Start data governance conversations earlier: The HIPAA data access process took 3 months and delayed model training. In retrospect, I would have parallelized this with discovery rather than sequencing them.
- Define evaluation benchmarks before training: We set the 88% accuracy threshold mid-project; defining it upfront would have given the DS team clearer targets from day one.
- Build hospital-specific feedback loops from day one: Each hospital had different annotation standards for what counted as an 'abnormality' — a structured disagreement-logging feature would have surfaced this variability earlier.