01 · The Platform Context
This platform is a leading healthcare procurement analytics product used by hospital supply chain directors to benchmark capital equipment purchases. Supply chain directors submit vendor quotes — MRI machines, surgical robots, infusion pumps — and the platform's analysts benchmark those quotes against a database of thousands of facilities nationwide, delivering insights that help hospitals negotiate better deals.
As Product Owner, I owned the AI transformation roadmap for this core workflow — identifying where automation could unlock analyst capacity and deliver faster, higher-quality insights to hospital procurement teams.
02 · Discovering the Real Problem
The quote analysis workflow had a 3.2-day average cycle time. On the surface, that seemed acceptable. The deeper problem was structural:
Where the Time Actually Went:
- Queue backlog: 181 quotes in the queue meant 4-5 days of wait time before an analyst even touched a new submission.
- Manual data entry: Analysts spent 8 minutes per quote copy-pasting line items from PDF vendor quotes into the platform.
- Manual benchmarking: A further 7 minutes per quote looking up each line item individually against the database.
- Report generation: 18 additional minutes building Excel charts, with inconsistently formatted outputs.
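Summing the figures above gives a back-of-envelope sense of the scale of the manual work, both per quote and across the backlog (a rough check, not a formal capacity model):

```python
# Back-of-envelope check of where analyst time went, using the
# per-quote figures cited above (all values in minutes).
MANUAL_ENTRY = 8      # copy-pasting line items from the PDF
MANUAL_BENCHMARK = 7  # looking up each item against the database
REPORT_BUILD = 18     # Excel charts and report formatting

minutes_per_quote = MANUAL_ENTRY + MANUAL_BENCHMARK + REPORT_BUILD

# With 181 quotes sitting in the queue, the backlog alone
# represented a substantial block of pure hands-on-keyboard work.
QUEUE_DEPTH = 181
backlog_hours = QUEUE_DEPTH * minutes_per_quote / 60

print(minutes_per_quote, "minutes of manual work per quote")
print(round(backlog_hours, 1), "analyst-hours locked in the queue")
```

At 33 minutes per quote, the queue alone held roughly 100 analyst-hours of rote work before any strategic analysis began.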
The pain was felt on both sides: supply chain directors were losing negotiating leverage while waiting 3+ days for analysis. Analysts were drowning in copy-paste work, unable to do the strategic benchmarking they were hired for.
03 · Stakeholder Workshops & Problem Framing
I led 5+ stakeholder workshops with supply chain directors, senior procurement analysts, and finance leadership. The key insight from those sessions wasn't what users asked for — they asked for 'faster reports' — but why speed mattered:
"Every day of delay costs me negotiating power. By the time I get the analysis, the vendor has already moved on. This isn't just about efficiency — it directly impacts what I can save for the hospital."
This reframed the problem from 'speed up the analyst' to 'give the analyst their time back so they can deliver strategic value, not just faster data entry.'
04 · From Insight to Buy-In: The Streamlit POC
Before committing engineering resources to an AI build, I developed a proof-of-concept in Streamlit to demonstrate the business case to leadership. The POC simulated the AI-powered workflow end-to-end, showing AI extraction converting a PDF vendor quote to structured CSV in 30 seconds vs. 8 minutes manually.
The POC secured leadership buy-in and funding approval to move into full engineering development. By demonstrating the outcome before writing production code, we avoided the most common AI project failure mode: building something nobody was convinced would work.
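The heart of the POC was the extraction step: unstructured quote text in, structured CSV out. A minimal illustrative sketch of that step is below; the line-item format, regex, and column names are assumptions for illustration, not the production extractor:

```python
import csv
import io
import re

# Illustrative line-item pattern: "<qty> x <description> @ $<unit price>".
# Real vendor-quote layouts vary widely; this regex is an assumption.
LINE_ITEM = re.compile(
    r"(?P<qty>\d+)\s*x\s*(?P<desc>.+?)\s*@\s*\$(?P<price>[\d,]+(?:\.\d{2})?)"
)

def extract_to_csv(quote_text: str) -> str:
    """Turn unstructured quote text into structured CSV rows."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["quantity", "description", "unit_price"])
    for match in LINE_ITEM.finditer(quote_text):
        writer.writerow([
            match["qty"],
            match["desc"].strip(),
            match["price"].replace(",", ""),  # normalize "4,250.00" -> "4250.00"
        ])
    return buf.getvalue()

sample = """\
2 x Infusion Pump Model A @ $4,250.00
1 x Surgical Robot Arm @ $180,000.00
"""
print(extract_to_csv(sample))
```

In the real build the regex was replaced by an AI extraction model, but the contract is the same: messy PDF text in, clean rows out, ready for automatic import.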
05 · The AI Solution Architecture
The approved solution transformed the end-to-end quote workflow through five key AI interventions:
- AI extraction: Vendor quote PDFs are ingested automatically, transforming unstructured line items to structured CSV with near-zero error rate.
- Auto import: Extracted data flows directly into the platform, eliminating all manual copy-paste steps.
- AI benchmarking: All line items are compared against the entire database simultaneously in 2 minutes.
- Auto-generated reporting: A standardized 9-dimension analysis report replaces 11 minutes of manual Excel formatting.
- Human-in-the-loop review: Analysts validate AI outputs before delivery — preserving human judgment while eliminating grunt work.
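The benchmarking and review steps above can be sketched as a simple pipeline. Everything here is a hypothetical stand-in (the benchmark database, the flagging rule, the review gate), meant only to show how AI comparison and human validation compose:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class LineItem:
    description: str
    unit_price: float
    benchmark_median: Optional[float] = None  # filled in by benchmarking
    flagged: bool = False                     # above benchmark -> savings opportunity

def benchmark(items: list, benchmark_db: dict) -> list:
    """Compare every line item against the benchmark database in one pass."""
    for item in items:
        median = benchmark_db.get(item.description)
        item.benchmark_median = median
        item.flagged = median is not None and item.unit_price > median
    return items

def human_review(items: list) -> list:
    """Human-in-the-loop gate: analysts validate AI output before delivery.
    Here it passes items through unchanged; in practice an analyst
    confirms or corrects each flag."""
    return items

# Hypothetical benchmark medians keyed by item description.
db = {"Infusion Pump Model A": 3900.0, "Surgical Robot Arm": 195000.0}
quote = [LineItem("Infusion Pump Model A", 4250.0),
         LineItem("Surgical Robot Arm", 180000.0)]

report = human_review(benchmark(quote, db))
savings_targets = [i.description for i in report if i.flagged]
print(savings_targets)  # only the above-benchmark pump is flagged
```

The design point is the ordering: the AI does the exhaustive comparison, and the human gate sits last, so judgment is applied to a short list of flags rather than to every raw line item.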
06 · Measured Results
The total cycle time for supply chain directors fell from 3.2 days to 1.5–2 days, a 38% reduction even at the conservative end of that range. Manual data entry dropped from 8 minutes to 1 minute per quote, an 88% reduction. The freed-up analyst time shifted from copy-paste work to strategic benchmarking, and the savings identification rate for hospital clients improved from 32% to 42%, directly strengthening procurement negotiations.
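The headline percentages follow directly from the raw figures; a quick arithmetic check:

```python
def pct_reduction(before: float, after: float) -> float:
    """Percentage reduction from a before value to an after value."""
    return (before - after) / before * 100

# Cycle time: 3.2 days -> 2 days at the conservative end of the range.
assert round(pct_reduction(3.2, 2.0)) == 38
# Manual data entry: 8 minutes -> 1 minute per quote.
assert round(pct_reduction(8, 1)) == 88
# At the 1.5-day end of the range the reduction is larger still.
print(round(pct_reduction(3.2, 1.5)))  # prints 53
```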
07 · What I Would Do Differently
- Quantify the queue backlog cost earlier: The backlog was the real source of delay — I would have led with that data in the initial business case rather than leading with analyst efficiency.
- Define the AI error rate threshold before building: We didn't formally specify what 'acceptable' accuracy looked like until mid-build. A threshold upfront prevents scope creep.
- Instrument the POC before demos: Adding basic usage tracking to the Streamlit POC would have generated adoption signals that strengthened the investment case further.