How a Routine Colonoscopy—and a Cancer Diagnosis—Changed My View of AI in Healthcare
- Jonscott Turco
- Jun 9

What one unexpected moment taught me about the limits of technology, the power of trust, and where leadership needs to take AI next.
In late 2024, I walked into my doctor’s office expecting nothing more than a routine colonoscopy. No symptoms. No concerns. No red flags. Just the sort of preventive care appointment that barely registers in your calendar until it’s over.
As I waited afterwards for what I was certain would be an “all good” from my gastroenterologist, she approached, calm, attentive, and deeply skilled, and then paused.
“Something doesn’t look quite right. We’ll send it out to Johns Hopkins for testing,” she said gently. “Just to be sure. And stay off WebMD.”
There was no alarm in her voice. Just a stillness that made me notice.
After some time, during which I tried to convince myself it was all going to be just fine, the results came back: colon cancer.
Early Detection Saved Me. But Something Else Changed Me.
The clinical team moved fast. I was referred to a spectacular surgeon who contacted me the very next day. Like my gastroenterologist, she was exceptionally gifted and reassuring without trying. She used the da Vinci Surgical System, a robotic-assisted platform that brings an almost science-fiction level of precision to the operating room.
Afterward, I marveled that I never experienced any pain, before or after the surgery. Discomfort? Sure. But never any pain.
Reading the operative report, I was amazed at the elegant, choreographed precision it described: her training and expertise guiding the machine, the machine amplifying her skill. That combination helped give me a second chance.
But in the quiet moments that followed, the ones where you replay everything you thought you understood about your health, even after my surgeon called with “the best outcome possible,” I found myself wondering:
“Could AI have scanned the rest of me? Could it have seen something we didn’t?”
The answer, offered with compassion, was clear: “Good idea, but not yet.”
And those two words stuck with me. Not yet.
What Patients Really Want from AI Isn’t Just Accuracy—It’s Reassurance
AI is already reshaping medicine in remarkable ways:
In diagnostics, systems like GI Genius are improving adenoma detection by as much as 20%, according to recent randomized studies.
In capsule endoscopy, deep learning models can now read entire GI tracts in minutes, with some studies reporting sensitivity and specificity above 99%.
In surgery, platforms like da Vinci are elevating outcomes and reducing complications, merging machine intelligence with human finesse.
In clinical operations, AI is automating workflows, prioritizing urgent cases, and lightening the load for burned-out staff.
These are real gains. Lives are being saved. Systems are improving.
But in that pivotal moment—the one every patient faces when diagnosis hangs in the air—you don’t just want AI to detect. You want it to see you.
To say: “We’ve looked everywhere that matters. And here’s what we know.”
You want clarity, not just probability. You want context, not just computation. You want confidence—the kind that lets you sleep at night.
We Don’t Need AI That Just Works. We Need AI That Reassures.
That’s the next frontier for leadership in healthcare innovation.
Design AI outputs with empathy. Reports should inform and comfort.
Close diagnostic loops. Don’t just flag what’s abnormal—confirm what’s not.
Measure clarity, not just accuracy. Because peace of mind is an outcome, too.
As someone who’s lived through the diagnosis and come out the other side, I can tell you: patients don’t walk into their doctors’ offices or hospitals asking for AI. They walk in asking for answers.
The tech is catching up fast. But trust? That still takes intention.
The Strategic Case for Reassurance
Healthcare leaders know the pressures: deliver outcomes, manage risk, retain staff. But the case for human-centered AI isn’t just moral—it’s operational.
Better detection → fewer readmissions and malpractice risks
Smoother workflows → less burnout and higher retention
Greater trust → better follow-up, stronger compliance, and healthier patients
And when patients feel seen—really seen—they become partners in care, not passive recipients.
A Call to Reframe the AI Conversation
This isn’t about pitting technology against humanity. It’s about making sure the two evolve together.
Let’s shift the goal:
From “how accurate is our AI?” to “how confident are our patients?”
From “how smart is the system?” to “how supported does the human feel?”
Because in the end, healthcare isn’t just a science. It’s a relationship.
And when we get it right, patients like me won’t just walk out with a diagnosis. We’ll walk out with something just as vital:
Clarity.
How have you seen AI build—or break—trust in healthcare? Where should we go next? I’d love to hear your thoughts.