The Future of Medicine: How AI is Changing Health Care

Published on May 2, 2023

Video Transcript

[MUSIC PLAYING]
JOHN WHYTE
Welcome, everyone. I'm Dr. John Whyte. I'm the chief medical officer at WebMD.

People always say to me, Dr. Whyte, who do you follow on social media? Dr. Eric Topol-- that's the person that I follow. If you want to know anything about health tech or health services research, his profile needs to be in your feed, and he is my guest today. Dr. Topol, thanks for joining me.

ERIC TOPOL
Oh gosh, it's always a privilege, John. Thank you for having me with you.

JOHN WHYTE
I love your Twitter feed, and I'm going to focus a little on it today to help put in perspective some of the aspects of AI. You posted an article, an editorial perspective from The New England Journal of Medicine, and I love the title. It says, "Is Medicine Ready for AI?" So that's my first question to you, Eric. Is medicine ready for AI?

ERIC TOPOL
Well, John, this is a really important point because medicine and the medical community may not be ready, but AI is ready for it. That is, things are happening so fast. I mean, the velocity here is something I've never seen before. ChatGPT was released November 30, GPT-4 March 14. We're talking about weeks, and we're still in the early going of these large language models, so the opportunities, the multitude of things we do in medicine that could be rebooted, are really pretty striking.

JOHN WHYTE
But how do you feel about it? Are you excited about the prospects? Some people will say they're scared about it. In medicine, we tend not to be proactive. We tend to be reactive. So in many ways, I wanted to gauge your sense of where we are and where you are in terms of excited or fearful.

ERIC TOPOL
Well, in the early phases, I'm more fearful, skeptical, but I have confidence, over time, it's something that we should be really excited about. That is, we're going to get the kinks out. We're going to get through all of the validation and all the things that we need to do to get this to have the right guardrails, to have the right human-in-the-loop oversight. And eventually, this is going to be extraordinary, the most important transformation of medicine in our time.

JOHN WHYTE
But who's going to decide all of this, the guardrails, how it's used? And you pulled a quote that I'm going to use to get your perspective, and you made it as a tweet. And you said, "Having AI as an assistant to the doctor is going to play to the strength of the doctor as an intellectual, compassionate provider of care."

But what I want to ask you, Eric-- you're talking about it as an assistant. Not everyone is talking about it as an assistant. Some people are talking about it as the AI doctor, the AI technology that is going to diagnose your condition. Some people think it's in lieu of the doctor.

So what's the right usage? Is it just an assistant? And is that a physician-centric, paternalistic viewpoint that people will argue against, versus consumers saying it's too hard to go find a doctor, so this might be better?

ERIC TOPOL
Right. Well, great question, John. The point here is that it depends on whether you're the patient or you're the doctor.

So if you're the doctor, it provides a lot of assistance because instead of doing a Google search or UpToDate or whatever, you can get information that's much better, that synthesizes everything we know. But also, on any given patient, you could have their data, whether it's their images, their labs, their electronic records, their genome all integrated.

In addition, most importantly, in the near term, you can have all the clinical documentation done. So instead of having to type on a keyboard, the speech during the visit or bedside rounds gets captured, and that will change how a doctor works. So while I'm doing the physical exam, I would be talking about the findings instead of hiding that from the patient, because it has to get captured in the note that's made by the AI. So not only that note, but the preauthorizations to insurance companies, discharge summaries, procedure and operation notes, scheduling new appointments, next appointments, setting up prescriptions-- all this stuff will be automated.

JOHN WHYTE
But do you think that's how most people are talking about it in terms of helping some of those administrative tasks versus helping in deliberative decision making, so helping in that tumor board assessment that we have in oncology in terms of what should be the precise treatment for a patient? And people will argue and say, well, there's not transparency to these algorithms. They make mistakes. They have biases.

Doctors make mistakes. We have biases that aren't always explicit. What exactly is going to be that function in the deliberative process? Where should it be, Eric, in helping us as clinicians make decisions?

ERIC TOPOL
Well, I like the way you partition it, John, with the administrative versus the actual care of patients, key decision making. In the latter, the key decision making, there has to be the human in the loop. Here, we're talking about the doctor clinician. But the point is that the ability, the empowerment of patients to look things up-- like now, when you look things up, it's not about you. It's about whatever is known about-- and there's all these different hits, and you could spend all day going through these.

But what we're talking about in the future is that your data, all of your data, could be used to help screen and make a tentative diagnosis-- let's say a differential. And then you would talk to the doctor about, what about this possibility? And making the diagnosis accurately is going to be enhanced. And patients taking charge and entering their own data-- provided we can deal with security and privacy and bias, like you bring up-- once we get our arms around those troublesome aspects, that's going to be really important for promoting patient autonomy.

JOHN WHYTE
What's your feeling on ChatGPT, these generative content tools, where it's not going to be the same as search, where you kind of type in a query? It's going to be a chatbot. I'm going to ask it questions. I might ask, relating to me, what should I do, given these parameters, to manage a disease? Are you excited by that, fearful about it? What are your thoughts?

ERIC TOPOL
Well, ChatGPT is the forerunner to GPT-4, and there's a big gap in performance because ChatGPT is pure text, language only, whereas GPT-4 brings in all the video and imagery. And the input training is totally enhanced.

Now, I like ChatGPT because it's fun. I mean, I'm having a conversation now instead of trying to do a search and go through all the hits. Actually, I'm impressed by the fluency, the rapidity of the responses, and I'm also impressed by the badness of some of the highly confident responses that are totally fabricated.

With large language models, and ChatGPT in particular, we've seen lots of evidence because it has the fastest-growing user base in the history of any technology. So people have identified this problem of getting wrong results delivered with confidence. That's not going to work in health care.

You've got to have accuracy-- and I'm glad you mentioned it. Doctors do make mistakes, and so will AI. So there are ways to do checks even now, and it'll get better.

JOHN WHYTE
Are we using tech in the right way when it comes to diagnosing health issues? Simply because we have a technology to do it, is that the best approach? And I say that in the context of another tweet where you have a headline from The Economist that says, "An algorithm can diagnose a cold from changes in someone's voice."

And I love your response, which-- I'm going to read your comment-- was, "I think we can do this without an algorithm." So I have to ask, are we sometimes getting it wrong in terms of utilizing AI to diagnose some conditions as we get involved in the hype and the technology?

ERIC TOPOL
Yeah, I mean, I think it can get overcooked. I mean, in that particular example, I was injecting some humor. But what they're doing is, when a voice doesn't change enough that you and I could detect it, and you're the employer, you could actually say, hmm, does this person really have a cold? And you could put the voice through an algorithm to test it. But this is kind of silly stuff, really. We need AI to help us with much more important matters than that.

JOHN WHYTE
What do you think medicine will look like in two years? Will these tools, the use of more widespread AI, fundamentally change the way we interact with patients, or do you think it'll just be slightly iterative?

ERIC TOPOL
Well, if we go on the usual path, which is slow-mo, there won't be a lot of perceptible change. However, on the clinician side, the desperate situation of being data clerks is so bad that I think there's going to be a rapid embrace. There are already many health systems around the country piloting these tools to get rid of keyboards, so I think we're going to see that in the next year or two.

A very substantial proportion, probably still a minority but still substantial, will have automated notes that connect with patients, that nudge patients about whether they've checked their blood pressure, and then all the other things that basically free up doctors so they can spend more time with patients on important matters rather than being slaves to keyboards and screens.

JOHN WHYTE
Well, as I said at the beginning, you are the person that I follow on social to learn about health tech, to learn about health services research, and our audience members should follow you as well if they're not already. Dr. Topol, thanks for taking the time today.

ERIC TOPOL
Oh, you're very kind. Great to join you.

[MUSIC PLAYING]