Dr. Nathan Price, Thorne’s Chief Science Officer, returns to The Thorne Podcast to talk about advances in artificial intelligence in health care, and how doctors might use AI to spend more time with patients and help patients better understand medical research.
Dr. Robert Rountree Intro:
This is The Thorne Podcast, the show that navigates the complex world of wellness and explores the latest science behind diet, supplements, and lifestyle approaches to good health. I'm Dr. Robert Rountree, Chief Medical Advisor at Thorne and a functional medicine doctor. As a reminder, the recommendations made in this podcast are the recommendations of the individuals who express them and not the recommendations of Thorne. Statements in this podcast have not been evaluated by the Food and Drug Administration. Any products mentioned are not intended to diagnose, treat, cure, or prevent any disease.
--
Hi, everyone, and welcome to The Thorne Podcast. This week, we have Dr. Nathan Price back on the show. Dr. Price is the Chief Scientific Officer at Thorne HealthTech, the co-director of the Hood-Price Lab for Systems Biomedicine, and the co-author of the book The Age of Scientific Wellness.
It's great to have you back, Nathan. Have you been working on any interesting projects lately? Anything that would catch our attention?
Dr. Nathan Price:
Yeah, we've been doing a few things that I think are pretty interesting. So one of the elements that I'm really focused on right now is at the intersection of some of the new AI technologies and how we're thinking about health or being able to give information back. And so, one of the areas that I'm really intrigued by is the degree to which we can take really high dimensional data and build on top of these AI systems, like ChatGPT or Claude or any of these kinds of things, ways to try to leverage the scientific literature to interpret what we're observing in these data sets. So this is trying to create something that I'll call a cadre of virtual biologists in order to understand what's going on in these different scenarios, and it's pretty fascinating the kind of things that you can get into on that.
Dr. Robert Rountree:
Great. Well, you've alluded to where we're going, so let's jump right into the main discussion for today's episode: artificial intelligence. This is a follow-up to an earlier show we released in 2022. Wow, it's been two years.
Dr. Nathan Price:
Wow.
Dr. Robert Rountree:
And we discussed machine learning, digital twins, the early onset of some of this stuff. And obviously, now there's been an explosion of AI applications, like you mentioned ChatGPT, these amazing digital art generators where you say, "Well, make me a copy of the Mona Lisa," and it cranks it out in a couple of minutes. So it seems like AI is everywhere now, it's exploding for everyday people, and that's an important phrase. For everyday people, AI is out there. If you go onto certain websites and you want to look at a pillow, then you're looking at reviews that are summarized by AI. And it doesn't say, "This is AI speaking," you just see this review and then in the small print, "AI has summarized what the consumers have thought about this." So is it safe to assume that the same thing is going on in the world of medicine?
Dr. Nathan Price:
Yeah, absolutely. I think that there is an incredible amount that's happening at the interface of AI and medicine right now. And to your point, we spoke in 2022 on this topic, and today's discussion will no doubt be radically different from that one. It was interesting, I was speaking at an AI conference a few months ago, and we were all reflecting on the fact that we had come back after a year, and in that in-between year ChatGPT had really hit with 3.5, these AIs had come on board. We were all struck by how virtually the entire content of what we spoke about at the second meeting had almost nothing in common with what we talked about at the first meeting, because the world had changed so dramatically. And that's really happening right now. We can get into all kinds of those things.
Dr. Robert Rountree:
Yeah. So it's entering our lives in an exponential fashion. So tell us exactly how that's happening. Where are we seeing it perhaps in ways that we don't realize? Especially in medicine, healthcare.
Dr. Nathan Price:
Especially in medicine and health care. So there's a number of things that are happening, some of which will happen very rapidly, and some of which will take a little bit more time. One of the first things that has pretty much universal interest from clinicians is automated generation of chart notes. Anytime you've gone into a doctor's office, typically the doctor's there, they may be talking to you, but they're looking back and forth at a computer screen, they're typing out a few notes, and what you might not see is that they then have to walk into the other room and summarize, or at the end of the day they have to write these things up.
Dr. Robert Rountree:
Hours. For hours.
Dr. Nathan Price:
For hours, for hours. And it is, I mean you would know this, Bob, you're the clinician, but it is the most hated aspect of physicians’ jobs in general.
Dr. Robert Rountree:
Detested.
Dr. Nathan Price:
Universally detested. And now you can have a little device, you can get these to take school notes or whatever, and you can use them in medicine too, that will basically listen to the conversation and write a summary of what you talked about. And you can optimize these to keep track of the important elements. Then what the physician has to do is merely review the notes, so you're not ceding the medical care or the summary completely to AI, but you are radically reducing the amount of time that a physician has to spend on this kind of tedious task, which gives them more time to be with patients, more time to think critically, and so forth. So there's one absolute win that's starting to get in place and that physicians and health care systems are in a rush to make happen.
Dr. Robert Rountree:
And this is not dictation, you're talking about. I remember the old days where I had a little recorder and I would dictate my notes, send it to a transcriptionist, and three weeks later I would get the printout of what I said. This is different.
Dr. Nathan Price:
This is different because, one, it takes that dictation in real time, so it builds as you go. But then it summarizes it, so it's able to condense and compress what has been said. If you're being redundant, if there's a lot of extra pieces, it knows the parts that really matter: the drugs that were mentioned, maybe the care that was mentioned. You can also start tying that into the EHR system so that you are adjusting variables in there, so that when it knows you ordered a test…
Dr. Robert Rountree:
Plugs it into the electronic health record.
Dr. Nathan Price:
Plugs it into the electronic health record: OK, this test should be coming, or should be ordered, or all kinds of things. So there's tons of that sort of automation that's going to be important. And this writing of chart notes is going to see quick adoption because there's almost no opposition. There's obviously issues in terms of privacy and so forth.
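To make the ambient-scribe idea concrete, here is a minimal sketch, assuming the OpenAI Python client; the model name, visit transcript, and JSON field names are illustrative stand-ins, not any vendor's actual product.

```python
# A minimal sketch of the ambient-scribe idea, assuming the OpenAI Python
# client; the model name, transcript, and JSON fields are illustrative.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

visit_transcript = """
Doctor: Your blood pressure is still running high, so let's raise the
lisinopril to 20 milligrams. I'd also like a basic metabolic panel next week.
Patient: OK. The knee pain I mentioned last time is mostly gone now.
"""

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model name
    response_format={"type": "json_object"},
    messages=[
        {"role": "system",
         "content": "You are a medical scribe. Summarize this visit as JSON "
                    "with keys: summary, medications_changed, tests_ordered. "
                    "Include only what was actually said."},
        {"role": "user", "content": visit_transcript},
    ],
)

# The physician reviews this draft note before anything enters the EHR.
note = json.loads(response.choices[0].message.content)
print(note)
```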
Dr. Robert Rountree:
But there's no human involved in that, it's just a machine that's listening and learning.
Dr. Nathan Price:
It's just going to your doctor, so it's pretty much the same people having access to it as before. So that's one of the quicker ones. A second one, and this one's actually already deployed, is trying to reduce medical error. People are often surprised to find out that medical error is something like the fourth leading cause of death in the United States.
Dr. Robert Rountree:
Oh, wow.
Dr. Nathan Price:
It's pretty high. It's surprisingly high. Some of the things that lead to medical errors can be very simple to solve with AI systems. A good example is that drug names tend to be long and complicated, and some drugs have very similar names but are used for very different conditions. A physician will often have to click a box in the EHR system next to the drug that they want ordered, and you've got maybe two long, complicated names that differ by only a couple of letters in the middle or something. And if they're sorted alphabetically, which sometimes they are, that can be a challenge.
And so there's a program called MedAware, for example, and what it does is build into the system an understanding of which drugs are related to which conditions. So if it understands from the system that you're trying to treat cardiovascular disease, and you click next to a drug that is in fact for, say, lung disease but has a name very similar to the drug you probably meant, it will actually ping the physician back and say, "Did you actually mean this drug for lung disease, or did you in fact mean this other one?" And this has been deployed in hospital systems, and it's led to a very significant reduction in prescription errors in those systems. And this takes advantage of the fact that the kinds of mistakes that computers or AIs make and the kinds of mistakes that humans make are very different.
Computers make mistakes because they have no understanding of the outside world; they just run the program that they're on. But a difference of a couple of letters or sounds doesn't exist for a computer. It's all converted into binary code, and that binary code either matches or it doesn't. Those two drugs, in a phonetic space, have nothing to do with each other for a computer; it would never make that error.
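To make that concrete, here is a toy sketch in the spirit of that check, not MedAware's actual method; the two look-alike drug names and the tiny formulary are illustrative.

```python
# A toy sketch in the spirit of the check described above (not MedAware's
# actual method): flag an order when the clicked drug doesn't match the
# working diagnosis but a near-identical name does. Data are illustrative.
from difflib import SequenceMatcher

# Hypothetical formulary: drug name -> condition it treats.
formulary = {
    "hydroxyzine": "anxiety/itching",
    "hydralazine": "high blood pressure",
}

def similar(a, b, threshold=0.7):
    """True if two drug names are nearly identical strings."""
    return SequenceMatcher(None, a, b).ratio() >= threshold

def check_order(drug, working_condition):
    """Warn if the clicked drug mismatches the condition while a
    look-alike name matches it."""
    if formulary.get(drug) == working_condition:
        return f"OK: {drug} matches {working_condition}."
    for other, treats in formulary.items():
        if other != drug and similar(drug, other) and treats == working_condition:
            return (f"Warning: did you mean {other} (for {working_condition}) "
                    f"rather than {drug} (for {formulary[drug]})?")
    return f"Note: {drug} treats {formulary.get(drug, 'unknown')}, not {working_condition}."

print(check_order("hydroxyzine", "high blood pressure"))
```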
Dr. Robert Rountree:
Computers are always careful. Humans are not always careful, they get rushed, and that's where spelling errors and things like that happen. But computers don't need the time. It doesn't take them any time.
Dr. Nathan Price:
It doesn't take them any time. Now, computers will also make mistakes. One of the early examples in machine learning, which was quite interesting, from a few years ago: a system was applied to a big health care data set to try to figure out, how do you get people to not have heart disease? And it came up with a solution that was incredibly accurate, like a 98-percent chance: if you do this thing, you will not die of heart disease. Incredibly accurate. What was it? It was to ingest carcinogens. People who ingest carcinogens do not die from heart disease.
Dr. Robert Rountree:
They die.
Dr. Nathan Price:
Because they die from cancers. Yeah, exactly. They die from cancers first. And I bring that up, it's kind of an absurd example, even though it was one of the famous early examples, because it gives you a sense of how the classes of errors are different. And the AI doesn't understand that death by carcinogen is…
Dr. Robert Rountree:
Is not preferable.
Dr. Nathan Price:
At least in this case. Yeah. At least in that iteration of it, it's just learning what is predictive. Whereas a human immediately recognizes the absurdity of that, of course, because the way we think is very different. And the case that we just went through, of getting two very close words mixed up, is a very human kind of error that a computer would never make. So it's a good example of where there's positives and negatives.
Now, of course, the more advanced AI systems give us a much broader sense of partnership around some of these things. For example, there was a study out of Google that showed doctors on their own were inferior to doctors with access to search, and doctors with access to search plus GPTs did the best of all. They tested this against, I think it was about 300 different hard medical cases out of the New England Journal of Medicine, for example. And so we know that there are benefits to these kinds of systems.
So we talked about some of the short-term things: you can do chart notes, you can correct medical errors. Now there's a lot of interest in the degree to which these kinds of systems can be used for people to get much more personalized information about their health, in ways that you might optimize over time, and in trying to make these systems as accurate as possible. That's been a big focus for a lot of people. There's hallucination, so they can very confidently tell you things that are wrong. That is getting better, but it definitely still happens. You want to be able to trust as much as possible the information in these systems. But it is important to recognize, again thinking about the nature of human intelligence versus AI, that it's very different from human intelligence. It approximates it because of its ability to handle language, but it is really just a lot of big, massive linear algebra. It's a bunch of matrix computations.
Dr. Robert Rountree:
Is that what's meant by large language models? It's a term I see a lot, LLMs.
Dr. Nathan Price:
So what LLMs do is, and I'll try to explain this at a pretty high level, they look at words. There's this big corpus, and it's not actually words, it's tokens that are approximations of words, but for the sake of argument we'll just call them words for now. So it looks at a word and it maps it into this really high dimensional space. This is what the matrices do, and so you have these locations. There are some great YouTube videos on this, actually, where you can go through some real detailed material that people have put together.
But let's take a word like “tower.” So you have the word tower, and that exists semantically in this space, and that location refers to a meaning. Now, there was a really famous paper called “Attention Is All You Need,” and attention is a way to tailor meaning to a particular context. So when you take “tower,” and let's say it's in a sentence, and the word before it is Eiffel, as soon as it sees that the word before it is Eiffel, that tower moves to a new location in this space, which tells it that this is the Eiffel Tower. OK, now it understands: this is something that's in France, it's different in all the following ways from a generic tower. Now, if the word before that is miniature, miniature Eiffel Tower, now it understands, oh no, this is a keychain or something, so it moves to a different location.
So what happens is that there's all these computations that are put around adapting the meaning of words, which we do ourselves, to all the context of what's happening. And then you get what's called an attention window, which means, how much matters? Is it only the word before? Is it the two words before? Is it the thousand words before? And so all these systems are opening up this attention window so that it can have more and more background context that makes a difference to the actual meaning of a word, because a word has a meaning, but obviously it morphs a lot by the context. And we all understand this.
Now, what an LLM does at its base is predict the next word from context. Given the words that you gave it, what's the most likely next word? It assigns some sort of probability and it amplifies forward. And it's pretty remarkable. When you look at the output of a ChatGPT or something, it's the equivalent of a human sitting down and writing the essay start to finish with no going back, no retraction, no anything. And it does that way better than any human across a broad range of subjects; maybe an incredible writer could do better in a niche area, but it's pretty amazing.
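As a rough illustration of those two ideas, context-dependent word vectors and next-word scoring, here is a toy sketch in numpy; the tiny vocabulary and random, untrained weights are stand-ins, so the probabilities it prints are arbitrary.

```python
# A toy sketch (not any production model) of the two ideas above:
# (1) attention re-weights a word's vector using its context, and
# (2) the model scores every vocabulary word as the "next word."
import numpy as np

rng = np.random.default_rng(0)
vocab = ["miniature", "Eiffel", "tower", "keychain", "France"]
d = 8                                   # embedding dimension (tiny for illustration)
E = rng.normal(size=(len(vocab), d))    # one vector per word: its "location" in space

def attend(token_ids):
    """Single-head self-attention: each word's vector becomes a
    context-weighted blend of the words around it."""
    X = E[token_ids]                              # (seq_len, d)
    Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))  # untrained weights
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(d)                 # how much each word attends to each other
    weights = np.exp(scores) / np.exp(scores).sum(axis=-1, keepdims=True)  # softmax
    return weights @ V                            # "tower" now carries "Eiffel" context

def next_word_probs(context_ids):
    """Score every vocab word as the next word: project the last
    contextualized vector onto the embedding table, then softmax."""
    h = attend(context_ids)[-1]                   # contextualized vector of the last word
    logits = E @ h
    return np.exp(logits) / np.exp(logits).sum()

ids = [vocab.index(w) for w in ["Eiffel", "tower"]]
probs = next_word_probs(ids)
print(dict(zip(vocab, probs.round(3))))           # untrained, so values are arbitrary
```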
What everyone's really interested in now is something called agents. So, when we talk about things like prompt engineering, this is why it matters exactly how you write out the prompt, because that's giving the model the context from which it starts to predict the next word. And that's why it matters a lot that you tell the AI: who is it? What's its background? What's its context? The more you give it that's clear and focused, the better it's able to do that, but it still does it in one pass.
Dr. Robert Rountree:
So the prompt could be something like, what are the medical signs of vitamin D deficiency? Could that-
Dr. Nathan Price:
Yes. You could ask that, but it would be better if you started by saying something like, "You are a naturopathic physician who also has a deep science background in biochemistry."
Dr. Robert Rountree:
And that means something?
Dr. Nathan Price:
"Your role is...". That means something because it starts to orient around the part of the literature that it should train around. Or you say, "Please constrain all your answers to things that can be well verified in PubMed."
Dr. Robert Rountree:
OK. Evidence-based, or certain kind of evidence.
Dr. Nathan Price:
Evidence-based. Because otherwise it's going to pull all this stuff in from the Web, and you're saying, "No, look, I want you to focus on these kinds of things." The thing that is amazing about these systems is that it does feel like talking to a human in some ways, because you're programming it by the words, you are programming it by words. And so, just like the clearer you are in a description to a person, the LLM responds pretty similarly, even though the underlying technology is so different. But that's pretty remarkable.
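Here is a minimal sketch of that persona-plus-constraints prompting, assuming the OpenAI Python client; the model name and prompt wording are illustrative.

```python
# A minimal sketch of persona-style prompting, assuming the OpenAI
# Python client; the model name and prompt wording are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

system_prompt = (
    "You are a naturopathic physician with a deep science background "
    "in biochemistry. Constrain all of your answers to claims that can "
    "be verified in the peer-reviewed literature indexed in PubMed, "
    "and say so when the evidence is weak or absent."
)

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model name
    messages=[
        {"role": "system", "content": system_prompt},  # who the model "is"
        {"role": "user",
         "content": "What are the medical signs of vitamin D deficiency?"},
    ],
)
print(response.choices[0].message.content)
```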
So given that background on prompts, the next thing that's really a hot topic right now is what we call agents. An agent means that there's a particular LLM, and we said that one LLM is just predicting the next word, so it's just going to go. It's like writing your essay from start to finish in one shot. Now, as we know as humans, our first draft of anything is usually terrible.
Dr. Robert Rountree:
Usually you throw it away. Or in typewriter days, it was a lot of paper.
Dr. Nathan Price:
Yeah, typewriter days, lots of paper. Yeah. Those of us that are old enough still remember the first time you saw a word processor where you could cut paste and your mind was just-
Dr. Robert Rountree:
Yeah, what a thing. What a thing.
Dr. Nathan Price:
... wow, that's going to change the world.
Dr. Robert Rountree:
Yeah. And it did.
Dr. Nathan Price:
And it did. Yeah, it was magical when you first saw that. So LLMs are doing an amazing job, but when you're using ChatGPT or something, it's just that first draft. So what an agent is: your first agent is maybe the drafter, but you could set up a persona, which again is just adjusting all the weights on these matrices by the context that you use, and now you have something come in as a reviewer. This is one of the ways that you can really help get rid of hallucinations, because, and people at home, you can try this in ChatGPT or Claude or Gemini or any of the tools that you like, you can have it write something out, but then you can set up a second prompt that says, "Read through what you just wrote and identify anything that is incorrect or that's a phantom or that is..." And a lot of errors…
Dr. Robert Rountree:
It can edit itself?
Dr. Nathan Price:
A lot of times it can edit itself.
Dr. Robert Rountree:
Oh my gosh, I didn't know that.
Dr. Nathan Price:
Not foolproof, but because you're now asking it a different thing, because now it's taking that whole piece that it wrote as context, and then it goes and assesses it. And very often, if you notice something that's wrong, you can tell the algorithm it's wrong, and it will recognize that, apologize, because that's how these things are programmed, and then it will proceed.
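Here is a minimal sketch of that draft-then-review pattern, again assuming the OpenAI Python client; the model name and prompts are illustrative.

```python
# A minimal sketch of the draft-then-review pattern described above,
# assuming the OpenAI Python client; prompts and model name are illustrative.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o"  # illustrative model name

def ask(messages):
    """One round trip to the model."""
    out = client.chat.completions.create(model=MODEL, messages=messages)
    return out.choices[0].message.content

# Pass 1: the "drafter" writes in a single shot.
draft = ask([{"role": "user",
              "content": "Summarize the evidence linking magnesium status "
                         "to sleep quality."}])

# Pass 2: the same model re-reads its own output as a "reviewer."
review = ask([
    {"role": "system", "content": "You are a skeptical fact-checker."},
    {"role": "user",
     "content": "Read the following text and list any claims that are "
                "unsupported, overstated, or likely fabricated:\n\n" + draft},
])
print(review)  # not foolproof, but the second pass often catches errors in the first
```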
Dr. Robert Rountree:
So you're directing the learning that is taking place?
Dr. Nathan Price:
You're directing the learning.
Dr. Robert Rountree:
Wow.
Dr. Nathan Price:
So one of the things that we're really interested in, and this is what I was alluding to at the beginning, is, let's say that we run some experiment. Let's say we're just looking at proteins that are different between people that have cancer versus not, or a particular condition. You end up with these lists of proteins, or lists of genes, and you're trying to sort out what's going on, which is a very complicated process. It takes people a long time to sort through that, and no one could know the entire literature. But you can now build an AI that can search through massive amounts of text, say, trained across the entire biomedical literature, and you can then break the work up among agents that take subsets of these pieces and go find what connects them. Have there been any experiments that link these together, and so forth?
And you can build armies of these little agents, bring them together, and have them work through a system or a dialogue such that you can pull out really relevant information about things that were otherwise opaque to you. That's language from my world, but if you just think about it like writing: it's a first draft, then you have your reviewer, you have an editor, you can have a critic. You could set up an agent that you tell, "You take a skeptical view of the usefulness of supplements," or something. "Attack the following article on any weaknesses it hasn't proven to your satisfaction."
You set up an agent that's antagonistic, for example, and pressure test things. Do they really believe this is solved? Or was there an issue you didn't think of? Or, let's say there are different competing theories of Alzheimer's disease. You can set things up as an argument between different agents. And so this is one of the interesting frontiers right now, which is, can you build systems of interactions? And it is kind of interesting because it mirrors how we build teams or advisors or consultants or a corps of specialists.
So at Thorne, for example, if we're working on some new product, I'm going to have my view as the Chief Science Officer, I'm going to have an idea about what I think would make a big difference, and I'm always trying to build things for the future. But you're also going to have a chief legal officer, and the lawyer agent seeing this would be, "That's great, but here are the concerns. Here are the lines we can't cross. Here are the things we're allowed to say. Here's what we're not allowed to say." That would be an agent. Marketing would be a different agent: "Well, are people actually interested in this thing that you want to build?" And so on. And I don't mean to make that too specific, this would be true in any company. You've got your person that wants to drive innovation, you've got your legal team, you've got your marketing, you've got your operations. Everyone would be interested in different aspects of this thing.
So you can imagine it now in the context of health care, coming back to our main thesis here, which is: OK, we want to be able to deliver information, and we want it to be accurate. So once we generate it, there's a second agent that might look at it and say, "OK, I'm now going to act as a quality control reviewer on the following. Are there problems with it?" And you could shunt these back and forth between different agents. And what people are showing, Andrew Ng at Stanford has done some beautiful work in this space, is that you can take these conglomerates and end up with an end product that's much better than what you started with. Just like your first draft is not nearly as good as the final one. When I wrote the book, for example, was the first draft anywhere close to as good as what came out at the end, after we worked with our editors and people who went through and helped rewrite sections and make it accessible to the public? No, it was radically better by the end of the team effort.
So the same kind of thing can apply with the LLMs, and that I think is going to be really important for how these get to a level of accuracy and trust that we would want in medicine. Only training on certain things. You can also vectorize your answers to trusted text. So you can tell the system, "These are things that we believe and are well vetted. Don't treat the wide Internet as equivalent to these other items that are peer reviewed or trusted or have run through serious scientific and legal review," etc. So there's a lot that can be done as you start thinking about the frontiers here, and I think it's super fascinating. I think so many of us are just kind of in love with these machines because they're fun, they're interesting to see what comes out. No doubt about that.
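Vectorizing answers to trusted text is usually done with retrieval: embed the vetted documents, pull the passages closest to a question, and instruct the model to answer only from them. A minimal sketch, assuming the sentence-transformers package; the corpus text and model names are illustrative.

```python
# A minimal retrieval sketch: embed a small trusted corpus, find the
# passages nearest a question, and hand only those to the LLM as context.
# Assumes the sentence-transformers package; corpus text is illustrative.
import numpy as np
from sentence_transformers import SentenceTransformer

trusted_corpus = [
    "Peer-reviewed finding A about vitamin D and bone density...",
    "Peer-reviewed finding B about magnesium and sleep...",
    "Internal, legally reviewed product monograph text...",
]

embedder = SentenceTransformer("all-MiniLM-L6-v2")
doc_vecs = embedder.encode(trusted_corpus, normalize_embeddings=True)

def retrieve(question, k=2):
    """Return the k trusted passages most similar to the question."""
    q = embedder.encode([question], normalize_embeddings=True)[0]
    sims = doc_vecs @ q                     # cosine similarity (vectors are normalized)
    top = np.argsort(-sims)[:k]
    return [trusted_corpus[i] for i in top]

context = "\n".join(retrieve("Does magnesium affect sleep?"))
prompt = (
    "Answer ONLY from the vetted passages below; if they don't cover the "
    f"question, say so.\n\nPassages:\n{context}\n\n"
    "Question: Does magnesium affect sleep?"
)
# `prompt` would then go to the LLM exactly as in the earlier sketches.
```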
--
One of the things that I really am a big believer in, and I've been pushing this a little bit in the scientific community because I think it's a really good idea: there's all kinds of scientific evidence that the public pays for. Let's be clear, most research is funded by the government, so the public is already taxed for these things. Now, the findings mostly get published in scientific papers. And there was a big uproar about this, very rightly so, a few years ago, because almost all of these papers were locked behind paywalls at journals, meaning that people couldn't even get access.
Dr. Robert Rountree:
$40 an article.
Dr. Nathan Price:
Yeah. And to be clear, sometimes I can't even get access to my own papers. If my institution doesn't have a subscription, I get charged the same amount. Sometimes you can get it, they've gotten a little better at that. But sometimes I run into it myself on things that I actually wrote. Very annoying.
And so that was a big issue. That's been mitigated a lot. But what hasn't been mitigated is that, for the general public, even if you want to go read the study, they're pretty opaque. The language in them is really written for other experts, very technical, sometimes pretty hard to parse. So one of the things you can do now, by setting up these well-vetted, well-put-together LLMs that are trained on all of that biomedical literature, is create portals whereby anybody could ask, "Describe to me the evidence for or against whatever proposition, as it's known in the scientific literature," which I think is fantastic. I love this. I really love this idea.
And the nice thing about the LLMs is you can tell it who you are. You can say, "I'm a third-grade student doing a research report and I want to learn about ‘what are the causes of cancer,’" and it will describe it to you in a way that is tailored to you at the level that you're at, and that you can ask follow-up questions on.
Dr. Robert Rountree:
Wow.
Dr. Nathan Price:
You can also go in there and say, "Look, I am a PhD in bioengineering with 20 years of experience in systems biology and bioinformatics, but I'm not an expert in endometriosis," or something that we're studying. "Explain it to me, catch me up: what's the basic state of thought in this field? How has it changed over the last five years?" And then you can ask follow-up questions. Usually the top line is not that detailed, and you can then dive in more. And then, for some of the systems at least, you can ask it to give you papers. And then you can go through and be like, "OK, is this real information that I'm getting? Is this a valid summary? Is it not?"
Dr. Robert Rountree:
Is it hallucination?
Dr. Nathan Price:
"Or is it a hallucination?" The early systems hallucinated a lot. More recently, it's definitely getting less, but still there. The more it's representing aspects of thought that there's a lot of text on, it doesn't hallucinate so much. The more you get into obscure areas, it will start to hallucinate quite a bit, because it doesn't have as much background training data, but you can get a sense of that. So I think using these things intelligently and interactively is really important. And then as a community, we could really make this a lot better.
And, I don't know the details on it yet, but the National Library of Medicine, which hosts PubMed and related resources, is really becoming a repository for AI and computational biology efforts by the National Institutes of Health. And I think there's a lot of push to try to organize all this information in powerful ways. My argument has been that, if that's done right, we actually deliver to the public some of the promise of science that's never really been delivered before, which is: you've never had direct access yourself to this much unfiltered information in a way that you can actually access and understand without being a professional scientist. I think that is incredibly useful, and I think it's going to be so helpful for judging the things that are out there that you either should or should not believe, because you should be able to get access to more baseline information about health.
OK, well, this claim is being made. Where does that come from? It used to be behind a paywall or, if not that, at least super opaque. You can now ask questions, follow-up questions. And as these things get better and better at not hallucinating and being more and more accurate, I feel like that is so good for trust, for people to understand what's going on in different areas. To their credit, people in general are smart, and they can sit down and go through it; they might not have the background and know all the jargon in an area, and this helps them get that. I am so excited about that as an eyeopener for a lot of people. And even as a professional scientist myself, like any scientist, I'm probably a little broad compared to most scientists in terms of my interest areas; some scientists are more narrow and go incredibly deep.
But for all of us, no matter what, there are vastly more topics we don't know much about than ones we do. So I find it's a super useful tool when I'm looking at a different field where I don't know all the literature and I want to get a view. In fact, I think that going forward, as these systems get really good, we might even change quite a lot of how the entire scientific enterprise of disseminating information is done. Because I don't know how often people will sit and read one study, as opposed to making sure that this corpus is something everyone can access via a portal. That's a little bit of a radical statement these days, but I don't think it will be for very long, honestly. I think it's going to come about pretty quickly.
Dr. Robert Rountree:
Well, we are clearly at the edge, or some people may say the precipice, of some radical changes in how medicine is practiced, I think, as a result of this. So this has been a real “wow” discussion, amazing discussion, and I hope people that are listening are as excited about this information as I am. So we're going to take a short break, and then after that we're going to answer some questions from our listeners.
-- Ad Break --
We know getting older is inevitable, but you can control how well you're aging. With a full spectrum of at-home tests and nutritional supplements for whatever your health needs may be, Thorne can help you do just that. To get started, take Thorne's Healthy Aging Quiz to get real recommendations from Thorne's Medical Team. Go deeper with Thorne's Biological Age Test, which analyzes your entire body's rate of aging, and receive a personalized wellness plan that helps improve your longevity. Get started today by visiting thorne.com/healthy-aging.
--
Dr. Robert Rountree
And we're back. So now it's time to answer some questions from the community. Our first question this week comes from a listener who asks, “It seems like AI would be useful for personalized medicine since it can take all factors into consideration, but how much information from me does it need to give useful results? Probably more than I give my doctor?”
Dr. Nathan Price:
Yeah, I think that's a great question. One of the big advantages of AI in the long run is that it can take advantage of a lot more kinds of information than we can get our heads wrapped around. For example, if we want to bring a lot more information from genetics into play, some of the genetic scores that estimate risk, for example, may take over 100,000 genetic variants.
Dr. Robert Rountree:
Wow.
Dr. Nathan Price:
And AI can kind of think about that; a person can't. One of the things that I'm really interested in, for example: in genetics, there's no single gene that encodes for height, but you get a very good prediction of height when you take 186,000 genetic variants. People argued a lot about whether you'd get much benefit from that much more genetic information, and we're doing a lot of this now, and the answer was: yeah, you do, a lot, but it comes from summing up lots of little pieces of information.
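A polygenic score like the one described is, at bottom, a weighted sum of many tiny per-variant effects. A toy sketch with random stand-in numbers, not real GWAS data:

```python
# A toy sketch of a polygenic score: sum many small per-variant effects.
# Effect sizes and genotypes here are random stand-ins, not real GWAS data.
import numpy as np

rng = np.random.default_rng(42)
n_variants = 186_000                      # the scale Dr. Price mentions for height

# Genotype: 0, 1, or 2 copies of the effect allele at each variant.
genotype = rng.integers(0, 3, size=n_variants)

# Per-variant effect sizes (betas), each individually tiny.
betas = rng.normal(0, 0.001, size=n_variants)

# The polygenic score is just the weighted sum of all those small signals.
score = float(genotype @ betas)
print(f"polygenic score: {score:.3f}")    # meaningful only relative to a population
```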
So in medicine today, we mostly use very small numbers of biomarkers, maybe one to just a handful, for all of our diseases. But we might get a lot more information from what I call the long tail of medicine: all these subtle signals that are distributed throughout the body, and that maybe you're picking up from your wearables and so forth, that are pretty much not used in medicine at all today because of this limitation on data and what we can think about. That will go away over time with AI systems; we're not there yet. So yes, I think AI is really essential to achieving the full promise of personalized medicine. We will deal with that as it goes forward over time, but the ability to take in a lot of context and be able to operate on it is, I think, going to be really enhanced by these systems.
Dr. Robert Rountree:
I teach functional medicine to mainstream doctors, some of whom really don't know anything about that particular approach, and by that I mean applying systems biology, which is really what functional medicine is: applied systems biology. And the feedback I get, and I actually heard this from a friend who's a doctor at Mayo Clinic, is basically, "It's too much information." Because in functional medicine, every little piece of the long tail is important. You might do a metabolomics profile that's got all kinds of organic acids and amino acids and your level of CoQ10 and your level of magnesium and your level of essential fatty acids. And your typical doc who's got 10, 15 minutes with a patient is going to go, "Whoa, I can't handle all that information," and what you're saying is AI is going to change that, because it can basically interpret that in a flash.
Dr. Nathan Price:
Exactly. And what I think will happen a lot, certainly for a while, is that it's going to bring down certain insights from that massive data, hopefully in a way that the physician and the patient can get their minds wrapped around. Sometimes you can pull it out and understand the processes, or what's going on, and the LLMs actually have a pretty great ability to summarize information. We still have quite a ways to go before these systems give tremendous insight; they're not yet much in the way of forward-reasoning systems, I would argue. But they can assess a breadth of information that's just radically more than any person. So I think they're going to be important in the loop, but you're exactly right, because it has to bring a simplified view out of that complexity, and it's got to bring it to light. And that's why there's so much interest in what's called explainable AI, AI where we can understand something about what's happening. That's a whole other discussion.
Dr. Robert Rountree:
OK. Next question. “As a nursing student, should I be concerned about AI coming for my job? Should doctors be concerned about AI coming for their job?” And I hear that a lot; this is not a rare question.
Dr. Nathan Price:
Yeah, and we're certainly going to see the future unfold. My view is, well, there's the very often repeated notion that you're not going to lose your job to AI, but if you don't adopt AI, you'll lose your job to somebody who does. I do find that pretty compelling. I actually think nurses are going to remain very important. I think that human connection is huge, and I feel the same about doctors.
One way to think about AI, and I was seeing this argument from a high-level guy at Google a week or two ago, I'm forgetting his name, but he made what I thought was a very good point, and one I think I agree with, which is that when we talk about AI right now, it's not really intelligent. It can accomplish amazing skills. But the point this person was making, which I thought was really good, is that the ability to perform a skill is not proof of intelligence, because you can do it with a sufficiently large lookup table. What we're talking about is massive computation, and it has adopted language, which is why it has this different feel to us now, which is amazing, and we talked about that before. But there's still so much that it doesn't really have intelligence about, whereas human intelligence, I think, has proven to be incredibly adaptable at different scales.
So the way I look at it is that all these jobs are going to be changing, but I don't see them going away. And I fall maybe a little more in the Yann LeCun camp on this, he's another really huge AI leader: we're still quite a ways away. There's so much that a human can do that an LLM can't. I see these technologies as being very empowering. I think we're going to be able to think about challenges at a different scale than we do now, and we might be able to leverage some of these systems to really dive in deeply on the biology of one person.
At my lab, we're actually going to do this kind of thing. I think we're going to be able to do that at a scale and a level that we weren't able to think about before, but now we will. So I think that most jobs, certainly the way I'm thinking about it, are going to become even more interesting, even more fulfilling I hope, because I think the rate of progress will be accelerated. Just like when we went through the genome: people used to spend a whole career figuring out one gene, and then all of a sudden we could just sequence genomes, and then everyone started working on genomes. Now we're going to be able to ask questions across that understanding in an LLM and get answers back quickly. That just makes the cycle of working toward the next thing faster and better. So I do think the nature of these jobs changes quite a lot, but I'm not one of those people who think we're going to go into radical mass unemployment anytime soon.
Dr. Robert Rountree:
So I love what you're saying about doctors that don't know AI losing their jobs to somebody that does. Is that something that's being taught in medical schools, or is it on the way? Is it a standard part of the curriculum now? Is it how to do what you were saying earlier, which is to actually have a dialogue with AI instead of just reciting facts? How I was trained in medical school is: OK, you need to recite the 15 most common symptoms of magnesium deficiency. That's the old model. But the new model is going to be more interactive, asking questions about the questions.
Dr. Nathan Price:
Exactly, it's about how you go about that process. The memorization, because medical school has long meant massive memorization for doctors, is going to become increasingly less important, I think. You do need to have things in your head to think about them, but I think tons of that is going to be AI-assisted. As for medical schools, it's probably going to take a while to get curricula set up, though I know some are already moving that way. That said, there's so much continuing medical education, and doctors are getting there. I saw a survey, I can't vouch for it beyond that, but they did a survey of doctors, and I think 67 percent of them said that, yeah, they use ChatGPT as part of their medical practice today.
Dr. Robert Rountree:
Wow.
Dr. Nathan Price:
So if, when you're thinking about a problem or searching or looking something up, you're not using this tool, I'm surprised.
Dr. Robert Rountree:
Something's wrong.
Dr. Nathan Price:
Yeah, maybe you're only seeing things that you've seen a million times before, but just the chance to dive in: "Well, what is the latest on this? What's known about that?" With all the caveats we gave before around hallucinations and back and forth and vetting, and you're ultimately responsible for the information, you can't take these things blindly and so forth. But yeah, I think people are already using them. They're here to stay as far as I can see.
Dr. Robert Rountree:
So it is becoming a formal part of the education or more incorporated into medical education?
Dr. Nathan Price:
Yeah. I think it's in its early stages, but I can't imagine any forward-looking medical school is not all over this and scrambling to get people who are expert on the topic. I can't imagine that's not high on the list. One example I just saw: I was at UC Davis, and they now have a new chief AI advisor, I think that was his title. These health-care systems are bringing on someone specific to AI. It's a big new area, there's no doubt about it.
Dr. Robert Rountree:
So next question: "Dr. Rountree, do you use AI in your practice?" I do. It's something I was a little hesitant about because I thought, what more is it going to add? But what I've found myself doing over the last year or so is using it for problem solving, mostly looking for patterns. So I'll type in a question, "Well, is there a connection between somebody with a high calcium level and this hormonal problem?" and then see what kind of response I get, and it'll give me an idea of how to investigate. "Here's a patient with chronically elevated levels of cortisol and hypoglycemia. Is there a connection between low blood sugar and cortisol being up? And if so, what is the data for that, and is there a next round of tests that I should be doing?"
So again, for problem solving. But what I hear you saying is that we should probably be using it in medicine more at the very beginning, for triage, for information gathering. The time-consuming tasks actually eat up a lot of the space in the doctor's office. If all that's taken care of ahead of time, and I know we have this setup called lab one, maybe that's not what it's called anymore, where the patient walks in and they get all their metrics done, they get their blood pressure, their oxygen, their weight, their body fat, maybe a simple blood test, and it's all done before they've ever even seen somebody. I see that coming in the future.
Dr. Nathan Price:
Yeah, I think that's right. And I think clinicians, they're going to figure out what really works for them. And like you talked about, you've got your process, maybe this helps to accelerate it. You'll be able to try it out, and you'll see, does it in fact accelerate your process? Is the endpoint as good or better than what you had before? Etc., etc. I think so much of this, it is moving fast, but by the same token, we're partners all along the way in just figuring out how this fits in.
Dr. Robert Rountree:
So one last question. “Does the rise of AI mean the death of the ‘second opinion’?” And that's in quotes, the “second opinion.” If two AI models give different results, won't the incorrect one be changed so they're all the same?
Dr. Nathan Price:
I think this is a fascinating question actually. I quite like it.
I don't think so. And the reason is that, for the most part, what you get back from these AI models are probabilities. So I doubt that what's really going to come back is, "This is the answer." I think it's much more likely to be, there's a 65-percent probability of X and a 20-percent probability of Y, and it may or may not quantify those; probably there will be some fuzzy ranges and things like that once we get there. Because there's ambiguity in the world, and there is for our intelligence. As a doctor, even when you're making a diagnosis, my guess is that a fair amount of the time there's some uncertainty. You're like, "OK, this is probably what that is. Could be that." And there's probably a process to that.
With AI, I don't foresee that it's going to somehow always know the answer, and I think that's probably not quite the right way to think about it. We have to make a decision, you have to make the decision, but there are almost always probabilities in the background. Even as we think about things, at least for me certainly, you make a decision, but it was probably only a bit better than the alternatives; it was your top choice, but there were a bunch of other things you also thought about. So my guess is that it'll be like that.
It is an interesting question to what extent it would be standardized: if they're all trained on the same corpus, would they all agree on the one most probable answer, and then is it hard to get that to change? That is an interesting question. It's probably as much about the sociology of how medicine happens as anything. I think it's interesting. I don't foresee, though, that it would get rid of the agency of the person at the end of the day, when presented with options, because that's, to me, really sacrosanct: it's your life. At the end of the day, no one is affected by your health the way you are. And so I would imagine that if you're driving your own care, going for second opinions on something serious, that would only be augmented by these systems. But time will tell. We'll see how they unfold. But I thought it was a great question.
Dr. Robert Rountree:
Well, I think for certain specialists, your answer is going to be welcomed. So this is a disruptive system, but it's not going to be that disruptive.
Dr. Nathan Price:
Yeah, I think it's going to be disruptive, but I just think the nature of jobs will change, because we have a tendency... In every previous technological revolution, everyone always thought, well, look at all the jobs that are going to be lost. Industrial manufacturing, that was going to be the end of jobs, and on and on. It could be that this is qualitatively different. I'm definitely familiar with the arguments: if it's just generally way smarter and better than us in every way, then where do we fit in? And that time may come.
Dr. Robert Rountree:
It's still a ways off.
Dr. Nathan Price:
Yeah, we haven't seen that yet, as amazing as these tools are. But so far, I think they're still tools in our hands.
Dr. Robert Rountree:
All right, folks, that's all the time we have this week.
Dr. Price, thanks so much for coming back on the podcast. If our listeners want to follow more of your work, where can they go? What's the best way to keep track of your latest research and interests?
Dr. Nathan Price:
Yes. I'm probably most active on LinkedIn, so people can just find me there and follow; I post things around science and discoveries and things like that that I'm interested in. I also post on X, although less frequently than I do on LinkedIn. And then obviously, if people are interested in products or tests or things that we come out with, that's all on Thorne.com.
Dr. Robert Rountree:
Excellent. That was Dr. Nathan Price, the Chief Scientific Officer at Thorne, keeping us updated on artificial intelligence's role in health care. As always, thank you everyone for listening.
Thanks for listening to The Thorne Podcast. Make sure to never miss an episode by subscribing to the show on your podcast app of choice. If you've got a health or wellness question you'd like answered, simply follow our Instagram and shoot a message to @thornehealth. You can also learn more about the topics we discussed by visiting Thorne.com and checking out the latest news, videos and stories on Thorne's Take 5 Daily blog. Once again, thanks for tuning in, and don't forget to join us next time for another episode of The Thorne Podcast.