How full-body MRIs could predict your long-term health, with Daniel Sodickson
 
Paul Rand: The real breakthrough in medical imaging, which we’ll talk about in great detail, is the X-ray. Talk to me about when and how the X-ray was discovered, and what that actually opened up for us.
Daniel Sodickson: Yeah. Well, unlike for many discoveries, there was a concrete discovery date. In 1895, Wilhelm Conrad Roentgen was fiddling around in his laboratory and discovered that a tube he was using to explore the physics of what we now know are electrons caused a screen to glow in the background. And it turns out what he had discovered was what we now call X-rays. He, by the way, was a marketing genius. He called this thing the X-ray, with X for the unknown, and it sparked this huge X-ray fever afterwards. But basically, X-rays are just light on steroids, if you will. It’s high-energy light that blasts its way through things that would otherwise be opaque. And that’s what first allowed us to penetrate the veil of the skin, the skull, all those things that were impeding our view of the interior of our bodies.
Paul Rand: If you look at the evolution of medicine, how did the X-ray advance that field?
Daniel Sodickson: Well, before that time, it’s almost hard to imagine. Physicians were really limited to exploring the outside of the body in order to deduce what must be going on inside. You could poke and prod, but there was no way without doing damage of opening up the body. Now, all of a sudden, the X-ray comes along. You’ve got a patient who got hit by buckshot, and you’re worried that there is something damaging an important organ, you shine an X-ray on it, boom, there, it shows up clear as day. And you can guide a surgeon. This happened actually within the first year that X-rays were discovered. A doctor basically trained an X-ray machine on a poor guy who’d messed up with his gun, and discovered where the bullet was and took it out.
Paul Rand: Wow.
Daniel Sodickson: So now you basically had access to the interior of the body before you went hunting.
Paul Rand: But the CT scan came how many years later?
Daniel Sodickson: So in the ‘70s we got CT scanning, which is basically just taking X-rays from lots of different angles and working out the slice image from those projections.
Paul Rand: Yep.
Daniel Sodickson: We got MRI, which generated its projections, believe it or not, with magnets. We got PET scanning, which got its projections, I kid you not, from anti-matter.
Paul Rand: Wow.
Daniel Sodickson: And then we got ultrasound, which used high-frequency sound to generate projections.
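All of these modalities share the same tomographic trick: gather projections from different directions, then reconstruct the slice where they intersect. A minimal toy sketch of that idea, using a hypothetical 8x8 slice, two perpendicular projections, and unfiltered backprojection (purely illustrative, nothing clinical):

```python
import numpy as np

# Toy tomography: recover the location of a dense spot (the "bullet")
# from two projections of a hypothetical 8x8 slice.
img = np.zeros((8, 8))
img[2, 5] = 1.0

# Each X-ray projection sums attenuation along one direction.
proj_rows = img.sum(axis=1)   # view from the side
proj_cols = img.sum(axis=0)   # view from above

# Unfiltered backprojection: smear each projection back across the image
# and add the views. The spot reinforces where the smears intersect.
back = proj_rows[:, None] + proj_cols[None, :]
row, col = np.unravel_index(np.argmax(back), back.shape)
print(row, col)  # the bright spot is found at (2, 5)
```

Real CT uses many angles and a filtering step (filtered backprojection), but the intersect-the-smears intuition is the same.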
Paul Rand: Talk a little bit more about the process of how an MRI works. When is the right time that somebody thinks a CT scan is going to work and then when actually you ought to be thinking about an MRI?
Daniel Sodickson: Absolutely. Well, first, a little bit about how MRI works. Believe it or not, inside the nuclei, inside the atoms, inside the water, inside you, there are these tiny little magnets, almost like compass needles. If you put them in a really strong magnetic field, just like compass needles, they align. And if you then hit them with radio waves from a radio antenna of just the right frequency, they’ll start zipping around in that magnetic field, and they generate a little radio signal. So believe it or not, under those circumstances, you and I and everyone are living, breathing radio transmitters. It turns out that by tracking where the signal is coming from using these principles of tomography, you can generate a map of the water inside your body. So that’s what an MRI is at baseline. It’s a map of the water in your wet tissues.
So what that tells you is a little bit about what MRI sees. MRI sees the soft, squishy tissues. Where you might use an MRI is where you really want to see what’s going on inside the brain tissue, the heart, the liver, the prostate, the breast, things like that. Where you might want to use a CT scan instead is where you’re really interested in the bony structures, the hard structures. You want to see if there’s a fracture, you want to see if there’s calcium inside a coronary artery. Now, both of them can be used for either purpose, but that’s where each one excels.
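The “just the right frequency” that makes those nuclear compass needles respond is the Larmor frequency, which scales linearly with the magnetic field strength. A small sketch of that relationship (the field strengths below are illustrative examples, not values from the interview):

```python
# Larmor frequency: protons in a magnetic field B0 resonate at f = gamma * B0,
# which sets the radio frequency an MRI scanner must transmit and receive.
GAMMA_H = 42.577  # gyromagnetic ratio of hydrogen, in MHz per tesla

def larmor_mhz(b0_tesla: float) -> float:
    """Resonant radio frequency (MHz) for protons at field strength b0_tesla."""
    return GAMMA_H * b0_tesla

# Example field strengths: a low-field portable magnet, a typical clinical
# scanner, and a high-field research scanner.
for b0 in (0.064, 1.5, 3.0):
    print(f"{b0} T -> {larmor_mhz(b0):.1f} MHz")
```

This is why a 3-tesla scanner listens for your body’s signal at roughly 128 MHz, squarely in the FM-radio neighborhood.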
Paul Rand: Okay. The fields even of MRI are evolving, and you were instrumental personally in developing something called parallel imaging. I wonder if you can talk to us a little bit about how that came up and why it was such a meaningful evolution.
Daniel Sodickson: Well, one reinvention I had the privilege of stumbling into was something called parallel imaging. So MRI, before that time, had been gathered more or less sequentially, almost like a scanner. If you have a digital scanner or an old-fashioned fax machine, I’m not sure how many of your listeners remember what a fax machine is, you sort of scan through line by line until you get the image. In the case of MRI, it was projection by projection, but it’s the same idea.
At the time, this was back in 1996, I was doing a rotation with a physician, a cardiac imager named Warren Manning who likes to image the heart. And it’s sort of considered bad form to stop the heart while you’re imaging it. The heart is constantly moving, and you need to somehow capture that so speed is important. Right?
Paul Rand: Right.
Daniel Sodickson: I started asking myself, well, what are the limits? I knew nothing about MRI at that point. What are the limits of speed in MRI? Why is it slow? And it occurred to me it’s slow because we’re gathering one projection at a time. So I just asked myself the question, is there a way of gathering multiple projections at the same time, like getting two lines of the scanner, or three or four lines of the scanner at once? And I realized that we were using these radio detectors to pick up the signal. What if we had an array of detectors around different parts of the body? Could we use those to get multiple different projections in parallel? Turns out we could, hence parallel imaging.
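The idea of using an array of radio detectors to recover multiple projections at once can be sketched as a tiny linear-algebra toy in the spirit of SENSE-style parallel imaging. The coil sensitivities and pixel values below are made-up numbers, and real reconstructions are far more involved:

```python
import numpy as np

# Toy parallel imaging: two-fold undersampling folds pairs of pixels on top
# of each other; two coils with different spatial sensitivities let us
# unfold them by solving a small linear system.
true_pixels = np.array([3.0, 5.0])   # two pixels that alias together

S = np.array([[1.0, 0.2],            # coil 1 sensitivity at each pixel
              [0.3, 0.9]])           # coil 2 sensitivity at each pixel

# Each coil measures one folded value: a sensitivity-weighted sum of both pixels.
folded = S @ true_pixels

# Unfolding = solving S x = folded for the original pixel values.
recovered = np.linalg.solve(S, folded)
print(np.round(recovered, 6))  # -> [3. 5.]
```

The key design insight is that spatial information missing from the undersampled scan is supplied by the geometry of the detector array itself, which is why the speedup comes essentially for free once the coils are in place.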
Paul Rand: Okay. In terms of that advancing the science, how has that impacted how we operate today?
Daniel Sodickson: Well, interestingly, in medical imaging in particular, time is money. Time is of the essence, partly because you want to scan as many people as you can in these busy, expensive scanners, partly because it’s hard for patients, particularly sick patients, to hold still. There are parts of the body that, as I was talking about before, refuse to hold still like the heart. So what parallel imaging allowed us to do was to do all of the things that we could do before but faster.
Paul Rand: Got it. Now, you call this streaming perception. Is that the word you used for it?
Daniel Sodickson: Well, in a way, it enabled something that I like to call streaming as opposed to snapshots. The old tomography took a long time, almost like an old camera that needed a long exposure. But the faster we went, the more we could image life sort of in living color in real time.
Paul Rand: Got it. So are the majority now of MRI machines set up to work in parallel imaging?
Daniel Sodickson: They are. It certainly was very gratifying for me to see. Because speed is so essential and so valuable, manufacturers very quickly picked up the concept of parallel imaging and put it on their scanners. So now, I’d say, the majority of MRI scans taken around the world use parallel imaging in one way or another.
Paul Rand: That’s pretty impressive. Congratulations.
Daniel Sodickson: Thank you. I’m proud and quite a bit humbled by that.
Paul Rand: But you didn’t stop there. You got involved in something else called compressed sensing. Tell us where this idea came from and how that has evolved.
Daniel Sodickson: Let’s say you want to send somebody a photo in your email. You then get a question: do you want it small, medium, or large? We compress the image in order to store it efficiently with something like a JPEG algorithm, something like that. But that leads to the question: if I can compress my data and throw away lots of these data points, why do I need to spend all this time gathering in the first place? What compressed sensing is, is a way of cleverly figuring out pre-compression. Before you even know what’s in the image, you essentially gather the data you need to get a compressed representation of it. Almost like you’re gathering the JPEG first and then uncompress it rather than gathering all the data, taking a long time, and then figuring out what to throw away. So this pre-compression allowed us to speed things up even further. And it turned out compressed sensing worked really well in combination with parallel imaging too.
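The “gather the compressed representation directly” idea can be sketched with a toy sparse signal recovered from far fewer random measurements than samples. This is a simplified matching-pursuit illustration of the compressed-sensing principle, not a clinical reconstruction algorithm, and all the numbers are made up:

```python
import numpy as np

# Toy compressed sensing: a length-16 signal with a single nonzero entry is
# recovered from only 6 random measurements, far fewer than 16 samples.
rng = np.random.default_rng(0)
n, m = 16, 6
x = np.zeros(n)
x[11] = 4.0                        # the sparse "image": one nonzero coefficient

A = rng.standard_normal((m, n))    # random measurement matrix
A /= np.linalg.norm(A, axis=0)     # normalize columns
y = A @ x                          # 6 compressed measurements

# Matching pursuit for a 1-sparse signal: the column most correlated with
# the measurements identifies the nonzero location, then a least-squares
# fit on that column recovers its value.
j = int(np.argmax(np.abs(A.T @ y)))
value = float(A[:, j] @ y / (A[:, j] @ A[:, j]))
print(j, round(value, 6))  # -> 11 4.0
```

The sparser the underlying signal, the fewer measurements you need, which is exactly why pre-compressing the acquisition, rather than compressing after the fact, saves scan time.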
Paul Rand: Okay. How has that worked into the science that we’re utilizing today?
Daniel Sodickson: Well, I think one really gratifying thing about working in the field of imaging is that as soon as you have a new development, it can propagate very quickly to all manner of disciplines.
Paul Rand: Okay.
Daniel Sodickson: So these acceleration techniques are being used clinically to image patients faster and clearer, but they’re also being used in basic research to get more data in the same time, for example, to help characterize new types of diseases or even new types of materials and physics.
Paul Rand: Okay. If I can pause for a second, because we’re going down the medicine path, but let’s go back to other areas of our world where imaging is prevalent, and space exploration, telescopes you talked about. Can you spend a little time talking about the evolution? Because we can certainly talk about Magellan and other advances in telescopes, but these are all parts of imaging. How has that imaging evolved as well?
Daniel Sodickson: People did not stop once they had developed these nice telescope or microscope tubes. They kept on tinkering and pushing the limits. And it turns out one of the things that in astronomy, in particular, was a key enabler was to make these tubes bigger. Basically, bigger is better when it comes to astronomy because of the limits of spatial resolution.
Paul Rand: Okay.
Daniel Sodickson: So we have had this remarkable explosion in the size and power of these artificial eyes that look out into space, to the extent that now we have something called the Event Horizon Telescope. And that’s what gave us our first view a few years ago of a black hole, our first actual visual picture of a black hole.
Paul Rand: Remarkable. Advances in imaging in one field inevitably lead to different understandings of how imaging can be used in other fields as well.
Daniel Sodickson: Absolutely. I think of imaging as one vast evolutionary tree in a way. People tend to think of astronomy or microscopy or medical imaging as very distinct disciplines. Often the people working in them kind of have their heads down and are focusing there, but actually they’re all branches of this great tree. I’ve had some extraordinary experiences sitting down with radio astronomers, for example, who have these huge dishes trained outward, and we have our MRI machines that are looking inside.
I sat down with one radio astronomer a few years ago, Urvashi Rau, whom I profile a little bit in the book. And as we were going through the talks we were supposed to give, these paired talks at a conference putting astronomers together with medical imagers, we discovered that our talks were basically the same. The mathematics underlying how we did radio astronomy was, with a few notation changes, almost identical to how we did MRI. So it turns out a radio telescope is like an MRI machine turned inside out. Who knew?
Paul Rand: Remarkable. So now we are entering a whole new era, and I’m not sure what we call this era. I don’t know if it’s the AI era or if there’s another terminology. How do you think about the new era of imaging that we’re getting into?
Daniel Sodickson: Yeah, it is remarkable. I guess the way I think about it is a shift from emulating our eyes, the imaging sensors that we use to see, to emulating our brains as well, because it turns out that the brains play a really outsized role in vision. The world we see, you and I, all of us watching, is in many ways a lie. Our brains are really good at taking rapidly streaming data that’s not necessarily good enough on its own to give us a clear picture of the world and filling in what’s missing based on what we know about how the world works.
Paul Rand: It really stood out. And that’s remarkable, isn’t it?
Daniel Sodickson: We don’t give it much credence because, of course, vision is so fundamental to us. Just like breathing, we just do it.
Paul Rand: Right.
Daniel Sodickson: But actually, our brains are constructing the scenes we see based on what we have learned as children about what’s what in the world. So I think it’s now becoming possible with AI, which is in many ways an artificial brain, to do some of the same things, to take much less data or much different data, and fill in the gaps and allow us to image much faster, yes, but also allow us to image differently in a variety of potentially revolutionary ways.
Paul Rand: I know your team is also working with AI and MRIs, and how prevalent is this becoming in going to our local hospital and getting an MRI?
Daniel Sodickson: What I’d say is, what most people are familiar with is what I would call downstream AI. In other words, we get our data, we get our images, and then we let AI interpret it the way a human would. People worried from early on that radiologists would be replaced by AI; that hasn’t happened, not by a long shot. But there’s also something very interesting and a little bit different, which I call upstream AI, which is that if we know we have these AI models on the other end, can we gather different data than we used to? Can we build different imaging machines than we used to, that maybe don’t have the perfect data we’re used to working with, but use AI to fill in the gaps? So I think what’s really interesting is, even though in MRI we also had this similar bigger is better trend, just like astronomy, to get bigger and bigger magnets, in the last few years there’s been this remarkable less is more trend, going in the opposite direction, trying to make MRI, for example, much more accessible to people around the world.
Paul Rand: Okay. One of the points I want to pause on for a second is this idea that radiology was predicted to be one of the first fields where AI was going to take over, and radiologists were going to become a thing of the past. And as you mentioned, there was a very public pronouncement that that was going to be the case, and it didn’t happen. I wonder if you could explain why that did not happen.
Daniel Sodickson: Yeah, that public pronouncement you mentioned, by the way, that was 2016. Geoff Hinton, who is now a Nobel laureate and one of the godfathers of modern AI, basically said on camera, “We should stop training radiologists now because, within five years, AI will do their jobs better than they ever could.” Of course it’s been more than five years since 2016, and I’m not aware actually of a single radiologist yet who’s been replaced. I think that’s for a number of interesting reasons.
I think people, when they think of radiologists, tend to think of them as sort of pattern recognizers, object detectors. They go, and they just scan an image. Is there a tumor? Is there a tumor? Is there a tumor? Nope, no tumor. Or oh, there’s a tumor. Actually, the job of radiology is much more cognitively rich than that. It’s more like detective work than it is just pattern recognition. So there’s that. But also, actually we can use AI to do things that no human ever could have before. Now, that said, I will pause on just one small irony, which is that though it was predicted that radiologists would be among the first replaced by AI, now it’s looking like it’s programmers, coders, who might be replaced sooner.
Paul Rand: Part of what AI is allowing, I think, getting back to this, is even a more democratization of the technology, making it more accessible. I wonder if you can talk a little bit about why that is. Certainly, you discussed speeding things up, but how is it allowing it to be more accessible?
Daniel Sodickson: That’s a really good question, Paul, and it’s not necessarily an obvious jump, but it is a powerful one. So in my group, we have been working for a few years now on something we call the Everywhere Scanner Project. The goal basically being to get scanners everywhere and allow them to kind of fill in the gaps between times when we normally image. Normally, we pull out these big expensive machines once we already know you’re sick, but often that’s too late. Often symptoms come when something is already untreatable or, in any case, difficult to treat. So the thought is, can we actually use imaging early, move it up in the diagnostic chain for early warning rather than just for guidance of therapy? But in order to do that, we obviously need cheaper, more accessible machines, and we need kind of automated detection of worrisome changes. And that’s where AI comes in. AI basically is really good at detecting changes from, say, an earlier image.
Paul Rand: And then we can compare over time, and the machine can also allow us to say, let’s go back and look at five years worth of images to see what, if anything, has changed.
Daniel Sodickson: That’s right. So basically, if all we want to know is whether something has changed in you since your last MRI scan, then maybe we don’t need a big million dollar machine. Maybe I can build an MRI machine into this seat I’m sitting in right now so it can check, hey, has there been a change in your prostate health since the last time we scanned you with the big expensive machine? So because AI can fill in what’s missing, we can use these cheaper machines that we can get everywhere.
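The check Daniel describes, “has anything changed since your last scan,” can be sketched as simple baseline differencing. The data below is synthetic, and a real pipeline would spatially register the scans and use learned models rather than a fixed threshold:

```python
import numpy as np

# Sketch of longitudinal change detection: compare today's (hypothetical)
# low-cost scan against an earlier baseline and flag voxels that changed
# by more than a noise threshold.
rng = np.random.default_rng(1)
baseline = rng.normal(100.0, 1.0, size=(16, 16))            # earlier scan
followup = baseline + rng.normal(0.0, 1.0, size=(16, 16))   # scanner noise
followup[4:6, 9:11] += 12.0                                 # a new 2x2 change

# Flag voxels whose change exceeds a threshold set well above the noise level.
diff = np.abs(followup - baseline)
changed = diff > 6.0
print(changed.sum())  # number of flagged voxels
```

The point of the sketch is that detecting a *change* against your own baseline is a much easier problem than interpreting a single scan from scratch, which is what makes cheap, frequent scanners plausible.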
Paul Rand: Tell me about Prenuvo. The discussion out there is people thinking that they need to get a whole body scan, and now we’re going to have more and more people telling us that that’s a pretty good thing to be doing.
Daniel Sodickson: Absolutely. Full disclosure, I am a scientific advisor for Ezra, which is a company that, like Prenuvo, although in a somewhat different way, does whole body scanning. Time was, when you mentioned whole body screening in radiology circles, a chill would descend on the room, because radiology was kind of burned by this in the past. There was this era when people were trying mall scanners. What would happen is, they generated all kinds of false positives. So when we take an image of you, I guarantee you we will see something that is noteworthy. The question is, what do we do about it? Is it important? In the old days, just doing a scan was a good way of alarming patients by producing these false positives.
Here’s the amazing thing: False positive rates aren’t fixed. They’re not God-given features of our imaging devices. We can drive them down by using context in the way I was just describing with AI. If we have prior context, the more we see you, the better we can predict. Is this a new thing in you we need to worry about? Or is it just your personal individual baseline? So what these companies are starting to try to do now is do longitudinal imaging, imaging not just once, not just sort of a one-shot screen, “Okay, here are the findings, go do something about it,” but instead following you over time, so they can establish a baseline and really understand your individual health. And that’s something that I think has a real shot of transforming medicine.
Paul Rand: Okay. So you mentioned you’re involved in a company called Ezra. Are they basically doing the same thing? Are they walk-in MRI, whole body scan centers? Is that kind of the business model?
Daniel Sodickson: So at least, originally, they had a sort of similar overall model with a lot of individual differences that matter. Recently, interestingly enough, Ezra has been acquired by a company called Function Health, which does longitudinal blood testing as well. Now, in my view, we’re starting to cook with gas, because now in addition to the imaging, which tells you sort of the spatial features over time, you also have the chemical features over time, and down the line, genetic and proteomic features and all of this, which means you’re going to be able to create with this ongoing data a kind of representation of your health today. What AI models can do is they can track whether that representation is more or less staying the same, or it’s deviating. What we want to do is we want to catch that deviation early before disease develops while it’s still curable.
Paul Rand: It’s changing the way medicine is being practiced. Is that something that physicians feel good about, or are they worried about how to control things? You mentioned false positives, or other things that maybe we don’t need to know.
Daniel Sodickson: Yeah, this is a source of ongoing debate. As it should be. I think there is still nervousness just because the details of how you do it matter. Just measuring without knowing what to do with the measurements is not enough. But what I believe is that if we actually leverage all the best tools, AI, and our best clinical insights, and we look over time for individual people, then we can make medicine in the future proactive rather than reactive, make it predictive, and also make it protective. Just like a Google Maps for health, if you will. Google Maps knows where you’ve been. It’s mapped out the landscape, and it tells you what obstacles to avoid to get where you want to go. I think we’re at the stage where we can start doing that for health. You want to get to a hundred healthy years, which is Function’s mission, for example.
Paul Rand: Yes. Yeah.
Daniel Sodickson: You want to make sure you’re still capable of doing the things you care most about as you get older. Here are the things you need to avoid. Here are the dangers that are coming your way. Here are the things that you can do practically in order to get there.
Paul Rand: How close to that being a reality do you feel like we are?
Daniel Sodickson: The scientist in me is saying, hey, yes, we still need to do some development. But the thing that excites me is, it’s no longer decades away. It’s months and years away now, I think, from the tools I’m seeing, from the advances that are coming out.
Paul Rand: There’s another side of this coin which gets involved in privacy. How do you think about that with what’s happening to this data and the privacy? Or do you create these things and somebody else worries about making sure they’re being used correctly?
Daniel Sodickson: In the final chapter of the book, which is titled The Future of Seeing, like the book itself, I actually present two starkly different pictures of possible futures: one, a sort of bright future in which we use omnipresent cameras to promote equity and to share knowledge and even ultimately to expand our own sensorium. Maybe X-ray vision will be something that we all have one day. On the other hand, there’s the dark mirror, in which there is no such thing as privacy anymore.
Paul Rand: Right.
Daniel Sodickson: I mean, if you’re walking through a modern city, you are observed by countless cameras every step of the way, and those can be weaponized. Let’s say we do eventually figure out how to tap into our natural visual pathways and expand our vision. Well then, what you see could actually be hacked. So I think this is a really good time to step back and realize, hey, imaging is about seeing, so let’s make sure that we maintain some sense of, if not privacy, then truth, that we can trust in what we see. The challenge we’ll have to deal with is that there is always the temptation to isolate oneself in a bubble and just see the things one wants to see. So we have our stark choice right there. Do we just see the things we want to see, just basically digest the sources of images that we want? Or do we open ourselves up to this vast set of new perspectives and start thinking a little more like a species?