Panelists discuss AI initiatives, hopes for future innovation

Health News Illinois convened a panel of experts this week to weigh in on the current state of artificial intelligence in healthcare.

Along with highlighting initiatives they are undertaking, panelists discussed whether AI will live up to the hype and how it may address current challenges in healthcare.

Panelists included:

  • Dr. Abel Kho, Director, Institute for AI in Medicine, Northwestern University Feinberg School of Medicine
  • Dr. Jon Handler, Senior Fellow, Innovation, OSF HealthCare
  • Dr. George Cybulski, Chief of Neurosurgery and Clinical AI Leader, Humboldt Park Health
  • Rep. Bob Morgan, D-Deerfield

Watch the full event here.

Edited excerpts below:

HNI: What is the current state of AI in healthcare?

Dr. Jon Handler: I think it’s an interesting time because people have become much more comfortable with AI. Many of us have now used AI in our personal lives. Many of us who are practicing have used it in our practice with ambient scribes, and people have become somewhat comfortable with many of the things AI can do. People are also now much more personally aware of the ways AI can both help us and fail us utterly, in ways that seem super shocking and embarrassing. So I think we’re getting to a place of some understanding of and comfort with what AI can do and what the potential promise is. What I’m not sure people realize is that when things are growing exponentially, which I think AI is, people tend to think linearly and extrapolate what the future will be like. But if it’s actually growing exponentially, the future comes way faster than they expect. So I think we may still, despite everything, be underestimating the impact AI is going to have. And when I say AI, I mean modern AI and the new developments that will inevitably happen. So I think it’s an interesting time. It’s one in which we have increasing comfort — maybe too much comfort — and maybe we don’t realize how much change is still to come.

Dr. George Cybulski: I’m a big fan of reading about companies and their successes, and one of my heroes is Andy Grove, the late co-founder of Intel, who wrote a great book called ‘Only the Paranoid Survive.’ We need not be paranoid, but we need to be alert, because I think we’re at an inflection point in healthcare, and AI is a tool for dealing with that inflection point. We have lots of challenges in healthcare, as we’ve seen with the various federal programs, and I’m not going to mention any by name, but we really have a challenge, and I think AI is going to be the tool that helps us at this inflection point.

Dr. Abel Kho: Like every other industry, people recognize that artificial intelligence is transformational. I’m with Jonathan on this: I think it’s not linear; it’s exponential. And I think there’s a little bit of panic in the C-suite: ‘How do we get on top of this? How do we get more informed about it? How do we place bets on horses that are going to win?’ I think what’s become clear is that, because one of the biggest burdens in healthcare has traditionally been our time, tools like ambient scribes are just winning the game; they’re absolutely going to take over the market. But along with that, I would say there’s the potential for really tremendous disruption that people aren’t even thinking about today.

So, for example, electronic health records were installed and put in place over the course of 20 years in this country, and they were really disruptive, not in the innovative way we thought they would be; they were just disruptive to our workflows. Ambient scribes come along, and they have this potential to disrupt the whole electronic health record. I was just talking to somebody at Shirley Ryan (AbilityLab) yesterday, and they’re using Cerner.

A couple of months ago (not many people saw this), Oracle, which owns Cerner, announced that they’re going to go with an all-AI EHR. And everybody thought, ‘Ehh, what are they talking about?’ But they’ve got it implemented now. What’s interesting about building an electronic health record from the ground up with AI, right from conception, is that they can take full advantage of all the AI tools that might be deeply baked into the workflows, whereas an incumbent is still tethered to a really old underlying database structure that’s like 50 years old. So you’ve got the potential now for that curve to invert. I think Cerner can sort of pop up, and that’s just one example. I think there are a number of existing incumbent technologies today that don’t even know they’re obsolete. What I think we’re more likely to see is a complete rip and replace. It’s just going to happen, because this technology is light-years better.

HNI: What steps have your organizations taken to incorporate AI?

Dr. Abel Kho: One thing we’re really focused on is education, because the reality is that most people in leadership today have never had hands-on experience with it. I’m personally focused on how we create educational opportunities for those who will be the future leaders. The future leaders are oftentimes people much, much younger than we are. There are people coming in who have been using AI through their high school careers, and those a couple of years behind them are fully digitally native. In our medical school curriculum, we’ve incorporated data science and AI for five years now, and we were one of the first in the country to do that. We put together practical coursework where people can experience, use and interact with data sets and the latest tools. We’re trying to really expand that, and also to put together executive education tailored toward physicians, for example, so that people who are in the practice of medicine are not obsolete. They can be the ones who use AI and are, I think, part of this revolution.

Dr. George Cybulski: So five years ago, maybe even more, I thought about how I would replace myself in terms of evaluating patients, and then I became more aware of artificial intelligence and the ability to impart myself into a digital twin, if you will, for anyone who encounters a patient with a spinal condition, so I focused on that. I was at Northwestern for many years… and I retired, but then one of my former trainees at then-Norwegian American Hospital called to say, ‘Why don’t you come and work with me again… and let’s take care of patients who need spine care at a community-based safety-net hospital.’ And I said, ‘Well, yeah, this would probably be the place to see if I could implement my interest in an AI platform for clinical decision support in low back pain.’ Our CEO, Jose Sanchez, and (Vice President of IT) Hector Rodriguez are very supportive and experienced, and that allows us to develop this and put it in place. So we have a proof of concept going right now, which we would like to expand eventually into musculoskeletal clinical decision support more broadly. And I think it’s great. My colleague from the emergency room at Northwestern here knows that he can’t see every patient; he has residents, and now we have (physician assistants) and (medical assistants), so I think there’s a great opportunity to develop it.

Dr. Jon Handler: There’s almost no part of our organization, I feel like, that isn’t putting its hand somewhere in the AI space, and especially the generative AI space… There are a lot of companies knocking on the door, and we’ve opened the door to a bunch of them. There are things we’re doing in imaging and imaging analysis, and things we’re doing in colonoscopies, automatically helping the clinician identify adenomas, which are polyps that could be bad or become bad. There are things we’re doing to use AI to help facilitate our integrations with external vendors and our data integrations, which used to be a very hard, manual process. It still is a very hard manual process, but can we make it easier? Another thing we’re doing, for example: We get a lot of written comments from patients on the patient satisfaction surveys. How do you go through those? We used to have a human read through them; now we’re exploring whether we can have an AI read through those things… and give us aggregations and summaries.

There are three areas in our internal development innovation labs. One is, we have Dr. Adam Cross in our pediatric lab, and he’s looking at AI to help identify children with very rare diseases, and also AI to help identify and make diagnoses in patients with concussion, so neurosurgical sorts of things. We have Dr. Matt Bramlet, a pediatric cardiologist, and he’s looking at AI to automatically identify and what we call segment, meaning to figure out what part is what, on CAT scans and MRIs, so that you don’t have to do that by hand on 3D reconstructions to identify, let’s say, patients with aortic aneurysms and how big they are. And on my own side, I’m very excited about the potential for AI to help us assess the quality of every interaction we have with patients. For the most part, and here I’m not speaking for OSF but giving my opinion about my experience with healthcare in general, especially in the outpatient space but also in the inpatient space, we assume that if you saw a credentialed provider, everything was perfect unless you tell us otherwise, even though we have plenty of data and studies out there saying that is not true.

HNI: Will AI live up to the hype?

Dr. Jon Handler: I think it will. I’m gonna make a prediction: It will definitely live up to the hype; it’ll exceed the hype. I think the question is, on what time frame will that occur? If you say, in 100 years, will it exceed the hype, I would bet a lot of money the answer is yes. If you say, in two years, maybe not. So then, when exactly will it land? What I think will happen, just as it has with everything else, is that it’ll seem like nothing’s changing. And then you’ll turn around, look back and go, ‘Oh my gosh, look at how the world has changed.’ That’s what I think is going to happen here. It’s going to be a bunch of things that are shocking when they happen, and then we just kind of accept them. So I think we have a lot of exciting things coming to the forefront. I think what we’re going to see in imaging is going to be very impressive. I hope that what we see in terms of clinical decision support dramatically improves how we optimize the quality of care we can provide every single time. And I hope some of this stuff takes away some of the manual work and makes it easier for patients to get through the process of healthcare, both getting the healthcare and paying for it.

HNI: How do you balance the regulation conversation with ensuring there can be innovation in this space?

Rep. Bob Morgan: There are a lot of things we could do that would negatively impact the evolution (of AI). We talked about mental health. I passed a bill this spring that banned AI therapist bots, meaning apps that are holding themselves out as a licensed therapist. It’s that last piece that makes it against the law: holding yourself out as a healthcare professional when you’re really not. That’s not even a new concept for healthcare; we’ve been doing that for decades. You can’t set up a bagel shop and have the bagel shop say, ‘I make great bagels, and I’m a doctor.’ So if you stay within that principle, it makes sense. But then you get the hard question, which we didn’t answer in this legislation, and you’re seeing it in the news stories every week, not every month: Somebody who is interested in self-harm or has suicidal ideation goes to ChatGPT, or OpenAI, or whatever, and asks questions about how to commit suicide. What is the generative AI supposed to do? Should we create state regulation on that? I have yet to see what role we are supposed to be playing, other than encouraging these systems, which are dealing with this literally on a daily basis, to think about these things and make decisions transparently, not just to the state or the federal government, but to the user.

Take Character.ai. I have siblings with special needs and a sister-in-law with special needs, and Character.ai has been a resource for one of them, just to be a partner and a friend. And that’s a completely unregulated app, as opposed to Misericordia, which has hundreds of trained staff who know you and know how to do this professionally and safely. So if Character.ai is going to take away from the daily care and support services for my sister with special needs, how is that going to play out, and does the government have a regulatory role there? These are the things I’m thinking about, and I really do think my legislative north star is that there are times when government regulators should not be involved. We have to be equally comfortable recognizing that and really creating a culture around that, because otherwise we’re going to get into areas where we just cannot regulate. We don’t know what we’re doing, and we’re going to create unintended consequences.
