AI To Manage Epilepsy? Ethics & Biases - Daniel Goldenholz, Harvard Beth Israel Deaconess Medical Center, MA, USA
How useful is AI, and how useful can it be, in helping people with epilepsy? How safe is it? What are the biases to be challenged and the ethics to be followed? Hear answers from Daniel Goldenholz of Harvard Beth Israel Deaconess Medical Center.
Reported by Torie Robinson | Edited and produced by Pete Allen
Podcast
-
00:00 Daniel Goldenholz
On the one hand, everyone would like to understand why the AI did what it did. On the other hand, if the AI is better than anyone in the world at doing the thing that it does, it shouldn't be understandable because then there should be someone who could do the thing!
00:14 Torie Robinson
Fellow homo sapiens! Welcome back to Epilepsy Sparks Insights. Now, medical technology isn’t a new thing - virtually all of us use it in some way - even if we aren’t aware of it! But in terms of MedTech for epilepsy: although we have come on leaps and bounds over the past few years, the development of devices and the implementation of AI hold ethical questions, and some may say dilemmas - which must be addressed. Well, today, hear from neurologist and data scientist Daniel Goldenholz, who shares his thoughts with us - in part 1 of 2!
A quick one - please don’t forget to like, comment and subscribe because your comment and like will help spread awareness and understanding of the epilepsies around the world.
And now, onto our star of the week, Daniel Goldenholz.
00:59 Daniel Goldenholz
My name is Daniel Goldenholz. I am an assistant professor at Harvard Beth Israel Deaconess Medical Center. Part of my job is to be an epileptologist, so I read EEG, take care of patients on the inpatient side and have a clinic. And, a larger part of my job is to do epilepsy research, and that is focused on data science and AI; trying to apply some of the latest technologies in those domains to the field of epilepsy.
01:22 Torie Robinson
And are you on the adult side or the paediatric side?
01:25 Daniel Goldenholz
Adult side.
01:25 Torie Robinson
Okay, well, we need more of you, so, thank you for doing what you do. And you kindly sent me a fair few of your papers covering the previous few years regarding epilepsy and AI. Could you give us a little bit of an overview of that and why you focus on that sphere of things?
01:43 Daniel Goldenholz
There are two main reasons why. One is because it's really fun! And the second reason is because I think it's extremely powerful and important for a field that needs a lot of help. The simplest way to think about AI is: help me do things that are hard. And that includes things that I know how to do but are difficult, or things that I can't do because it's impossible for me or for anyone. And AI has the promise, and has demonstrated the capability, for doing both types of things - sometimes. And the more we work on it, the more we study, the more we find that there are more hard problems that can be assisted, or completed entirely, by AI. So, I think for those reasons, because there are so many problems in epilepsy that are hard, there's a lot of benefit to be had.
02:33 Torie Robinson
I find that with lots of people - including some clinicians and even researchers, but especially some patients and families - you mention AI and they get really scared! Or they might think that AI is so limited that it's not worth going there, or that you're gonna steal all our data and use it for profit! Could you just reassure people a little bit on how things are actually very different to that?
02:58 Daniel Goldenholz
You just put people on a giant spectrum, right?! The ones that think that AI is very frightening are the ones that are saying that AI is super powerful. And there are very wise experts that have publicly said “Oh, AI potentially could kill us all and we should be very frightened about that.”. And of course, we have popular movies that talk about, you know, robots taking over and all of that kind of stuff. And then on the other side, there are people, like you said, who are thinking “Well, this stuff doesn't really work that well, and it's not very useful, so who cares?”, and then there are people that are outside of these edges, in the middle, and they say “If applied correctly, and if the appropriate controls are used, there may be some very serious benefit that we can have from these technologies.”. I would put myself firmly in that middle camp. I don't think that AI is limited. Many of the people I've seen taking the extreme position that AI is not very useful are missing what's happening all around them (outside of epilepsy is what I mean). In terms of finance, in terms of technology, in terms of this podcast that we're using… the app we're using right now is based on AI technologies, and so many other pieces of infrastructure that, um, hold up the society that we live in are now employing AI in order to be more efficient and more effective. So, I think that saying “AI is not really useful.” is missing some of what's happening around us. But on the other hand, those people that are saying AI is going to kill us all - I don't think that that's totally insane! I think that it is possible to imagine a world where AI does something terrible. But it's also possible to imagine... almost any other technology doing something terrible too, and that's why we do things carefully. You know, we talk about nuclear energy. Nuclear energy is horrible and can do horrible things. And yet it can also do extremely helpful things. And that's why we have controls on those, on those types of technologies. And that's why we say “We're not just going to use nuclear no matter what, in all cases.”. We say “Well, under certain special circumstances, this can be used, and it can be harnessed, and we can get enormous benefit from it.”. Same thing with AI: if we do it carefully and we do it thoughtfully, we can get an enormous benefit as a society.
05:12 Torie Robinson
And I guess that brings us onto ethics, which is a really key part of development in AI used in epileptology - and everything [else]. And you've done a paper or two on that. Could you tell us a little bit about that, please?
05:25 Daniel Goldenholz
So, I'm gonna defer a little bit! The paper was written by my colleague Sharon Chiang and she is a brilliant scientist - and anyone that's interested in our paper about ethics and AI in neurology should definitely take a look at this paper that she wrote that I helped her with. What I can say in general is that there's a lot of things that have come out over the years as far as what can be dangerous and what can be unethical with the use of AI. And I can mention a few interesting points about that, and then I can defer to that great paper from my colleague. One of the things that we see is that current generation AI can now be told “Go learn this stuff without explanation of how to do it.”. And that's an amazing, powerful technology which has been dramatically helpful in solving difficult problems, but it also creates a new ethical issue. Suppose that I give the AI a lot of training that is very biased; then the AI will correctly learn that bias. And if I give it a dangerous bias, then it will correctly learn that dangerous bias. So, for example, if I want to teach it “Find good guys versus bad guys, and always the bad guys have a certain skin colour.” then the AI will correctly learn that people with that skin colour are bad guys. Even if that had nothing to do with why they were identified as bad guys. But it's based on the information that I gave the AI; the AI notices the shortcut and goes for it. Humans do this too, and there is now training for implicit bias and all sorts of things that we can do psychologically. It doesn't work super well, but we're at least aware of the fact that humans are biased. AI can be biased in ways that are sometimes difficult to notice. And the most obvious way to make the bias is to give it biased training. So, a very, very important issue - not in the field of epilepsy but in the field of AI - is “What did you use to train the thing?”. If you trained it in a way that makes it think that women are inferior or that minorities aren't important or fill in the blank idea that is inappropriate, then it will learn that, because that's the shortcut that gets it to the answer faster. So, we have to be really careful about what we use to train an AI. That's a big one. Another one that was very theoretical before and is now very practical, is that you can use the tricks that are baked into AI to trick the AI, basically. AI is trying to find the shortcut from question to answer as fast as it can. And there are ways to game the system. So, let's say that I was a doctor who got paid more when a certain diagnosis came out: I can illegally, you know, inappropriately mess with the input images that are going to the AI for it to make a diagnosis inappropriately. Because maybe I wanna make more money. And so there have to be ways to think about what we are doing to watch out for these kinds of behaviours where people are just trying to game the AI. Is the AI aware of its limitations and can we look beyond those types of things? You know, another thing that a lot of people talk about is the concept of explainable AI. This is actually kind of controversial. It comes up in ethics for various reasons, but it's very, very strange. On the one hand, everyone would like to understand why the AI did what it did. On the other hand, if the AI is better than anyone in the world at doing the thing that it does, it shouldn't be understandable because then there should be someone who could do the thing!
You know, we talk about self-driving cars, and the car sometimes will make a terrible mistake and hurt someone. And then engineers sit down, and legislators sit down, and they say “Ah, you see, I didn't understand why the car did that, that's terrible, we should not allow these things to exist!”. On the other hand, take any car accident on any road in any country of the world and ask “Why did this idiot driver do what they did?” And usually the answer is “Oh, well, because they were drinking, or because they forgot, or because they had a momentary lapse…”, or there's all sorts of, like, invented excuses after the fact, and we don't really know. And if we did know, we shouldn't be satisfied either. So, you know, to take that analogy further, I just don't think that we need to understand why it does exactly what it does at the moment that it does it; instead, we need to put a box around what it should be able to do and what it should not be able to do. And if we can do that, and that box does the things that society wants it to do and doesn't do the things society doesn't want it to do, then I think that, just like with humans that we don't really understand but have a kind of handle on, we need to be satisfied when that box is good enough to do helpful things for the world. So, explainable AI and interpretable AI: I'm a little bit of an outsider on that conversation. I think that we need to be careful about what we demand from a system that we expect to be better than us. But, at the same time, we shouldn't say, like, “It's fine, we don't understand it, we don't care.”. We care in the sense that we care that it doesn't do things that are crazy and dangerous. But at the same time, everybody, in every intelligence, makes mistakes. How many mistakes are we willing to tolerate and what kinds are we willing to tolerate?
10:40 Torie Robinson
And why do you think the application of AI, when it comes to epilepsy, is pretty slow compared to the use in other parts of medicine and science?
10:51 Daniel Goldenholz
I'm gonna flip that upside down. I'm gonna say that AI is being used in epilepsy, and it's being used not so much in a lot of areas of medicine. The question I think is “Why is AI so readily taken up by business and industry and computer science, and so reluctantly taken up by medicine, including epilepsy?”. And the reason I think that that's happening is that the stakes are so high. If I have an AI that will make me five more dollars on my app, I'll do it, right? I mean, why not? Like, five more dollars; that could accumulate over time and it could be millions and I could become Google and blah, blah, blah. On the other hand, if I have an AI that misdiagnoses cancer in lots of people, and therefore lots of people get surgeries they don't need and lots of complications and people die, that seems like a horrible thing! And that doesn't seem like it's worth it, even if I can rake in some money! So, um, there are people on, you know, the side that want to make money and there are people that are on the side of protecting humans! And there's obviously a conflict! In this case of industry - in computer science - if we screw up, it's fine; you know, “Move fast and break things” has been, like, a buzzword (or a buzz phrase anyway) in Silicon Valley, and it makes sense when you're talking about software. But when you're talking about software that can hurt people, it's a horrible idea! And everybody in health care knows that! So no one's in a hurry to break things and to damage patients. On the other hand, we are in a hurry to do better than what we do. And in order to do that, we've been trained over and over again that when somebody comes in with a new gizmo - “Hey, look at this thing, this will solve your problems, it does things better, faster, cheaper!” - then we say “Sure, sure, sure, that's good. Now, what are the faults? Where is it going to harm someone? What happens if it breaks? What happens if it runs out of batteries? How long will the battery last? What happens if it catches on fire?”. Right? All the dangerous possibilities need to be addressed, and the “Move fast and break things” attitude doesn't talk about that as much. So, you take these people that are going at 100 miles an hour and you say “Hey, could you please slow down to 50?” and those people are like “Yeah, we'll see you later.”. So, medicine is stuck, waiting for some people to slow down on the off-ramp and to take these really fantastic ideas and put safety guards all around them to make them safe for patients and for patient care. I think that that's the biggest reason why we're seeing AI tools slowly trickling into medicine, but at a much slower pace than, say, what's happening in your cell phone, what's happening at your financial institution, what's happening in many other domains of life.
Epilepsy in particular is full of data. We have EEG data, we have patient diaries, we have MRIs, we have CAT scans and so on and so on and so on. We now have genetic data. The amount of data that's coming out of patients with epilepsy is exceeding a lot of other kinds of diseases. So, I think that epilepsy is going to benefit from AI, and it's going to benefit from AI sooner than certain other areas of medicine. But it's got the same problem that all areas of healthcare do, which is: we've got to be careful when we do it.
14:02 Torie Robinson
Many people will say “Okay, we understand what you're saying, but people are literally dying whilst waiting.”. Do you recognise that perspective - whilst being a clinician and researcher as well? It's kind of like weighing things up sometimes, right? Like, you don't wanna get sued because (not you, but) people will apply AI and it goes wrong. But also, you might save lives! You just don't know. So, what do you think about that sort of argument or perspective?
14:33 Daniel Goldenholz
I mean, I think that you're right, and I think that we can make the analogy to cancer. So, there are always new cancer drugs that are being tested. And there's a new drug, we just found it, it's potentially lifesaving for cancer. So patient advocates would say “Well, people are dying, we don't have to wait around, just give us the drug!” And you say “Well, maybe. Because if we just give it to you without figuring out if it's safe... then maybe we're gonna kill more people than we're gonna save!”. And in fact, years ago, before sort of modern biomedical science became a reality in medicine, we just did that! We would say “Okay, look, I found something new, I'm just gonna give it to people.”. And then people would die, and many people would die, and the treatment was not effective, but there would be some wise doctor who would be, you know, quite well decorated and have many letters after their name, and they would say “Yes, I think this is very appropriate.”. And they would continue giving these horrible treatments to patients, and they would continue to have horrible outcomes. Today, we say “Look, that's good, as long as we first do a little bit of science to check to make sure that it actually is useful, and is helpful, and is not more harmful than helpful.”. But then, yeah, we're gonna do it. And when it comes to questions of life and death and, you know, urgency, then we move a lot faster than with other kinds of conditions. So, for example, headache medicines: we take our time a little bit more, but with cancer drugs, we go at, you know, let's say 60 miles an hour instead of 20. And with life-saving epilepsy treatments, we're moving a lot faster than with other things, because, like you said, we don't have time to just screw around. But at the same time, if we just say “Let's just do everything that sounds good.”, then we're moving fast and breaking things again. And then we're in trouble because we run the risk of just giving people technology without knowing if it works. Now, I'll tell you this: in epilepsy, this is happening. There are people that are allowing apps, for example, on smartwatches and on smartphones, that detect seizures. Without proof. And those things, they're charging money for. And patients, like you said: people are dying. So, patients are not going to wait around until somebody goes and does careful science and does all that science-y stuff. [They say] “I need a thing now.”. So, they take the app, they pay money, and they trust it. And the downside is that when they trust it and the person has a seizure and passes away, they say “Wait a minute, the app didn't do the thing!”. And of course not, because it wasn't tested and proven to do anything. So, I think that we do have both sides of that: the rush to get things into the hands of patients, and then the people that are saying “Sure, we want to do stuff, but we want to do it in a way that will help, not harm.”. So, both sides are there, and I don't support the idea of just doing anything. I think we have to do things that are helpful.
17:22 Torie Robinson
Thank you to Daniel for talking about some of the challenges with MedTech and epilepsy AI and explaining why MedTech development, production, and implementation isn’t necessarily as easy as it may look from the outside! Ethics, and identifying and addressing biases, are crucial.
Again, if you haven’t already, don’t forget to like, comment, and subscribe, and see you next time!
-
00:00 Clip & intro
00:59 Meet Daniel
01:30 AI is fun and important for epilepsy
02:33 Using AI usefully and safely
05:12 Ethics and AI biased training
07:40 Tricking AI using AI tricks
08:32 Explainable AI controversy
10:40 AI intro pace: benefiting and protecting humans
13:30 Epilepsy industry is full of data
14:02 When people don't want to wait…!
16:03 Not all epilepsy apps are proven to work…!
17:22 Thank you Daniel
-
Daniel Goldenholz is an assistant professor of neurology, epilepsy specialist, and data scientist/epilepsy researcher at Harvard Beth Israel Deaconess Medical Center in MA, USA.
His research is focused on data science applied to the field of epilepsy for diagnostics (multimodal imaging and biosensor techniques), therapeutics (clinical trial studies) and prognostics (seizure forecasting). Daniel’s long-term goal is to “find ways to help end the suffering of patients with epilepsy.”
-
Goldenholz Epilepsy + Data Science Lab: goldenholz-epilepsy-data-science-lab
Harvard Catalyst: Person/27784
LinkedIn: daniel-goldenholz
ResearchGate: Daniel-Goldenholz
VJ Neurology: daniel-goldenholz