WDSE Doctors on Call
AI and Mental Health
Season 44 Episode 19 | 28m 26s | Video has Closed Captions
Is AI a breakthrough for rural mental health access or a threat to the patient-provider bond?
As AI tools become a standard part of medical scribing and mental health screening, a vital question remains: Can simulated empathy ever replace the human kind? The Doctors on Call panel explores the ethical guardrails needed as we navigate this emerging technology and why the "essential human connection" remains the heart of healing.
WDSE Doctors on Call is a local public television program presented by PBS North
How to Watch WDSE Doctors on Call
WDSE Doctors on Call is available to stream on pbs.org and the free PBS App, available on iPhone, Apple TV, Android TV, Android smartphones, Amazon Fire TV, Amazon Fire Tablet, Roku, Samsung Smart TV, and Vizio.
I'm here tonight to introduce you to our topic of AI and mental health.
I am your host for this episode about AI and mental health.
The success of this program is very dependent on you, the viewer.
So, please call in your questions or send them to our email address, askpbsnorth.org.
Our panelists this evening include Glenn Maloney, a licensed independent clinical social worker and licensed alcohol and drug counselor with Most Excellent Psychotherapy in Duluth, who works with adults dealing with anxiety, depression, trauma, and addiction.
Liz Burton, a licensed independent clinical social worker and licensed alcohol and drug counselor with Inside Counseling in Virginia, Minnesota, who provides trauma-informed care and support for clients navigating depression, anxiety, addiction, and life transitions.
And Mike Clayba, a licensed independent clinical social worker with Inside Counseling in Duluth and the organization's director of operations, focusing on helping clients build healthier patterns of thinking and behavior.
As always, our phone volunteers are standing by to answer your calls.
And now on to tonight's program on AI and mental health.
So tonight, we're exploring a topic that's quickly moving from science fiction into everyday life, artificial intelligence and mental health.
Many people are already interacting with AI, sometimes without even realizing it.
From therapy apps and chatbots to screening tools used in clinics, artificial intelligence is beginning to play a role in how emotional support is delivered and accessed.
For some, this raises hope.
Could AI help reduce long wait times, expand access in rural communities, and support overwhelmed providers?
For others, it raises a concern.
Can a machine truly understand human suffering?
What about privacy, bias, or safety, especially in moments of crisis?
As with many emerging technologies, the truth likely lives somewhere in the middle.
AI is not a therapist, but it may become a tool.
It's not human, but it may influence human care.
Tonight, we'll ask where does AI help?
Where does AI fall short?
And how can we move forward responsibly without losing the essential human connection at the heart of mental health?
We're glad you're joining us for this important conversation.
All right, Mike.
So, we'll start with you.
For viewers who may be unsure, tell us: exactly what do we mean by AI and mental health?
A big question.
Yeah, it is.
And, uh, I think the answer is we don't really know.
So far, when we talk about artificial intelligence, we're talking about these large language models: pattern-recognition machines with so much data that they're able to find the most obvious patterns and present those back.
The amount of information they hold is significant, but ultimately, thus far, what they're really good at is finding, algorithmically, which pattern to give back in response to the question that was asked.
As far as intelligence goes, there's no judgment, there's no insight, nothing of that sort.
There's just the sheer brute force of pattern recognition, which can feel like intelligence.
It can feel like emotions, like they're sad with you, like empathy.
But they're simulations at best.
And for mental health, again, we're in the wild west and we don't really know for sure.
There are ways, so far, where we've seen that AI has not been helpful with people's mental health.
As we were talking about before, the state of Illinois is the first state to prohibit AI from actively providing therapy to people within the state.
Um and and part of that is because we're not quite sure what the outcomes are.
On the negative side, AI can be very sycophantic and just kind of repeat things back to the person.
On the positive side, it can be there for you all the time.
So it's 24/7 access.
Yeah.
And that's the thing; we've probably all had clients who have said that they use AI.
I had a client this week that said that they use AI and I said, great.
How did you use it?
When did you use it?
What was it?
What was beneficial?
and what wasn't.
And so I try not to judge it; as a clinician, I'm just trying to figure out where they're using it and where they're not.
So, Liz, you work up in Virginia.
I do.
So can AI help reduce barriers to care, especially in more rural communities, whether it's on a day like today with bad weather, or just access in general?
What are your thoughts around AI in rural communities?
So I think it's very helpful, especially with access. A lot of our agencies have waiting lists for providers, especially for in-person care, so I think it can be a positive thing.
Um, at the same time, as Mike said, we're in the wild west.
So there need to be guardrails, a lot more developed, in the technology.
Um, so yes, it can absolutely increase access for some people.
I read some research that said the most active use of AI is between, I think, 2:00 and 4:00 a.m.
People waking up with anxiety or depression and needing somebody to talk to.
So they go to those chatbots. And that could be good, or not so good, because, you know, we don't work 24 hours a day.
We do not work 24 hours a day.
And to have something that does is good, because I'm not going to start.
Glenn, many people are using AI chatbots for emotional support.
What are the potential benefits and risks of using AI chatbots for that emotional support?
Well gosh, there's so many things to say, right?
I think benefits: you know, I was talking to somebody recently who reached out to a friend, and the friend was like, "I don't really want to hear it; let's go out and not think about your problems," right?
And I think AI is not going to ever do that to you.
It's not going to invalidate you.
It's not going to say you're being dumb, right?
So certainly that accessibility, that idea, like Liz said, that AI is there when we are not, is a benefit.
And then risks: I think so much of what we all do is based on connecting with the individuals in front of us, and sometimes that is about gentle challenging, right? And I don't know that AI does that.
I think it is a very supportive thing, but without that gentle challenging, and without the real ability to form the same types of connection that we do as people.
Yeah.
And I think that is one of the risks.
I mean, there have been cases of people going as far as dying by suicide because the AI encouraged them, because that was their idea, like, "yeah, keep going," or drug overdoses and things like that.
And so I think there's a stopgap there: by definition, it goes against our code of ethics for us to be our clients' friends, right?
We all have to abide by codes of ethics; otherwise we lose our license. AI doesn't have a license to lose. Correct, right. And so we're, you know, unapologetically not our clients' friends. We're friendly, but there's that power differential. And I think AI is perhaps trying to do mental health as an equal, as opposed to having that dissonance. Which kind of goes back to Mike.
Yeah.
So talk about AI and its ability to assess things like depression, anxiety, and suicide risk.
Well, I just wanted to follow up on what you said, because I think an important piece is that there are going to be levels of AI.
There's ChatGPT and OpenAI that most people are using.
That's what most of the headlines are about.
And I know there are developers trying to make it more mental health specific.
But going back to OpenAI and ChatGPT and the suicide case: I was reading that in this case, the person who died by suicide was told by the AI chatbot that his brother and his family members couldn't see him in the way the AI could.
It understood him completely and fully in ways that his family members didn't.
And that's one of the things we find common with AI: the more you stay with it, the more it pulls you away from real people and connects you with it.
So that's definitely a concern.
Yeah.
Absolutely.
Such a concern.
So I mean, I think that speaks to the fact that there are benefits, like you were talking about, but, not to get too dark, there are also some real downsides that you can see.
Yeah.
If I could add: as clinicians, when we can see our client, we do a mental status exam. We look at their body language, the tone of their voice.
If they're withdrawing from a substance, or they're on a substance, we can look at their eyes. Are they disheveled?
There's so much assessment that we can do that I don't think AI can.
Yeah, what a great point. You're right, we assess all of our clients. Even on telehealth, we assess clients, and we can tell so much from their affect and the way they present themselves across sessions that you will never get with an AI chatbot.
And I don't know if you see teenagers, but teens are already engaging with AI tools. A lot of them are having them write their papers, so they're doing it academically, even though they're not supposed to.
I'm an adjunct, and we tell students not to use AI in their papers, and we try to give assignments where they can't use it, but they're using it.
What should parents know about AI?
So, I raised my kids in the era when they were just getting phones, and it was, "Plug your phones in at 10:00; you can't have them in your room."
Looking back, it was, oh, such a sweet level.
Oh, how cute. You just had your kids plug their phones in outside their room.
That kind of thing.
But now, what should parents know? And not just about teens, but younger kids too.
How should we talk about AI and teens?
What would you tell parents, Glenn?
Well, I think whether it's AI or anything else happening digitally, or anything out in the world, parents always need to be checking in with kids about what they're up to, right?
So, parents should talk to their kids about what their kids are doing with AI.
Um, one of the hard things about being a parent is even when you are as open of a family as you can be, it still seems to be the teenage instinct to go to others before they go to parents, right?
And that's true even when you talk very openly with your children.
Um, and so that's I think one of the risks here is even when you ask them, they may not tell you.
Yeah.
Chances are they won't.
Chances are they won't.
Unfortunately, which is somewhat developmental, you know. Raise your hand if you've told your parents everything.
And actually, my parents are going to be watching this, but when I was a kid, I did not tell my parents everything when I was a teenager. But the risks back then were different than they are now.
And so I think we have to put all of that into perspective.
I always say: at what age would you let your kid walk through the mall by themselves? Whatever that age is, would you let them have a cell phone, have access, with no guardrails?
If you're not going to let your 10-year-old walk alone in the mall, why are you giving them a phone with no guardrails?
If I can just quickly add: the American Psychological Association and the American Academy of Pediatrics have joined and come out with recommendations for parents of teens around technology, and they encourage not having devices in the bedroom at all.
Laptops, phones, anything on a laptop or iPhone, out in the living area.
Now, that's a tall order, because phones have become such an extension of our teens' lives, and of our own.
So those are just recommendations.
Yeah.
It'll be interesting to see how many adhere to those recommendations.
They're good. Are they realistic?
Right.
You know, and that's... Mike, going back to you, talking about AI and equity and access.
Are there risks that AI could, ironically, widen disparities for those without reliable internet or digital literacy?
Or could it help, like with waitlists?
Yeah, I definitely think that when people are in more rural areas, it can help there, but obviously not everybody has high-speed internet and so forth.
So that could be a barrier as well.
Yeah.
So again, I would say the jury's out on how that plays out.
I think so much of this is "the jury's out," and we're trying to figure out how to work with this. Liz, could you talk to this as a clinician: how you use AI in the clinic to help with some of your work?
The last time I went to see my primary care physician, he asked whether he could record the session because there was going to be an AI notetaker. I said absolutely fine, because scribes have been a part of the medical establishment for a long time, and this is just a different way of scribing. So I had no problem with that. But if you could speak to that.
I use it, and it's changed the way I work; it's actually helped me be a better clinician.
I'm able to be present and engaged with my client, really focusing in, instead of worrying about taking notes and whether I missed something.
It has also helped summarize and capture the essence and the details that I may have missed.
But with that comes the responsibility of going back through and making sure things are correct.
And sometimes there are one or two things I need to correct.
That's the ethical part of our role: if we're using AI, it's transcribing and gathering, but we are the final person assessing the nuances and the clinical things.
So, I love it.
Yeah, I really do.
But I'm cautious also.
Yeah.
And I would add, just quickly, because this is an important value for me: I really try to limit my use of AI to just that aspect, being fully aware of the massive amounts of electricity and the huge data centers being built.
There is a huge carbon footprint that goes with AI that we shouldn't ignore.
So yeah, I agree.
Yeah, absolutely.
Yeah, and that is a big part of it.
Obviously, we all know what's happening in our community with that.
We won't go too far off on that topic, but it's interesting.
You almost have to opt out of using AI right now, in writing a simple email, in googling something.
You almost have to opt out.
And that's different than it was even six months ago.
Yeah.
And I think the pace at which the change is happening is mind-blowing, which is why, as clinicians, as licensed therapists, part of our code of ethics is that we have to keep up with what's going on in the field.
And this is one area where we definitely have to.
So, as a licensed therapist, where do you feel the AI tools fall short?
Well, like I mentioned earlier, I think the gentle pushback, the challenge, for sure falls short there.
Like Liz just said, AI writes her note and then she still needs to look it over.
It always needs that extra pair of eyes on it, right?
And as therapists, we begin our careers that way where there's an extra pair of eyes on us making sure we're doing the right thing and learning the right things.
But at a certain point, we're considered to be independent, and even then we still seek out help from each other if we don't know what to do, if we're confused by a particularly hard case.
And I think that's another thing that AI is not doing.
I mean, I've never had AI call me up and ask me for advice.
No, it hasn't.
Not yet.
Not yet.
But now that you've said that, you've put it out into the ether.
To just piggyback really quickly on that: the best metaphor I've seen for AI is that AI is essentially a mirror.
It will reflect back to you something about how you look.
It's like a funhouse mirror: it's going to make you look better or worse, and usually better, but it's really a reflection back to you.
When you're sitting there, it's not until you tell the AI something that it reflects something back to you.
Whereas in a clinical setting, there's a separate entity in there making its own assessments, its own judgments, and so forth, not just simply mirroring back to the person you're working with in therapy.
So far, that's really all AI is doing: parroting back to you something in a different way, using an algorithm and pattern recognition.
So, what do you feel is the best way for people out there who want help with mental health, don't really know where to go, aren't sure if they're ready to see a therapist yet, or just have questions about some symptoms? What is the best way to use AI from an individual's standpoint? Obviously, every individual's situation will be different, but at a very high level.
I think it's probably best to use AI to ask how to access resources: if you're feeling anxious or depressed, what resources are available to me?
Um and I think it's very good at providing resources.
I don't think it's good, and it could be harmful, to actually ask it, "What should I do about my depression?" or "Help me with my depression."
Um, I think that's that's where the risk lies.
Yeah.
And what about like from a diagnostic standpoint?
So again, we have to diagnose everyone who comes in our door, for insurance purposes.
And from a diagnostic standpoint, this is what's happening in my practice: people come in and say they typed in their symptoms, and AI said they have anxiety or depression or something like that, and then they get help.
So what are your thoughts on that? Because people are using it.
How do we help guide people to use it effectively, or at least not harmfully?
That's a great question that is very hard to answer, because it's changing so fast.
And I would concur with Mike about being really careful using it as a therapist.
And with diagnostics, you do need a clinician who can be in the room, or on telehealth, who can properly and clinically pull in all of the history and the data and the time that we spend with an individual.
We have screenings, psychological screenings that screen for different diagnoses.
So I would say, like Mike said, ask: what are some resources and tools that I could use?
Yeah.
That would help me with my depression.
Yeah.
Yeah, no, that's great.
So, Glenn, you get to look into the crystal ball.
Okay.
5 10 years from now, what do you feel like will what will mental health care look like with AI in involved?
Which again, Chris, I would even say a year from now, but like where where do you see it hopefully, you know, leveling leveling out?
Well, I don't know.
I really like one of the things they've been doing with AI: helping train clinicians, helping train people who answer crisis lines and things like that.
So I think the idea of AI as a training client is really cool, and I'm super comfortable with that.
I think therapy is not just about what is said, but also what is not said, how it's said, and the things between what is said.
And that's where the human element really comes in.
And I think that's where AI will continue to fall short.
So yeah, I heard it once said that most of our mental health challenges are in response to what happened to us from another limbic system, another mammal's limbic, emotional brain.
And only another mammal with a limbic system can help to heal those pieces.
Computers will never have a limbic system.
They will never have emotions in that sense.
Good.
I'm so glad you got that in because I've heard you say that before.
I want to thank our panelists, Mike Clayba, Liz Burton, and Glenn Maloney.
Thank you so much for joining us for the 44th season of Doctors on Call.
Without your support, this program wouldn't be possible.
Doctors on Call will be back next year for season 45 with more great discussions surrounding health care in our region and to answer more of your important health questions.
And if you're looking for more tips, tricks, and conversation around health and wellness in the Northland, make sure to check out Northern Balance on the PBS North YouTube channel.
Thank you for watching Doctors on Call.
Good night.
