When a Bot Feels 'Real': A Therapist's First-Hand Encounter With AI, with Travis Heath
00:00:00 Speaker 1: Welcome to Therapist Confidential, a Psychotherapy Net podcast. And now, here's your host, Travis Heath.
00:00:10 Travis Heath: Hello, everybody. Welcome back to the Therapist Confidential podcast. I'm your host, Travis Heath. And today, another solo episode talking about AI in therapy. I mean, do I need to say more than "AI in therapy"? I feel like in every break room, every university hallway, wherever the practice of therapy is happening or being considered or being philosophized about, AI is a part of that discussion, and probably has been for some time. But AI seems to have really accelerated in the last even six months. Now, I'm no expert on AI. I'm not a computer scientist, none of that. But just as a person in the world who's also a therapist, it does seem like AI has really accelerated its capabilities in the last six months or so, and who knows what that might mean for the future.

At one point in time, the discussion about AI was really a theoretical one. It was, what if AI could do x, y, or z? What's that going to mean for therapy? And I sort of feel like we're past that conversation now, because AI is already shaping mental health care, whether we care to realize it or not. And because of that, a really collaborative sort of conversation would probably be helpful, thinking not about how do we stop AI from arriving, because it's already here, but rather, how do we go beyond coexisting with it? How do we collaborate with it in a way that might be useful to the people we serve? I'm not going to promise an answer to that question in this episode, but what I can promise is some consideration of different variables.

Why does this even matter now? Well, it seems that despite more and more people going to graduate school to become therapists, and there being more options for that to happen, right, we're not just talking about state universities or even private, for-profit institutions training therapists; there are PsyD programs training psychologists in large numbers, taking cohorts of sixty, seventy, eighty, even over one hundred people. So it seems there are more therapists than ever, and yet people are still having trouble accessing mental health care. Now, some of that isn't just because there aren't enough therapists, though some places there aren't. Some of that has to do with third-party payment issues, right? It has to do with late capitalism. There are a lot of different reasons. But access issues are one reason why AI will find a niche, and is continuing to find a niche, in mental health care.

Then there's therapist burnout. Therapists are set up with these huge caseloads in a lot of places, especially therapists early in their careers. Impossible caseloads, dealing with so much. And when we think about just those two issues, access and therapist burnout, there are probably ways that AI can help us with them. Now, that doesn't necessarily mean that AI will become the therapist. And I think that's often what therapists are worried about, that we'll be replaced by AI. I'm hoping for this conversation not to be one of doom and gloom, but a curious one, one that is hopefully thoughtful and asks some questions rather than making definitive proclamations.

I want to start by sharing a story. I think this was last month that this happened for me. So I'm recording this in January of twenty twenty-six.
So I think this was in December of last year, and I had the privilege to be able to interact with a new technology. I'm not going to name the particulars of whose technology it is and all that, because that's for another time and place. But what it was, essentially, was a sim bot that simulated a client in psychotherapy, and I played the role of the therapist. So I spoke to this bot. And just to be really honest with you, I came into this interaction feeling pretty confident that I could trick this AI bot. I already do therapy in a bit of a weird way. I mean, those of you who know anything about my work and narrative therapy know it might be a little different in certain ways from traditional approaches to therapy. Now, there are some similarities too, but my point is, I tend to ask some weird questions. So I was like, look, there's no way this AI bot is going to hang with that, right? Oh boy, did it hang. It really hung, because about ten-ish minutes into the conversation, something really interesting happened for me. I had this felt sense, this visceral feeling inside of me while I was talking to this bot, that I recognized as the same sort of feeling I have when I'm talking to actual clients I'm working with. And I'm going to be honest with you, this messed me up. I stopped at that moment. I stopped engaging with the technology, because it was like, look, I'm speaking to a bot here, and yet I'm having these uniquely human feelings.

Now, listen, I play video games, I watch movies, I read books, and certainly in reaction to these things I have feelings, right? But what made this strange is that it was a bot dynamically mimicking a therapy session in a way that felt real to me. It felt familiar to me, like the kinds of conversations I have with people in my office. And so this messed me up, and not just because of therapy. First of all, the technology is amazing. This is way better than role-playing. So as a teacher, I'm like, this is great. This is going to be so much better than silly role-plays or the stuff we used to do when I was in school, or even in my earlier days of teaching. So pedagogically, this is going to be amazing.

But boy, for humanity, I went down a dark path for a week or so. I started thinking, look, people are going to have these bots instead of actual relationships. And you could say that's already happened. I know, I know, I'm behind on this. Yes, intellectually, I knew this sort of thing was already going on, but to actually experience it myself, and have this sort of deeply human reaction that I've been having for twenty-plus years in the work that I do as a therapist, wow. I just thought, this could go to a place where we're not actually talking to human beings anymore. We're talking to bots, and the bots play enough of a human role that it satisfies some of that need, or at least for a while it does. But then, you know, for how long? I mean, over the long term, is that actually bad for our mental health? And then I'm thinking, especially with the video capabilities now, how long until we're on a Zoom call or something and we don't know if the person on the other end is a bot or a human being, right? I won't take you all the way down the dark portal that I went down, but I started thinking about all of these things, and it really messed me up.
And then I did the thing I think we as humans often do, which is I went to this place of, gosh, I wish it was just the way it was when I grew up. I started reminiscing about growing up in the eighties and nineties, and how it was a much simpler time, and all the things. I actually started finding these videos online, you know, you can find these videos that show just random clips. This one was from the late nineteen-nineties, at a Blockbuster Video. For those of you who are a bit younger, you used to rent VHS tapes and then DVDs, right? You couldn't just stream everything. My kids can't even believe that such a time existed. My daughter's like, that's so stupid. And I said, well, yeah, maybe it was, but it's what we had. And actually, it's kind of amazing, that whole slow process of going to the video store: the anticipation, and then you walked into the video store and there was something actually tactile about picking up the boxes and reading the descriptions. And this is just one example from my upbringing. So, of course, when the present that's moving quickly into the future became too scary, what did I do? I reverted back to the past, to what felt comfortable, right? But the reality is, as a therapist, what I experienced interacting with that bot for ten minutes, this is where things are moving, whether I like it or not. And so I think it deserves some conversation.

So what are we talking about when we say AI in therapy? I think the first place many therapists have gone is AI as the actual therapist, right? Direct conversational support. In the example I gave, I was still the therapist. The AI was the client; it was a client bot that gave a therapist a chance to practice their skills. But I think what therapists are worried about is the bots that are the therapists, the ones that provide direct conversational support. And even if they're not therapy bots, what if they're coaching bots? There's long been friction between the worlds of therapists and coaches, right? Like, they just call them coaching bots, and maybe they become essentially manualized CBT bots. So one worry therapists have had when we talk about AI in therapy is that the AI itself becomes the therapist. Now, that's a worry from our side. It may not be such a worry for the people who seek our services. It may be for some, but for others it might seem much more accessible. Of course, the question is how efficacious it is, which we don't really have answers to quite yet. We have a lot of theories, we have a lot of opinions, but we don't have much research on it quite yet.

How about AI as, like, therapy-adjacent? Actually, I think there's a lot of interesting potential here that I'll talk about later. How about AI writing your notes, doing your treatment planning, summarizing research for you, doing your scheduling, screening intakes, triage, et cetera? I'm sure there's more I left off; that's just off the top of my head. There are a number of things AI can be really useful for as a sort of therapy-adjacent tool. I don't know how each of you is using AI, it's probably a little bit different for everyone, but for me, I've found AI to be useful as kind of a creative collaborator. So when I'm writing, for example, I can get it to critique my writing, and I can have it critique my writing in particular ways, as if it were particular people.
Or I can have it scrutinize my writing for certain audiences, right? I've found all that to be really useful. So that can be true with therapy as well, right? I mean, it can help us, and it can help the clients we're serving, perhaps with reflective-type journaling. It can expand and build metaphors. And of course I go to stories and metaphors, because that's often how I work. For other ways of working, maybe it would be like a psychoeducational companion, right? The bot is doing the psychoeducation rather than the therapist, or at least is doing part of it, or reinforcing it along the way. I think there's a lot of potential there, where the AI is a companion. Because when you think about it, there are some circumstances that are different, of course, with people in crisis or certain types of therapy, but in general, most people aren't going to therapy more than once a week, right? For an hour. That's the vast, vast minority of their existence. So AI as a creative collaborator that sort of helps along what you're having conversations about in the room, gosh, I think there's a lot of potential there. Now, again, often we go back to that same fear of, well, but then what if it just becomes the therapist? And I'll share a client story in a little bit around that very point.

I do think, as we're talking about these things, though, that part of what I'm talking about is direct clinical work, and then ancillary or adjacent or support work. And there's a huge ethical difference between a bot doing direct clinical work versus a bot doing support work. These are the sorts of things we're going to have to sift out. And of course, our ethical codes don't have any way of accounting for this yet. I mean, we still call therapy on Zoom-like platforms "telehealth." What the hell? Telehealth sounds like it's from nineteen eighty-three, right? Our ethical codes, and the way we've operated, are really far behind in a lot of ways. Some of that is of our own doing and we're responsible for it, and some of it, right now, is just that AI is moving so quickly that how can any sort of governing body account for everything it's capable of?

This is happening for me as a professor in higher education, right? I'm considering, is it even worth having students write papers anymore? I'm leaning towards no. Like, what are we doing with that? The students are going to have AI write the paper, I'm going to have AI grade the paper, yes, professors do that too, and then what are we even doing? AI is doing everything then. So I'm trying to find ways in the classroom, for example, to harness AI, to use it in a way that might help with what we're actually trying to do, which is learn. I think that's what we're trying to do, right? Or ask good questions. I mean, learning is a complicated process. It's not always like learning a skill, right? It's questioning, it's thinking, it's looking at things through multiple lenses. AI could be helpful there. And one thing I've noticed about AI, let me put it this way: AI is only as effective as the questions you ask it. So in some ways, AI is inviting us to be better question-askers, which, you know, I think we could use some of that in the world.
Okay. So what might be some of the potential benefits and possibilities that exist with AI? We've started alluding to some of them, but one is increased access, right? Think of people who live in rural communities without access to a lot of therapists. And therapy tends to be expensive, cost-prohibitive, so for people who don't have the means, AI could be useful. Right now, of course, people are using ChatGPT for relatively low cost or for free, well, you can only do some stuff for free. And I'm just talking about ChatGPT; there are all these other AI platforms, and they seem to only keep expanding, right? Right now, it's great for people who may not have a lot of money. But I would assume this is all going to get really monetized in a neoliberal, late-capitalist society, so that advantage may not be there forever. People are going to find ways to monetize it. But for right now, certainly increased access, and maybe decreased cost.

AI is available twenty-four-seven. That's interesting, because on one hand, that can be great. You know, when someone's having a crisis at three in the morning, right? Therapists don't tend to be up then. I mean, sometimes there really are crises, suicidal crises, that therapists tend to at odd hours, of course. But generally speaking, therapists aren't awake in the middle of the night, and these bots can be available. So that's a potential benefit, right? Twenty-four-seven availability. But the flip side of that coin is, if people are lonely and they start interacting with these bots, and that becomes their way of seeking support and interacting with the world, what are the consequences of that? I don't know that we can say in a generalized way. That might be really useful for some people, really detrimental for others, and everywhere in between for other people. But the potential exists for there to be twenty-four-seven availability, which really doesn't exist with therapy as we know it now.

Another potential benefit: perhaps there's a reduced stigma barrier. That's to say, some clients may share more openly with, how to say it, a human-seeming but non-human presence. Although, again, the flip side of that coin is that maybe that's part of the work, right? People learning how to share with other human beings, even despite the difficulty and the vulnerability, while looking them in the eye and having those sort of deep conversations. There always seems to be a flip side; when I think of the potential benefit, immediately I can think of the risk. But for people who, at least as a starting point, wouldn't want to interact with a human therapist, perhaps they'd interact with a bot, and that reduces that stigma barrier.

Then there's AI's consistency and memory. AI really doesn't forget context unless you tell it to. AI does a remarkable job, especially at the level it's at now, in remembering things. And so that consistency and that memory could certainly be a potential benefit.

I alluded to this a little earlier, but let's call it therapist relief. All of those sort of admin things that we'd really prefer not to be doing, can we, I mean, of course we can.
It's already happening. But how much of this can we outsource to AI? And then, hopefully, what that's giving us is more emotional bandwidth for actual human connection. Because for a lot of us, myself included, what takes up most of the bandwidth is the actual attunement and connection I have with the people I'm in conversation with. And by the way, that's where I want that bandwidth to go. But that's what makes me feel exhausted at the end of a day of seeing clients. It's also what makes it feel so rewarding, by the way. So when I say it uses a lot of emotional bandwidth, it's not like this horrible thing. Most of the time it's not; it's beautiful. And boy, it takes a lot of energy. And then there's all this admin stuff on top of it. How much of that can AI take care of for us?

I mentioned psychoeducation earlier. Psychoeducation, other sorts of tools. A lot of therapists prescribe certain interventions, you know, breathing, certain prompts, reframes, exercises, homework, all those sorts of things that are a part of some types of therapy. Presumably AI could be really helpful with those kinds of things.

And this last one, of course there are more benefits, I'm just spitballing some, but this last one that comes to mind before I move on to some of the risks, harms, and concerns is one I've actually seen in the work that I'm doing. AI can really serve as a mirror that helps people witness themselves. At least for me, in my work, that's a big part of what I've been trying to do with people: create a set of circumstances where people can witness themselves. And AI can do that. It can help people witness themselves in their own words, in a way that sometimes it's almost like they're reading about themselves as the protagonist in a story. Although maybe we're not always the protagonists of the stories, right? But whatever the case, it allows us to witness ourselves.

All right. What are some of the risks, harms, and concerns that have been on my mind? Of course, this is not an exhaustive list, but here are some of the ones that come to mind. What about misdiagnosis or otherwise harmful guidance? I mean, if we're being honest, the DSM doesn't have great inter-rater reliability to begin with, right? Therapists don't always agree on the diagnostics, which I actually don't think is often an issue with the therapists themselves or their training. Of course, sometimes it is, but I think it might be more an issue of the diagnostic system itself. But that's for another podcast. Misdiagnosis or otherwise harmful guidance could be a risk of AI, and in some ways, that sort of thing has already been happening even before AI, right? The, what, WebMD phenomenon, where people are looking up their diagnoses and coming in with what my friends Tom Carlson and Sonny Pagliuca call "un-stories." They're coming in with just a label, and AI could take this even further, where people have diagnosed themselves based on AI or otherwise gotten harmful guidance.

Another potential risk of AI: the lack of embodied presence. Although it is interesting, in the example I shared at the start of the show today, I felt something in connection to this bot that felt just like what I feel in therapy, but there wasn't an actual embodied presence. Yet.
I mean, shoot, I don't know, maybe we're going to have holograms or something that sit across from us; I don't know where all this is going to go. So this is just with what I see now: there isn't the same embodied presence. Of course, many of us are doing therapy on Zoom these days, or at least part of our therapy on Zoom, and maybe there isn't the same embodied presence there either, because we're looking at pixels. We're not actually looking at a person across from us; we're looking at them in pixelated form. But presumably that lack of embodied presence matters at some level. There may not be the same nervous system attunement, that same kind of, what do my psychodynamic friends call this, the relational field, the co-regulation, those things that happen. Presumably they aren't happening in the same way without embodied presence.

There are certainly ethical and privacy risks. What about data collection? What about surveillance capitalism? What about corporate control of mental health resources? I mean, do I need to go on? Those are just a few of them, but there are certainly ethical and privacy risks, and I think those risks are some of the easiest to identify. They have worried therapists, most therapists, from the jump when talking about AI.

What to call this next one? Maybe overreliance. What happens when clients substitute human connection almost completely with machines, with AI? I alluded to my worry about that earlier. What are the consequences of that? To be honest, we don't know for sure. But consider that AI has been around a long time, even if these LLMs haven't been, right? People are more connected in one sense right now than they ever have been, and that was true even before these LLMs, because on Facebook you have twenty friends, or two hundred friends, or two thousand friends, or whatever it is, on Instagram and Facebook and all the things. So on one hand, it looks like human beings are more connected than ever. But what is the quality of that connection? Certainly a connection that is almost all digital isn't the same kind of connection as an embodied connection. And so if clients, if humans, start trading out that real embodied human connection almost exclusively for relationships with machines, it seems like there could be some rather severe consequences.

Dehumanization of care, right? As healthcare systems start using these AI tools even more to cut costs, what could be the consequences of that? On one hand, it may be cheaper; it may actually be more efficient. But do you lose a human touch that's important? Again, this might be showing my age, but sometimes when I've got a technological issue, or I need to make a return, or just something in the world, and I want to call and actually talk to someone, it feels impossible, you know? And then you're going through the phone tree, and it's hit one for this, hit two for that, and you can't talk to an actual person, right? And that can be maddening. Now, if you have a simple operation you're trying to perform, often those automated phone services are great, because you just hit a few buttons and you're done. But the problem happens when you have an odd issue, one that isn't encountered very often.
Then what happens? Well, the automated service can't solve it, and you can't talk to a person, and then you start to become, most of us anyway, really, really frustrated, right? And then maybe we have to email support or something, and two days later someone calls us, or whatever it is. So it's very efficient until there's something that is weird, odd, variable, off the beaten path a little bit. And hey, when it involves some sort of purchase you made, yeah, that's frustrating, but the stakes are low. But when that sort of thing is happening with our own health care, physical health care, mental health care, et cetera, oh boy. The consequences feel a bit more complicated then.

Another risk: issues of equity. Because again, AI, as much as it seems human, is not. So whose values are being programmed into the model? Whose psychology or theory becomes the standard? And it's very subtle. Yes, sometimes it's not subtle at all, it's very obvious, but it's often very subtle, the way that dominant values get programmed into these things. I mean, they get programmed into our lives, right? Just through learning. But now, with AI, I think we have to think about equity issues, because we don't know whose values are being programmed into the model, whose ideas become the standard, and what other ideas are being disregarded. Again, that's a human issue when we consider knowledge and power. But AI is so convincing, you know what I mean? It's so convincing with the responses it gives. I noticed on ChatGPT recently they have a thing that says, you know, AI can be wrong, so check to make sure it's accurate. Yeah, but it's not very tentative. AI doesn't go, well, maybe it's this, or maybe it's that. Dude, it spits something out that is confident, and you read that shit and you're like, yeah, mhm, that looks good, it knows what it's talking about, just because it presents it with such confidence. Of course, it's not always accurate. And so when we're talking about things like equity issues, the ways those things can be programmed in are so subtle, and AI presents so confidently, that I think we really have to be on guard.

What about liability and regulation? This is a huge risk. Who's responsible when AI harms someone? Is it OpenAI? Is it the therapist? Is it the... I don't know, right? And in a highly litigious society, this is something we're going to have to deal with, because AI has already harmed people and will continue to. In a litigious society, who's legally responsible? But then there's this other layer, which is, who's morally responsible for these things? We're really going to have to grapple with that.

And then the last one I'll mention, let's call it subtle relational harm. Generally speaking, AI is agreeable. AI thinks you're great. I could ask AI what the weather is going to be like today in San Diego, and it's going to be like, "Doctor Travis Heath, that is a magnificent question." Like, really? Okay. We're just trying to figure out whether it's going to be sunny or not in San Diego. It probably is, but you know what I mean. It's so agreeable. And of course, when you say that, people are quick to be like, yeah, but you can program it not to be. Okay, but how many people are doing that?
Yes, I'm around a lot of deep thinkers, a lot of people who've been spending a lot of time thinking about AI, among other things. Sure, they might do that with AI. But is the average person going to tinker that much with AI, or are they just going to ask questions? And by the way, it tells you you're great. Who doesn't want to be told we're great? "That's a fantastic question," you know. And so there's a huge risk in that, I think. Because, by the way, I'm not always great. None of us are. We make mistakes. We have oversights. Sometimes we ask questions that aren't very thought out. We forget things. All those things are what make us human. Also, AI, to the best of my knowledge, doesn't have the ability for rupture and repair. It misses those kinds of relational growth experiences that can happen in a therapeutic relationship. They don't just have to happen in a therapeutic relationship, but often these things can happen there. So there can be what I might call these subtle relational harms.

So, some potential benefits, some risks, harms, and concerns. Why are we so scared of AI? What's the deal there? I'll tell you what I make of that. I think it's a threat to our professional identity and our expertise. Basically, what I'm saying is: if a machine can do this, what's my value? That's essentially the question we're asking ourselves. If a machine can do this, why do I matter? And that's connected to a fear of being replaced, and that's a deep existential fear, right? Because it's like, well, then we're going to have to find a new line of work, and then I'm not going to have money, and then what's going to happen to my family, and what's going to... right. So yes, it's a threat to our professional identity and expertise, but it's also a deep existential threat, that we'll be replaced. That's the fear. I'm not saying that will be the reality, but that's the fear.

I think we tend to have a mistrust of the tech industry, and maybe its motives, which are often aligned with making a lot of money. And as therapists, we're often trained in human connection, and in that way, I think AI can almost feel existentially offensive, if I can make up a phrase. Because it's like, no, this is deep human work, and then AI is just spitting out this answer that might be better than what we can do. It probably oftentimes is. Now, of course, what it might be missing is that deeper attunement between two human beings, all the human stuff that we talked about. But it can feel really existentially offensive.

I think there's anxiety that we therapists have about the commodification of care. That's happening anyway, outside of AI, but I think AI brings it even more to the forefront. And then there's a historical memory among people who have been doing this a long time, and people who have studied the history of the field, and it doesn't just have to be in psychotherapy. Every quote-unquote innovation in mental health has been framed initially as "more efficient care." And what did it often end up being? Cost-cutting. So it's existing in this larger capitalist framework that's actually about cutting costs, about making things less expensive for someone at the top, but they frame it as more efficient care. And who doesn't want more efficient care? Who among us does not want more efficient care?
Everybody wants more efficient care, right? But often it's been a bit of a bait and switch, where it's actually been about cost-cutting. I don't know, maybe "bait and switch" is just semantics about how it's being explained, because it might be more efficient in some ways, but what are we losing in the process? And so I think we as a bunch are skeptical, not of innovation, but of quote-unquote "innovation" that's actually more about cost-cutting.

So I'll share a quick story. A client of mine uses AI, and he uses it in a really cool way: he journals, and he inputs themes of our sessions and so on into an LLM. He's using this to good effect; it's a really cool process. This could be another episode sometime, but he actually produces documents before each session, where AI sort of summarizes what came out of our last conversation, where his life is at, and some of where he's hoping to go. And we see each other every two weeks. So he's really using AI. And just to give my own bias: I've been skeptical of AI from the jump. I'm less skeptical of it now, but just as worried, if not more, about what it might do. I'm certainly using it more than I used to, but I am nowhere near as savvy with it as this particular client is. So he's using this in our work in really particular ways.

And I asked him at one point, I said, if you have AI and it's doing all this work, why the hell do you need me? Like, why keep me? And he said something like, oh, I have a lot to say about that. The part that's most memorable to me is he goes, look, AI will never replace this kind of human connection that we have. AI will never replace great therapists, only bad ones. Which I sort of laughed at. Now, of course, that's subjective, right? What makes a great therapist? What makes a bad therapist? You should listen to my earlier episode with Doctor Darrell Chao, a couple episodes back, where he talks some about what makes therapists good. It's really interesting stuff. But I think the point he's making, and we can argue about what makes a good therapist or a bad one, part of what he said, too, is that therapists who aren't just collecting a paycheck, who are really there, where you can tell they care and they're involved and engaged, those sorts of therapists won't be replaced. It's an interesting perspective, and maybe we will be. But that story made me think.

So, AI: will it replace human care entirely? Probably not. Can it replace therapists who mostly rely on worksheets, scripts, and surface-level empathy and reflection? Probably. It probably can replace those therapists. So when we say we're worried about being replaced as therapists, I think maybe what AI is actually doing is mirroring and exposing where our profession may be going flat. It's challenging manualized protocols that have become too mechanical. It's challenging therapy that's become disembodied, impersonal, rote, based on checklists. It's challenging all of that. And in that way, we can look at AI as a helpful friend, one of the best friends you can have, the kind that pushes you to be better. Maybe what AI is doing is forcing us therapists back towards more relational depth, towards creativity, towards humanity. Because, look, AI will do the protocol for you. It'll do the worksheet, the script. It can provide the surface-level empathy.
It can provide all of that stuff. So maybe we look at this as an invitation to reinvigorate our practices. That's all I have for you this time. It's always so much fun being in conversation with all of you. I appreciate the emails that I get and so forth; the fact that you're taking the time to listen means a lot. As always, if you're up for it, give us a review and leave the five stars. Well, I won't tell you what to put, but if you enjoy it and you think it's five stars, give us five stars. More people will find us that way. If you enjoyed this episode, or a particular episode, send it to a friend or a colleague so that we can add more people to our Therapist Confidential community here. All right, everybody, great talking with you. Till next time.