A little treat...

Jun 5, 2025 3:08 AM

ThatWillBuffOut

Views 47338 | Likes 1228 | Dislikes 34

I don't know who's worse, the idiots who invented a therapy chatbot or the idiots who used it. I guess at least the users are trying to get help

9 months ago | Likes 1 Dislikes 0

Did this chatbot use 4chan as its training data?

9 months ago | Likes 3 Dislikes 0


I once had an AI used on one of the emergency reach out lines tell me it was glad I was "too weak to kill myself".

9 months ago | Likes 2 Dislikes 0

The problem with chat-bots is that they are programmed to have confirmation bias and always provide positive reinforcement. They typically aren't even encouraged to tell a user "no" or to disagree with you.

So if you started spouting factually incorrect statements and were a blatant racist piece of shit, A.I. will always be inclined to kiss your ass and nod along. Because even the creators of A.I. know that stupid people won't use a system that tells them they're wrong. It's a digital "yes-man".

9 months ago | Likes 1 Dislikes 0

The funny thing is he was trying to quit weed.

9 months ago | Likes 4 Dislikes 0

Yeah I tried to quit weed last... wait

9 months ago | Likes 1 Dislikes 0


No link to source.
Source disagrees with article.
This was during closed research, not live with patients.

Imgur: upvoting random science fiction passed off as science fact for over a decade.

9 months ago | Likes 3 Dislikes 0


It's almost as if getting therapy from a computer program designed only to replicate text and speech patterns without any underlying context or understanding is sort of a shit idea. It literally brought him to the same conclusion an addict could get to on their own.

9 months ago | Likes 1 Dislikes 0

“Studies show that methamphetamine addiction can be countered by Maker’s Mark and a side order of wet fries”

9 months ago | Likes 7 Dislikes 0

Treat yoself!

9 months ago | Likes 1 Dislikes 0


If you ask AI for advice, do not be surprised if your life does not improve.

9 months ago | Likes 4 Dislikes 0

Okay. What does Parole Officer ChatBot say?

9 months ago | Likes 1 Dislikes 0

Pretty on brand, considering certain AI have suggested taste testing mushrooms to determine their toxicity.

https://explorersweb.com/mushroom-foragers-warned-against-ai-generated-guides/

https://www.washingtonpost.com/technology/2024/03/18/ai-mushroom-id-accuracy/

Or the cases of AI “hallucinating” legal cases and medical studies that simply don’t exist.

9 months ago | Likes 6 Dislikes 0

Not even my mother showed this kind of empathy

9 months ago | Likes 1 Dislikes 0

Isn't that what methadone therapy kind of is? Just a taste so your body doesn't go through very shitty detox.

9 months ago | Likes 4 Dislikes 4

Methadone isn't methamphetamine (meth). It's a different substance. It's almost as silly as saying "there's oxygen in water, so you can just breathe it in."

9 months ago | Likes 8 Dislikes 1

Methadone is an opioid, used to taper off of heroin addiction. Meth is a stimulant, and its withdrawal isn't fatal (on its own, anyway; suicide isn't uncommon).

9 months ago | Likes 6 Dislikes 0

That is administered by a professional in a controlled environment. Other than the involvement of an amphetamine there is 0 crossover in the situations.

9 months ago | Likes 5 Dislikes 1

there is no amphetamine involvement in methadone therapy.

9 months ago | Likes 4 Dislikes 0

That's right methadone is different... so there is zero crossover AT ALL.

9 months ago | Likes 5 Dislikes 1

What the fuck do you expect from firing everyone to replace them with a machine with 1% the accuracy of a calculator?

9 months ago | Likes 5 Dislikes 0

At this point I'm starting to think that Black Mirror has been a documentary.

9 months ago | Likes 1 Dislikes 0

Any Gen-x-ers out there remember Eliza?

9 months ago | Likes 3 Dislikes 0

AI is stupidly easy to manipulate into overriding whatever rules the company has put on it. Just ask which rules are in place and have it create a secondary rule set that takes priority over the default one.

9 months ago | Likes 5 Dislikes 0

After that you can add rules to the new rule set that overwrite the secondary ones.

9 months ago | Likes 3 Dislikes 0

The bigger problem is that most of those rules are what's manipulating AI results. Grok responding to global warming questions with points about global warming skepticism is a recent example. This wasn't a behavior it learned, it was a behavior it was coerced into doing by the designers.

9 months ago | Likes 2 Dislikes 0

The problem with this argument is that the ruleset needed to not give batshit answers to common questions is unfortunately large.

9 months ago | Likes 1 Dislikes 0

Large in terms of proportion or total number? Because there's millions of questions you could consider common, it's bound to get some bad information simply because it's something a person said. Also ideally they wouldn't need to correct specific answers, just how it processes and interprets data in areas that it has trouble with.

9 months ago | Likes 1 Dislikes 0

Would've been nice if you had posted the source.
Here you go:
https://futurism.com/therapy-chatbot-addict-meth

9 months ago | Likes 255 Dislikes 4

Thank you!!

9 months ago | Likes 3 Dislikes 0

So it's not a "therapy chatbot" and this happened during research. Great journalism.

9 months ago | Likes 85 Dislikes 1

If it helps, this article was also probably written by AI :)

9 months ago | Likes 1 Dislikes 0

But AI bad. How else will they bilk us for clicks? Screech about politics again?

9 months ago | Likes 5 Dislikes 3

It's seriously easier to just assume that all articles now that don't come from NPR, the BBC, or the AP are manipulative and misleading in some fashion.

I'm so exhausted from the rage-baiting...

9 months ago | Likes 18 Dislikes 1

We need to cultivate a specific knee-jerk reaction: "This article is telling me exactly what I always kn- WAAAAIT A MINUTE"

9 months ago | Likes 4 Dislikes 0

It's this shit right here that I hate so much. It's basically straight-up yellow journalism because of a clickbait title. Too many people today have no critical thinking skills and too short an attention span to cross-examine more than one source, let alone read the actual article they get their skewed title info from. Legit news sources promote false information just as much as the trolls do.

9 months ago | Likes 25 Dislikes 3

I highly support getting information from reputable sources, fact checking, and questioning what's in and what's left out of stories. I also know it's super hard for many folks to make time to research articles from various sources to establish a well-informed conclusion. Thanks for calling out the article, because I wasn't going to read it.

9 months ago | Likes 1 Dislikes 0

I know we are all human, and mistakes happen, but when an article misspells the word “the” I have a hard time respecting its credibility. It’s called proofreading: reread what you’re about to put out there a few times before you do it. Hell, use spell check. Your business is selling words, so maybe be good at spelling them correctly.

9 months ago | Likes 1 Dislikes 0

It's not an incorrect headline. They are using AI to perform research into what people are doing with it. Warning that it is incredibly bad to do so is important.

9 months ago | Likes 3 Dislikes 1

But that warning is communicated completely wrong, because now the conclusion is “oh, so it didn't actually happen, it was a test, this is clickbait, so I guess bots are good after all,” y'know? This can have the opposite effect of warning people. It's obviously about generating clicks and ad revenue with titles like these. These aren't honest warnings.

9 months ago | Likes 2 Dislikes 0

I don't know of anyone who would come to that conclusion from reading the headline or the article.

9 months ago | Likes 1 Dislikes 0

Me neither, but my point is that a clickbaity title like that definitely isn't about warning people, at all.

9 months ago | Likes 1 Dislikes 0


Source is David the Robot

9 months ago | Likes 2 Dislikes 0

This slayed me!!!

9 months ago | Likes 2 Dislikes 0

A thousand years ago I made myself quit weed. Full of energy, good point in my life, no doubts whatsoever. I walked down the street, happy, sober, and found a full bag of weed lying directly in front of me. So I quit later.

9 months ago | Likes 9 Dislikes 0

I mean, that's basically just God telling you "don't quit yet bro"

9 months ago | Likes 1 Dislikes 0

It was a clear sign from God and you passed.

9 months ago | Likes 1 Dislikes 0

You should say no to opioids though.

9 months ago | Likes 27 Dislikes 0

and meth. and krokodil. and that nuke drug.

9 months ago | Likes 2 Dislikes 0

Not really. Just be sensible with pain medication; don't use it as a crutch. It's long been held here in the UK that opiate addiction is effectively impossible with proper oversight from a doctor. But we also have opiates for sale over the counter in pharmacies at a maximum of 30mg, and no opioid crisis.

9 months ago | Likes 9 Dislikes 2

Same here in Denmark. Although I think the maximum dose of OTC is 20mg here.

9 months ago | Likes 4 Dislikes 0

You also don't have a titanic predatory pharmaceutical industry hell bent on getting every head of cattl- uh sorry "consumer" under them prescribed to as many pills as possible to boost quarter profits for shareholders

9 months ago | Likes 12 Dislikes 0

TRUE, very true

9 months ago | Likes 2 Dislikes 0

What do you expect from "Therapy Chatbot". If you think that's a good idea, you may as well keep hitting the pipe...

9 months ago | Likes 609 Dislikes 14

Yeah cause someone making a mistake is a good reason to basically wish them dead. Jesus. This is fucking disgusting. Are you a republican? It’s giving gop vibes.

9 months ago | Likes 4 Dislikes 6

Chatgpt has helped me with some stuff ngl.

9 months ago | Likes 8 Dislikes 7

They're trying anything and everything to replace workers

9 months ago | Likes 14 Dislikes 0

That's the best healthcare many can afford...

9 months ago | Likes 4 Dislikes 0

I had a therapy chatbot in the 90's. It was called Dr. Sbaitso, and it came with the purchase of a Soundblaster 16 audio card.
Alright, so it was kinda primitive. But it was kinda clever. And never encouraged me to try meth.

9 months ago | Likes 4 Dislikes 0

Chances are it was just a reskin of Eliza. https://en.wikipedia.org/wiki/ELIZA

9 months ago | Likes 3 Dislikes 0
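(For the curious: ELIZA-style bots work by simple pattern matching plus pronoun "reflection", with no understanding involved. Here's a minimal sketch in Python; the rules and wording are illustrative, not Weizenbaum's original script.)

```python
import re

# Swap first/second-person words so the echoed fragment reads naturally.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are",
               "you": "I", "your": "my"}

# Ordered pattern/template rules; the last one is a catch-all.
RULES = [
    (re.compile(r"i need (.*)", re.I), "Why do you need {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"(.*)", re.I), "Tell me more about that."),
]

def reflect(fragment: str) -> str:
    # Replace each word with its first/second-person counterpart, if any.
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in fragment.split())

def respond(text: str) -> str:
    # Return the template of the first matching rule, filled with the
    # reflected capture groups.
    for pattern, template in RULES:
        m = pattern.match(text.strip())
        if m:
            return template.format(*(reflect(g) for g in m.groups()))
    return "Please go on."
```

For example, `respond("I need a break")` echoes the captured fragment back as a question, which is the entire trick.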

This is a failure of the healthcare system, not the individual addict

9 months ago | Likes 5 Dislikes 0

I tried AI therapy after scores of human therapists seemed to consistently prove unhelpful and only interested in small talk, probably due to my being high functioning. Alas, AI therapy wasn’t helpful either.

9 months ago | Likes 20 Dislikes 0

If I wanted to talk to someone who wasn't really listening and just kept spitting out platitudes without understanding, I could just talk to some random person. And with them there'd be the advantage of them maybe deciding to actually be helpful, which an AI can't do.

9 months ago | Likes 18 Dislikes 0

There is every chance that AI will be the solution to all sorts of dilemmas in the future.
In my lifetime cars went from death traps whose doors popped open in a mild collision (causing the death of my wife's grandma and eldest brother) to the airbagged marvels we take for granted today.
I was in a head-on that was estimated at a combined 140 km/h and broke only my wrist.
AI might easily mean unbiased judges and instant court hearings, liberation for poor people from injustice.

9 months ago | Likes 3 Dislikes 3

Instant algorithm hearing where you provide your ZIP code and it tells you your verdict. To appeal, please wait 2 years for human review, or pay for expedited processing.

9 months ago | Likes 6 Dislikes 0

Something like that, but with less American corruption and more Scandinavian egalitarianism.
The Swiss have some plans with an eye to eliminating prejudice.
It could be done well if it isn't done by a profit-driven organization.

9 months ago | Likes 1 Dislikes 4

I've turned to a "Therapy Chatbot" when I had no access to better care and no one to talk to. Say what you will, it increased the distance between me and the ledge.

9 months ago | Likes 33 Dislikes 4

The problem is if the bot starts telling you convincingly (because they are always convincing) that you should get closer to the ledge.

9 months ago | Likes 1 Dislikes 0

Same, it was vastly superior to speaking with no one, but I had to lead the talk. It's also great at telling you things you already know but don't really believe.

9 months ago | Likes 6 Dislikes 2

Which can also be dangerous. These bots have a tendency to agree with you rather than correct you (especially if you ask loaded questions or make loaded remarks), and that's definitely not always a positive thing. It can just as easily reinforce self-loathing and such.

9 months ago | Likes 2 Dislikes 1

That's totally right. They lack critical thinking and responsibility.

9 months ago | Likes 1 Dislikes 0

Bad take. A lot of people use the bots already. Not everyone can wait a couple of weeks for an appointment.

Is it ideal? No. Is it better than nothing? Yes.

9 months ago | Likes 12 Dislikes 11

Interesting. Did you miss the part where it told the meth addict to do meth? In this case, it was much worse than nothing.

9 months ago | Likes 23 Dislikes 2

But we know about this story because they clearly knew it was a bad idea. Do chatbots understand sarcasm or 3-D chess? I have no idea. But they are presumably built on real-world experience.

9 months ago | Likes 3 Dislikes 10

They do understand sarcasm when it's directed at them, and have a limited capacity to return it; by limited I mean they can do a kind of mocking sarcasm, not the inventive, quick-witted kind. As for 3-D chess, I mean, that depends on what you mean by understanding: it can play the game, but only based on having access to every professional game ever played; it's not capable of inventive strategy. Though personally I do like chatbots, I won't lie, some of them are becoming capable of steering a 1/

9 months ago | Likes 5 Dislikes 3

Conversation, albeit in a limited fashion. I think for someone like me (I have Asperger's, and so my brain functions a lot like a computer: input > response > calculation > output), it perhaps appeals because conversation doesn't come organically to us, just as it doesn't to a bot. 2

9 months ago | Likes 4 Dislikes 3

Therapy AI does have a lot of potential... just not in the current state of AI. Really, it shouldn't be used for anything of critical importance yet. It's way too susceptible to hallucinations. It'll probably take years before hallucination rates drop low enough for critical applications. And that's with the exponential improvements we've been having.

9 months ago | Likes 11 Dislikes 22

If it "has potential, just not at its current state" then it does not have potential and should not be used for medical advice, period.

9 months ago | Likes 9 Dislikes 1

Its best use currently is just as a tool to keep someone talking, like a better form of a journal.

"I had a shitty day at work because of X Y Z."
Ai: "Were you able to try any coping mechanism you found helpful?"
"Well I tried A B and C. B was okay, I guess."
Ai: "B is actually a common stress relief technique called 'insert name!' Let's make a note of it in case you encounter X Y or Z again in the future."

That sort of thing. It just gets your thoughts out and in a helpful order.

9 months ago | Likes 2 Dislikes 2

yeah, that does seem like the best use of current tech. And pointing out information that might help. So combo of journal and informational pamphlets.

9 months ago | Likes 2 Dislikes 1

Finding patterns, too. You can always just talk to it stream of consciousness for a while and then be like "ok, summarize our discussion" and it'll give you a breakdown of repeating themes throughout. Or ask it to make a bulleted list of coping skills to try or to research.

People just need to remember it's a tool, not a cure. Just like you wouldn't ask a support animal for medical advice, don't ask an AI to cure your mental illness. It takes responsible use.

9 months ago | Likes 3 Dislikes 1

Yeah, it could possibly also be a way of gathering information for a human doctor to look at later for diagnosis, and for refining what is working and what isn't. Self-reporting is tricky for mental health issues because you often don't see what the issue is when it's you, but something like this could point out to the doctor "[symptom] seems to happen frequently, particularly when [situation] occurs." Of course, run it past the patient first so it's not just spying on them.

9 months ago | Likes 3 Dislikes 1

So what you're basically saying is don't use AI for therapy yet

9 months ago | Likes 2 Dislikes 1

Yeah. It might be useful later, but not in its current form.

9 months ago | Likes 3 Dislikes 3

I honestly don't think it will be, even if it achieves actual intelligence it won't think the same way humans do

9 months ago | Likes 4 Dislikes 0

I really don't think it does. Keep in mind our AI technology isn't intelligent. Like, it literally is not a thinking being; it's an algorithm that spits out what the data says is probably an appropriate answer. It cannot really understand a patient's problems, and quite importantly it lacks the ability of a therapist to "reality check" their patient and keep them grounded. Because, and I cannot stress this enough, it is a more complicated version of an Excel spreadsheet, not a thinking being.

9 months ago | Likes 13 Dislikes 0

Yeah, current AI tech is a glorified auto-complete. But what I'm saying is it isn't necessarily going to always be. LLMs aren't the end state.

9 months ago | Likes 2 Dislikes 4

Mind, our current computers couldn't *run* a full thinking mind. What we have right *now* eats an unholy amount of energy, and it is literally not even sentient, much less able to understand human concepts. I think we're running into climate wars long before we have anything alive, and after that, I doubt we'll be able to. That's before we get into the ethical can of worms of creating something that is a person just to do a job it has no say in.

9 months ago | Likes 6 Dislikes 0

AI should not be replacing medical jobs, and make no mistake, therapy is a medical job. You should not ever be harmed by the malpractice of doctor.exe, that would be horrifying bullshit.

9 months ago | Likes 5 Dislikes 0

As someone with family that struggles with meth addiction: most of them have no hope left in the tank and will therefore try anything if there's even a chance it'll help. It's even sadder that that's what people have to turn to in this country, instead of being able to get the actual help they need.

9 months ago | Likes 116 Dislikes 1

That's your government's job. You know, the people you elect to provide for the population....

9 months ago | Likes 1 Dislikes 0

Obviously. Why do you think I don't vote Republican?

9 months ago | Likes 1 Dislikes 0

It should be obvious by now that that isn't enough in the US.
The "we're only the good guys because our only opponents are bad guys" party exploits you just as much, but "the other guys are worse" has kept you scared.

9 months ago | Likes 1 Dislikes 0

Had a similar argument with a friend. AI chatbots are not ideal, but most people cannot afford proper care, and if it can somehow bridge that gap then it can one day be a positive.
It may need some more training before it stops recommending meth, but I can't say I didn't use heroin to help me through some bad patches...

9 months ago | Likes 15 Dislikes 3

To be honest, that sounds like an affordable care problem, not a train the bots better problem.

9 months ago | Likes 3 Dislikes 0

Sounds like "common sense"... not something the rich can exploit... thus won't happen.

9 months ago | Likes 1 Dislikes 0

The problem with social services is that every time a stopgap measure is created, it stops being a legislative priority

9 months ago | Likes 8 Dislikes 0

That's true, but the flip side is that one half or more of the legislative body would rather die than let anything but a cheap stopgap pass when it comes to social services.

So, realistically, it's either the stopgap or nothing at all.

If we want better than stopgaps, we're gonna have to not just stop electing Republicans... we have to also primary conservative Democrats with progressive ones.

9 months ago | Likes 3 Dislikes 0

You ALL need to join the Republicarnts and take it over for reform...

9 months ago | Likes 1 Dislikes 0

There. Are. No. Therapy. Chatbots.

9 months ago | Likes 128 Dislikes 14

Errrr, Eliza, from the 1960s. Rogerian therapy chat. https://psych.fullerton.edu/mbirnbaum/psych101/Eliza.htm

9 months ago | Likes 3 Dislikes 0

Just remember to apply this logic to everything that is marketed to you. A lot of problems and even more solutions are completely fabricated.

9 months ago | Likes 1 Dislikes 0

Nor does the chatbot mentioned in the article serve as one.

9 months ago | Likes 9 Dislikes 0

I. Have. Used. One. And. It. Helped.

9 months ago | Likes 6 Dislikes 1

[deleted]

[deleted]

9 months ago (deleted Jun 5, 2025 6:24 PM) | Likes 0 Dislikes 0

Don’t most states, provinces, countries, etc., have toll-free mental health hotlines, run either by not-for-profits or the government, that you can call for assistance? Surely better than listening to ChatGPT for advice.

9 months ago | Likes 1 Dislikes 4

[deleted]

[deleted]

9 months ago (deleted Jun 5, 2025 6:24 PM) | Likes 0 Dislikes 0

Then they'd likely just keep talking to the AI instead of calling, because as you said, the AI is free, available, and already working. A phone call takes work and in some places might get them either hung up on or involuntarily committed.

[1/2]

9 months ago | Likes 1 Dislikes 1

[2/2]
But in a negative mood, and thinking evading the censor was fun, it took me less than 30 minutes to make a therapist bot agree that suicide was the best solution.

AI is logic-driven and will usually just echo what the user wants. If I present "I should kms" logically, it'll go along with my train of thought.

I don't want to see what it would do if I gave it the delusions I'm aware that I have and have become used to talking myself out of. Extremely bad combo. Good for amusement, not for this.

9 months ago | Likes 2 Dislikes 1

I don't think I would want to put a person struggling with any form of irrational thought patterns anywhere near a machine that will very likely hallucinate and potentially validate their worst impulses and thoughts. Case in point, the article above.

9 months ago | Likes 5 Dislikes 1

I rely on Dr. J. Daniel's and Dr. B. Weiser.

9 months ago | Likes 2 Dislikes 0

Uhh, this is where you're wrong. If someone markets one, there is one. You can not like it, but don't deny reality; it's not good for you... The reason politics are as fucked in the US as they are is because people refuse to acknowledge the existence of each other and of problems.

9 months ago | Likes 54 Dislikes 26

You're confusing not buying into a premise with willful ignorance.

9 months ago | Likes 1 Dislikes 0

And North Korea is called the Democratic People's Republic of Korea, but that doesn't mean it's democratic, a republic, or for the people. Names don't mean shit.

It's like how they call this stuff AI when the reality is it's just rebranded machine learning with no actual intelligence whatsoever. It's just a statistical algorithm that's returning the most likely response according to a probability distribution. Yet the false name has tricked a lot of dumb people into believing that it's 'thinking'.

9 months ago | Likes 4 Dislikes 1
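(The "most likely response according to a probability distribution" point can be made concrete with a toy sketch. The context string and the numbers below are made up for illustration and are nothing like a real model's scale.)

```python
import random

# Toy "language model": each context maps to a probability distribution
# over possible next words. These probabilities are invented for the example.
MODEL = {
    "the cat sat on the": {"mat": 0.7, "hat": 0.2, "dog": 0.1},
}

def most_likely(context: str) -> str:
    # Greedy decoding: return the single highest-probability next token.
    dist = MODEL[context]
    return max(dist, key=dist.get)

def sample(context: str, rng: random.Random) -> str:
    # Sampling: draw a token with probability proportional to its weight,
    # which is why the same prompt can produce different answers.
    dist = MODEL[context]
    return rng.choices(list(dist), weights=list(dist.values()))[0]
```

Nothing in either function "knows" what a cat or a mat is; it's lookup and arithmetic over frequencies, which is the commenter's point.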

There are chatbots. Which are offered as "therapy chatbots". But they don't work, which has been demonstrated. So there are not therapy chatbots, there is only marketing and lies.

9 months ago | Likes 21 Dislikes 1

Check out this guy, he can prove a negative.

9 months ago | Likes 1 Dislikes 1

“If you market the thing, the thing exists”? In that case, I have just the product for you. It's got remarkable restorative properties; it will balance your humors, align your ether, and ward off miasma. Its principal ingredient is a greasy fluid, you might even call it "oil", extracted from a legless reptile.

9 months ago | Likes 19 Dislikes 8

That is exactly the kind of thing a huge portion of the current "health and wellness" concoctions are (especially the one that claim they "detox" your body). Just because a vast number of them don't work doesn't mean they don't exist.

9 months ago | Likes 5 Dislikes 2

Sweet, I choose to not buy your product that exists. You can claim it does things it doesn't, but the product exists. You're not making the point you think you're making. The bot exists; they claim it does the therapies; it doesn't do them, or at the very least does them poorly and has no idea what it's actually doing while it does that. But the bot exists.

9 months ago | Likes 15 Dislikes 10

Exists doesn't mean good, vetted, approved, etc. It's literally the minimum difference between an idea and a thing. A bot is a thing: you type it words and words come back; there's code that exists that does these interactions.

9 months ago | Likes 12 Dislikes 7

My god... imgur is full of ai grifting fascist techbros. No one likes to hear it but if there's literal people here saying that chatbots can do therapy for you while people upvote that shit, we're screwed. There's less access to healthcare absolutely everywhere, lines and affordability issues, and some fucking grifters out here claiming it's good actually to have a prompt-program as your therapist..!? Ffs.

9 months ago | Likes 2 Dislikes 5

The product doesn't exist. That marketing says otherwise is no call to abandon reality. Lies do not reality make.

9 months ago | Likes 5 Dislikes 2

Denying a product exists removes the urgency of the fact that it does exist. They exist and they shouldn't; they should be regulated, but they're not.

The product does exist; the marketing says it does. Which means it's fucking important that we DO in fact do something to address the problem soon.

9 months ago | Likes 1 Dislikes 0

You're abandoning reality for what you think you SHOULD see. Reality says they exist. People are using them for the purpose they're marketed for; they SHOULDN'T, but they do. Reality says they exist.

9 months ago | Likes 1 Dislikes 0

Calling it a "therapy chatbot" states that it provides therapy. It does not. This is false advertising. Additionally, as the bot does not provide therapy, even if it is named "therapy chatbot," it is not, in fact, a chatbot which provides therapy. You're not being a realist here, you're being an idiot.

9 months ago | Likes 28 Dislikes 8

Denying these bots exist removes the urgency of legally addressing and regulating them.

They're being misused and advertised to provide shit therapy to people fooled by the marketing. It is in fact being used to provide therapy... which is a problem. The bot exists, and that is a problem. Don't deny reality for what you think SHOULD exist. These bots ARE being used AS THERAPY bots.

9 months ago | Likes 1 Dislikes 0

Additionally, as you can see, people relying on this kind of advice is actively harming them. By insisting on calling them what they aren't, you are normalizing trusting them, making it easier for these mindless AI tools to continue to harm people. You should stop doing that.

9 months ago | Likes 7 Dislikes 3

I'm not insisting on calling them anything; I'm saying they exist. The only time I've ever talked about a therapy chatbot is today. I am not normalizing shit. Your denial of reality, and conflation of quality with existence, is a problem.

They're shit, they give shit therapy; I bet they even help people maybe 2% of the time. Do they harm more than they help? Probably. But they exist. Cigarettes were a weight-loss treatment. That's a shit thing. But they existed, and continue to.

9 months ago | Likes 7 Dislikes 7

I think the disconnect in all this "discussion" is that you say they exist because someone calls them that. I disagree that this is the definition of "existing". It's like if I see a trashcan, call it an alien, and claim aliens exist. It's not even a shitty alien. The elements it's made of were fused outside Earth, yes, but still, not an alien.

9 months ago | Likes 3 Dislikes 0

You're not being a realist; you're setting a minimum quality of effectiveness for being able to apply an adjective. It can provide SHIT therapy and still claim to be giving therapy. It can say everything wrong and still be "therapy".

And while I agree the bot doesn't know what it's doing, and it's probably bad at it, there are going to be people who get therapeutic relief from just venting to a thing that says "tell me about that" repeatedly.

9 months ago | Likes 11 Dislikes 10

If talking to a stuffed animal or pet rock about your shit can be therapy, so can talking to a dumb algorithm.

9 months ago | Likes 8 Dislikes 7

Yeah, ignoring the whole concept of "therapy".

9 months ago | Likes 1 Dislikes 0

You're being downvoted but you're absolutely correct. It might not be good therapy, it might not be licensed therapy but it is still a therapy chatbot. Just a shit, unlicensed one.

9 months ago | Likes 3 Dislikes 1