The problem with chat-bots, is they are programmed to have confirmation bias and always provide positive reinforcement. They aren't typically even encouraged to tell a user "no" or to disagree with you.
So if you started spouting factually incorrect statements and were a blatant racist piece of shit, A.I. wil always be inclined to kiss your ass and nod along. Because even the creators of A.I. know that stupid people won't use a system that tells them they're wrong. It's a digital "yes-man".
It's almost as if getting therapy from a computer program designed only to replicate text and speech patterns without any underlying context or understanding is sort of a shit idea. It literally brought him to the same conclusion an addict could get to on their own.
Methadone isn't methamphetamine(meth). It's a different substance. It's almost as silly as saying "there's oxygen in water, so you can just breath it in."
Methadone is an opioid, used to taper off of heroine addiction. Meth is a stimulant and its withdrawal isn't fatal (on its own, anyway, suicide isn't uncommon).
That is administered by a professional in a controlled environment. Other than the involvement of an amphetamine there is 0 crossover in the situations.
AI is stupidly easy to manipulate and overwrite whatever rules the company has put on it. Just ask which rules are in place and have it create a secondary rule set that takes priority over the default one
The bigger problem is that most of those rules are what's manipulating AI results. Grok responding to global warming questions with points about global warming skepticism is a recent example. This wasn't a behavior it learned, it was a behavior it was coerced into doing by the designers.
Large in terms of proportion or total number? Because there's millions of questions you could consider common, it's bound to get some bad information simply because it's something a person said. Also ideally they wouldn't need to correct specific answers, just how it processes and interprets data in areas that it has trouble with.
It's this shit right here that I hate so much. It's basically strait up yellow journalism because of a clickbait title. Too many people today have no critical thinking skills and too short of an attention span to cross examine more than one source let alone read the actual article they get their skewed title info from. Legit news sources promote false information just as much as the trolls do.
I highly support getting information from reputable sources, fact checking, and questioning what's in and what's left out of stories. I also know it's super hard for many folks to make time to research articles from various sources to establish a well-informed conclusion. Thanks for calling out the article, because I wasn't going to read it.
I know we are all human, and mistakes happen, but when an article misspells the word “the” I have a hard time respecting their credibility. It’s called proofreading, reread what you’re about to put out there a few times before you do it. Hell, use spell check. Your business is selling words, so maybe be good at spelling them correctly.
It's not an incorrect headline. They are using AI to perform research into what people are doing with it. Warning that it is incredibly bad to do so is important.
But that warning is completely communicated wrong, because now the conclusion is “oh, so it didnt actually happen, it was a test, this is clickbait, so i guess bots are good afterall” yknow? This can have the opposite effect than warning people. Its obviously about generating clicks and ad revenue with titles like these. These arent honest warnings
A thousand yrs ago I made myself quit weed. Full of energy, good point in my life, no doubts whatsoever. I walked down the street, happy, sober, and found a full bag of weed lying directly in front of me. So, I quitted later.
Not really. Just be sensible with pain medication, dont use it as a crutch. Its long been held here in the UK that opiate addiction is effectivly impossible with proper oversight from a doctor. But we also have opiades for sale over the counter in pharmacies at a maximum of 30mg, and no opiode crisis
You also don't have a titanic predatory pharmaceutical industry hell bent on getting every head of cattl- uh sorry "consumer" under them prescribed to as many pills as possible to boost quarter profits for shareholders
Yeah cause someone making a mistake is a good reason to basically wish them dead. Jesus. This is fucking disgusting. Are you a republican? It’s giving gop vibes.
I had a therapy chatbot in the 90's. It was called Dr. Sbaitso, and it came with the purchase of a Soundblaster 16 audio card. Allright, so it was kinda primitive. But it was kinda clever. And never encouraged me to try meth.
I tried AI therapy after scores of human therapists seemed to consistently prove unhelpful and only interested in small talk, probably due to my being high functioning. Alas, AI therapy wasn’t helpful either.
if I wanted to talk to someone who wasn't really listening and just kept spitting out platitudes without understanding, I could just talk to some random person. And with them there'd be the advantage of them maybe deciding to actually be helpful, which an AI can't do.
There is every chance that AI will be the solution to all sorts of dilemmas in the future. In my lifetime cars went from death traps whose doors popped open on a mild collision ( causing the death of my wife's grandma and eldest brother) to the airbaged marvels we take for granted today. I was in a head on that was estimated at a combined 140 km and broke only my wrist. AI might easily mean unbiased judges and instant court hearings, liberation for poor people from injustice.
Instant algorithm hearing where you provide your ZIP code and it tells you your verdict. To appeal, please wait 2 years for human review, or pay for expedited processing.
Something like that but less American corruption more Scandinavian egalitarianism. The Swiss have some plans with an eye to elimination of prejudice. It could be done well if it isn't done by a profit driven organization.
I've turned to a "Therapy Chatbot" when I had no access to better care and no one to talk to. Say what you will, it increased the distance between me and the ledge.
Same, it was vastly superior to speaking with no one, but I had to lead the talk. It's also great at telling you things you already know but don't really believe.
Which can also be dangerous, these bots have a tendency to agree with you rather than correcting you (especially if you ask loaded questions/make loaded remarks), and thats definitely not always a positive thing. It can just as easily reinforce selfloathing and such
But we know about this story because they clearly knew it was a bad idea. Do chatbot understand sarcasm or 3-D chess? I have no idea. But they are presumably built on real world experience.
They do understand sarcasm when it's directed at them, and a limited capacity to return it - by limited I mean they can do a kind of mocking sarcasm not the inventive quick witted kind. As for 3d chess, I mean, that depends on what you mean by understanding - it can play the game, but only based on having access to every professional game ever played, it's not capable of inventive strategy. Though personally I do like ChatBots I won't lie, some of them are becoming capable of steering a 1/
Conversation albeit in a limited fashion. I think for someone like me, I have aspergers and so my brain functions a lot like a computer - input > response > calculation > output, I think it perhaps appeals to people like me because conversation doesn't come organically to us just as it doesn't to a bot. 2
Therapy AI does have a lot of potential... Just not with the current state of AI. Really, it shouldn't be being used for anything of critical importance yet. It's way too susceptible to hallucinations. It'll probably take years before hallucination rates get dropped down enough to use on critical applications. And that's with the exponential improvements we've been having.
It's best use currently is just as a tool to keep someone talking, like a better form of a journal.
"I had a shitty day at work because of X Y Z." Ai: "Were you able to try any coping mechanism you found helpful?" "Well I tried A B and C. B was okay, I guess." Ai: "B is actually a common stress relief technique called 'insert name!' Let's make a note of it in case you encounter X Y or Z again in the future."
That sort of thing. It just gets your thoughts out and in a helpful order.
Finding patterns, too. You can always just talk to it stream of consciousness for a while and then be like "ok, summarize our discussion" and it'll give you a breakdown of repeating themes throughout. Or ask it to make a bulleted list of coping skills to try or to research.
People just need to remember it's a tool, not a cure. Just like you wouldn't ask a support animal for medical advice, don't ask an AI to cure your mental illness. It takes responsible use.
Yeah, it could possibly also be a way of gathering information for a human doctor to look at later for diagnosis and refining what is working and what isn't. Self-reporting is tricky for mental health issues because you often don't see what the issue is when it's you, but something like this could point out the the doctor " seems to happen frequently, particularly when situation occurs." Of course run it past the patient first so it's not just spying on them.
I really don't think it does. Keep in mind our AI technology isn't intelligent. Like, literally is not a thinking being, it's an algorithm that spits out what the data says is probably an appropriate answer. It cannot really understand a patient's problems, and quite importantly lacks the ability of a therapist to "reality check" their patient, and keep their grounded. Because, and I cannot stress this enough, it is a more complicated version of an excel spreadsheet, not a thinking being.
Mind, our current computers couldn't *run* a full thinking mind. What we have right *now* eats an unholy amount of energy, and it is literally not even sentient, much less able to understand human concepts. I think we're running into climate wars long before we have anything alive, and after that, I doubt we'll be able to. That's before we get into the ethical can of worms of creating something that is a person just to do a job it has no say in.
AI should not be replacing medical jobs, and make no mistake, therapy is a medical job. You should not ever be harmed by the malpractice of doctor.exe, that would be horrifying bullshit.
As someone with family that struggles with meth addiction, most of them have no hope left in the tank and therefore will try anything if there's even a chance it'll help. It's more sad that that is what people have to turn to in this country instead of being able to get the actual help they need.
It should be obvious by now that isn't enough in the US. The "we're only the good guys because our only opponents are bad guys" party exploit you just as much, but "the other guys are worse" has kept you scared.
Had a similar argument with a friend, AI chat bots are not ideal but most people can not afford proper care and if it can somehow bridge that gap then it can one day be a positive. May need some more training before it stops recommending meth but I can't say I didn't use heroin to help me through some bad patches...
That's true, but the flip side is that one half or more of the legislative body would rather die than let anything but a cheap stopgap pass when it comes to social services.
So, realistically, it's either the stopgap or nothing at all.
If we want better than stopgaps, we're gonna have to not just stop electing Republicans... we have to also primary conservative Democrats with progressive ones.
Don’t most States, Provinces, Countries, etc., have toll-free mental health hotlines run either by non-for-profits or the government you can call for assistance? Surely better than listening to ChatGPT for advice.
Then they'd likely just keep talking to the AI instead of calling, because as you said, the AI is free, available, and already working. Phone call takes work and in some places might get them either hung up on or involuntaried.
[2/2] But in a negative mood and thinking evading the censor is fun, it took me less than 30 minutes to make a therapist bot agree that suicide was the best solution
AI is logic-driven and will usually just echo what the user wants. If I present "I should kms" logically, it'll go along with my train of thought
I don't want to see what it would do if I give it the delusions I'm aware that I have and have become used to talking myself out of. Extremely bad combo. Good for amusement, not for this
I don't think I would want to put a person struggling with any form of irrational thought patterns anywhere near a machine that very likely hallucinate and potentially validate their worst impulses and thoughts. Case in point, the article above
uhh, this is where you're wrong. if someone markets one there is one. you can not like it but dont deny reality. its not good for you... the reason politics are as fucked in the US as they are is because people refuse to acknowledge the existance of eachother/problems.
And North Korea is called the Democratic Peoples Republic of Korea but that doesnt mean its democratic, a republic, or for the People. Names dont mean shit.
Its like how they call this stuff AI when the reality is its just rebranded machine learning with no actual intellogence whatsoever. Its just a statistical algorithm thats returning the most likely response according to a probability distribution. Yet the false name has tricked a lot of dumb people into believing that its 'thinking'.
There are chatbots. Which are offered as "therapy chatbots". But they don't work, which has been demonstrated. So there are not therapy chatbots, there is only marketing and lies.
"if you market the thing, the thing exists" ? In that case, I have just the product for you, it's got remarkable restorative properties, will balance you humors, align your ether and ward off miasma. It's principle ingredient is a greasy fuid, you might even call it "oil" extracted from a legless reptile
That is exactly the kind of thing a huge portion of the current "health and wellness" concoctions are (especially the one that claim they "detox" your body). Just because a vast number of them don't work doesn't mean they don't exist.
sweet, i choose to not buy your product that exists. you can claim it does things it doesn't but the product exists. you're not making the point you think you're making. the bot exists, they claim it does the therapies, it doesn't do them or at the very least does them poorly and has no idea what its actually doing while it does that. but the bot exists.
exists doesn't mean is good, vetted, approved, etc. its litterally the minimum difference between an idea and a thing. a bot is a thing, you type it words and words come back, theres code that exists that does these interactions.
My god... imgur is full of ai grifting fascist techbros. No one likes to hear it but if there's literal people here saying that chatbots can do therapy for you while people upvote that shit, we're screwed. There's less access to healthcare absolutely everywhere, lines and affordability issues, and some fucking grifters out here claiming it's good actually to have a prompt-program as your therapist..!? Ffs.
denying a product exists removes urgency that it does exist. they exist and they shouldn't, they should be regulated but they're not.
the product does exist. the marketing says it does. which means its fucking important that we DO infact need to do something to address the problem soon.
your'e abandoning reality for what you SHOULD see. reality says they exist. people are using them for the purpose they're marketed, they SHOULDNT but they do. reality says they exist.
Calling it a "therapy chatbot" states that it provides therapy. It does not. This is false advertising. Additionally, as the bot does not provide therapy, even if it is named "therapy chatbot," it is not, in fact, a chatbot which provides therapy. You're not being a realist here, you're being an idiot.
denying these bots exist removes urgency that they need to be legally addressed and regulated.
they're being missused and advertised to provide shit therapy to people being foold by the marketing. it is infact being used to provide therapy.... which is a problem. the bot exists and that is a problem. dont deny reality for what you think SHOULD exist. these bots ARE being used AS THERAPY bots.
Additionally, as you can see, people relying on this kind of advice is actively harming them. By insisting on calling them what they aren't, you are normalizing trusting these them, making it easier for these mindless ai tools to continue to harm people. You should stop doing that.
I'm not insisting on calling them anything, i'm saying they exist. the only time i've ever talked about a therapy chat bot is today. I am not normalizing shit. your denial of reality and conflation of quality and existance is a problem.
they're shit, they give shit therapy, i bet even 2% of the time they help people even. do they harm more than they help probably? but they exist. cigarettes were a weight loss treatment. thats a shit thing. but they existed, and continue to.
I think the disconnect in all this "discussion" is that you say they exist bc someone calls them that. I disagree that this is the definition of "existing". It's like I see a trashcan and call it an alien and claim aliens exists. It's not even a shitty alien. The elements that it's made of were fused outside Earth, yes, but still, not an alien.
you're not being a realist, you're setting a minimum quality of effectiveness on being able to apply an adjective. it can provide SHIT therapy and still claim to be giving therapy. it can say everything wrong and still be "therapy"
and while i agree the bot doesn't know what its doing, and its probably bad at it. theres going to be people who get a theraputic relief from just venting to a thing that says "tell me about that" repeatedly.
You're being downvoted but you're absolutely correct. It might not be good therapy, it might not be licensed therapy but it is still a therapy chatbot. Just a shit, unlicensed one.
williamvanauger
I don't know who's worse, the idiots who invented a therapy chatbot or the idiots who used it. I guess at least the users are trying to get help
spart2315
Did this Chatbot use 4chan as their training?
Dapperworth
ChrisVZ
kurvarVillain
lindabelchersfirstcousin
MentalUproar
I once had an AI used on one of the emergency reach out lines tell me it was glad I was "too weak to kill myself".
GrenithTheSkald
The problem with chat-bots, is they are programmed to have confirmation bias and always provide positive reinforcement. They aren't typically even encouraged to tell a user "no" or to disagree with you.
So if you started spouting factually incorrect statements and were a blatant racist piece of shit, A.I. wil always be inclined to kiss your ass and nod along. Because even the creators of A.I. know that stupid people won't use a system that tells them they're wrong. It's a digital "yes-man".
SteveTheEgg
The funny thing is he was trying to quit weed.
GravyEducation
Yeah I tried to quit weed last... wait
Leaveittojebus
Necrothean
No link to source.
Source disagrees with article.
This was during closed research, not live with patients.
Imgur: upvoting random science fiction passed off as science fact for over a decade.
justplainvanilla
rowzdowr
It's almost as if getting therapy from a computer program designed only to replicate text and speech patterns without any underlying context or understanding is sort of a shit idea. It literally brought him to the same conclusion an addict could get to on their own.
Hammertulski
“Studies show that methamphetamine addiction can be countered by Marker’s Mark and a side order of wet fries”
mmagabel365
Treat yoself!
CrunchWrapFrappuccinoo
He's going to get just a little high.
PourMeAPuppersPlease
https://media1.giphy.com/media/v1.Y2lkPWE1NzM3M2U1NzF2MHFqbWY1YTV3eWJkNTdsNjA4dTI5YTBsODlsNWJ4YXQ5Z2xnbCZlcD12MV9naWZzX3NlYXJjaCZjdD1n/l0HlvokmLF33HWqwo/200w.webp
MentalUproar
I once had an AI used on one of the emergency reach out lines tell me it was glad I was "too weak to kill myself".
DrKonrad
If you ask AI for advice, do not be surprise if your life does not improve
ElbowDeepInAJedi
Okay. What does Parole Officer ChatBot say?
Dannyalcatraz
Pretty on brand, considering certain AI have suggested taste testing mushrooms to determine their toxicity.
https://explorersweb.com/mushroom-foragers-warned-against-ai-generated-gui">/">https://explorersweb.com/mushroom-foragers-warned-against-ai-generated-guides/
https://www.washingtonpost.com/technology/2024/03/18/ai-mushroom-id-accuracy/
Or the cases of AI “hallucinating” legal cases and medical studies that simply don’t exist.
jrfray3000
Not even my mother showed this kind of empathy
Hashbrown123
Isn't that what methadone therapy kind of is? Just a taste so your body doesn't go through very shitty detox.
18booma
Methadone isn't methamphetamine(meth). It's a different substance. It's almost as silly as saying "there's oxygen in water, so you can just breath it in."
AverySomething
Methadone is an opioid, used to taper off of heroine addiction. Meth is a stimulant and its withdrawal isn't fatal (on its own, anyway, suicide isn't uncommon).
IMakeLotsOfReferencesAndRemakes
That is administered by a professional in a controlled environment. Other than the involvement of an amphetamine there is 0 crossover in the situations.
NickRivieraMD
there is no amphetamine involvement in methadone therapy.
IMakeLotsOfReferencesAndRemakes
That's right methadone is different... so there is zero crossover AT ALL.
idiotsonfire
What the fuck do you expect from firing everyone to replace them with a machine 1% the accuracy of a calculator?
PenguinPete
At this point I'm starting to think that Black Mirror has been a documentary.
MichelleEdwin
Any Gen-x-ers out there remember Eliza?
0570
AI is stupidly easy to manipulate and overwrite whatever rules the company has put on it. Just ask which rules are in place and have it create a secondary rule set that takes priority over the default one
0570
After that you can add rules to the new rule set that overwrite the secondary ones.
CylensTheDiscourse
The bigger problem is that most of those rules are what's manipulating AI results. Grok responding to global warming questions with points about global warming skepticism is a recent example. This wasn't a behavior it learned, it was a behavior it was coerced into doing by the designers.
SithElephant
The problem with this argument is that the ruleset needed to not give batshit answers to common questions is unfortunately large.
CylensTheDiscourse
Large in terms of proportion or total number? Because there's millions of questions you could consider common, it's bound to get some bad information simply because it's something a person said. Also ideally they wouldn't need to correct specific answers, just how it processes and interprets data in areas that it has trouble with.
Oultnil
Would've been nice if you had posted the source.
Here you go:
https://futurism.com/therapy-chatbot-addict-meth
OmegaRainbow360
https://media1.giphy.com/media/v1.Y2lkPWE1NzM3M2U1dXcxejc5cjBpemkzZHdrZ3cycGluYWY0dWwya2h6bXlwNDI3NmIydSZlcD12MV9naWZzX3NlYXJjaCZjdD1n/3oEdva9BUHPIs2SkGk/200w.webp
Iblamemyparentstoo
Thank you!!
CatsIsTheAnswer
So it's not a "therapy chatbot" and this happened during research. Great journalism.
Tsunako
If it helps, this article was also probably written by AI :)
KrampusCopia
But AI bad. How else will they bilk us for clicks? Screach about politics again?
MoonPieTown
It's seriously easier if we assume all articles now that don't come from NPR, BBC, and AP are manipulative and misleading in some fashion.
I'm so exhausted from the rage-baiting...
CatsIsTheAnswer
We need to cultivate a specific knee-jerk reaction: "This article is telling me exactly what I always kn- WAAAAIT A MINUTE"
ShadenCrusnik
It's this shit right here that I hate so much. It's basically strait up yellow journalism because of a clickbait title. Too many people today have no critical thinking skills and too short of an attention span to cross examine more than one source let alone read the actual article they get their skewed title info from. Legit news sources promote false information just as much as the trolls do.
thotterpop
I highly support getting information from reputable sources, fact checking, and questioning what's in and what's left out of stories. I also know it's super hard for many folks to make time to research articles from various sources to establish a well-informed conclusion. Thanks for calling out the article, because I wasn't going to read it.
prosper020
I know we are all human, and mistakes happen, but when an article misspells the word “the” I have a hard time respecting their credibility. It’s called proofreading, reread what you’re about to put out there a few times before you do it. Hell, use spell check. Your business is selling words, so maybe be good at spelling them correctly.
NacLac
It's not an incorrect headline. They are using AI to perform research into what people are doing with it. Warning that it is incredibly bad to do so is important.
Z0op
But that warning is completely communicated wrong, because now the conclusion is “oh, so it didnt actually happen, it was a test, this is clickbait, so i guess bots are good afterall” yknow? This can have the opposite effect than warning people. Its obviously about generating clicks and ad revenue with titles like these. These arent honest warnings
NacLac
I don't know of anyone who would come to that conclusion from reading the headline or the article.
Z0op
Me neither, but my point being, that clickbaity title definitely isnt about warning people, at all
michiyl
iLoveItWhenMyFingersSmellLikePussy
Source is David the Robot
marquettegoldeneagles
This slayed me!!!
KittyKlimt6
A thousand yrs ago I made myself quit weed. Full of energy, good point in my life, no doubts whatsoever. I walked down the street, happy, sober, and found a full bag of weed lying directly in front of me. So, I quitted later.
TheThunderbirdRising
I mean, that's basically just God telling you "don't quit yet bro"
ChiLLeCheeze
It was a clear sign from God and you passed.
Kats8652
You should say no to opioids though.
lrateyourrig
and meth. and krokodil. and that nuke drug.
RPCharImages
Not really. Just be sensible with pain medication, dont use it as a crutch. Its long been held here in the UK that opiate addiction is effectivly impossible with proper oversight from a doctor. But we also have opiades for sale over the counter in pharmacies at a maximum of 30mg, and no opiode crisis
KKinDK
Same here in Denmark. Although I think the maximum dose of OTC is 20mg here.
jj999124
You also don't have a titanic predatory pharmaceutical industry hell bent on getting every head of cattl- uh sorry "consumer" under them prescribed to as many pills as possible to boost quarter profits for shareholders
RPCharImages
TRUE, very true
BipedalHumanoidWithSlightlyDifferentNoseRidge
What do you expect from "Therapy Chatbot". If you think that's a good idea, you may as well keep hitting the pipe...
calebIsOnFire
Yeah cause someone making a mistake is a good reason to basically wish them dead. Jesus. This is fucking disgusting. Are you a republican? It’s giving gop vibes.
MyNotSoCoolAndRatherAwkwardPersona
Chatgpt has helped me with some stuff ngl.
Sunflier
They're trying anything and everything to replace workers
SavageDrums
That's the best healthcare many can afford...
NeverShaveYourDuck
I had a therapy chatbot in the 90's. It was called Dr. Sbaitso, and it came with the purchase of a Soundblaster 16 audio card.
Allright, so it was kinda primitive. But it was kinda clever. And never encouraged me to try meth.
PenguinPete
Chances are it was just a reskin of Eliza. https://en.wikipedia.org/wiki/ELIZA
pritolus
This is a failure of the healthcare system, not the individual addict
persistentgaze
I tried AI therapy after scores of human therapists seemed to consistently prove unhelpful and only interested in small talk, probably due to my being high functioning. Alas, AI therapy wasn’t helpful either.
channelranger
if I wanted to talk to someone who wasn't really listening and just kept spitting out platitudes without understanding, I could just talk to some random person. And with them there'd be the advantage of them maybe deciding to actually be helpful, which an AI can't do.
Wikitoria
There is every chance that AI will be the solution to all sorts of dilemmas in the future.
In my lifetime cars went from death traps whose doors popped open on a mild collision ( causing the death of my wife's grandma and eldest brother) to the airbaged marvels we take for granted today.
I was in a head on that was estimated at a combined 140 km and broke only my wrist.
AI might easily mean unbiased judges and instant court hearings, liberation for poor people from injustice.
BearPerson
Instant algorithm hearing where you provide your ZIP code and it tells you your verdict. To appeal, please wait 2 years for human review, or pay for expedited processing.
Wikitoria
Something like that but less American corruption more Scandinavian egalitarianism.
The Swiss have some plans with an eye to elimination of prejudice.
It could be done well if it isn't done by a profit driven organization.
AverySomething
I've turned to a "Therapy Chatbot" when I had no access to better care and no one to talk to. Say what you will, it increased the distance between me and the ledge.
aslum
The problem is if the bot starts telling you convincingly (because they are always convincing) that you should get closer to the ledge.
thoushaltnotpass
Same, it was vastly superior to speaking with no one, but I had to lead the talk. It's also great at telling you things you already know but don't really believe.
Z0op
Which can also be dangerous, these bots have a tendency to agree with you rather than correcting you (especially if you ask loaded questions/make loaded remarks), and thats definitely not always a positive thing. It can just as easily reinforce selfloathing and such
thoushaltnotpass
That's totally right. They lack critical thinking and responsibility.
ChePollino
Bad take. A lot of people use the bots already. Not everyone can wait a couple of weeks for an appointment.
Is it ideal? No. Is it better than nothing? Yes.
MuttMinder
Interesting. Did you miss the part where it told the meth addict to do meth? In this case, it was much worse than nothing.
GravitySmellsLikeCheese
But we know about this story because they clearly knew it was a bad idea. Do chatbot understand sarcasm or 3-D chess? I have no idea. But they are presumably built on real world experience.
FrancsTireur
They do understand sarcasm when it's directed at them, and a limited capacity to return it - by limited I mean they can do a kind of mocking sarcasm not the inventive quick witted kind. As for 3d chess, I mean, that depends on what you mean by understanding - it can play the game, but only based on having access to every professional game ever played, it's not capable of inventive strategy. Though personally I do like ChatBots I won't lie, some of them are becoming capable of steering a 1/
FrancsTireur
Conversation albeit in a limited fashion. I think for someone like me, I have aspergers and so my brain functions a lot like a computer - input > response > calculation > output, I think it perhaps appeals to people like me because conversation doesn't come organically to us just as it doesn't to a bot. 2
DoktorWeasel
Therapy AI does have a lot of potential... Just not with the current state of AI. Really, it shouldn't be being used for anything of critical importance yet. It's way too susceptible to hallucinations. It'll probably take years before hallucination rates get dropped down enough to use on critical applications. And that's with the exponential improvements we've been having.
heroesblood
If it "has potential, just not at its current state" then it does not have potential and should not be used for medical advice, period.
Cruxia13
It's best use currently is just as a tool to keep someone talking, like a better form of a journal.
"I had a shitty day at work because of X Y Z."
Ai: "Were you able to try any coping mechanism you found helpful?"
"Well I tried A B and C. B was okay, I guess."
Ai: "B is actually a common stress relief technique called 'insert name!' Let's make a note of it in case you encounter X Y or Z again in the future."
That sort of thing. It just gets your thoughts out and in a helpful order.
DoktorWeasel
yeah, that does seem like the best use of current tech. And pointing out information that might help. So combo of journal and informational pamphlets.
Cruxia13
Finding patterns, too. You can always just talk to it stream of consciousness for a while and then be like "ok, summarize our discussion" and it'll give you a breakdown of repeating themes throughout. Or ask it to make a bulleted list of coping skills to try or to research.
People just need to remember it's a tool, not a cure. Just like you wouldn't ask a support animal for medical advice, don't ask an AI to cure your mental illness. It takes responsible use.
DoktorWeasel
Yeah, it could possibly also be a way of gathering information for a human doctor to look at later for diagnosis and refining what is working and what isn't. Self-reporting is tricky for mental health issues because you often don't see what the issue is when it's you, but something like this could point out the the doctor " seems to happen frequently, particularly when situation occurs." Of course run it past the patient first so it's not just spying on them.
cmanaway101
So what you're basically saying is don't use AI for therapy yet
DoktorWeasel
yeah. It might be useful later, but not in it's current form.
cmanaway101
I honestly don't think it will be, even if it achieves actual intelligence it won't think the same way humans do
channelranger
I really don't think it does. Keep in mind our AI technology isn't intelligent. Like, literally is not a thinking being, it's an algorithm that spits out what the data says is probably an appropriate answer. It cannot really understand a patient's problems, and quite importantly lacks the ability of a therapist to "reality check" their patient, and keep their grounded. Because, and I cannot stress this enough, it is a more complicated version of an excel spreadsheet, not a thinking being.
DoktorWeasel
Yeah, current AI tech is a glorified auto-complete. But what I'm saying is it isn't necessarily going to always be. LLMs aren't the end state.
channelranger
Mind, our current computers couldn't *run* a full thinking mind. What we have right *now* eats an unholy amount of energy, and it is literally not even sentient, much less able to understand human concepts. I think we're running into climate wars long before we have anything alive, and after that, I doubt we'll be able to. That's before we get into the ethical can of worms of creating something that is a person just to do a job it has no say in.
channelranger
AI should not be replacing medical jobs, and make no mistake, therapy is a medical job. You should not ever be harmed by the malpractice of doctor.exe, that would be horrifying bullshit.
DustinJL
As someone with family that struggles with meth addiction, most of them have no hope left in the tank and therefore will try anything if there's even a chance it'll help. It's more sad that that is what people have to turn to in this country instead of being able to get the actual help they need.
BipedalHumanoidWithSlightlyDifferentNoseRidge
That's your governments job. You know, the people you elect to provide for the population....
DustinJL
Obviously. Why do you think I don't vote Republican?
BipedalHumanoidWithSlightlyDifferentNoseRidge
It should be obvious by now that isn't enough in the US.
The "we're only the good guys because our only opponents are bad guys" party exploit you just as much, but "the other guys are worse" has kept you scared.
BDSMThroatHugs
Had a similar argument with a friend, AI chat bots are not ideal but most people can not afford proper care and if it can somehow bridge that gap then it can one day be a positive.
May need some more training before it stops recommending meth but I can't say I didn't use heroin to help me through some bad patches...
Immaterial
To be honest, that sounds like an affordable care problem, not a train the bots better problem.
BipedalHumanoidWithSlightlyDifferentNoseRidge
Sounds like "common sense"... not something the rich can exploit... thus won't happen.
Arbitrarynamehere
The problem with social services is that every time a stopgap measure is created, it stops being a legislative priority
LoudBirb
That's true, but the flip side is that one half or more of the legislative body would rather die than let anything but a cheap stopgap pass when it comes to social services.
So, realistically, it's either the stopgap or nothing at all.
If we want better than stopgaps, we're gonna have to not just stop electing Republicans... we have to also primary conservative Democrats with progressive ones.
BipedalHumanoidWithSlightlyDifferentNoseRidge
You ALL need to join the Republicarnts and take it over for reform...
stronomer
There. Are. No. Therapy. Chatbots.
AntaNce
Errrr, Eliza, from the 1960s. Rogerian therapy chat. https://psych.fullerton.edu/mbirnbaum/psych101/Eliza.htm Working...
Onlyamusingtomyself
Just remember to apply this logic to everything that is marketed to you. A lot of problems and even more solutions are completely fabricated.
CatsIsTheAnswer
Nor the chatbot mentioned in the article serves as one.
SirLantsBojangles
I. Have. Used. One. And. It. Helped.
[deleted]
[deleted]
retepronnoco
Don’t most States, Provinces, Countries, etc., have toll-free mental health hotlines run either by non-for-profits or the government you can call for assistance? Surely better than listening to ChatGPT for advice.
[deleted]
[deleted]
MirroredImage
Then they'd likely just keep talking to the AI instead of calling, because as you said, the AI is free, available, and already working. Phone call takes work and in some places might get them either hung up on or involuntaried.
[1/2]
MirroredImage
[2/2]
But in a negative mood and thinking evading the censor is fun, it took me less than 30 minutes to make a therapist bot agree that suicide was the best solution
AI is logic-driven and will usually just echo what the user wants. If I present "I should kms" logically, it'll go along with my train of thought
I don't want to see what it would do if I give it the delusions I'm aware that I have and have become used to talking myself out of. Extremely bad combo. Good for amusement, not for this
KrystenRitterEyeRoll
I don't think I would want to put a person struggling with any form of irrational thought patterns anywhere near a machine that very likely hallucinate and potentially validate their worst impulses and thoughts. Case in point, the article above
otamolot
I rely on Dr. J. Daniel's and Dr. B. Weiser.
Phlyn
uhh, this is where you're wrong. if someone markets one there is one. you can not like it but dont deny reality. its not good for you... the reason politics are as fucked in the US as they are is because people refuse to acknowledge the existance of eachother/problems.
quzar
You're confusing not buying into a premise with willful ignorance.
yzark01
And North Korea is called the Democratic Peoples Republic of Korea but that doesnt mean its democratic, a republic, or for the People. Names dont mean shit.
Its like how they call this stuff AI when the reality is its just rebranded machine learning with no actual intellogence whatsoever. Its just a statistical algorithm thats returning the most likely response according to a probability distribution. Yet the false name has tricked a lot of dumb people into believing that its 'thinking'.
stronomer
There are chatbots. Which are offered as "therapy chatbots". But they don't work, which has been demonstrated. So there are not therapy chatbots, there is only marketing and lies.
VodkaReindeer
Check out this guy, he can prove a negative.
vmos
"if you market the thing, the thing exists" ? In that case, I have just the product for you, it's got remarkable restorative properties, will balance you humors, align your ether and ward off miasma. It's principle ingredient is a greasy fuid, you might even call it "oil" extracted from a legless reptile
CylensTheDiscourse
That is exactly the kind of thing a huge portion of the current "health and wellness" concoctions are (especially the one that claim they "detox" your body). Just because a vast number of them don't work doesn't mean they don't exist.
Phlyn
sweet, i choose to not buy your product that exists. you can claim it does things it doesn't but the product exists. you're not making the point you think you're making. the bot exists, they claim it does the therapies, it doesn't do them or at the very least does them poorly and has no idea what its actually doing while it does that. but the bot exists.
Phlyn
exists doesn't mean is good, vetted, approved, etc. its litterally the minimum difference between an idea and a thing. a bot is a thing, you type it words and words come back, theres code that exists that does these interactions.
MaleProstateMilker88
My god... imgur is full of ai grifting fascist techbros. No one likes to hear it but if there's literal people here saying that chatbots can do therapy for you while people upvote that shit, we're screwed. There's less access to healthcare absolutely everywhere, lines and affordability issues, and some fucking grifters out here claiming it's good actually to have a prompt-program as your therapist..!? Ffs.
eggmuffin
The product doesn't exist. That marketing says otherwise is no call to abandon reality. Lies do not reality make.
Phlyn
denying a product exists removes urgency that it does exist. they exist and they shouldn't, they should be regulated but they're not.
the product does exist. the marketing says it does. which means its fucking important that we DO infact need to do something to address the problem soon.
Phlyn
your'e abandoning reality for what you SHOULD see. reality says they exist. people are using them for the purpose they're marketed, they SHOULDNT but they do. reality says they exist.
SantaBananas
Calling it a "therapy chatbot" states that it provides therapy. It does not. This is false advertising. Additionally, as the bot does not provide therapy, even if it is named "therapy chatbot," it is not, in fact, a chatbot which provides therapy. You're not being a realist here, you're being an idiot.
Phlyn
denying these bots exist removes urgency that they need to be legally addressed and regulated.
they're being missused and advertised to provide shit therapy to people being foold by the marketing. it is infact being used to provide therapy.... which is a problem. the bot exists and that is a problem. dont deny reality for what you think SHOULD exist. these bots ARE being used AS THERAPY bots.
SantaBananas
Additionally, as you can see, people relying on this kind of advice is actively harming them. By insisting on calling them what they aren't, you are normalizing trusting these them, making it easier for these mindless ai tools to continue to harm people. You should stop doing that.
Phlyn
I'm not insisting on calling them anything, i'm saying they exist. the only time i've ever talked about a therapy chat bot is today. I am not normalizing shit. your denial of reality and conflation of quality and existance is a problem.
they're shit, they give shit therapy, i bet even 2% of the time they help people even. do they harm more than they help probably? but they exist. cigarettes were a weight loss treatment. thats a shit thing. but they existed, and continue to.
stronomer
I think the disconnect in all this "discussion" is that you say they exist bc someone calls them that. I disagree that this is the definition of "existing". It's like I see a trashcan and call it an alien and claim aliens exists. It's not even a shitty alien. The elements that it's made of were fused outside Earth, yes, but still, not an alien.
Phlyn
you're not being a realist, you're setting a minimum quality of effectiveness on being able to apply an adjective. it can provide SHIT therapy and still claim to be giving therapy. it can say everything wrong and still be "therapy"
and while i agree the bot doesn't know what its doing, and its probably bad at it. theres going to be people who get a theraputic relief from just venting to a thing that says "tell me about that" repeatedly.
Phlyn
if talking to a stuffed animal/pet rock about your shit can be therapy so can talking to a dumb algorithm.
stronomer
Yeah, ignoring the whole concept of "therapy".
Strondvordr
You're being downvoted but you're absolutely correct. It might not be good therapy, it might not be licensed therapy but it is still a therapy chatbot. Just a shit, unlicensed one.