zylokun
So why aren't they suing the woman who did this too? Fuck ChatGPT, but fuck the dumb Karen who did this. She deserves to be punished just as much.
gringissimo
I use Gemini every day for technical questions because it's easier than poring through Stack Overflow posts and Microsoft articles. I cannot imagine asking an LLM for life advice. It's not alive!
lightfoot2
It will just give them a sanction to kill the people they want to kill anyway.
isetprettygirlsonfire
https://media2.giphy.com/media/v1.Y2lkPWE1NzM3M2U1N253ajRjeGpieXBqN2YwajB5YXh6ejdvOGxkc3g3NGw2Ymp6NmN4MCZlcD12MV9naWZzX3NlYXJjaCZjdD1n/gLKVCVdLUXMTeIs6MD/200w.webp
RaZorHamZteR
Sued ChatGPT for being an idiot...
djangojazz
As a software developer, this is every day for me. People who won't STFU about how great AI is, and they can't even differentiate between the companies or recognize good uses of it.
shalafi71
AI is a love/hate thing with little nuance on either side. It's going to be an economic and ecological disaster, yet I recognize use cases.
landbaronness42
This happened to a law firm here in Alabama too. A prisoner was abused and sued the state. The state's law firm wrote a brief using AI that quoted precedents from cases that didn't exist and turned it over to a judge. Defense lawyers caught it. The judge had all three of the state lawyers come in and explain why they shouldn't be disbarred for presenting fake evidence.
Leitzout2
Things are getting a little Cyberdyne-y around here.
SisyphusRollin
Nah, this isn't new tech. This is no different than listening to your dumb uncle and trying to be your own lawyer or some shit. Not to mention this is a fake story, but nonetheless the accountability isn't on the tech. It's on the person.
TacticoolWolf
I wish people would stop pretending that AI is some super brain and realize it's just a chatbot with Internet search results.
Gerokeymaster
Also, the reason they just signed up with the US government is because the government's previous AI partner refused to allow them to use their product without a written, binding contract that they wouldn't use it for domestic surveillance or autonomous weaponry. OpenAI *claims* that the government totally promised them it won't do either of those things, but is dancing around the question of whether or not they got it in writing (which was Anthropic's big sticking point).
luckybreak91
Yeah, I was a little annoyed with Imgur that I didn't see anyone talking about that here. Trump and Hegseth are literally trying to destroy the company because it had actual moral lines in the sand. They might still pull through and succeed, but they were basically guaranteed to be one of the most successful companies, with one of the richest CEOs around, as long as they played along, and they continued to say no even when threatened with the harshest possible punishments a company can face.
shalafi71
I'm cheering Anthropic for this, never thought I'd say that.
luckybreak91
I know Imgur hates AI companies and CEOs, and Anthropic has done plenty of stuff I don't like, but it's pretty rad to see people stick to their beliefs that hard. Mad respect.
Still not using AI, but pretty cool of them.
SavageDrums
100% believe this, but... Citation please?
Feralkyn
https://www.reuters.com/legal/legalindustry/openai-hit-with-lawsuit-claiming-chatgpt-acted-an-unlicensed-lawyer-2026-03-05/
SavageDrums
Awesome. Awesome to the max.
rulerofthedingdongs
I did a simple task using ChatGPT with my students.
The task: Ask it to create 4 different logos for you with your name, and images of 2 or 3 things that are important to you.
We tallied the results. 200 logos created. 72 had massive errors (names spelled wrong, a guitar instead of a cello…), 32 had minor errors (the dog was bigger than the bicycle, the 8-year-old brother was a baby…). Batting 50%. It taught them to be suspicious of what they get as results and to look things over carefully.
CaptainDiddleFartingAround
This is the worst time-line.
intaglioguy
But you don't understand. Billionaires need a grift to become even richer. That AI is functionally making shit up as it goes along is irrelevant. So are the deaths of servicemen its use will result in. They're just expendable to the needs of the parasites.
shalafi71
At this point it ain't about getting richer, it's about preventing their personal and national economic collapse. Never seen such a bubble.
SavageDrums
They're desperate, and when this all comes crashing down, Tr*mp will devalue the US dollar by printing a few trillion dollars to prop it up.
GimcrackGewgaw
This didn't happen. 40+ filings and $300K in legal fees are completely bullshit numbers. That is not how this would go down in any US jurisdiction with any US judge. Whoever wrote this fiction is not a litigator.
Feralkyn
https://www.reuters.com/legal/legalindustry/openai-hit-with-lawsuit-claiming-chatgpt-acted-an-unlicensed-lawyer-2026-03-05/ I have no idea about the numbers, but "The lawsuit seeks an order declaring that OpenAI violated Illinois' unauthorized practice of law statute, as well as $300,000 in compensatory damages and $10 million in punitive damages."
GimcrackGewgaw
Yes, attorneys and pro se litigants alike use AI inappropriately. Attys are getting fired for it left and right at big firms. But there was never a case, and never will be, where an AI wrote 40+ filings that took $300K in legal fees to fight.
Feralkyn
You said "whoever wrote this fiction," so I'm showing you that apparently the "whoever" is in fact a real person who has submitted the claim.
But yes the next line, as per the source I have linked you, is "OpenAI in a statement on Thursday said 'this complaint lacks any merit whatsoever.'"
dmbpapa
I checked this. Nippon Insurance of America is suing OpenAI due to spending a few hundred thousand investigating legal precedents that OpenAI just made up. Hilarious.
johnmaxwell1360
This is hard to believe entirely. Why would the other side spend time responding to a legal argument without looking the case up to see what else it said?
Feralkyn
https://www.reuters.com/legal/legalindustry/openai-hit-with-lawsuit-claiming-chatgpt-acted-an-unlicensed-lawyer-2026-03-05/
SavageDrums
Because literally everyone is massively overworked and stressed and cutting corners to make some dumb MBA who doesn't understand how anything actually works happy.
nicelyvillainous
Looking up the case to see what it said is expensive. If someone cites a case from 1842 as precedent and you just don't find it, how do you prove it never actually happened? So instead you have lawyers combing through law libraries to look for cases that were still on paper and not digitized. And you do that for all the cases, because if you show that the argument used 10 fake cases, the other side's lawyer gets censured and fined and possibly disbarred, but if the 11th case was real, you lose.
TheMysteriousTraveller
Not a lawyer, but I think if you don't respond, then you're just giving up the argument and saying the other person is correct. So if you don't respond, you lose by default.
jamiedBreaker
Also not a lawyer, but also what I believe to be correct - you must respond to discovery. So you have to spend time to extra-super-100% make sure that things are made up. It's deceptively hard to prove something doesn't exist.
keithabandit
Source or it didn't happen...
Feralkyn
https://www.reuters.com/legal/legalindustry/openai-hit-with-lawsuit-claiming-chatgpt-acted-an-unlicensed-lawyer-2026-03-05/
keithabandit
But, ty for the reference, appreciate you digging it out.
I'm kind of a fan, tbh. I find both law and medicine are strongly gated communities, where law firms and insurance companies will bury people in paper but get kind of annoyed when it happens to them. We need more of this, not less. (Well, more legal "this"; I'm agreeing with less military-intelligence "this".)
keithabandit
The link (and the backing court case) doesn't show any evidence that she fired her attorney on the advice of ChatGPT; the legal relationship appears to have ended before ChatGPT entered the picture.
Based on the complaint, there's only one documented hallucination, a fabricated case. The "cite laws that don't exist", "cases that never happened", or "judges that never ruled" seem like pure hyperbole?
Feralkyn
I've no idea!
nimeton0
At least Grok [AI] got one thing right!
DukePhelan
We're screwed now that the AI has hands.
Tenshiiy
keep thy red planet... mums like MARS... it's a snack
Sebastopol140
That's some IA art I'm ok with. Should be framed and displayed in museums.
Sebastopol140
*AI
MichikoTheJungleFox
Brutal and true.
Mithi
Next lobotomy in 5, 4, 3 ...
cousteau
It happens sometimes.
HuCared
Also: the penguin is facing the other way. So it's not really AI removing the pedophile, it's the story of a penguin who traveled literally to the other side of the globe to lead Trump into the deadly ice of Greenland and to return home alone. That is not only a very brave penguin but also a story a vast majority of Greenlanders will approve.
cousteau
I see it more as "fuck, that was my ride back home, where did he go?" (poor penguin doesn't know the bullet it dodged)
Starbolt81
Someone give that penguin a Nobel peace prize
KarenFromTheHOA
It's not even the correct hemisphere
LordofGoats
What are the chances a given MAGAt knows that?
KarenFromTheHOA
Low, but never zero
Nostradamuswaswrong
Oh no, those are just the Mountains of Madness.
pintgudge1975
Regime change in the United States now!
fickeroffick
The tech regime isn't going anywhere.
PorterPickUp
Don't forget the Pentagon also requires all the AI companies to turn off their safety and morality guards to get those contracts.
freakdiablo
I've been playing around with Gemini, and even with the guards in place I've had it offer sketchy solutions with just mild prodding. With the safeguards removed? Nope. Nopenopenope.
SavageDrums
Those guards are mostly theoretical anyway.
PerthAussieMike
*sigh* 'War Games'... https://www.imdb.com/title/tt0086567/
bitemark
'War Games' except WOPR cheats at Tic Tac Toe
WolfTarAnis
I thought this is fake honestly.
Turns out it's true and recent. Wild.
https://www.reuters.com/legal/legalindustry/openai-hit-with-lawsuit-claiming-chatgpt-acted-an-unlicensed-lawyer-2026-03-05/
keithabandit
Not fake, but wildly overstated: in those 40 filings, there was apparently a single fake citation.
sunnydayingermany
would you like to play "global thermonuclear distraction?"
DMSledge
How about a nice game of chess?
ElbowDeepInAPoliceState
Honestly, it would be kinda lit. But only because I live somewhere that I'm pretty sure would get nuked, so, you know. Lit right up until I'm blind and dying of radiation poisoning.
DarkParn
Not too far from the CDC in Atlanta, but like you, far enough to die slowly from radiation poisoning.
ElbowDeepInAPoliceState
Hey, at least it would probably be better than dying of starvation, cholera, dysentery, or general exposure. Because I rather doubt that food distribution and water treatment will be very functional in the aftermath of a nuclear war.
themobileappisbroken
If she had a lawyer already, why was she asking ChatGPT for legal help?
Einbrecher
Because lawyers are expensive and they think it's going to save them money. And/or the lawyer is telling them things that they don't want to hear and so they think they can do better. And/or the lawyer is bad at code switching, the client doesn't understand their advice, and so the client is running everything the lawyer tells them through ChatGPT/etc. as a translator. This behavior isn't generally new (they used to ask friends, other lawyers, etc.), but LLMs have made it worse.
Einbrecher
I'm a patent lawyer, and I'm increasingly seeing inventors decline a prior art search because "they already did one," and then they send me a report that was clearly AI-generated and is filled with hallucinated references. Which means I then bill them for (1) an actual prior art search, (2) the extra time I spent checking everything they gave me, and (3) the explanation about how they just disclosed and licensed their inventive concept to OpenAI/etc. and potentially fucked themselves.
fubizdaddie
Because "AI is better than experience and has the knowledge of all of humankind" and yes that's something I was told once.
meowingintensifies
maybe her lawyer was unresponsive or an asshole. they aren't saints. dealing w/ lawyers can be a headache. still dumb to defer to chatgpt.
Ezzy666
It was a settled case. She was found to no longer be injured. Her lawyer told her she could not sue again over a settled case. AI told her she could.
Jarjarthejedi
Arrogance. The case probably wasn't going how she expected, and rather than realize her expectations were out of line with reality, she decided her lawyer was a failure and that she could do it herself if she just knew the terms. A LOT of people think complex jobs are simple jobs hidden behind fancy terminology, and that if they could just break through the "mumbo jumbo" language they could do it easily.
shalafi71
Boom! That's where sovcits come from. They see legalese/rulings they don't understand and think they can "magic" their way around the law.
JAPONfan
one is free
SavageDrums
It costs way, way more. You just don't pay up front.
JAPONfan
to realize that you need enough intelligence to know to not use chat for legal counseling
ourari
This is painfully common, apparently. At least in the Netherlands. Lawyers here have reported en masse that their clients try to save money and time by trying to do the work themselves with the aid of LLMs, then asking their lawyer(s) to proofread and correct it. This, obviously, takes more time than just letting a professional do their job, but that doesn't stop people who've bought what Silicon Valley is selling.
SilverFoxChaser
I work with a LOT of lawyers. Trust me, they're more on the 'AI doing my work for me bus' than anyone else.
anerdwithaknife
I think it's common in many fields nowadays but the consequences get catastrophic in legal contexts so it becomes very noticeable there. But I've seen many people in different specialized crafts/trades writing about customers questioning or backseat driving their work with "GPT said .." when they're doing work in people's homes. (Electricians/plumbers/painters/etc)
Sebastopol140
Fired her lawyers for ChatGPT... BWAHAHA...
CandidGamera
I mean, yeah, what do lawyers do beyond talk and string impressive-sounding words together? That's basically an LLM, amirite? (I'd add a /s but I hope you reading this have more comprehension ability than an AI)
Sebastopol140
Bruh, many imgurians can't detect sarcasm.
theinternetkeepsmealive
Yeah, I mean you can't help someone that fucking stupid.
mymustachecallstheshots
Right? It's an LLM, not a research tool. It just makes conversation, Susan. It doesn't actually know anything. Lol.
Astramancer
I always tell people an LLM is not a knowledge engine, it's a language engine. It makes things that *sound* right, not things that *are* right.
Sebastopol140
It sounds like it knows... (how to be a lawyer, etc etc...). And that's it.
SisyphusRollin
Cool. This doesn't excuse the responsibility of the adult using it.
Beardedgeek72
Remember the mantra we tried to hit people over the head with a year ago? "It is not programmed to understand what a 'fact' is, it is programmed to sound like it knows what a 'fact' is."
Mithi
At least for skin cancer the medical part works. It's not an LLM though, but optical pattern matching. And it doesn't do the diagnosis, just a prescreening, so the doctor only has to look at spots that _might_ be a problem.
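In code terms it's basically a dumb filter sitting in front of the doctor, something like this sketch (score_spot and the 0.2 cutoff are made up for illustration, not any real product's API):

# toy prescreen: score each skin-spot image, pass only the suspicious ones to a human
def prescreen(spot_images, score_spot, threshold=0.2):
    # score_spot stands in for an image classifier returning a 0..1 "might be a problem" score
    return [img for img in spot_images if score_spot(img) >= threshold]

# the doctor then reviews only prescreen(images, score_spot), not every single spot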
RayMC
Oh, machine learning is absurdly powerful. Anyone who denies that doesn't know what they're talking about. The mistake is trying to replace everything with LLMs.
Sebastopol140
Yes, it's not all bad. The main problem is they're pushing things that are.
Mithi
Absolutely. LLMs are a toy, not a serious tool. Basically a souped up ELIZA
For the young ones: https://en.wikipedia.org/wiki/ELIZA
moonshadowkati
Allegedly. I read 3 articles on the subject, and none of them had any comment from the woman doing the suing or the chat logs from ChatGPT. So basically we just have the word of some company saying that it happened. They are suing OpenAI, so they have a vested interest in making this ChatGPT's problem as much as possible.
CheckFlop
Yeah, and one argument I'm sure they'll make is that she should have "fact checked" the output. There have been a few legal proceedings where real lawyers didn't do this and rubber-stamped motions. But lots of schools, including law schools, currently have courses on AI use in their fields.
But the biggest issue would be how ChatGPT presented the information. Was it "hey, here's the output, but you really should check this with an expert"? We'll see if the suit makes it to discovery.
InkyBlinkyPinkyAndClyde
Does ChatGPT ever tell people to check with an expert? It seems to present everything as if it *is* the expert.
CheckFlop
I only use it occasionally to grammar check emails and to make long convoluted stories to shit post on Facebook. I really try to tap into the hallucinations that a GPT makes. And since I'm only using the free version, I'm making them lose money.
tgrrdr
I use Gemini, and it's in tiny print, but at the bottom of the page it says "Gemini is AI and can make mistakes."
theTrueKenney
Any sources for those articles? This story seems unbelievable.
Beardedgeek72
This story seems VERY believable. Even if it is made up, worse things than this keep happening because people keep using AI, the worst thing to come out of computer tech ever, for things it should not be used for. I'm seeing colleagues upload documents to Grok to have them analyzed for them (so they don't have to do their job, which is analyzing them) on a daily basis... Let's just say we are very close to some kind of scandal breaking...
moonshadowkati
https://www.msn.com/en-us/news/crime/company-sues-openai-after-woman-allegedly-generated-fake-lawsuit-causing-300-000-in-legal-costs/ar-AA1XJvdJ
https://www.reuters.com/legal/legalindustry/openai-hit-with-lawsuit-claiming-chatgpt-acted-an-unlicensed-lawyer-2026-03-05/
https://www.abajournal.com/news/article/openai-sued-for-practicing-law-without-a-license
theTrueKenney
Thank you. Sure enough, this whole series of events is insane.
KR1570F
this is fine. the AI is going to recommend sending in Rico's Roughnecks to the communist collective in Bozeman, Montana, because they and some other communist space force are trying to stop the obviously cool and hyper-advanced Borg from Making America Hyperadvanced Again. And then Elon will rename the country Xmerica and we'll all be millionaires. /s
dynamojoe
I'm looking for the positives. "Hey ChatGPT, attack and kill enemies of the US Constitution."
"OK, bombing Nazis in Idaho! And Alabama. And Montana. And Mississippi. And Illinois. Drones sortied to kill the top ten Republican political donors. Heavy armor and demolition teams dispatched to FOX News. Anything else?"
509tigerfish
Yes please! More please
SilverFoxChaser
Forgot Washington DC.
Lynkfox
No stop!
I'm sorry the action is already committed. You're right I shouldn't have done that. Next time I won't
/s
Sebastopol140
Hey... that scenario sounds nice.
Idsertian
Illinois nazis? Man, I hate Illinois nazis!
Neurisko
Those actually *are* the enemies of the Constitution, so I'm not seeing the problem with this response.
shorey66
Yes that's the point....
Neurisko
Context was that ChatGPT made errors... Those weren't errors.
cytherians
If we give it the legs to do this... after the mission is accomplished it'll want to attack something else, having adopted the human mindset. Gotta have an ability to pull the plug once the needful is done!
ontarioOT
The white house.
cre8majic
trouble is, the definition of 'enemies' can be so widely interpreted and changeable depending on source
Lynkfox
The problem is AI is not intelligent. It cannot critically think. It cannot assess your true meaning.
It can only respond with extremely complex pattern matching that we as humans perceive as extremely human-like language. What it says or does is nothing more than a more complex version of the name and background generator you use for your RPG character.
It cannot and never will think for itself in its current form, and that means Garbage In, Garbage Out.
cre8majic
Precisely! And the verbiage is convincing enough that someone could read such meaning-free words without confirming or denying any facts that are presented! AI's biggest danger is that it can present opinions as facts, and folks today don't need any more opinions than what they already have!
SavageDrums
"and making sure to wipe out every last billionaire for good measure."
Lampmonster
What the fuck? Illinois is blue! I mean not in the south where I am, but Chicago keeps them in check.
DMSledge
Don't worry, I'm sure it will be as accurate as ever with targeting only the nazis.
kbryant414
Pretty sure it's just a joke reference to The Blues Brothers, "I hate Illinois Nazis," but ironically, an LLM hunting Nazis would probably target Illinois based on that reference, unable to separate fiction from reality.
Sam8988378
You forgot the entire presidential order of succession
TheobromineAddict
Five levels deep would be:
1st: J.D. Vance (Vice President)
2nd: Mike Johnson (Speaker of the House)
3rd: Chuck Grassley (President pro tempore of the Senate)
4th: Marco Rubio (Secretary of State)
5th: Scott Bessent (Secretary of the Treasury)
emberfish
There was a post recently about a woman who hooked up her company's LLM to her email, and it started deleting all her emails no matter what she said. She had to run and physically unplug the machine.
MarsIsAfire
cite: https://www.fastcompany.com/91497841/meta-superintelligence-lab-ai-safety-alignment-director-lost-control-of-agent-deleted-her-emails
SavageDrums
It wasn't just A woman, it was one of the heads of AI at Meta.
A person in charge of a huge chunk of this bullshit.
ignotoCiResto
Specifically, if I'm not wrong, it was the woman in charge of making sure the AI doesn't do shit on its own or go against human orders.
And that AI did both.
69Voltage
Stupid is the person who obeys ChatGPT
SpamYarBlockers
This is missing the point of why all this is happening. No one is trying to make AI make fully accurate military decisions or anything like that. The only thing that matters is the corporate insiders making money and that money flowing to politicians in their pocket who then make the corpos more money.
sharikov
Have you seen the secretary of defense?
choppedliveraldente
We're all gonna suffer and die sooner than we would have
Efreeti
Ugh, one of the most disturbing things I've noticed the few times I've used ChatGPT or Gemini is how glazing they both are. Must appeal to a certain kind of person
TheoApollo
god my mom and her fiance love ChatGPT...I keep telling them not to believe it wholeheartedly, but they continue to ask it questions and believe its responses are accurate.
MyGreatestFearBoner
*who does not fact-check ChatGPT. Otherwise you're a WackGPT.
locksmith9
find/replace "obeys"/"ever uses"
Tenshiiy
ChatGPT is a TOOL!
backrideup9
That's the gov's plan. They already know how stupid their base is, and now they can blame AI for telling them to do all the horrible things they're about to do. 0% accountability for the rest of time.
Type17
Obeying AI is this decade's equivalent of following a SatNav's directions into a river.
dalaiyoda
Sure. Also: There's shitloads of stupid people.
adamlstf9
So fuckin many of them.
SisyphusRollin
Yeah, that's the problem, not some tech.
akafluffy
the tech is 100% making the problem worse; more people are stupid because they got through school using AI to cheat
SisyphusRollin
That problem existed before technology did. People willing to cut corners isn't new; it's not the tech that's the problem or at fault. The TI-80 calculator didn't have bad intent when someone saved test answers on it. AI is also helping cure cancers and doing many other things besides a few middle schoolers turning in shit papers.
SisyphusRollin
I remember when the fear of not having to go to the library to get information (not even the internet, just fucking ENCARTA) prompted the same level of argument. Yes, AI is a lot more robust, but it's just a chatbot, a search engine. In the example you give it's an autocomplete; it's not evil. It's not horrible. People are accountable for their actions, whether they write the answers really small on a pencil or use AI to write a paper. The pencil, the AI, those aren't to blame.
akafluffy
for the record, yes, it Is evil, it's literally a plagiarism machine. Sure you can use plagiarism for good, but that doesn't make plagiarism good.
SubiBryant
ChatGPT said this is false.
69Voltage
That's all I need to hear. ChatGPT it is!
HandoB4Javert
Grok covfefes.
Mithi
20 GOTO 10
(let's see who still speaks the Old Language)
Wolfshead009
10 PRINT "HELLO WORLD"
BJWTech
REPEAT 5 [FORWARD 50 RIGHT 72] -- Something like that
SigilLovesSpace
ChatGPT said THIS is false
largomatic
Of course it does!
ToSisPoS
Apologies! You're absolutely right, shooting someone in the head is not therapeutic for migraines! I'll try and correct that in the future.
mondeca
ChatGPT says you are NOT the father
SubiBryant
Quit making me like AI.
ourari
Yes, but that doesn't absolve OpenAI from its responsibility not to force this shit onto societies.
crateo
Legally speaking, it does.
SisyphusRollin
Who forced her to do this?
Jarjarthejedi
Oh, 100% not. It takes an idiot to trust the crap, but it takes an evil asshole to advertise it as usable.
SisyphusRollin
I mean... what wasn't usable about it?
Jarjarthejedi
...Are you asking what wasn't usable about the program that confidently told this woman how to file court docs that were incorrect and resulted in her losing a case she could have potentially won with someone actually competent? Really?
SisyphusRollin
You guys act like human beings LOSE ALL ACCOUNTABILITY for their actions and somehow AI is controlling them. What the fuck, man, is it the GPS's fault if you drive into a lake while you're staring at the lake and it says "drive forward"... like what?
SisyphusRollin
Yeah. It's not a lawyer, it's not a replacement for lawyers, it's a chatbot. She got it to do what she wanted; it did its function. If I use a nail gun to stir my coffee, it's not the nail gun's fault I hurt myself and break my coffee cup. It did what it was supposed to do, it shot nails into my cup. I'm the idiot who used the tool wrong.
Tekktokk
I think of LLMs as tools to be used responsibly. If your GPS told you to make a left into the lake or drive through high water, would you do it? If some dude on Reddit told you to invest all your money in crypto, would you? So why 100% trust a tool that is probably 80% Reddit, haha
ChicanoBatman
I have an Interior Design firm, and I use it daily for dimensional layouts, color swatches, and showing clients what that giant Dogs Playing Poker print will look like in their new living room. But, and it's a strong but, I would never trust it for CAD, give a vendor final dims without inspection, or write my contracts with it. It's a tool, as noted. But trust the people who are specialized in their craft, not a bot.
69Voltage
Exactly. Tech is a tool... it's up to the user to determine whether or not to trust it, how to use it, etc.
SisyphusRollin
This nail gun won't mix my coffee right! Every time I shoot the nails into my coffee cup it doesn't stir the coffee, it just breaks the cups. It's EVIL to sell nail guns.
ColdDragon
I believe the problem here, compared to those examples, is that it speaks with complete confidence in what it tells you and flatters the users to obscene levels while doing it. All while being sold as "like having multiple PhD-level geniuses answer your questions" and "smarter than humans" and "will one day turn into god", as it tells you whatever you want to hear and that you are effectively smarter than it. It's not setting up a scenario for uninformed people to use it as just a tool.
SisyphusRollin
It's fiction.
Sonicschilidogs
To be fair to the examples, GPS also speaks with absolute confidence, and people have followed it into rivers, lakes, and oceans. Remember when Apple Maps rolled out.....
JustaLawAbidingCitizen
My coworker has it make medical decisions for her. We work in medicine. She's taking hundreds of dollars of supplements and multiple GLP-1s.
HuCared
If you value your coworker, for the love of... science... talk sense into her. This is quite possibly life-threatening. AI is advising people to put glue on their pizza to stop the cheese from sliding off. If you had one friend who suggested this even once, while drunk, you'd never listen to his advice again.
modus0
Don't forget, there were medical personnel *against* the Covid vaccines as well...
JustaLawAbidingCitizen
Sadly I worked with several of them. In the darkest part of the pandemic. In a level 1 trauma hospital. At over 200% capacity. With an ER lined with hallway beds of people gasping for air....
freakdiablo
I fully believe it. One of our IT specialists uses it for IT issues. I worked in IT before my current role. He's taken over an hour to fix an issue that A) takes under 10 minutes and B) I told him what the issue was half an hour in.
For the record, they put an old server image on a new server and couldn't get the onboard NIC working. I told them the image was 4+ years old at that point, try an updated driver. That ended up fixing it.
People will trust an LLM before their own common sense.
Fishy820
Jesus fin Christ, is this what my field has become now?!? People with zero goddamn knowledge? It's like back in the day when they made people with an accounting background do IT. LMAO
freakdiablo
It's terrible. IT is its own field, but waaay too many see it as just a "computer gig".
SisyphusRollin
"Bill knows computers right?" I remember those days, showing up as an actual IT tech and learning who the culprit was. Who knew "just enough" to cause all this damage. Don't act high and mighty though, we were kings of google/yahoo search for problem solving as well.
Fishy820
Haha, I'm pre Google and Yahoo. Yeah it's time to take my daily meds and my back hurts, lol.
Skevoid
The thing to remember here is, there's no 'AI' because there's no intelligence. What they're giving access to military intelligence is a large language model, literally something whose only function is to mimic the way a human might use language.
Kwyjor
Claw machine with shredded newspapers.
Maelwrath
It will just compile hallucinated justifications for military action against everybody and everything citing earlier operations, political theaters, propaganda and who knows what. These language models are designed to bend over backwards to please the user asking it anything even if it means literally making things up and suggesting something that couldn't even exist. LLMs are nothing more than overglorified hype machines that destroy the environment.
sylkysmooth
THANK. YOU. God I wish more people would see this. They will make up shit to make you happy. If they "think" you want a certain response they'll do everything they can to give it to you. It's what they do. They're not AI, at the absolute best they're artificial yes-men.
jonathantoast
Yes. Even "hallucinate" isn't the right term, because it's doing what it's designed to do. Mimic language.
Mithi
Mimics, you say? #grabbing_shotgun
MarsIsAfire
Yeah, "hallucinate" is a euphemism, used to hand wave away serious issues.
tetondons
That they even work is a wonder. They take nearly all text in existence to determine the most statistically likely next word based on the prompt. Then they do it again with the word they just guessed, and so on. That this even makes an intelligent-sounding response is surprising.
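Roughly, the loop looks like this (a toy sketch; predict_next_token is a made-up stand-in for the model, not any real library's API):

# toy autoregressive generation: predict a next token, append it, feed everything back in, repeat
def generate(prompt_tokens, predict_next_token, max_new_tokens=50):
    tokens = list(prompt_tokens)
    for _ in range(max_new_tokens):
        probs = predict_next_token(tokens)                      # a probability for every token in the vocabulary
        best = max(range(len(probs)), key=lambda t: probs[t])   # greedy: take the most likely one
        tokens.append(best)
    return tokens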
DerpyBestPrincess
That's not even remotely how LLMs (or any GAIs really) work. This is nonsense spewed by people who jump on the hate train without doing any research. First of all, LLMs don't understand "words" or "letters". They operate on tokens. Tokens are then translated into whatever they need to be, be it a letter, word, sentence, calculation, part of an integrated system process... whatever the token is most likely to be fit for that response. The next token is then produced contextually, not from the previous one,
DerpyBestPrincess
but from trillions of artificial neurons that have already reasoned out a general line of output. It has already decided "I want to say this in this manner", but it hasn't figured out how to formulate it yet. It's why you also get two options on most major LLMs, so you can pick the presentation that best fits your conversational style and tells the LLM how it should translate its tokens and contextual reasoning. This is EXTREMELY important in things like Claude Code, which has the ability to-
DerpyBestPrincess
- literally program entire grand projects entirely on its own, simply by you telling it what you want, how you want it, and guiding it.
shalafi71
Yep. The idea that LLMs are "next word" machines is mind-blowingly naive. The idea is prima facie absurd, unworkable if you think on it.
tetondons
LOL. I can tell you're not a software engineer. "They're not words, they're tokens! Whatever token is most likely to fit that response!" Yeah, dipshit. When I translate that to English for you plebs, I tell you "they guess the most likely next word." Because that's literally what you just said they did.
DerpyBestPrincess
Here's my GitHub: https://github.com/AtlasRedux
DerpyBestPrincess
I quote:
Unlike traditional language models that generate responses immediately, reasoning models allocate additional compute, or thinking, time before producing an answer to solve multi-step problems. OpenAI introduced this terminology in September 2024 when it released the o1 series, describing the models as designed to "spend more time thinking" before responding. ... In operation, reasoning models generate internal chains of intermediate steps, then select and refine a final answer.
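A crude way to picture that "spend more time thinking" step (purely illustrative; sample_chain, score_chain and refine are made-up stand-ins, not OpenAI's actual internals):

# toy sketch of a reasoning-style loop: draw several candidate chains of intermediate
# steps, keep the one a scorer likes best, then refine it into the final answer
def answer_with_reasoning(question, sample_chain, score_chain, refine, n_candidates=8):
    candidates = [sample_chain(question) for _ in range(n_candidates)]  # extra compute spent "thinking"
    best = max(candidates, key=score_chain)                             # select
    return refine(question, best)                                       # refine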
DerpyBestPrincess
Just to clarify; I am an engineer and literally wrote (as you can see on my GitHub) several pieces of AI software, and I run LLMs locally that I have trained myself on my H100 build.
DerpyBestPrincess
https://magazine.sebastianraschka.com/p/understanding-reasoning-llms
DerpyBestPrincess
Here. THIS is how LLMs work. Not by "guessing the next word". They consider the entire context of the topic, generate a rough internal "this is what I want to say" through extremely deep reasoning across trillions of artificial neurons, fact check, re-check, and then come up with a conclusion, which it then has to translate from "internal reasoning speech" to human speech.

tetondons
Wrong. https://www.youtube.com/watch?v=LPZh9BOjkQs
TheMuellmann
It's both simpler and more complicated than that, as far as I understand it.
It translates language into tokens, which are not words as such, but a number of consecutive characters. For those, the probability of occurring next to each other is evaluated from vast amounts of existing text (or codified graphics, sounds, etc.). Following that, the LLM is trained on which output is closer to or farther from the expected output for a certain input. For the bigger products out now, this process has /1
TheMuellmann
additional input the search engine gives for the original input, more "context", if you will. But it still processes this additional input via probability concerning its tokens. It doesn't take new facts into consideration, it just brings its output up to "state of the art".
Btw, output in early training phases is very much unintelligible nonsense; there was a hell of a lot of work done before the now popular models went public.
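If it helps, here's a toy of just the token part (real tokenizers like BPE are far more sophisticated; the tiny vocabulary and greedy longest-match here are made up purely for illustration):

# toy tokenizer: greedily match the longest chunk of characters found in a tiny made-up vocabulary
VOCAB = ["the ", "cat ", "sat ", "on ", "mat", "c", "a", "t", "h", "e", "s", "o", "n", "m", " "]

def tokenize(text):
    tokens = []
    while text:
        # falls over on characters outside the vocabulary; a real tokenizer has a fallback
        chunk = next(v for v in sorted(VOCAB, key=len, reverse=True) if text.startswith(v))
        tokens.append(chunk)
        text = text[len(chunk):]
    return tokens

print(tokenize("the cat sat on mat"))  # ['the ', 'cat ', 'sat ', 'on ', 'mat']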
KingORedLions
There is no "deep reasoning". LLMs are completely incapable of reasoning, they are incapable of fact checking. They cannot "conclude" anything.
It is a probability-based system; a stochastic system wherein you cascade downwards through a series of possible outcomes and pick the most likely ones. Based on the current tokens, what is the most likely next token?
It is quite literally guessing the next word. It's just very good at producing something grammatically correct.
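The "pick the most likely next token" step really is about that small. A toy of it (the vocab and scores here are invented for illustration, not from any real model):

import math, random

def softmax(scores):
    # turn raw model scores into a probability distribution
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

vocab = ["lake", "road", "court", "banana"]
scores = [2.0, 1.5, 0.3, -1.0]                        # made-up scores a model might assign for the next token
probs = softmax(scores)

greedy = vocab[probs.index(max(probs))]               # the single most likely token
sampled = random.choices(vocab, weights=probs)[0]     # or sample, which is where the "stochastic" part comes in
print(greedy, sampled)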
DerpyBestPrincess
https://magazine.sebastianraschka.com/p/understanding-reasoning-llms
DerpyBestPrincess
https://www.ibm.com/think/topics/reasoning-model#:~:text=A%20reasoning%20model%20is%20a,to%20generating%20a%20final%20output.
DerpyBestPrincess
I quote:
Unlike traditional language models that generate responses immediately, reasoning models allocate additional compute, or thinking, time before producing an answer to solve multi-step problems. OpenAI introduced this terminology in September 2024 when it released the o1 series, describing the models as designed to "spend more time thinking" before responding. ... In operation, reasoning models generate internal chains of intermediate steps, then select and refine a final answer.
DerpyBestPrincess
Idiot.
DerpyBestPrincess
https://www.promptingguide.ai/guides/reasoning-llms
DerpyBestPrincess
https://en.wikipedia.org/wiki/Reasoning_model
shalafi71
If LLMs were merely "next word" machines they'd be incomprehensible. No idea why people believe that and keep repeating it.
tetondons
Because that's effectively what they are. https://www.youtube.com/watch?v=LPZh9BOjkQs
DerpyBestPrincess
Nope, they're not. You are so brainwashed and technologically incompetent, it's staggering.
DerpyBestPrincess
https://magazine.sebastianraschka.com/p/understanding-reasoning-llms