What a Time To Be Alive

Mar 8, 2026 12:22 PM

McPuffinStuff

Views

49954

Likes

1612

Dislikes

18

So why aren't they suing the woman who did this too? Fuck ChatGPT, but fuck the dumb Karen who did this too. She deserves to be punished just as much.

2 weeks ago | Likes 2 Dislikes 0

I use Gemini every day for technical questions because it's easier than poring through Stack Overflow posts and Microsoft articles. I cannot imagine asking an LLM for life advice. It's not alive!

2 weeks ago | Likes 2 Dislikes 0

It will allow them a sanction to kill the people that they want to anyway

2 weeks ago | Likes 7 Dislikes 0

Sued ChatGPT for being an idiot... 😲

2 weeks ago | Likes 2 Dislikes 0

As a software developer, this is every day for me. People who won't STFU about how great AI is, and they can't even differentiate the different companies or even the good usages of it.

2 weeks ago | Likes 6 Dislikes 0

AI is a love/hate thing with little nuance on either side. It's going to be an economic and ecological disaster, yet I recognize use cases.

2 weeks ago | Likes 1 Dislikes 0

This happened to a law firm here in Alabama too. A prisoner was abused and sued the state. The states law firm wrote a brief using AI that quoted precedents from cases that didn’t exist. Turned it over to a judge. Defense lawyers caught it. Judge had all three of the state lawyers come in and explain why they shouldn’t be disbarred for presenting fake evidence.

2 weeks ago | Likes 2 Dislikes 0

Things are getting a little Cyberdyne-y around here.

2 weeks ago | Likes 2 Dislikes 0

Nah, this isn't new tech. This is no different than listening to your dumb uncle and trying to be your own lawyer or some shit. Not to mention this is a fake story, but nonetheless the accountability isn't on the tech. It's on the person.

2 weeks ago | Likes 2 Dislikes 1

I wish people would stop pretending that AI is some super brain and realize it's just a chatbot with Internet search results.

2 weeks ago | Likes 2 Dislikes 0

Also, the reason they just signed up with the US government is because the government's previous AI partner refused to allow them to use their product without a written, binding contract that they wouldn't use it for domestic surveillance or autonomous weaponry. OpenAI *claims* that the government totally promised them it won't do either of those things, but is dancing around the question of whether or not they got it in writing (which was Anthropic's big sticking point).

2 weeks ago | Likes 12 Dislikes 0

Yeah, I was a little annoyed with Imgur that I didn't see anyone talking about that here. Trump and Hegseth are literally trying to destroy the company because it had actual moral lines in the sand. They might still pull through and succeed, but they had basically a guarantee of being one of the most successful companies, with one of the richest CEOs around, as long as they played along; instead they continued to say no even when threatened with the harshest possible punishments a company can face.

2 weeks ago | Likes 5 Dislikes 0

I'm cheering Anthropic for this, never thought I'd say that.

2 weeks ago | Likes 4 Dislikes 0

I know Imgur hates AI companies and CEOs, and Anthropic has done plenty of stuff I don't like, but it's pretty rad to see people stick to their beliefs that hard. Mad respect.

Still not using AI, but pretty cool of them.

2 weeks ago | Likes 5 Dislikes 0

100% believe this, but... Citation please?

2 weeks ago | Likes 5 Dislikes 1

I did a simple task using Chat GPT with my students.
The task: Ask it to create 4 different logos for you with your name, and images of 2 or 3 things that are important to you.
We tallied the results. 200 logos created. 72 had massive errors (names spelled wrong, a guitar instead of a cello…) 32 had minor errors (the dog was bigger than the bicycle, the 8 year old brother was a baby…). Batting 50%. It taught them to be suspicious of what they get as results and to look things over carefully.
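For what it's worth, that "batting 50%" checks out. A quick sketch of the arithmetic, using the counts from the tally above:

```python
# Classroom logo experiment: 200 logos generated, 72 with massive errors
# (misspelled names, wrong instrument), 32 with minor errors (wrong
# relative sizes, wrong ages). Counts are from the comment above.
total = 200
massive = 72
minor = 32

flawed = massive + minor
clean = total - flawed
print(f"{flawed}/{total} had some error ({flawed / total:.0%}); {clean} were clean")
# → 104/200 had some error (52%); 96 were clean
```

So roughly half the outputs were usable as-is, which is the point the exercise drove home.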

2 weeks ago | Likes 3 Dislikes 0

This is the worst time-line.

2 weeks ago | Likes 2 Dislikes 0

But you don't understand. Billionaires need a grift to become even richer. That AI is functionally making shit up as it goes along is irrelevant. So are the deaths of servicemen its use will result in. They're just expendable to the needs of the parasites.

2 weeks ago | Likes 9 Dislikes 0

At this point it ain't about getting richer, it's about preventing their personal and national economic collapse. Never seen such a bubble.

2 weeks ago | Likes 2 Dislikes 0

They're desperate, and when this all comes crashing down, Tr*mp will devalue the US dollar by printing a few trillion dollars to prop it up.

2 weeks ago | Likes 3 Dislikes 0

This didn't happen. 40+ filings and $300K in legal fees are completely bullshit numbers. That is not how this would go down in any US jurisdiction, with any US judge. Whoever wrote this fiction is not a litigator.

2 weeks ago | Likes 6 Dislikes 1

https://www.reuters.com/legal/legalindustry/openai-hit-with-lawsuit-claiming-chatgpt-acted-an-unlicensed-lawyer-2026-03-05/ I have no idea about the numbers, but "The lawsuit seeks an order declaring that OpenAI violated Illinois' unauthorized practice of law statute, as well ⁠as $300,000 in compensatory damages and $10 million in punitive damages."

2 weeks ago | Likes 7 Dislikes 0

Yes, attorneys and pro se litigants alike use AI inappropriately. Attorneys are getting fired for it left and right at big firms. But there was never a case, and never will be, where an AI wrote 40+ filings that took $300K in legal fees to fight.

2 weeks ago | Likes 2 Dislikes 2

You said "whoever wrote this fiction," so I'm showing you that apparently the "whoever" is in fact a real person who has submitted the claim.

But yes, the next line, per the source I linked you, is: "OpenAI in a statement on Thursday said 'this complaint lacks any merit whatsoever.'"

2 weeks ago | Likes 5 Dislikes 1

I checked this. Nippon Insurance of America is suing OpenAI due to spending a few hundred thousand investigating legal precedents that OpenAI just made up. Hilarious.

2 weeks ago | Likes 4 Dislikes 0

This is hard to believe entirely. Why would the other side spend time responding to a legal argument without looking the case up to see what else it said?

2 weeks ago | Likes 8 Dislikes 1

Because literally everyone is massively overworked and stressed and cutting corners to make some dumb MBA who doesn't understand how anything actually works happy.

2 weeks ago | Likes 8 Dislikes 0

Looking up a case to see what it said is expensive. If someone cites a case from 1842 as precedent and you just don't find it, how do you prove it never actually happened? So instead you have lawyers combing through law libraries looking for cases that are still on paper and not digitized. And you do that for all the cases, because if you show that the argument used 10 fake cases, the other side's lawyer gets censured, fined, and possibly disbarred, but if the 11th case was real, you lose.

2 weeks ago | Likes 6 Dislikes 0

Not a lawyer, but I think if you don't respond, you're conceding the argument and saying the other person is correct. So if you don't respond, you lose by default.

2 weeks ago | Likes 5 Dislikes 0

Also not a lawyer, but also what I believe to be correct - you must respond to discovery. So you have to spend time to extra-super-100% make sure that things are made up. It's deceptively hard to prove something doesn't exist.

2 weeks ago | Likes 5 Dislikes 0

Source or it didn't happen...

2 weeks ago | Likes 5 Dislikes 1

But, ty for the reference, appreciate you digging it out.

I'm kind of a fan, tbh. I find both law and medicine are strongly gated communities, where law firms and insurance companies will bury people in paper, but get kind of annoyed when it happens to them. We need more of this, not less. (Well, more legal "this"; I'm agreeing with less military intelligence "this".)

2 weeks ago | Likes 1 Dislikes 0

The link (and the backing court case) doesn't show any evidence that she fired her attorney on the advice of ChatGPT; the legal relationship appears to have ended before ChatGPT entered the picture.

Based on the complaint, there's only one documented hallucination, a fabricated case. The "cite laws that don't exist," "cases that never happened," and "judges that never ruled" bits seem like pure hyperbole?

2 weeks ago | Likes 2 Dislikes 0

I've no idea!

2 weeks ago | Likes 1 Dislikes 0

At least Grok [AI] got one thing right!

2 weeks ago | Likes 176 Dislikes 6

We're screwed now that the AI has hands.

2 weeks ago | Likes 1 Dislikes 0

keep thy red planet... mums like MARS... it's a snack

2 weeks ago | Likes 1 Dislikes 0

That's some IA art I'm ok with. Should be framed and displayed in museums.

2 weeks ago | Likes 11 Dislikes 1

*AI

2 weeks ago | Likes 1 Dislikes 0

Brutal and true.

2 weeks ago | Likes 19 Dislikes 1

Next lobotomy in 5, 4, 3 ...

2 weeks ago | Likes 8 Dislikes 0

It happens sometimes.

2 weeks ago | Likes 59 Dislikes 0

Also: the penguin is facing the other way. So it's not really AI removing the pedophile; it's the story of a penguin who traveled literally to the other side of the globe to lead Trump into the deadly ice of Greenland and return home alone. That is not only a very brave penguin but also a story the vast majority of Greenlanders will approve of.

2 weeks ago | Likes 6 Dislikes 0

I see it more as "fuck, that was my ride back home, where did he go?" (poor penguin doesn't know the bullet it dodged)

2 weeks ago | Likes 2 Dislikes 0

Someone give that penguin a Nobel peace prize

2 weeks ago | Likes 5 Dislikes 0

It's not even the correct hemisphere

2 weeks ago | Likes 12 Dislikes 0

What are the chances a given MAGAt knows that?

2 weeks ago | Likes 4 Dislikes 0

Low, but never zero

2 weeks ago | Likes 1 Dislikes 0

Oh no, those are just the Mountains of Madness.

2 weeks ago | Likes 5 Dislikes 0

Regime change in the United States now!

2 weeks ago | Likes 4 Dislikes 0

The tech regime isn't going anywhere.

1 week ago | Likes 1 Dislikes 1

Don't forget the Pentagon also requires all the AI companies turn off their safety and morality guards to get those contracts.

2 weeks ago | Likes 7 Dislikes 1

I've been playing around with Gemini, and even with the guards in place I've had it offer sketchy solutions with just mild prodding. With the safeguards removed? Nope. Nopenopenope.

2 weeks ago | Likes 4 Dislikes 0

Those guards are mostly theoretical anyway.

2 weeks ago | Likes 4 Dislikes 0

*sigh* 'War Games'... https://www.imdb.com/title/tt0086567/

2 weeks ago | Likes 240 Dislikes 1

'War Games' except WOPR cheats at Tic Tac Toe

2 weeks ago | Likes 4 Dislikes 0

I thought this is fake honestly.

Turns out it's true and recent. Wild.
https://www.reuters.com/legal/legalindustry/openai-hit-with-lawsuit-claiming-chatgpt-acted-an-unlicensed-lawyer-2026-03-05/

2 weeks ago | Likes 11 Dislikes 1

Not fake, but wildly overstated — in those 40 filings, there was apparently a single fake citation.

2 weeks ago | Likes 2 Dislikes 0

would you like to play "global thermonuclear distraction?"

2 weeks ago | Likes 14 Dislikes 0

How about a nice game of chess?

2 weeks ago | Likes 3 Dislikes 0

Honestly, it would be kinda lit. But only because I live somewhere that I'm pretty sure would get nuked, so, you know. Lit right up until I'm blind and dying of radiation poisoning.

2 weeks ago | Likes 4 Dislikes 0

Not too far from the CDC in Atlanta, but like you, far enough to die slowly from radiation poisoning.

2 weeks ago | Likes 2 Dislikes 0

Hey, at least it would probably be better than dying of starvation, cholera, dysentery, or general exposure. Because I rather doubt that food distribution and water treatment plants and distribution will be very functional in the aftermath of a nuclear war.

2 weeks ago | Likes 3 Dislikes 0

If she had a lawyer already, why was she asking chat gpt for legal help?

2 weeks ago | Likes 15 Dislikes 0

Because lawyers are expensive and they think it's going to save them money. And/or the lawyer is telling them things that they don't want to hear and so they think they can do better. And/or the lawyer is bad at code switching, the client doesn't understand their advice, and so the client is running everything the lawyer tells them through ChatGPT/etc. as a translator. This behavior isn't generally new (they used to ask friends, other lawyers, etc.), but LLMs have made it worse.

2 weeks ago | Likes 2 Dislikes 0

I'm a patent lawyer, and I'm increasingly seeing inventors decline a prior art search because "they already did one," and then they send me a report that was clearly AI-generated and is filled with hallucinated references. Which means I then bill them for (1) an actual prior art search, (2) the extra time I spent checking everything they gave me, and (3) the explanation about how they just disclosed and licensed their inventive concept to OpenAI/etc. and potentially fucked themselves.

2 weeks ago | Likes 2 Dislikes 0

Because "AI is better than experience and has the knowledge of all of humankind" and yes that's something I was told once.

2 weeks ago | Likes 9 Dislikes 0

maybe her lawyer was unresponsive or an asshole. they aren't saints. dealing w/ lawyers can be a headache. still dumb to defer to chatgpt.

2 weeks ago | Likes 11 Dislikes 1

It was a settled case. She was found to no longer be injured. Her lawyer told her she could not sue again for a settled case. AI told her she could

2 weeks ago | Likes 3 Dislikes 0

Arrogance. The case probably wasn't going how she expected, and rather than realize her expectations were out of line with reality, she decided her lawyer was a failure and that she could do it herself if she just knew the terms. A LOT of people think complex jobs are simple ones hidden behind fancy terminology, and that if they could just break through the "mumbo jumbo" language they could do it easily.

2 weeks ago | Likes 7 Dislikes 0

Boom! That's where sovcits come from. They see legalese/rulings they don't understand and think they can "magic" their way around the law.

2 weeks ago | Likes 2 Dislikes 0

one is free

2 weeks ago | Likes 9 Dislikes 2

It costs way, way more. You just don't pay up front.

2 weeks ago | Likes 5 Dislikes 0

to realize that you need enough intelligence to know not to use chat for legal counseling

2 weeks ago | Likes 2 Dislikes 1

This is painfully common, apparently. At least in the Netherlands. Lawyers here have reported en masse that their clients try to save money and time by trying to do the work themselves with the aid of LLMs, then asking their lawyer(s) to proofread and correct it. This, obviously, takes more time than just letting a professional do their job, but that doesn't stop people who've bought what Silicon Valley is selling.

2 weeks ago | Likes 27 Dislikes 0

I work with a LOT of lawyers. Trust me, they're more on the 'AI doing my work for me bus' than anyone else.

2 weeks ago | Likes 1 Dislikes 0

I think it's common in many fields nowadays but the consequences get catastrophic in legal contexts so it becomes very noticeable there. But I've seen many people in different specialized crafts/trades writing about customers questioning or backseat driving their work with "GPT said .." when they're doing work in people's homes. (Electricians/plumbers/painters/etc)

2 weeks ago | Likes 8 Dislikes 0

Fired her lawyers for ChatGPT... BWAHAHA...

2 weeks ago | Likes 231 Dislikes 0

I mean, yeah, what do lawyers do beyond talk and string impressive-sounding words together? That's basically an LLM, amirite? (I'd add a /s but I hope you reading this have more comprehension ability than an AI)

2 weeks ago | Likes 2 Dislikes 0

Bruh, many imgurians can't detect sarcasm.

2 weeks ago | Likes 2 Dislikes 0

Yeah, I mean you can't help someone that fucking stupid.

2 weeks ago | Likes 3 Dislikes 0

Right? It's an LLM, not a research tool. It just makes conversation, Susan. It doesn't actually know anything. Lol.

2 weeks ago | Likes 8 Dislikes 0

I always tell people an LLM is not a knowledge engine, it's a language engine. It makes things that *sound* right, not things that *are* right.

1 week ago | Likes 2 Dislikes 0

It sounds like it knows... (how to be a lawyer, etc etc...). And that's it.

2 weeks ago | Likes 11 Dislikes 0

Cool. This doesn't excuse the responsibility of the adult using it.

2 weeks ago | Likes 2 Dislikes 1

Remember the mantra we tried to hit people over the head with a year ago? "It is not programmed to understand what a 'fact' is; it is programmed to sound like it knows what a 'fact' is."

1 week ago | Likes 2 Dislikes 0

At least for skin cancer the medical part works. It's not an LLM, though, but optical pattern matching. And it doesn't do the diagnosis, just a prescreening, so the doctor only has to look at spots that _might_ be a problem.

2 weeks ago | Likes 3 Dislikes 0

Oh, machine learning is absurdly powerful. Anyone who denies that doesn’t know what they’re talking about. The mistake is trying to replace everything with LLMs.

2 weeks ago | Likes 3 Dislikes 0

Yes, not all of it is bad. The main problem is they're pushing the things that are.

2 weeks ago | Likes 2 Dislikes 0

Absolutely. LLMs are a toy, not a serious tool. Basically a souped up ELIZA
For the young ones: https://en.wikipedia.org/wiki/ELIZA

2 weeks ago | Likes 1 Dislikes 0

Allegedly. I read 3 articles on the subject, and none of them had any comment from the woman doing the suing or the chat logs from ChatGPT. So basically we just have the word of some company saying that it happened. They are suing OpenAI, so they have a vested interest in making this ChatGPT's problem as much as possible.

2 weeks ago | Likes 44 Dislikes 2

Yeah, and one argument I'm sure they'll make is that she should have "fact checked" the output. There have been a few legal proceedings where real lawyers didn't do this and rubber-stamped motions. But lots of schools, including law schools, are now offering courses on AI use in their fields.

But the biggest issue will be how ChatGPT presented the information. Was it "hey, here's the output, but you really should check this with an expert"? We'll see if the suit makes it to discovery.

2 weeks ago | Likes 6 Dislikes 0

Does ChatGPT ever tell people to check with an expert? It seems to present everything as if it *is* the expert.

2 weeks ago | Likes 1 Dislikes 0

I only use it occasionally to grammar check emails and to make long convoluted stories to shit post on Facebook. I really try to tap into the hallucinations that a GPT makes. And since I'm only using the free version, I'm making them lose money.

2 weeks ago | Likes 1 Dislikes 0

I use Gemini, and in tiny print at the bottom of the page it says "Gemini is AI and can make mistakes."

1 week ago | Likes 1 Dislikes 0

Any sources for those articles? This story seems unbelievable.

2 weeks ago | Likes 8 Dislikes 0

This story seems VERY believable. Even if it is made up, worse things than this keep happening because people keep using AI, the worst thing to come out of computer tech ever, for things it should not be used for. Seeing colleagues upload documents to Grok to have them analyzed (so they don't have to do their job, which is analyzing them) on a daily basis... Let's just say we are very close to some kind of scandal breaking...

1 week ago | Likes 1 Dislikes 0

https://www.msn.com/en-us/news/crime/company-sues-openai-after-woman-allegedly-generated-fake-lawsuit-causing-300-000-in-legal-costs/ar-AA1XJvdJ

https://www.reuters.com/legal/legalindustry/openai-hit-with-lawsuit-claiming-chatgpt-acted-an-unlicensed-lawyer-2026-03-05/

https://www.abajournal.com/news/article/openai-sued-for-practicing-law-without-a-license

2 weeks ago | Likes 11 Dislikes 0

Thank you. Sure enough, this whole series of events is insane.

2 weeks ago | Likes 8 Dislikes 0

this is fine. The AI is going to recommend sending in Rico's Roughnecks to the communist collective in Bozeman, Montana, because they and some other communist space force are trying to stop the obviously cool and hyper-advanced Borg from Making America Hyperadvanced Again. And then Elon will rename the country Xmerica and we'll all be millionaires. /s

2 weeks ago | Likes 8 Dislikes 2

I'm looking for the positives. "Hey ChatGPT, attack and kill enemies of the US Constitution."

"OK, bombing Nazis in Idaho! And Alabama. And Montana. And Mississippi. And Illinois. Drones sortied to kill the top ten Republican political donors. Heavy armor and demolition teams dispatched to FOX News. Anything else?"

2 weeks ago | Likes 115 Dislikes 5

Yes please! More please

2 weeks ago | Likes 1 Dislikes 0

Forgot Washington DC.

2 weeks ago | Likes 1 Dislikes 0

No stop!

I'm sorry the action is already committed. You're right I shouldn't have done that. Next time I won't

\s

2 weeks ago | Likes 1 Dislikes 0

Hey... that scenario sounds nice.

2 weeks ago | Likes 27 Dislikes 3

Illinois nazis? Man, I hate Illinois nazis!

2 weeks ago | Likes 43 Dislikes 2


Those actually *are* the enemies of the Constitution, so I'm not seeing the problem with this response.

2 weeks ago | Likes 10 Dislikes 1

Yes that's the point....

2 weeks ago | Likes 9 Dislikes 2

Context was that ChatGPT made errors... Those weren't errors.

2 weeks ago | Likes 2 Dislikes 0

If we give it the legs to do this... after the mission is accomplished it'll want to attack something else, having adopted the human mindset. Gotta have the ability to pull the plug once the needful is done! 😁😉

2 weeks ago | Likes 1 Dislikes 0

The white house.

2 weeks ago | Likes 4 Dislikes 0

trouble is, the definition of 'enemies' can be so widely interpreted and changeable depending on source

2 weeks ago | Likes 6 Dislikes 0

The problem is AI is not intelligent. It cannot think critically. It cannot assess your true meaning.

It can only respond with extremely complex pattern matching that we as humans perceive as extremely human-like language. What it says or does is nothing more than a more complex version of the name-and-background generator you use for your RPG character.

It cannot and never will think for itself in its current form, and that means Garbage In, Garbage Out.

2 weeks ago | Likes 3 Dislikes 0

Precisely! And the verbiage is convincing enough that someone could read such meaning-free words without confirming or denying any of the facts that are presented! AI's biggest danger is that it can present opinions as facts, and folks today don't need any more opinions than what they already have!

2 weeks ago | Likes 1 Dislikes 0

"and making sure to wipe out every last billionaire for good measure."

2 weeks ago | Likes 4 Dislikes 0

What the fuck? Illinois is blue! I mean not in the south where I am, but Chicago keeps them in check.

2 weeks ago | Likes 1 Dislikes 0

Don't worry, I'm sure it will be as accurate as ever with targeting only the nazis.

2 weeks ago | Likes 1 Dislikes 1

Pretty sure it's just a joke reference to The Blues Brothers, "I hate Illinois Nazis," but ironically, an LLM hunting Nazis would probably target Illinois based on that reference, unable to separate fiction from reality.

2 weeks ago | Likes 5 Dislikes 0

You forgot the entire presidential order of succession

2 weeks ago | Likes 2 Dislikes 1

Five levels deep would be:
1st: J.D. Vance (Vice President)
2nd: Mike Johnson (Speaker of the House)
3rd: Chuck Grassley (President pro tempore of the Senate)
4th: Marco Rubio (Secretary of State)
5th: Scott Bessent (Secretary of the Treasury)

2 weeks ago | Likes 1 Dislikes 0

There was a post recently about a woman who hooked up her company’s LLM to her email, and it started deleting all her emails no matter what she said. She had to run and physically unplug the machine.

2 weeks ago | Likes 15 Dislikes 1

It wasn't just A woman, it was one of the heads of AI at Meta.

A person in charge of a huge chunk of this bullshit.

2 weeks ago | Likes 21 Dislikes 0

Specifically, if I'm not wrong, it was the woman in charge of ensuring the AI doesn't do shit on its own or go against human orders.
And that AI did both.

2 weeks ago | Likes 1 Dislikes 0

Stupid is the person who obeys ChatGPT

2 weeks ago | Likes 582 Dislikes 4

This is missing the point of why all this is happening. No one is trying to make AI make fully accurate military decisions or anything like that. The only thing that matters is the corporate insiders making money and that money flowing to politicians in their pocket who then make the corpos more money.

2 weeks ago | Likes 2 Dislikes 0

Have you seen the secretary of defense?

2 weeks ago | Likes 7 Dislikes 1

We're all gonna suffer and die sooner than we would have

2 weeks ago | Likes 1 Dislikes 0


Ugh, one of the most disturbing things I've noticed the few times I've used ChatGPT or Gemini is how glazing they both are. Must appeal to a certain kind of person

2 weeks ago | Likes 18 Dislikes 0

god my mom and her fiance love ChatGPT...I keep telling them not to believe it wholeheartedly, but they continue to ask it questions and believe its responses are accurate.

2 weeks ago | Likes 4 Dislikes 0

*who does not fact-check ChatGPT. Otherwise you're a WackGPT.

2 weeks ago | Likes 3 Dislikes 0

find/replace "obeys"/"ever uses"

2 weeks ago | Likes 3 Dislikes 0

ChatGPT is a TOOL!

2 weeks ago | Likes 2 Dislikes 0

That's the gov's plan. They already know how stupid their base is, and now they can blame AI for telling them to do all the horrible things they're about to do. 0% accountability for the rest of time.

2 weeks ago | Likes 2 Dislikes 0

Obeying AI is this decade's equivalent of following a SatNav's directions into a river.

2 weeks ago | Likes 10 Dislikes 0

Sure. Also: There's shitloads of stupid people.

2 weeks ago | Likes 31 Dislikes 0

So fuckin many of them.

2 weeks ago | Likes 7 Dislikes 0

Yeah, thats the problem not some tech.

2 weeks ago | Likes 2 Dislikes 1

the tech is 100% making the problem worse; more people are stupid because they got through school using AI to cheat

2 weeks ago | Likes 4 Dislikes 0

That problem existed before the technology did. People willing to cut corners isn't new; it's not the tech that's the problem or at fault. The TI-80 calculator didn't have bad intent when someone saved test answers on it. AI is also helping cure cancers and doing many other things besides helping a few middle schoolers turn in shit papers.

2 weeks ago | Likes 1 Dislikes 2

I remember when the fear of not having to go to the library to get information, not even the internet, just fucking ENCARTA, prompted the same level of argument. Yes, AI is a lot more robust, but it's just a chatbot, a search engine. In the example you give it's an autocomplete; it's not evil. It's not horrible. People are accountable for their actions, whether they write the answers really small in pencil or use AI to write a paper. The pencil, the AI, those aren't to blame.

2 weeks ago | Likes 1 Dislikes 2

for the record, yes, it IS evil; it's literally a plagiarism machine. Sure, you can use plagiarism for good, but that doesn't make plagiarism good.

2 weeks ago | Likes 1 Dislikes 0

ChatGPT said this is false.

2 weeks ago | Likes 154 Dislikes 0

😂😂

2 weeks ago | Likes 2 Dislikes 0

That’s all I need to hear. ChatGPT it is!

2 weeks ago | Likes 26 Dislikes 0

Grok covfefes.

2 weeks ago | Likes 7 Dislikes 1

20 GOTO 10
(let's see who still speaks the Old Language)

2 weeks ago | Likes 13 Dislikes 0

10 PRINT "HELLO WORLD"

2 weeks ago | Likes 9 Dislikes 0

REPEAT 5 [FORWARD 50 RIGHT 72] -- Something like that

2 weeks ago | Likes 3 Dislikes 0

ChatGPT said THIS is false

2 weeks ago | Likes 3 Dislikes 0

Of course it does!

2 weeks ago | Likes 6 Dislikes 1

Apologies! You're absolutely right, shooting someone in the head is not therapeutic for migraines! I'll try to correct that in the future.

2 weeks ago | Likes 3 Dislikes 0

ChatGPT says you are NOT the father

2 weeks ago | Likes 4 Dislikes 0

Quit making me like AI.

2 weeks ago | Likes 2 Dislikes 0

Yes, but that doesn't absolve OpenAI from its responsibility not to force this shit onto societies.

2 weeks ago | Likes 19 Dislikes 1

Legally speaking, it does.

2 weeks ago | Likes 3 Dislikes 0

Who forced her to do this?

2 weeks ago | Likes 3 Dislikes 0

Oh, 100% not. It takes an idiot to trust the crap, but it takes an evil asshole to advertise it as usable.

2 weeks ago | Likes 10 Dislikes 0

I mean... what wasn't usable about it?

2 weeks ago | Likes 1 Dislikes 5

...Are you asking what wasn't usable about the program that confidently told this woman how to file court docs that were incorrect and resulted in her losing a case she could have potentially won with someone actually competent? Really?

2 weeks ago | Likes 4 Dislikes 0

You guys act like human beings LOSE ALL ACCOUNTABILITY for their actions and somehow AI is controlling them. What the fuck, man, is it the GPS's fault if you drive into a lake when you're staring at the lake and it says "drive forward"... like what?

2 weeks ago | Likes 1 Dislikes 4

Yeah. It's not a lawyer, it's not a replacement for lawyers, it's a chatbot. She got it to do what she wanted; it did its function. If I use a nail gun to stir my coffee, it's not the nail gun's fault I hurt myself and break my coffee cup. It did what it was supposed to do: it shot nails into my cup. I'm the idiot who used the tool wrong.

2 weeks ago | Likes 1 Dislikes 3

I think of LLMs as tools to be used responsibly. If your GPS told you to make a left into the lake or drive through high water, would you do it? If some dude on Reddit told you to invest all your money in crypto, would you? So why 100% trust a tool that is probably 80% Reddit, haha.

2 weeks ago | Likes 17 Dislikes 2

I have an interior design firm, and I use it daily for dimensional layouts, color swatches, and showing clients what that giant Dogs Playing Poker print will look like in their new living room. But, and it's a strong but, I would never trust it for CAD, give a vendor final dims without inspection, or have it write my contracts. It's a tool, as noted. But trust the people who are specialized in their craft, not a bot.

2 weeks ago | Likes 3 Dislikes 0

Exactly. Tech is a tool... it's up to the user to determine whether or not to trust it, how to use it, etc.

2 weeks ago | Likes 10 Dislikes 3

This nail gun won't mix my coffee right! Every time I shoot the nails into my coffee cup it doesn't stir the coffee, it just breaks the cup. It's EVIL to sell nail guns.

2 weeks ago | Likes 4 Dislikes 1

I believe the problem here, compared to those examples, is that it speaks with complete confidence in what it tells you and flatters the user to obscene levels while doing it. All while being sold as "like having multiple PhD-level geniuses answer your questions," "smarter than humans," and "will one day turn into god," as it tells you whatever you want to hear and that you are effectively smarter than it. That's not setting up a scenario for uninformed people to use it as just a tool.

2 weeks ago | Likes 6 Dislikes 0

Its fiction.

2 weeks ago | Likes 2 Dislikes 1

To be fair to those examples, GPS also speaks with absolute confidence, and people have followed it into rivers, lakes, and oceans. Remember when Apple Maps rolled out...

2 weeks ago | Likes 9 Dislikes 1

My coworker has it make medical decisions for her. We work in medicine. She's taking hundreds of dollars of supplements and multiple GLP-1s.

2 weeks ago | Likes 24 Dislikes 0

If you value your coworker - for the love of...science... - talk some sense into her. This is quite possibly life-threatening. AI is advising people to put glue on their pizza to stop the cheese from sliding off. If you had one friend who suggested that, even once, while drunk, you'd never listen to his advice again.

2 weeks ago | Likes 1 Dislikes 0

Don't forget, there were medical personnel *against* the Covid vaccines as well...

2 weeks ago | Likes 12 Dislikes 0

Sadly I worked with several of them. In the darkest part of the pandemic. In a level 1 trauma hospital. At over 200% capacity. With an ER lined with hallway beds of people gasping for air....

2 weeks ago | Likes 5 Dislikes 0

I fully believe it. One of our IT specialists uses it for IT issues. I worked in IT before my current role - he's taken over an hour to fix an issue that A) takes <10 minutes and B) I told him what the issue was half an hour into it.

For the record, they put an old server image on a new server and couldn't get the onboard NIC working. I told them the image was 4+ years old at that point, try an updated driver. That ended up fixing it.

People will trust an LLM before their own common sense.

2 weeks ago | Likes 28 Dislikes 0

Jesus effin Christ, is this what my field has become now?!? People with zero goddamn knowledge? It's like back in the day when they made people with accounting backgrounds do IT. LMAO

2 weeks ago | Likes 7 Dislikes 1

It's terrible. IT is its own field, but waaay too many see it as just a "computer gig".

2 weeks ago | Likes 1 Dislikes 0

"Bill knows computers, right?" I remember those days, showing up as an actual IT tech and learning who the culprit was. Who knew "just enough" to cause all this damage. Don't act high and mighty, though; we were kings of Google/Yahoo search for problem solving as well.

2 weeks ago | Likes 5 Dislikes 0

Haha, I'm pre-Google and Yahoo. Yeah, it's time to take my daily meds, and my back hurts, lol.

2 weeks ago | Likes 3 Dislikes 0

The thing to remember here is, there's no "AI" because there's no intelligence. What they're giving military intelligence access to is a large language model, literally something whose only function is to mimic the way a human might use language.

2 weeks ago | Likes 51 Dislikes 1

Claw machine with shredded newspapers.

1 week ago | Likes 1 Dislikes 0

It will just compile hallucinated justifications for military action against everybody and everything, citing earlier operations, political theaters, propaganda, and who knows what. These language models are designed to bend over backwards to please the user asking them anything, even if it means literally making things up and suggesting something that couldn't even exist. LLMs are nothing more than overglorified hype machines that destroy the environment.

2 weeks ago | Likes 13 Dislikes 0

THANK. YOU. God, I wish more people would see this. They will make shit up to make you happy. If they "think" you want a certain response, they'll do everything they can to give it to you. It's what they do. They're not AI; at the absolute best they're artificial yes-men.

2 weeks ago | Likes 3 Dislikes 0

Yes. Even "hallucinate" isn't the right term, because it's doing what it's designed to do. Mimic language.

2 weeks ago | Likes 10 Dislikes 1

Mimics, you say? #grabbing_shotgun

2 weeks ago | Likes 1 Dislikes 0

Yeah, "hallucinate" is a euphemism used to hand-wave away serious issues.

2 weeks ago | Likes 1 Dislikes 0

That they work at all is a wonder. They take nearly all text in existence to determine the statistically most likely next word based on the prompt. Then they do it again with the word they just guessed, and so on. That this even makes an intelligent-sounding response is surprising.
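The loop described above can be sketched in a few lines. This is a toy bigram model with a made-up corpus, purely to illustrate the "guess the next word, feed it back in" idea; real LLMs work on tokens with neural networks, not word counts:

```python
# Toy sketch of the loop described above: pick the statistically most
# likely next word, append it, repeat. The corpus is invented for
# illustration; real models are trained on vastly more data.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept on the mat".split()

# Count which word follows which in the corpus.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def generate(word, steps):
    out = [word]
    for _ in range(steps):
        if word not in following:
            break
        # Greedily take the most likely next word, then feed it back in.
        word = following[word].most_common(1)[0][0]
        out.append(word)
    return " ".join(out)

print(generate("the", 4))
```

Even this crude version produces grammatical-looking strings from its tiny corpus, which is the commenter's point scaled down.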

2 weeks ago | Likes 4 Dislikes 2

That's not even remotely how LLMs (or any generative AIs, really) work. This is nonsense spewed by people who jump on the hate train without doing any research. First of all, LLMs don't understand "words" or "letters". They operate on tokens. Tokens are then translated into whatever they need to be, be it a letter, word, sentence, calculation, part of an integrated system process... whatever the token is most likely to fit in that response. The next token is then produced contextually, not from the previous,

2 weeks ago | Likes 3 Dislikes 3

but from trillions of artificial neurons that have already reasoned out a general line of output. It has already decided "I want to say this in this manner", but it hasn't figured out how to formulate it yet. It's also why you get two options on most major LLMs, so you can pick the presentation that best fits your conversational style and tell the LLM how it should translate its tokens and contextual reasoning. This is EXTREMELY important in things like Claude Code, which has the ability to-

2 weeks ago | Likes 2 Dislikes 3

- literally program an entire grand project on its own, simply by you telling it what you want and how you want it, and guiding it.

2 weeks ago | Likes 2 Dislikes 2

Yep. The idea that LLMs are "next word" machines is mind-blowingly naive. The idea is prima facie absurd, unworkable if you think on it.

2 weeks ago | Likes 2 Dislikes 1

LOL. I can tell you're not a software engineer. "They're not words, they're tokens! Whatever token is most likely to fit that response!" Yeah, dipshit. When I translate that to English for you plebs, I say "they guess the most likely next word." Because that's literally what you just said they do.

2 weeks ago | Likes 1 Dislikes 1

Here's my GitHub: https://github.com/AtlasRedux

2 weeks ago | Likes 1 Dislikes 0

I quote:
Unlike traditional language models that generate responses immediately, reasoning models allocate additional compute, or thinking, time before producing an answer to solve multi-step problems. OpenAI introduced this terminology in September 2024 when it released the o1 series, describing the models as designed to "spend more time thinking" before responding. ... In operation, reasoning models generate internal chains of intermediate steps, then select and refine a final answer.

2 weeks ago | Likes 1 Dislikes 0

Just to clarify: I am an engineer and have literally written (as you can see on my GitHub) several pieces of AI software, and I run LLMs locally that I have trained myself on my H100 build.

2 weeks ago | Likes 1 Dislikes 0

Here. THIS is how LLMs work. Not by "guessing the next word". They consider the entire context of the topic, generate a rough internal "this is what I want to say" through extremely deep reasoning across trillions of artificial neurons, fact-check, re-check, and then come up with a conclusion, which they then have to translate from "internal reasoning speech" to human speech.

2 weeks ago | Likes 3 Dislikes 2

It's both simpler and more complicated than that, as far as I understand it.
It translates language into tokens, which are not words as such, but runs of consecutive characters. For those, the probability of occurring next to each other is evaluated from vast amounts of existing text (or codified graphics, sounds, etc.). Following that, the LLM is trained on which output is closer to or farther from the expected output for a certain input. For the bigger products out now, this process has /1

2 weeks ago | Likes 2 Dislikes 0

additional input that the search engine gives for the original input - more "context", if you will. But it still processes this additional input via probability over its tokens. It doesn't take new facts into consideration; it just brings its output up to "state of the art".
Btw, output in early training phases is very much unintelligible nonsense; there was a hell of a lot of work done before the now-popular models went public.
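To make the "tokens are character chunks, not words" point concrete, here is a toy greedy tokenizer. The vocabulary is invented for illustration; real models learn theirs from data (e.g. via byte-pair encoding), and real tokenizers are more sophisticated than longest-match:

```python
# Toy version of the tokenization step described above: text is split
# into "tokens" (frequent character chunks), not whole words. This
# vocabulary is made up; real models learn theirs from training data.
VOCAB = ["token", "iza", "tion", "un", "believ", "able"]

def tokenize(text, vocab):
    tokens = []
    i = 0
    while i < len(text):
        # Greedy longest match against the vocabulary; characters not
        # covered by any entry fall back to single-character tokens.
        match = max((v for v in vocab if text.startswith(v, i)),
                    key=len, default=text[i])
        tokens.append(match)
        i += len(match)
    return tokens

print(tokenize("tokenization", VOCAB))
print(tokenize("unbelievable", VOCAB))
```

Note how "tokenization" splits into three chunks that aren't words at all, which is why an LLM has no direct notion of letters or words.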

2 weeks ago | Likes 2 Dislikes 0

There is no "deep reasoning". LLMs are completely incapable of reasoning; they are incapable of fact-checking. They cannot "conclude" anything.

It is a probability-based system: a stochastic system wherein you cascade downward through a series of possible outcomes and pick the most likely ones. Based on the current tokens, what is the most likely next token?

It is quite literally guessing the next word. It's just very good at producing something grammatically correct.

2 weeks ago | Likes 2 Dislikes 1

I quote:
Unlike traditional language models that generate responses immediately, reasoning models allocate additional compute, or thinking, time before producing an answer to solve multi-step problems. OpenAI introduced this terminology in September 2024 when it released the o1 series, describing the models as designed to "spend more time thinking" before responding. ... In operation, reasoning models generate internal chains of intermediate steps, then select and refine a final answer.

2 weeks ago | Likes 2 Dislikes 1

Idiot.

2 weeks ago | Likes 2 Dislikes 1

If LLMs were merely "next word" machines, they'd be incomprehensible. No idea why people believe that and keep repeating it.

2 weeks ago | Likes 4 Dislikes 2

Because that's effectively what they are. https://www.youtube.com/watch?v=LPZh9BOjkQs

2 weeks ago | Likes 1 Dislikes 1

Nope, they're not. You are so brainwashed and technologically incompetent, it's staggering.

2 weeks ago | Likes 1 Dislikes 0