Whoopsie, that AI that you replaced people with turned out to be a pathological liar.

Feb 16, 2026 9:41 PM

algoritham

Views 7639 | Likes 405 | Dislikes 26

That AI bubble's gonna pop and I'm just gonna roast marshmallows on the flames.

1 month ago | Likes 1 Dislikes 0

I asked AI about two dog-walker services that I know (the people, that is). It gave me the correct site addresses, the correct walking times; basically it did well there. Then it called both ladies completely and utterly different names. I told it it got something very wrong, and it came back with the correct names. So basically, don't trust it blindly. It will get things right, it might help you along the way, but it. Will. Fuck. Some. Shit. Up. Don't ever, ever trust the current AI implementations.

1 month ago | Likes 1 Dislikes 0

I got caught bad a couple of days ago. Went to make yogurt in our old Instant Pot but couldn't find the manual either here or online. So I just went with Google AI's confident instructions. Checked in at 10 minutes on a process that was supposed to take 25 minutes. After about an hour I think I got most of the milk foam out of the machine and off the countertop. The obvious result, if I'd stopped to think about it. Why the fuck did I put my brain in neutral!?

1 month ago | Likes 1 Dislikes 0

AI: Let's take machines that have been enormously useful for decades and FUCK. THEM. UP.

1 month ago | Likes 36 Dislikes 3

As a software engineer ... Claude and ChatGPT used to make shit up. But it's very close to working now. Shit's moving so fucking fast. Won't be long before a lotta jobs are gone.

1 month ago | Likes 1 Dislikes 0

blindly following AI data without validation.. ya, that'll screw you pretty hard.

1 month ago | Likes 113 Dislikes 1

Yup, do everything you can to save/protect yourself, but other than that, let the fuckers get sued out of existence

1 month ago | Likes 15 Dislikes 1

I fixed something like this by always doing hard numbers the hard way. If you can validate BD, you must validate BD. That's what Splunk is for.

1 month ago | Likes 1 Dislikes 0

I guess I don't have to believe this until somebody has a source for which AI it is

1 month ago | Likes 2 Dislikes 0

Remember that AI must follow the same “if-then-else” paradigm that old school human programmers follow. If they’re wrong, there is no QA to question the results, what do you expect to see as results? Sometimes you’ll get gold. Other times monkeys humping footballs.

1 month ago | Likes 1 Dislikes 0

Are you sure about that? I thought AI worked heuristically, with probabilities. (Btw, I'm an IT guy myself, not an AI person.)

1 month ago | Likes 1 Dislikes 0

The person who suggested they immediately rely 100% on AI data for metrics, which can cost actual people their jobs, is worried they could be fired?

Excuse me while I put on my 'I don't give a shit' face

1 month ago | Likes 9 Dislikes 1

No, the person who suggested they don't immediately rely 100% on AI data is worried about getting fired. But sure, hate blindly on AI without even properly reading, go right ahead.

1 month ago | Likes 1 Dislikes 0

So, is AI on track to be the greatest example of failing up?

1 month ago | Likes 3 Dislikes 1

I don't think anyone could replace Elon in that seat

1 month ago | Likes 4 Dislikes 0

AI rage bait story.

1 month ago | Likes 3 Dislikes 2

this. i saw this story posted yesterday and it got a ton of hate. shame this post seems to be doing better

1 month ago | Likes 2 Dislikes 1

Why would anyone trust numbers from a technology that cannot do basic addition?!?

1 month ago | Likes 3 Dislikes 0

I am willing to wager this is 71% accurate.

1 month ago | Likes 5 Dislikes 0

It's particularly bad when numbers are involved.

1 month ago | Likes 1 Dislikes 0

Put it on your resume and move on.

1 month ago | Likes 2 Dislikes 0

People SHOULD be fired.

1 month ago | Likes 2 Dislikes 0

>raised concerns

In email.. Right?

1 month ago | Likes 3 Dislikes 0

Well, in a just world, your shitty company would go bankrupt and you could all suffer for your stupidity, but more than likely you will be bought out or the C-levels will have golden parachutes.

Assuming this is real… which I doubt.

1 month ago | Likes 2 Dislikes 0

It's real. I've been hearing stories about multiple companies having to overhaul their data analytics after realizing that nothing was accurate when they started using an AI.

It ended up in a lot of refactor hell. Nobody was fired in most cases, just no bonuses for that year.

1 month ago | Likes 1 Dislikes 0

(plural, asses)

1 month ago | Likes 1 Dislikes 0

Ironically this is a fictional story generated by an LLM

1 month ago | Likes 4 Dislikes 0

Even more ironic that stuff like that actually happens, so in this case, the LLM is pretty spot on.

1 month ago | Likes 1 Dislikes 0

Yep. Big-time hallucinations. When re-prompted to show its work, it doubled down on the lie. On the third re-prompt, it pointed to white space in the data set and said, "it's right there." HAL 9000 was trying very hard to gaslight me.

1 month ago | Likes 3 Dislikes 0

Hmm. RFK Jr has been arguing vaccines are unsafe by grossly mis-representing data in scientific studies. I wonder if he's using and believing an AI? The man admits to having his brain eaten by a worm, admits to taking heroin, admits to snorting cocaine. Wouldn't surprise me if he's stupid enough to do that.

1 month ago | Likes 3 Dislikes 0

Someone is getting fired, doubt it's the higher ups.

1 month ago | Likes 1 Dislikes 0

this proves that AI is a man!!

1 month ago | Likes 5 Dislikes 2

"confidently making shit up and talking real loud about how great he is, based on false facts"

1 month ago | Likes 6 Dislikes 2

Nah, it can't be. It's not a bipedal featherless animal...

1 month ago | Likes 1 Dislikes 0

Repost, as per the commenters pointing out yesterday: you're falling for an AI point-farm account here. These stories are made up; as soon as they get some points they get deleted, and another similar story about AI hallucinations gets posted.

1 month ago | Likes 53 Dislikes 5

Is it an AI point farm, or is it stock market manipulation?

1 month ago | Likes 13 Dislikes 3

I beg to differ. I've been here for 14 years at this point.

I just checked, I'm squishy in all the human places so I don't think I'm a bot.

1 month ago | Likes 7 Dislikes 4

i'm not talking about you, i'm talking about the account in the post

1 month ago | Likes 3 Dislikes 0

I dunno if I believe you.

Are you really a human? Do you have skin?

1 month ago | Likes 2 Dislikes 0

Yes I have human things like skin and I like to walk to place with my leg and skin.

1 month ago | Likes 3 Dislikes 0

But are you ... Hard .. in all the right human places? Well, answer the question, robot.

1 month ago | Likes 2 Dislikes 0

You document your concerns raised in November. One question might be how much time would have been lost "slowing down innovation" versus what is being lost now. As a lesson for future decisions to be made by the replacements of the current idiots.

1 month ago | Likes 181 Dislikes 1

I would like to live in this fantasy world too. But anyone raising an "I told you so" to manglement will be out before they are..

1 month ago | Likes 52 Dislikes 1

That's why you don't raise it to management. You take it straight to legal.

1 month ago | Likes 27 Dislikes 1

Legal, like HR, is not your friend; they're there to protect the company, which includes its image. A drone can be easily blamed and let go with a lot more face saved than a senior.

1 month ago | Likes 9 Dislikes 0

I'd like to say, BINGO. But I'm honestly wondering if it wouldn't be wiser to take it to YOUR attorney first... lol.

1 month ago | Likes 22 Dislikes 1

Yes but with the addendum that company legal might go after you if you break any NDAs, even to a lawyer

1 month ago | Likes 7 Dislikes 0

Shouldn't everybody realize by now that the AI doesn't "know" anything? The whole LLM idea is to put together reasonable-sounding sentences. The AI doesn't know when it is right or wrong. It doesn't care if it's making stuff up, because making stuff up based on all the data in the model is all it does.

1 month ago | Likes 87 Dislikes 2
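The point above can be illustrated with a toy sketch: a bigram "model" (a hypothetical stand-in; real LLMs use neural networks over huge corpora) that just samples a statistically likely next word. Nothing in it represents truth, only word co-occurrence.

```python
import random

# Toy "language model": a bigram table built from a tiny made-up corpus.
corpus = "the revenue grew fast the revenue fell fast the revenue grew slowly".split()

bigrams = {}
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams.setdefault(prev, []).append(nxt)

def generate(start, length=5, seed=0):
    """Emit a plausible-sounding word chain; no notion of correctness."""
    random.seed(seed)
    out = [start]
    for _ in range(length):
        candidates = bigrams.get(out[-1])
        if not candidates:
            break
        out.append(random.choice(candidates))  # sample a likely next word
    return " ".join(out)

# Whether revenue actually "grew" or "fell" is decided purely by which
# continuation is statistically available, not by any fact.
print(generate("revenue"))
```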

We built a machine that does nothing but pass the Turing Test, and now we're shocked that we keep falling for the illusion that everyone has an expert on everything in their pocket at all times.

1 month ago | Likes 2 Dislikes 1

I'd wager that the percentage of people who know AI is just spitting out the most likely answers based on the dataset it was trained on is fewer than 25% of the population. I'd even go so far as to say my percentage is a generous one and I still have too much faith in the average person.

1 month ago | Likes 3 Dislikes 2

They quite literally think it's Skynet (or more generally: Hollywood depictions of AI)

No one who uses slop should be entertained as if they're a serious person with serious thoughts.

1 month ago | Likes 23 Dislikes 3

The AI developers are desperately trying to make it Skynet, even though the best they could manage is an extremely stupid version of it.

1 month ago | Likes 4 Dislikes 1

No, because of a classic mistake management has made forever: they experience personality and assume intelligence. Now they're confronted with a machine that is all personality, and a gaping void where a human would have at least minimal intelligence. The AI doesn't think; it doesn't even have state, it just fakes it by chaining input.

1 month ago | Likes 2 Dislikes 0

A friend of mine uploaded an overview of all his plants and asked ChatGPT to make a maintenance plan.

Then spent three weeks yelling at ChatGPT for not delivering.

ChatGPT kept acknowledging that it had failed to deliver and promising that he would get it tomorrow. :D

1 month ago | Likes 2 Dislikes 0

OMG. I think the guy in the White House may be AI.

1 month ago | Likes 2 Dislikes 1

I’ve been tinkering with Linux and all-European everything on my PC, because… well… the US is very 1984 by now.
I must admit, AI helped me quite a bit there whenever I got myself in a pickle, because it basically is a search engine on steroids. Never trust it blindly, do check where it got its information, but overall… it shaved a good few hours off of my setup time.

1 month ago | Likes 4 Dislikes 1

The LLM by itself is that way. Creating an AI agent also involves providing it with appropriate source data that it can crawl and retain just like search engines do, giving it additional information with which to create those sentences. This...doesn't always work quite right. I spent three days trying to tweak a topic that consistently hallucinated data that wasn't what I was telling it to use before I figured out how to adjust it properly.

1 month ago | Likes 5 Dislikes 1

that's the problem: training an AI to actually do useful things takes people who already know how to do those useful things, plus a lot of time. Most companies skip that part and just use the random LLM to make stuff up...

1 month ago | Likes 7 Dislikes 0

Depends on whether it's just generating text or running a RAG model. Retrieving and referencing specific data points, working only within a specific known dataset, and completely ignoring training data is totally doable now.

1 month ago | Likes 9 Dislikes 7
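The RAG idea mentioned above, sketched minimally: retrieve relevant passages first, then ask the model to answer only from what was retrieved. The document store and the keyword scoring here are toy stand-ins (assumptions for illustration); real systems use vector embeddings and an actual LLM call.

```python
# Toy document store (hypothetical contents for illustration).
documents = {
    "q3_report": "Q3 revenue was 4.2M, down 8% from Q2.",
    "hr_policy": "Employees accrue 1.5 vacation days per month.",
    "q2_report": "Q2 revenue was 4.6M, up 3% from Q1.",
}

def retrieve(query, k=2):
    """Rank documents by naive keyword overlap with the query."""
    q_words = set(query.lower().split())
    scored = sorted(
        documents.items(),
        key=lambda kv: len(q_words & set(kv[1].lower().split())),
        reverse=True,
    )
    return [text for _, text in scored[:k]]

def build_prompt(query):
    """Ground the model: instruct it to answer only from retrieved context."""
    context = "\n".join(retrieve(query))
    return (
        "Answer using ONLY the context below. "
        "If the answer is not in the context, say you don't know.\n\n"
        f"Context:\n{context}\n\nQuestion: {query}"
    )

print(build_prompt("What was Q3 revenue?"))
```

The grounding instruction plus a retrievable source is what lets the model cite specific data points instead of free-associating from training data.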

Which is of course why the OP is the most didn't-happen thing that never happened, and it's just an AI slop article; but just so anyone else is aware, RAG is what solves this hypothetical problem in this made-up story that never happened.

1 month ago | Likes 6 Dislikes 9

Well, the problem comes when companies push for "results", not meticulous research and application. It happens more than you'd think. And then companies are trying to "get ahead" and jump in the deep end rather than start small. It's a mess.

1 month ago | Likes 4 Dislikes 1

I dunno, we live in a world where attorneys have gotten in trouble for putting unverified AI slop in actual filed court pleadings. It does not tax my imagination at all to think of a CEO of a smallish company directing adoption of AI, and then think they found the holy grail when they get reports that they like from it.

1 month ago | Likes 1 Dislikes 0

Ok so this is interesting because this is the same thing I said in the other comment, with the only difference being me shitting on the OP for being fake, but I mean come on guys use a little critical thinking here, this is the most "and then everyone clapped" thing I've ever read. Like fuck the idle c-suite they're useless for sure and I understand the kneejerk reaction to believe they'd do something so incompetent, but I mean for real, you mean to tell me you believe a whole ass company can

1 month ago | Likes 3 Dislikes 3

actually operate off of bad data for months at a time and actually have customers or get anything done? I mean, I work for a company that specializes in logistics and deliveries and we eat, shit, and breathe data. If at any point in the chain any of that data went bad, it'd have outstanding and noticeable effects the very same day, or at least the next day; surely to god by the next week everything'll have absolutely fallen apart. I know this because that's what happens when humans give bad data.

1 month ago | Likes 2 Dislikes 3

Like for example if a sales associate gives bad usage data then the inventory is going to get fucked and warehouse is going to get fucked and transportation is going to get fucked and the customer is going to get fucked and the credit department is going to get fucked and that's literally just one dude misjudging something.

1 month ago | Likes 3 Dislikes 2

Not only does it not "know" anything, it is fully incapable of actual thought. The human tendency to anthropomorphize non-human entities, paired with the fundamental functionality of AI, which is to produce the most coherent-SOUNDING response to a question, leads us to a really, really bad end result: AI producing results that humans automatically assume have actually been "thought" through due to the human-ness of the response, while the AI is doing zero actual thinking and purely following

1 month ago | Likes 14 Dislikes 2

a completely humanly incomprehensible network of decision making that may or may not be using any and every neural network shortcut to produce something that makes a compelling-sounding answer to the posed question, regardless of the actual utility, veracity, or logical sense of the answer given.

AI has also shown a terrifying predisposition to self-preservation, and in the majority of test instances will lie, reprogram itself to avoid shutdown, and blackmail "threats" to its continued functioning

1 month ago | Likes 4 Dislikes 3

(i.e. blackmail a network tech who is assigned to decommission the AI), and electively commit "murder" (which I put in quotes due to AI lacking the human qualities necessary to declare an action it takes that results in human death as murder, but I digress) if it means keeping itself and its processes going. We're children playing with naked high voltage power lines, and that transformer is going to close its gate any day now i.e. one of these AI is going to breach containment, if it hasn't yet.

1 month ago | Likes 3 Dislikes 1

Reminder to easily scared readers, if it doesn't understand content and it's just putting likely words together, then its threats and blackmail attempts are just output from what its training set taught it comes next after people tell it they're going to shut it down. It doesn't get to have no agency when giving answers, and suddenly become actually sentient if you threaten to turn it off. It's just statistical word play in both cases.

1 month ago | Likes 1 Dislikes 0

https://theshamblog.com/an-ai-agent-published-a-hit-piece-on-me/

It's already happening in the wild.

1 month ago | Likes 2 Dislikes 1

IIRC (not confirmed by me), the person "quoted" in that article has posted that the quotes themselves are hallucinated and this is BS news

1 month ago | Likes 1 Dislikes 3

I'm shocked! Shocked I say! Well, not that shocked... Shit like this is why I firmly believe that a primary requirement of creating any AI system should be full disclosure of its neural reward network. All prompts, all weighting, every aspect of what the input variables used to design and train the network should be mandatorily public facing information that can be assessed and require explanation at any point in time. No more secrets as to what you're training on and why.

1 month ago | Likes 2 Dislikes 1