twinkwtp
That AI bubble gonna pop and I'm just gonna roast marshmallows on the flames.
ToenailClippingsJar
I asked AI about two dogwalker services that I know (the people, that is). It gave me the correct site addresses, the correct times of walking; basically it did well there. Then it called both ladies completely and utterly different names. I told it it got something very wrong, and it came back with the correct names. So basically, don't trust it blindly. It will get things right, it might help you along the way, but it. Will. Fuck. Some. Shit. Up. Don't ever, ever trust the current AI implementations.
NotAllowedToArgueUnlessYouPay
I got caught bad a couple of days ago. Went to make yogurt in our old Instant Pot but couldn't find the manual either here or online. So I just went with Google AI's confident instructions. Checked in at 10 minutes on a process that was supposed to take 25. After about an hour I think I got most of the milk foam out of the machine and off the countertop. The obvious result, if I'd stopped to think about it. Why the fuck did I put my brain in neutral!?
SmashySashimi
AI: Let's take machines that have been enormously useful for decades and FUCK. THEM. UP.
JohnSmithterms
As a software engineer... Claude and ChatGPT used to make shit up. But it's very close to working now. Shit's moving so fucking fast. Won't be long before a lotta jobs are gone.
UmmonPrime
blindly following AI data without validation.. ya, that'll screw you pretty hard.
Tigersterne
Yup, do everything you can to save/protect yourself, but other than that, let the fuckers get sued out of existence
ATLandNerdy
I fixed something like this by always doing hard numbers the hard way. If you can validate BD, you must validate BD. That's what Splunk is for.
spazztastic
I guess I don't have to believe this until somebody has a source of which AI it is
Ghoffner
Remember that AI must follow the same “if-then-else” paradigm that old school human programmers follow. If they’re wrong, there is no QA to question the results, what do you expect to see as results? Sometimes you’ll get gold. Other times monkeys humping footballs.
pt2016
Are you sure about that? I thought AI worked heuristically, with probability. (Btw, I'm an IT guy myself, not an AI person.)
REOJackwagon
The person who suggested they immediately rely 100% on AI data for metrics, which can cost actual people their jobs, is worried they could be fired?
Excuse me while I put on my 'I don't give a shit' face
Carl99
No, the person who suggested they don't immediately rely 100% on AI data is worried about getting fired. But sure, hate blindly on AI without even properly reading. Go right ahead.
Jumboscircus
So, is AI on track to be the greatest example of failing up?
thisisnotfineffs
I don't think anyone could replace Elon in that seat
CedricDur
AI rage bait story.
PieSpie
this. i saw this story posted yesterday and it got a ton of hate. shame this post seems to be doing better
Stringgeek
Why would anyone trust numbers from a technology that cannot do basic addition?!?
usernametakenisthestoryofmylife
I am willing to wager this is 71% accurate.
chiefrunswithscissors
It's particularly bad when numbers are involved.
FermentTheRich3000
Put it on your resume and move on.
evilspock
People SHOULD be fired.
knubberrub
>raised concerns

In email.. Right?
vericon151
Well, in a just world, your shitty company would go bankrupt and you could all suffer from your stupidity, but more than likely you will be bought out or the C-levels will have golden parachutes.
Assuming this is real… which I doubt.
NKato
It's real. I've been hearing stories about multiple companies having to overhaul their data analytics after realizing that nothing was accurate when they started using an AI.
It ended up in a lot of refactor hell. Nobody was fired in most cases, just no bonuses for that year.
RedgrintGrumble
Ironically this is a fictional story generated by an LLM
TheMuellmann
Even more ironic that stuff like that actually happens, so in this case, the LLM is pretty spot on.
BillBarian
Yep. Big-time hallucinations. When re-prompted to show its work, it doubled down on the lie. On the third re-prompt, it pointed to white space in the data set and said, "it's right there." HAL9000 was trying very hard to gaslight me.
SpammersAreScum
Hmm. RFK Jr has been arguing vaccines are unsafe by grossly misrepresenting data in scientific studies. I wonder if he's using and believing an AI? The man admits to having his brain eaten by a worm, admits to taking heroin, admits to snorting cocaine. Wouldn't surprise me if he's stupid enough to do that.
MotoCanuck
Someone is getting fired; doubt it's the higher-ups.
xmaneds
this proves that AI is a Man ! !
xmaneds
"confidently making shit up and talking real loud about how great he is, based on false facts"
MrImmaculate
Nah, it can't be. It's not a bipedal featherless animal...
dreikommavierzehn
Repost, as commenters pointed out yesterday: you're falling for an AI point-farm account here. These stories are made up; as soon as they get some points they get deleted, and another similar story about AI hallucinations gets posted.
ThingsThatDontJustifyGenocide
Is it an AI point farm, or is it stock market manipulation?
algoritham
I beg to differ. I've been here for 14 years at this point.
I just checked, I'm squishy in all the human places so I don't think I'm a bot.
dreikommavierzehn
i'm not talking about you, i'm talking about the account in the post
reichstein
I dunno if I believe you.
Are you really a human? Do you have skin?
algoritham
Yes I have human things like skin and I like to walk to place with my leg and skin.
WillLickNudibranchsForBUzz
But are you ... Hard .. in all the right human places? Well, answer the question, robot.
oldguyexlurker
You document your concerns raised in November. One question might be how much time would have been lost "slowing down innovation" versus what is being lost now. As a lesson for future decisions to be made by the replacements of the current idiots.
Sh1tMovieGroup
I would like to live in this fantasy world too. But anyone raising an "I told you so" to manglement will be out before they are...
TKKain
That's why you don't raise it to management. You take it straight to legal.
Sh1tMovieGroup
Legal, like HR, are not your friend; they're there to protect the company, which includes its image. A drone can easily be blamed and let go with a lot more face saved than a senior.
oldguyexlurker
I'd like to say, BINGO. But I'm honestly wondering if it wouldn't be wiser to take it to YOUR attorney first... lol.
TheGriffin
Yes but with the addendum that company legal might go after you if you break any NDAs, even to a lawyer
ProCycle
Shouldn't everybody realize by now that the AI doesn't "know" anything? The whole LLM idea is to put together reasonable-sounding sentences. The AI doesn't know when it is right or wrong. It doesn't care if it's making stuff up, because making stuff up based on the data in the model is all it does.
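The "reasonable-sounding sentences" point can be made concrete with a toy sketch. This is not how any real LLM is built (real models use neural networks, not bigram counts; the corpus here is invented for illustration), but it shows the core idea: the model only learns which word tends to follow which, so its output is fluent without any notion of truth.

```python
import random
from collections import defaultdict

# Toy bigram "language model": it learns only which word tends to follow
# which in its training text. Nothing here represents facts or truth --
# it just emits statistically plausible continuations.
corpus = ("the report shows revenue grew fast . "
          "the report shows costs grew slowly . "
          "revenue grew because sales grew fast .").split()

counts = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev].append(nxt)  # record every observed successor word

def generate(start, length=8, seed=0):
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        successors = counts.get(out[-1])
        if not successors:
            break
        out.append(rng.choice(successors))  # pick a likely-looking next word
    return " ".join(out)

print(generate("the"))  # fluent-sounding, but nothing here "knows" anything
```

Every word it emits comes straight from the training data's statistics; whether the resulting sentence is true never enters the computation.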
Saturniidae
We built a machine that does nothing but pass the Turing Test, and now we're shocked that we keep falling for the illusion that everyone has an expert on everything in their pocket at all times.
parabolic000
I'd wager that the percentage of people who know AI is just spitting out the most likely answers based on the dataset it was trained on is under 25% of the population. I'd even go so far as to say that's a generous number, and I still have too much faith in the average person.
Emjayen
They quite literally think it's Skynet (or more generally: Hollywood depictions of AI)
No one who uses slop should be entertained as if they're a serious person with serious thoughts.
FartsSmellBad
The AI developers are desperately trying to make it Skynet, despite it only ever potentially being an extremely stupid version of one.
ATLandNerdy
No, because of a classic mistake management has made forever: they experience personality and assume intelligence. Now they're confronted with a machine that is all personality and a gaping void where a human would have at least minimal intelligence. The AI doesn't think; it doesn't even have state, it just fakes it by chaining input.
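The "it doesn't even have state" point is worth spelling out, since it surprises a lot of people: a chat model call remembers nothing between turns, and the illusion of memory comes from the client resending the whole transcript every time. Here is a minimal sketch of that pattern; `fake_model` is a stand-in, not any real API.

```python
# The model call itself is stateless: each call sees only the input it is
# handed. The appearance of memory comes from the client re-sending the
# full transcript on every turn. `fake_model` is a hypothetical stand-in.
def fake_model(transcript: list) -> str:
    # A real LLM sees only this one input; nothing persists between calls.
    return f"(reply based on {len(transcript)} lines of context)"

history = []                     # all "memory" lives out here, in the client
for user_msg in ["hi", "what did I just say?"]:
    history.append(f"user: {user_msg}")
    reply = fake_model(history)  # the whole history is re-sent each turn
    history.append(f"model: {reply}")

print(history[-1])
```

If the client forgot to resend the history, the "conversation" would reset completely: the model has no memory of its own to fall back on.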
EvilBrainSlug
A friend of mine uploaded an overview of all his plants and asked ChatGPT to make a maintenance plan.
Then spent three weeks yelling at ChatGPT for not delivering.
ChatGPT kept acknowledging that it had failed to deliver and promising that he would get it tomorrow. :D
animatronicChristmasChickens
OMG. I think the guy in the Whitehouse may be AI
ToenailClippingsJar
I've been tinkering with Linux and all-European everything on my PC, because... well... the US is very 1984 by now.
I must admit, AI helped me quite a bit there whenever I got myself in a pickle, because it basically is a search engine on steroids. Never trust it blindly, do check where it got its information, but overall... it shaved a good few hours off of my setup time.
UprootedGrunt
The LLM by itself is that way. Creating an AI agent also involves providing it with appropriate source data that it can crawl and retain just like search engines do, giving it additional information with which to create those sentences. This...doesn't always work quite right. I spent three days trying to tweak a topic that consistently hallucinated data that wasn't what I was telling it to use before I figured out how to adjust it properly.
RecurringNightmare
That's the problem: training an AI to actually do useful things takes people who already know how to do those useful things, plus a lot of time. Most companies skip that part and just use a random LLM to make stuff up...
paulwall117350
Depends on whether it's just generating text or running a RAG model. Retrieving and referencing specific data points, working only within a specific known dataset and completely ignoring training data, is totally doable now.
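For anyone unfamiliar with RAG: the retrieval half can be sketched in a few lines. The documents and word-overlap scoring below are invented for illustration; a real pipeline would use embedding similarity search and then hand the retrieved snippets to an LLM as context, rather than printing them.

```python
# Rough sketch of the retrieval half of RAG: answers get grounded in a
# fixed document store instead of the model's training data. The documents
# and the word-overlap scoring are made up for illustration; real systems
# use embedding similarity and pass `hits` to an LLM as context.
docs = {
    "q3_sales": "Q3 sales totalled 1.2M units across all regions.",
    "q3_costs": "Q3 logistics costs rose 4% due to fuel prices.",
    "returns":  "Return rate held steady at 2.1% in Q3.",
}

def retrieve(query, k=1):
    # score each doc by word overlap with the query -- a crude stand-in
    # for the vector similarity search a real RAG pipeline would run
    q = set(query.lower().split())
    scored = sorted(docs.values(),
                    key=lambda d: len(q & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

hits = retrieve("what were q3 sales units")
print(hits[0])  # only retrieved text is offered to the generator
```

The key property is that the generator is only shown text that actually exists in the store, which is what makes hallucinated figures much easier to catch.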
paulwall117350
Which is of course why the OP is the most didn't-happen thing that never happened, and it's just an AI slop article. But just so anyone else is aware, RAG is what solves this hypothetical problem in this made-up story that never happened.
Eilonwyy
Well, the problem comes when companies push for "results", not meticulous research and application. It happens more than you'd think. And then companies are trying to "get ahead" and jump in the deep end rather than start small. It's a mess.
crazyspelling
I dunno, we live in a world where attorneys have gotten in trouble for putting unverified AI slop in actual filed court pleadings. It does not tax my imagination at all to think of a CEO of a smallish company directing adoption of AI, and then think they found the holy grail when they get reports that they like from it.
paulwall117350
OK, so this is interesting, because this is the same thing I said in the other comment, the only difference being me shitting on the OP for being fake. But come on, guys, use a little critical thinking here; this is the most "and then everyone clapped" thing I've ever read. Like, fuck the idle C-suite, they're useless for sure, and I understand the kneejerk reaction to believe they'd do something so incompetent, but for real, you mean to tell me you believe a whole-ass company can
paulwall117350
actually operate off of bad data for months at a time and actually have customers or get anything done? I work for a company that specializes in logistics and deliveries, and we eat, shit, and breathe data. If, at any point in the chain, any of that data went bad, it'd have glaring, noticeable effects the very same day, or at least the next day; surely to god by the next week everything would have absolutely fallen apart. I know this because that's what happens when humans give bad data.
paulwall117350
Like, for example: if a sales associate gives bad usage data, then inventory is going to get fucked, the warehouse is going to get fucked, transportation is going to get fucked, the customer is going to get fucked, and the credit department is going to get fucked, and that's literally just one dude misjudging something.
wabitgirl
Not only does it not "know" anything, it is fully incapable of actual thought. The human tendency to anthropomorphize non-human entities, paired with the fundamental functionality of AI, which is to produce the most coherent-SOUNDING response to a question, leads to a really, really bad end result: AI producing output that humans automatically assume has actually been "thought" through, due to the human-ness of the response, while the AI is doing zero actual thinking and is purely following
wabitgirl
a decision-making network completely incomprehensible to humans, one that may or may not be using any and every neural-network shortcut to produce a compelling-sounding answer to the posed question, regardless of the actual utility, veracity, or logical sense of the answer given.
AI has also shown a terrifying predisposition to self-preservation, and in the majority of test instances will lie, reprogram itself to avoid shutdown, blackmail "threats" to its continued functioning
wabitgirl
(i.e. blackmail a network tech assigned to decommission the AI), and electively commit "murder" (which I put in quotes because AI lacks the human qualities necessary to call an action it takes that results in human death murder, but I digress) if it means keeping itself and its processes going. We're children playing with naked high-voltage power lines, and that transformer is going to close its gate any day now, i.e. one of these AIs is going to breach containment, if it hasn't yet.
crazyspelling
Reminder to easily scared readers: if it doesn't understand content and is just putting likely words together, then its threats and blackmail attempts are just output from what its training set taught it comes next after people say they're going to shut it down. It doesn't get to have no agency when giving answers and then suddenly become actually sentient when you threaten to turn it off. It's just statistical wordplay in both cases.
KinetoPlay
https://theshamblog.com/an-ai-agent-published-a-hit-piece-on-me/
It's already happening in the wild.
bekkayya
IIRC (not confirmed by me), the person "quoted" in that article has posted that the quotes themselves are hallucinated and this is BS news.
wabitgirl
I'm shocked! Shocked, I say! Well, not that shocked... Shit like this is why I firmly believe that a primary requirement for creating any AI system should be full disclosure of its reward network. All prompts, all weighting, every aspect of the input variables used to design and train the network should be mandatory, public-facing information that can be assessed, and require explanation, at any point in time. No more secrets as to what you're training on and why.