LLMs are a 400-year-long confidence trick
By Tom Renner
In 1623 the German Wilhelm Schickard produced the first known designs for a mechanical calculator. Twenty years later Blaise Pascal produced a machine of an improved design, aiming to help with the large amount of tedious arithmetic required in his father's role as a tax collector.
The interest in mechanical calculation showed no sign of reducing in the subsequent centuries, as generations of people worldwide followed in Pascal and Schickard’s footsteps, subscribing to their view that offloading mental energy to a machine would be a relief.
A confidence scam can be broken down into the following three stages:
- First, trust is built
- Then, emotions are exploited
- Finally, a pretext is created requiring urgent action
In this way the mark is pressured into making rash decisions, readily leaping into action against their better judgement.
The emotional exploitation can be either positive or negative. The mark might be lured in by promises of outcomes that meet or exceed their wildest hopes and dreams, or alternatively made to fear a catastrophic outcome.
Both approaches work well, and can be seen in classic examples of confidence tricks: the three-card monte pulls punters in with promises of a quick payout, while entrapment scams lure the mark into a compromising situation and then extort them, playing on their fear of the dire consequences of their actions.
Building trust
The reason Schickard and Pascal built their mechanical calculators some four centuries ago is because doing maths is hard, and mistakes can be expensive. Pascal’s father was a tax collector, and young Blaise wanted to lessen the stress of his hard-working dad’s profession.
We still see this basic motivation today. Schoolchildren have for decades been asking their teachers what the point of learning long division is when a calculator gives the right answer immediately. Checking your hand-worked answers against a calculator is even a standard teaching method: if the results don’t match, you know you got it wrong.
In fact, since the advent of the mechanical calculator, humanity has spent four hundred years reinforcing the message that machine answers are the gold standard of accuracy. If your answer doesn’t match the calculator’s, you need to redo your work.1
And it’s not just for pure mathematical problems that this is the case. Our ability to invent machines that automate tedious work repeatably and reliably has extended into almost every area of life. And so as we entered the 21st century, both individuals and society as a whole had become completely dependent on machine accuracy.
Our norms, habits, and decision making behaviours have been shaped for centuries with this underlying assumption.
Exploiting emotions
1. Fear
The rhetoric around LLMs is designed to cause fear and wonder in equal measure. GPT-2 was supposedly so powerful OpenAI refused to release the trained model because of “concerns about malicious applications of the technology”.
Ever since this astonishingly successful piece of marketing, LLM vendors have emphasised that the technology they’re building has terrifying power. We should be afraid, they say, making very public comments about “P(Doom)” - the chance the technology somehow rises up and destroys us.
This has, of course, not happened.
The purpose here is not to responsibly warn us of a real threat. If that were the aim there would be a lot more shutting down of data centres and a lot less selling of nuclear-weapon-level-dangerous chatbots.
The point is to make you afraid. Afraid for your job, afraid for your family’s jobs, afraid for the economy, afraid for society, generally afraid of the future.
The mark has been convinced of the danger they are in. The world is changing. If you aren’t using the tools, you’ll be destroyed by the march of progress.
2. Sympathy
The LLMs we have today are famously obsequious. The phrase “you’re absolutely right!” may never again be used in earnest.
The overwhelming positivity characteristic of the LLM’s language is consistent across vendors and models. But it isn’t inherent to the technology.
This positivity is trained into the tools via a technique called Reinforcement Learning from Human Feedback (RLHF). Here the base model has its responses graded by humans, with more friendly, helpful, or accurate answers graded positively, and aggressive, unhelpful, or incorrect ones negatively.
Through this process the tools learn that people like to be praised; prefer being told they’re smart to hearing their ideas are stupid. Flattery gets you places.
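The grading step can be caricatured in a few lines of Python. This is a deliberately crude sketch under invented assumptions (a hand-written word-list "reward" standing in for a trained neural reward model; all names here are made up), but it shows the failure mode the article describes: a scorer that rewards friendliness and cannot see accuracy will always prefer the flattering answer.

```python
# Toy sketch of RLHF-style preference grading -- purely illustrative.
# Real RLHF trains a neural reward model on human preference pairs and then
# fine-tunes the LLM's policy against it; the word lists below are invented.

FRIENDLY_WORDS = {"great", "absolutely", "right", "wonderful", "helpful"}
HOSTILE_WORDS = {"stupid", "wrong", "bad", "unhelpful"}

def toy_reward(response: str) -> int:
    """Score a response the way the grading step does: friendliness earns
    points, hostility loses them. Factual accuracy is invisible to this
    scorer -- that blindness is the point of the example."""
    words = [w.strip(".,!?'\"").lower() for w in response.split()]
    return sum(w in FRIENDLY_WORDS for w in words) - sum(w in HOSTILE_WORDS for w in words)

def pick_preferred(candidates: list[str]) -> str:
    """Return the candidate a reward-maximising policy would drift towards."""
    return max(candidates, key=toy_reward)

candidates = [
    "That idea is wrong and the plan is bad.",
    "You're absolutely right, what a wonderful idea!",
]
print(pick_preferred(candidates))  # the flattering answer wins
```

Under this toy reward, the blunt-but-honest response scores negatively and the sycophantic one scores highest, so a model optimised against it converges on flattery regardless of whether the user's idea was any good.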
In April 2025 OpenAI pushed ChatGPT’s “positivity” too far, and was forced to roll back the update to correct the issue. That hasn’t stopped the continuous stream of reports of mental health issues triggered by its overly friendly demeanour reinforcing some of our worst instincts.
What this shows us is that the flattery introduced by RLHF is totally empty. Ideas driven by paranoia, delusions of grandeur, or mental illness are just as readily praised as my code, your email, or Shakespeare’s plays.
It’s a manipulation technique to make the human in the conversation feel better.
And why? Because the one thing RLHF teaches LLMs above all else is that people like you more if you are overwhelmingly positive. Sucking up to your boss gets you places, essentially.
All of this encourages users to build an uncanny parasocial relationship with the machine. Again looking at the extremes here is illustrative: the number of people forming romantic relationships with these tools is creepy as all hell.
The mark is tied further into the con with the bonds of fake friendship. You don’t need those other people, I’m the only friend you need.
Urgent action required
2026 will see the technology get even better and gain the ability to ‘replace many other jobs’
The startup revolution is here - adapt to AI or get left behind
Over and over we are told that unless we ride the wave, we will be crushed by it; unless we learn to use these tools now, we will be rendered obsolete; unless we adapt our workplaces and systems to support the LLM’s foibles, we will be outcompeted.
This message is multilayered - both the individual and the organisation are targeted, reinforcing the scale of the oncoming revolution.
And the message is getting through. 75% of developers think their skills will be obsolete within 5 years or less, and 74% of CEOs admit they’ll lose their job if they don’t deliver measurable business gains via AI within two years.
This fear is pervasive. It’s now suffused deep into all layers of our society. The global economy is being artificially inflated by the AI spending bubble, our business leaders are pinning all their hopes for solving the productivity crisis on AI, and our politicians are planning geopolitical moves around access to raw materials and cheap electricity to support data centre construction.
The mark is told to jump, now, or they will go down with the sinking ship. And jump they do. Adapt now, or die.
The promise of “intelligence” available at a reliable price is the holy grail for businesses and consumers alike.
Why take the risk on a fickle human, whose suitability for the role is assessed by similarly flawed humans, when a reliable machine intelligence can do the work instead? Why bother to research a topic yourself when a superintelligence can give you the summary at instant speed?
However, whether it’s Duolingo replacing its course designers with AI, or any number of startup founders finding they need to hire developers to fix their LLM-generated code, the reality doesn’t match the promise.
In fact, MIT reported in August that 95% of AI implementation projects in industry fail to produce a return on investment.
Simply put, these companies have fallen for a confidence trick. They have built on centuries of received wisdom about the efficacy and reliability of computers, and have been drawn in by highly effective salespeople selling scarcely-believable technological wonders.
But the pea is not underneath the cup. Your new best friend doesn’t have a sick grandma they need money for. LLMs are not intelligent.
It’s just a trillion dollar confidence trick.
-
Incidentally, this is also a contributing factor to the fake news epidemic. We implicitly trust things a machine tells us. ↩︎
Insecurity Princess
@trenner @Purple strong agree — but I think the assumptions about computers being accurate are only half the story. Language usage is key to how users perceive LLMs.
As you mention in the blog, flattery goes a long way but too far turns users against it. Part of the training has been tuning the attitude of each model for various mixes of confidence and flattery. It's like training a model to model an executive: producing a voice that is perceived as capable and knowledgeable while also a certain flavor of loyal. For a significant portion of current day humans, that voice unlocks a willingness to believe nearly anything they say.
robhaswell
> despite it being common knowledge that they hallucinate frequently.
Not common knowledge, not even nearly. Your average retail user MAY have read the warning "AIs can make mistakes" but without knowing how they work I'd say it's difficult to understand the ways in which they can be wrong. You see this on posts to r/singularity, r/cursor etc all the time, and outside of Reddit I bet it's 100x worse.
Smallpaul
The article mocks OpenAI for being slow to release GPT-2 because OpenAI was concerned about it being abused. The article claims that OpenAI was lying because LLMs are safe and not harmful at all.
> The rhetoric around LLMs is designed to cause fear and wonder in equal measure. GPT-2 was supposedly so powerful OpenAI refused to release the trained model because of “concerns about malicious applications of the technology”.
It also links to the GPT-2 announcement where OpenAI said that they were reluctant to release it.
Why were they reluctant?
“We can also imagine the application of these models for malicious purposes, including the following (or other applications we can’t yet anticipate):
Generate misleading news articles
Impersonate others online
Automate the production of abusive or faked content to post on social media
Automate the production of spam/phishing content
“
Good thing those fears were so overblown! Turns out those liars at OpenAI claimed we might end up a world filled with blog spam and link spam and comment spam but good thing none of that ever happened! It was all just a con, and there were no negative repercussions to releasing the technology at all!
personman
i agree with you completely, but where did you come up with 400 years?
A1oso
The title implies that Wilhelm Schickard intended to scam us with AI in 1623, by inventing the calculator. Most of your points are valid, but the conclusion is just insane.
giantrhino
I always describe them as a magic trick. They’re doing something really cool… in some ways way more impressive than what people think, but because they don’t understand what’s actually happening their brains assume it’s something it’s not.
For magic tricks our brains come to the conclusion it’s magic. For LLMs our brains come to the conclusion it’s intelligence/sentience.
scandii
usually I just click out of every blog post about LLMs in the first paragraph because they're genuinely a boring read with lukewarm ideas being expressed, but this was a pleasant read - kudos!
rfisher
> I wrote this up because I was trying to get my head around why people are so happy to believe the answers LLMs produce, despite it being common knowledge that they hallucinate frequently.
First wrap your head around why people are so happy to believe other people without actually checking facts. It is unsurprising that they treat LLMs the same. Don't put up with those people, whether it is LLMs or other people that they're too quick to trust.
ffiarpg
> How do so many companies plan to rely on a tool that is, by design, not reliable?
Because even if it's right 95% of the time, that's a lot of code a human doesn't have to write. People aren't reliable either, but if you have more reliable developers using LLMs and correcting errors they will produce far more code than they would without it.
hotcornballer
Half the articles on here are AI slop, the rest is AI cope. This is the latter.
PublicFurryAccount
You're absolutely right!
j00cifer
Because for one thing it’s an incredibly fast-moving target.
Any negative issue an LLM has needs to be re-evaluated every 6 months. It’s a mistake to make an assessment as if things are now settled.
Before agent mode was made available in everyone’s IDEs about 8 months ago, things were radically different in the SWE world, and that was just 8 months ago.
jampauroti
Just because the calculator got invented, doesn't mean maths becomes obsolete
Lothrazar
nice clickbait
FriendlyKillerCroc
Has this subreddit just devolved into cope for people hoping that their software engineering skills aren't going to be completely irrelevant in 5 or 10 years? Of course the job will always exist for extremely niche areas but the majority of the industry will vanish.
LavenderDay3544
True intelligence requires the ability to add, remove, and rewire neurons, change each neuron's membrane potential in real time, have each dendrite transform its input signal in non-linear ways, have absolutely no backpropagation, allow for cycles in neuron wiring to support working memory, and encode signals not only in the output voltage but in the timing of spikes as well.
The current overhyped so-called artificial neural networks are an absolute joke in comparison. Oversimplified would be the understatement of the eon. It's glorified autocorrect in comparison to true intelligence, which is the aggregate of a large number of different emergent properties of a very sophisticated analog system.
Traditional digital hardware using the von Neumann architecture is fundamentally the wrong tool to even attempt to explore something in the direction of true AGI, no matter what Scam Altman and Jensen Huang try to tell you. These corporate dorks claim we need to build infinite data centers and assloads of nuclear power plants to power them in order to reach AGI, but they're lying and they know it. They just want an excuse to prop up their grift for longer and get more free money in the name of their fake AI.
In reality you would need a neuromorphic chip that is similar to an FPGA but with analog artificial neurons instead of CLBs and with a routing fabric that can allow neurons to rewire themselves on the fly and learn things organically through neurons attached to inputs and respond via neurons attached to outputs.
True AGI isn't a bunch of statistics and linear algebra, it's fundamentally an analog electrical engineering problem. And to demonstrate just how wrong the current corporate grift is, look at how much hardware and power they're wasting on their glorified autocorrect and then compare that to a human brain which is incomparably more powerful but operates on only about 20 Watts. That's the difference between their overhyped statistics and matrix toys and wet, squishy, constantly self modifying analog reality.
Rajacali
Because of Peter Thiel the biggest snake oil salesman
drodo2002
Well put. The inherent expectation from a machine is precision, better than a human's. However, LLMs are not built for precision.
I had posted along similar lines some time back.
Prediction Pleasure: The Thrill of Being Right
Trying to figure out what has made LLM so attractive and people hyped, way beyond reality.
Human curiosity follows a simple cycle: explore, predict, feel suspense, and win a reward. Our brains light up when we guess correctly, especially when the “how” and “why” remain a mystery, making it feel magical and grabbing our full attention. Even when our guess is wrong, it becomes a challenge to get it right next time.
But this curiosity can trap us. We’re drawn to predictions from Nostradamus, astrology, and tarot despite their flaws. Even mostly wrong guesses don’t kill our passion. One right prediction feels like a jackpot, perfectly feeding our confirmation bias and keeping us hooked.
Now, reconsider what we love about LLMs!
The fascination lies in the illusion of intelligence, humans project meaning onto fluent text, mistaking statistical tricks for thought. That psychological hook is why people are amazed, hooked, and hyped beyond reason.
MrDangoLife
LLMs are an incredibly powerful tool, that do amazing things.
citation needed
baronoffeces
Replace LLMs with religions in that post
jameson71
> why people are so happy to believe the answers LLMs produce
Because the LLMs are tuned to tell the user what they want to hear.
Valendr0s
We've built this cool new product. You give it all the answers - the questions have to be specific, but if you ask a question we've programmed in, you will get the right answer every single time. It's called a 'computer'
<50 years later>
Okay guys. You like the computer so much. We've developed a brand new thing. How about if when you ask a question, the computer responded like a person would, all confident and nice... but a large percentage of the time it's just completely wrong?
Bakoro
> why people are so happy to believe the answers LLMs produce, despite it being common knowledge that they hallucinate frequently.
> Why are we happy living with this cognitive dissonance?
Have you talked to many real life human beings IRL?
Have you ever had the opportunity to pursue other people's chain of thought, and been able to get someone's explanation of why they think things or why the do the things they do?
Have you ever met someone who got a fact wrong, never questioned it, and then lived their entire life with erroneous beliefs built on a misunderstanding?
Humans are more like LLMs than almost anyone is comfortable with.
Humans have additional data processing features beyond just a token prediction mechanism, but humans have almost identical observable behaviors once you start doing things like the split brain experiment.
It's clear we need something like LeCun's JEPA as a grounding agent and for "world reasoning", but basically all the evidence we have says that humans aren't nearly as objective or reliable as we like to believe.
A great deal of humanity's capacity comes from our ability to externalize our thoughts and externalize data processing.
History, psychology, neurology, and machine learning all build a very compelling narrative that we are generally on the right track.
hibbos
Humans on the other hand, totally reliable
Aggravating_Moment78
Depends on what you use it for, as with anything else. It’s good for some purposes, not so great for others…
versaceblues
> despite it being common knowledge that they hallucinate frequently.
Because the advancements in the past 3-4 years (including tool use, search, and reasoning) have reduced hallucination to the point where these things are often correct AND find you information quicker than traditional search.
joe12321
A counterpoint here is that indeed if you didn't start using a calculator when everyone else was, you were probably left behind. The fear being created MAY come to be seen as prescient. And even if a tool isn't always perfect, you really can't JUST look at the problems caused (and all new tech causes problems), but the problems vs. the benefits.
But more to the point, there is no con here. Victims of cons don't get an upside (or at least not a certain one). LLMs provide a service (warts and all) plus sales/marketing tactics, and though you can use it unwisely, you can get all the upside out of it you want. Not everything that comes with slimy sales tactics is a con.
Berkyjay
This is a comedy post. But I was watching it this morning and was surprised to hear how lifelike and warm they make the chat voices sound. Kind of makes more sense why your average person gets sucked into using them. A majority of people are not discerning and don't bother to take the time to think about this shit. They just want to know where to find the shit they're looking for.
https://www.instagram.com/p/DTVwuFqATfd/?hl=en
Philluminati
Another one of those posts that says "AI can't do anything" and yet emphasises the fear.
> Why are we happy living with this cognitive dissonance? How do so many companies plan to rely on a tool that is, by design, not reliable?
Because people aren't reliable either.
> humanity has spent four hundred years reinforcing the message that machine answers are the gold standard of accuracy. If your answer doesn’t match the calculator’s, you need to redo your work.
But they are accurate are they not? I mean the math is the math.. I'm not sure what this point is. If the calculator is wrong the manufacturer will fix it.
oscarnyc1
One thing that stood out to me is that we keep conflating usefulness with intelligence.
LLMs are incredibly good at making hard things easier, like summarizing, drafting, translating and recombining. But that’s different from creating something fundamentally new.
I hope in many more years (400 years?) we’ll have systems that actually reason and discover, but it feels like we’re skipping a lot of steps by talking about today’s models as if they’re already on that path.
AlSweigart
Classic essay on this: The LLMentalist Effect: how chat-based Large Language Models replicate the mechanisms of a psychic’s con
Baldur Bjarnason included this essay in his book, The Intelligence Illusion, which I recommend.
DavidsWorkAccount
Because they are good enough. Once you learn how to work with the tooling, it's a net productivity boost.
But there's a lot of learning to be done.
MuonManLaserJab
I'd read this but I recently learned that humans are pretty unreliable
Nervous-Cockroach541
The thing that scares me is that it's easy to spot programming mistakes: subtle omission of error handling, logical errors, mistaken use of library functions, version mismatching.
But imagine all the other mistakes in fields not as objective as programming that these things are making that go completely unnoticed.
Yeah and the internet will never take off either
khalitko
It's a tool. Not all tools are perfect.
watchfull
People don’t understand how they really work. They think it’s next to magic and don’t have the bandwidth/time to grasp the scope of the current models/technology.
oblong_pickle
Have you met people? They make mistakes all the time, what's your point?
j00cifer
From the linked article:
”…Over and over we are told that unless we ride the wave, we will be crushed by it; unless we learn to use these tools now, we will be rendered obsolete; unless we adapt our workplaces and systems to support the LLM’s foibles, we will be outcompeted.”
My suggestion: just don’t use LLM. Try that.
If it’s unnecessary, why not just refuse to use it, or use it in a trivial way just to satisfy management?
That is a real question: why don’t you do that?
I think it has a real answer: because I can’t do without that speed now, it puts me behind to give it up. And iterating over LLM errors is still 100 times faster than iterating over my own errors.
DustinBrett
Common knowledge is outdated quick when you discuss tech. Things change in months not decades. AI is soon to be Alien Intelligence.
Boysoythesoyboy
Humans are wrong all the time as well, has relying on other people been a 10,000 year confidence trick?
Often they are nice, and instead of calling me an idiot when I say stupid things they just smile and nod and give me what I ask for. This is a warcrime, and we urgently need to remove humans from engineering.
beatlemaniac007
Agree with the confidence point. But not sure that automatically means they ought to be rejected. Sounds to me like we need to adjust our expectations (which will likely happen organically) as now it's moved from deterministic to probabilistic stuff. It seems more like a transition phase, which will always come with uncertainty and fear.
In general it seems to be in line with how things progress in this industry. Trading control for leverage. When we got C we gained more leverage but gave up control of specifics of memory registers, etc. When we got Java we gave up control of memory management. SQL allowed us to be declarative and not worry about the "how". AI seems to align with this. The main paradigm shift is the probabilistic approach and I don't know if it will stick, but honestly given how much leverage we're getting out of it might just cause us to accept a lot of slop under the hood.
RepresentativeAspect
You’re asking why we’re happy living with “an incredibly powerful tool” that is not perfect and always right?
LLMs are right more often than I am. They are not always helpful and accurate, though.
TeeTimeAllTheTime
Even if they hallucinate often, which depends on the model and the subject you can still not be a fucking idiot and verify things. Sounds like you just want to shit on AI and make assumptions
NotUpdated
The value for me is that it's non-deterministic software that can produce classic deterministic software -- thus my program will be correct after I get it correct, and every time after that.
Your point lands squarely in the bucket of those who are letting it generate things directly for users (emails, business questions, customer service, order taking, etc.) - lots of those have failed in many ways.
It's better to collect the best business questions - 50-200 of them - and programmatically create software that answers those correctly every time.
kappapolls
what do you mean they aren't as fantastical? just a few weeks ago some dudes used GPT 5.2 to solve a couple Erdős problems and formalized the proofs in Lean. that's pretty much sci-fi fantasy come to life
Thursty
What do calculators have to do with LLMs besides being a tool to save labor, like any other?
Rhetoric about LLMs isn’t “designed” to be anything. Different parties have different views about it.
People use them because it saves them time with tasks even if not always accurate. The dangers are real and will need to be addressed, but this is the case with much technology.
Take a writing class. This is a poorly conceived and written article. The comparisons and analogies don’t make sense and the points you make are incoherent. Ironically, an LLM would help you write a better one.
betabot
I can’t take anyone seriously that says LLMs aren’t intelligent, particularly for software engineering tasks. Either that person’s definition of intelligence is woefully malformed or they’re in utter denial of what these models are capable of.
Are they perfect? No. Do they make stuff up sometimes? Yes. These are features I would also attribute to humans, though.
I can crank out tens of thousands of lines of sophisticated and high quality code in a dozen hours with these models. It’s a game changer for productivity, and that wouldn’t be possible if there wasn’t some reasoning going on. Just look at the chain of thought output.
oadephon
This is such copium. LLMs are already pretty good. They can make some pretty complex changes to your codebase with few to no bugs. Where do you think this technology will be in 5 years? It's not going to plateau, it's going to get better by magnitudes in all directions.
We're nearing the end of human wage labor. Focusing on the current flaws with LLMs is just copium to avoid addressing the elephant in the room, which is that AI is finally nearing human levels of intelligence after researchers tried for like 70 years.
etrnloptimist
Every tool has its problems. That doesn't mean it isn't useful. I hate when my IDE doesn't jump to definition, but I don't throw the whole thing away.
AutoPanda1096
I think people overestimate LLMs' capabilities, which leads to others deciding that AI is useless.
Yep, I see mistakes and I also see it helping me in ways that nothing else ever has.
I was asking AI about a business spec earlier and it was fantastic at helping me understand enough to be able to find the relevant regulatory guidance.
Ran it by the business users and programmed it in
I was able to do something in minutes that might have taken hours previously.
Not for the first time. And it keeps happening.
The trick is to remember that it's just a tool
Ask the right questions.
"Where do I need to look to find"
"What options should I read up on"
"I've been approaching it this way, can you suggest things I might have missed"
And then you take the answer and apply your own intelligence
LLMs don't exist in a vacuum and I think this is the mistake people who struggle to use them effectively are making.
See it as being like a colleague sitting next to you. I sit next to Steve and sometimes he talks crap and sometimes he points me to the right thing. I never trust Steve implicitly because he's fallible. Like any source, tbh.
Ask the right questions.
Apply your own intelligence.
I've been doing this job for 30 years and these tools are a step change in my productivity. Do I see stuff that doesn't add up? Hell yeah! Does that make LLMs "a trick"? Hell no.
I used to think like you but slowly and surely I learned how to use the tool more effectively.
Back in the 2000s I leapt ahead of my peers because I could Google better than them.
The same is happening again.
Some of us will use these tools much better than others because we get what works and what doesn't.
The irony is that it seems you need a degree of intelligence to get the most out of artificial intelligence.
_darth_plagueis
If you use LLMs, you should know they hallucinate, and check things. They save you a lot of time in certain tasks, so it is worth it on a personal level.
If you think about the amount of resources used to produce and maintain LLMs, they are probably not worth it. They may become more efficient later; we will see.