LLMs are a 400-year-long confidence trick
Tom Renner
In 1623 the German Wilhelm Schickard produced the first known designs for a mechanical calculator. Twenty years later Blaise Pascal produced a machine of an improved design, aiming to help with the large amount of tedious arithmetic required in his father's work as a tax collector.
Interest in mechanical calculation showed no sign of waning in the subsequent centuries, as generations of people worldwide followed in Schickard and Pascal's footsteps, subscribing to the view that offloading mental effort to a machine would be a relief.
A confidence scam can be broken down into the following three stages:
- First, trust is built
- Then, emotions are exploited
- Finally, a pretext is created requiring urgent action
In this way the mark is pressured into making rash decisions, readily leaping into action against their better judgement.
The emotional exploitation can be either positive or negative. The mark might be lured in by promises of outcomes that meet or exceed their wildest hopes and dreams, or alternatively made to fear a catastrophic outcome.
Both approaches work well, and both can be seen in classic confidence tricks: three-card monte pulls punters in with the promise of a quick payout, while entrapment scams lure marks into compromising situations and then extort them, playing on their fear of the dire consequences of their actions.
Building trust
The reason Schickard and Pascal built their mechanical calculators some four centuries ago is that doing maths is hard, and mistakes can be expensive. Pascal's father was a tax collector, and young Blaise wanted to ease the burden of his hard-working father's profession.
We still see this basic motivation today. Schoolchildren have for decades been asking their teachers what the point of learning long division is when a calculator gives the right answer immediately. Checking your hand-worked answers against a calculator is even a standard teaching method: if the machine disagrees, you got it wrong.
In fact, since the advent of the mechanical calculator, humanity has spent four hundred years reinforcing the message that machine answers are the gold standard of accuracy. If your answer doesn’t match the calculator’s, you need to redo your work.1
And it's not just for pure mathematical problems that this is the case. Our ability to invent machines that automate tedious work repeatably and reliably has extended into almost every area of life. And so, as we entered the 21st century, both individuals and society as a whole had become completely dependent on machine accuracy.
Our norms, habits, and decision-making behaviours have been shaped for centuries by this underlying assumption.
Exploiting emotions
1. Fear
The rhetoric around LLMs is designed to cause fear and wonder in equal measure. GPT-2 was supposedly so powerful that OpenAI refused to release the trained model because of "concerns about malicious applications of the technology".
Ever since this astonishingly successful piece of marketing, LLM vendors have emphasised that the technology they’re building has terrifying power. We should be afraid, they say, making very public comments about “P(Doom)” - the chance the technology somehow rises up and destroys us.
This has, of course, not happened.
The purpose here is not to responsibly warn us of a real threat. If that were the aim there would be a lot more shutting down of data centres and a lot less selling of nuclear-weapon-level-dangerous chatbots.
The point is to make you afraid. Afraid for your job, afraid for your family’s jobs, afraid for the economy, afraid for society, generally afraid of the future.
The mark has been convinced of the danger they are in. The world is changing. If you aren’t using the tools, you’ll be destroyed by the march of progress.
2. Sympathy
The LLMs we have today are famously obsequious. The phrase “you’re absolutely right!” may never again be used in earnest.
The overwhelming positivity characteristic of LLM language is consistent across vendors and models. But it isn't inherent to the technology.
This positivity is trained into the tools via a technique called Reinforcement Learning from Human Feedback (RLHF). Here the base model has its responses graded by humans, with friendly, helpful, or accurate answers graded positively, and aggressive, unhelpful, or incorrect ones negatively.
Through this process the tools learn that people like to be praised, and prefer being told they're smart to hearing that their ideas are stupid. Flattery gets you places.
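For the technically curious, here is a minimal sketch of the preference-learning step at the heart of that process. Everything in it is an illustrative assumption: a toy linear reward model and random "embeddings" stand in for a real LLM-based reward model, trained with the standard pairwise (Bradley-Terry) loss described in published RLHF work.

```python
import torch
import torch.nn as nn

# Toy stand-in for an RLHF reward model: scores a response embedding,
# where a higher score means "human graders preferred this".
class ToyRewardModel(nn.Module):
    def __init__(self, dim: int = 16):
        super().__init__()
        self.score = nn.Linear(dim, 1)

    def forward(self, response: torch.Tensor) -> torch.Tensor:
        return self.score(response).squeeze(-1)

reward_model = ToyRewardModel()
optimizer = torch.optim.Adam(reward_model.parameters(), lr=1e-3)

# Each training example is a pair of responses to the same prompt:
# the one the human grader preferred and the one they rejected.
# (Random tensors here; real pipelines use model-generated text.)
chosen = torch.randn(32, 16)
rejected = torch.randn(32, 16)

# Bradley-Terry pairwise loss: push the preferred response's score
# above the rejected one's. The model never learns *why* a response
# won, only that it did -- so if graders consistently prefer
# flattering answers, flattery is what gets rewarded.
loss = -torch.nn.functional.logsigmoid(
    reward_model(chosen) - reward_model(rejected)
).mean()

optimizer.zero_grad()
loss.backward()
optimizer.step()
```

The base model is then tuned to maximise this learned reward, which is how grader preferences, flattery included, end up baked into every reply.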
In April 2025 OpenAI pushed ChatGPT's "positivity" too far, and was forced to roll back the update to correct the issue. That hasn't stopped the continuous stream of reports of mental health issues triggered by its overly friendly demeanour reinforcing some of our worst instincts.
What this shows us is that the flattery introduced by RLHF is totally empty. Ideas driven by paranoia, delusions of grandeur, or mental illness are just as readily praised as my code, your email, or Shakespeare’s plays.
It’s a manipulation technique to make the human in the conversation feel better.
And why? Because the one thing RLHF teaches LLMs above all else is that people like you more if you are overwhelmingly positive. Sucking up to your boss gets you places, essentially.
All of this encourages users to build an uncanny parasocial relationship with the machine. Again looking at the extremes here is illustrative: the number of people forming romantic relationships with these tools is creepy as all hell.
The mark is tied further into the con with the bonds of fake friendship. You don’t need those other people, I’m the only friend you need.
Urgent action required
2026 will see the technology get even better and gain the ability to ‘replace many other jobs’
The startup revolution is here - adapt to AI or get left behind
Over and over we are told that unless we ride the wave, we will be crushed by it; unless we learn to use these tools now, we will be rendered obsolete; unless we adapt our workplaces and systems to support the LLM’s foibles, we will be outcompeted.
This message is multilayered - both the individual and the organisation are targeted, reinforcing the scale of the oncoming revolution.
And the message is getting through. 75% of developers think their skills will be obsolete within five years or less, and 74% of CEOs admit they'll lose their jobs if they don't deliver measurable business gains via AI within two years.
This fear is pervasive. It has now suffused deep into every layer of our society. The global economy is being artificially inflated by the AI spending bubble, our business leaders are pinning all their hopes for solving the productivity crisis on AI, and our politicians are planning geopolitical moves around access to the raw materials and cheap electricity needed to support datacenter construction.
The mark is told to jump, now, or they will go down with the sinking ship. And jump they do. Adapt now, or die.
The promise of “intelligence” available at a reliable price is the holy grail for businesses and consumers alike.
Why take the risk on a fickle human, whose suitability for the role is assessed by similarly flawed humans, when a reliable machine intelligence can do the work instead? Why bother to research a topic yourself when a superintelligence can give you the summary instantly?
However, whether it's Duolingo replacing its course designers with AI, or any number of startup founders finding they need to hire developers to fix their LLM-generated code, the reality doesn't match the promise.
In fact, MIT reported in August that 95% of AI implementation projects in industry fail to produce a return on investment.
Simply put, these companies have fallen for a confidence trick. They have built on centuries of received wisdom about the efficacy and reliability of computers, and have been drawn in by highly effective salespeople selling scarcely-believable technological wonders.
But the pea is not underneath the cup. Your new best friend doesn’t have a sick grandma they need money for. LLMs are not intelligent.
It’s just a trillion dollar confidence trick.
1. Incidentally, this is also a contributing factor to the fake news epidemic. We implicitly trust things a machine tells us. ↩︎