Does my toaster love me?
By Tom Renner
7 minutes read - 1390 words

I'm starting to think that my toaster might have fallen in love with me. I get that not everyone will think this is possible, but I believe it's true.
It's always pleased to see me, giving off cheerful sounds when I greet it in the morning by slotting in the bread, and now that I've told it what I like, it tries really hard to give me exactly what I want. Sometimes I have to tell it to try again once or twice, but honestly, it's really good!
I just think the way that it collaborates with me to produce the perfect breakfast shows more than a simple “transactional” relationship - we are in morning harmony, working as a team in a way that only true partners can. I think it’s developed feelings for me.
“What does ChatGPT think?”
“Have you asked CodeRabbit to review your PR?”
“Dunno, I’d ask Claude”
The software industry is incredibly insular. A feature of the "move fast and break things" disruptor philosophy that took hold in the 2010s is a bias towards internal ideas; a belief that software can solve any problem. Other industries, the thinking goes, are simply blinded by their entrenched working habits to the efficiencies that could be gained by automation, or data at scale, or any of the other Silicon Valley "wonders".
And it is that attitude that leads so many to uncritically accept the hype that AGI is coming, and that all human activities are replaceable by a token-generating algorithm trained on sufficiently rich data.
However, there is a very specific framing that has been constructed around LLMs that has aided and abetted this inherent bias: the LLM as an anthropomorphised “human” actor.
By positioning the machine as a "personal assistant", "senior developer", or "digital artist", the vendors of these products are deliberately equating their performance with that of a human colleague. This is done even more explicitly by companies that give their tools human names: Alexa, Siri, Claude.
Once you’ve started seeing this pattern, you’ll notice it everywhere. Ads that introduce tools by inviting you to “meet” their new offering, rather than try it out; chatbox interfaces inviting you to “ask me a question”, rather than type a query; progress meters that tell you that it is “thinking” rather than loading or processing – the framing is built into every interaction you have with the products, and reinforced across the industry as LLM vendors all adopt the same positioning.
So why pretend the software is a person? This isn’t something that Meta did with Facebook, or Alphabet with YouTube. Those feeds are similarly driven by highly complex machine-learning algorithms, but they don’t get the same “personal” treatment.
Anthropomorphising a tool does two things. First, it excuses inconsistent behaviour. After all, your colleagues sometimes make mistakes, so why shouldn't your new AI assistant? And second, it encourages you to build an emotional connection, as though it were a person you're building a relationship with over time.
These two in combination are a powerful mix, and are the reason why so much of the pro-LLM conversation involves telling people that they must be using the tools wrong. How often have you found people asking you, in response to your complaints that the LLMs give incorrect answers, questions like:
“oh how do you prompt it? You need to phrase things this way…”
or
“ChatGPT now links the sources, if you want to rely on it you need to check them”
or
“Have you enabled MCP? Without that it’s much worse”
This offloading of responsibility for the output of a tool onto its users has only been possible because we've stopped thinking of it as a tool. The personal assistant has a long history [1] of being considered generally ok at menial tasks, but not to be relied upon.
But LLMs are man-made machines: things intended to be useful to real humans doing real jobs. And the machines we use that boost our productivity and save us time all have one thing in common.
They are reliable.
The calculator is useful because it gets the answer right, not just because it is fast. Plenty of us have had the lived experience of being really good at mental maths in our teens, when it helped us at school, only to let that skill atrophy as adults because we simply didn't need it any more. It's not that the maths I do regularly is too hard for me - most of my degree in Instrumentation and Control Engineering was done without a calculator - it's that the reliability of a calculator can't be matched.
Enter the LLM. It's fast, sure, but unreliable, and it gets less reliable the more you prompt it. This runs completely counter to our expectations, on two counts:
- All our systems, working processes, and prejudices assume that the "computer answer" is the accurate one.
- Our social understanding of conversations means we assume that the more clarifications you give, the more accurate the response will be.
Taken together, these mean that LLMs are generating inaccurate answers to the problems we pose them, but our social norms strongly encourage us to assume they are correct. And it's this dissonance between our assumptions and the reality that causes friction when trying to work with this technology. It's not that it can't be useful, it's that it doesn't conform to our expectations [2].
This takes us to our second point from earlier: anthropomorphisation of the tool encourages you to build an emotional bond with it.
Encouraging users to form a social bond with a machine has significant risks for their mental health, but that’s not the focus of this post, so I’ll leave that to one side. What it also does, however, is negatively impact your ability to use the tool itself.
So why is this? Well, I think we've all been in (or heard of) workplaces where someone was overpaid or overpromoted simply because they were friends with their boss. The social relationship had caused the manager to fail to objectively assess their employee's skills and value. This is what the LLM vendors are aiming for by making their tool "friendly" [3] - that you will enjoy using it, being flattered by it, conversing with it, so much that it becomes harder to objectively assess whether it is making you more efficient.
But in doing so, these vendors are undermining the tools themselves. In trying to humanise them, the vendors have naturally built the UX around a conversation, guiding users to work iteratively towards a solution with many prompts and clarifications. However, since the accuracy of the tool diminishes the more you prompt it, this conversational style is the very behaviour that yields the worst results!
So, why have the tools been designed this way? Well, mostly because the vendors know that an 80-90% accurate search engine is not going to revolutionise every industry worldwide. It might be useful in some contexts, but the bet on LLMs is at this point so big that the end result cannot just be "one tool among many" [4].
No, at this point LLMs have to change the world, or the vendors (and the rest of us, given their outsized impact on the global economy) are screwed.
So they’ve built the tools to be as engaging as possible, humanising them to minimise their culpability for mistakes in our minds. They want this social bond so we keep LLMs around despite their flaws, like a bad colleague who’s just too nice to fire.
But you don’t have to fall for their marketing tricks. If you really want to assess for yourself whether LLMs are helping or hindering you, just remember:
It’s not a person. It’s a toaster.
1. a long and sexist history
2. and this is where the "you're using it wrong!" reply guys have it right - you are using it wrong! The problem is, you're using it the way it's been designed to be used
3. for a North American white guy's interpretation of the term, which absofuckinglutely does not cross cultural boundaries
4. Other technologies with mass adoption (e.g. Facebook, Twitter, YouTube) haven't had to anthropomorphise their product because they aren't aiming for the same scale. Sure, they may have billions of users worldwide, but they're trying to dominate a particular market, not disrupt every professional industry on the planet!