Large Language Models (LLMs) ‘hallucinate’. That is to say, they can produce responses which sound convincingly true but are actually nonsensical or false. This may occur because of mistakes in the training data, but it is also a characteristic of their modelling process. LLMs are not databases or search engines, but statistical models of how bits of words relate to other bits of words. They don’t know what they’re talking about, nor how to confirm it is correct. Yet they can say it with realistic flair, which can easily lead us astray and waste our time, as developer Daniel Stenberg has found.
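To make the ‘statistical model of bits of words’ point concrete, here is a minimal sketch, written for this post rather than taken from any real system: a bigram model over an invented three-sentence corpus. Real LLMs use neural networks over sub-word tokens, not whole-word counts, but the principle of predicting what plausibly comes next is the same.

```python
import random

# A toy sketch, purely for illustration: a bigram "language model" built
# from a tiny invented corpus. It records only which word tends to follow
# which; it has no notion of whether a sentence is true.
corpus = (
    "the moon orbits the earth . "
    "the earth orbits the sun . "
    "the sun orbits the galaxy ."
).split()

# Count what followed each word in the corpus.
follows = {}
for prev, nxt in zip(corpus, corpus[1:]):
    follows.setdefault(prev, []).append(nxt)

def generate(start, max_words=20):
    """Sample each next word from whatever statistically followed it
    before, stopping at a full stop (or a safety limit)."""
    words = [start]
    while words[-1] != "." and len(words) < max_words:
        words.append(random.choice(follows[words[-1]]))
    return " ".join(words)

print(generate("the"))
# One possible output: "the moon orbits the sun ." - fluent, and false.
# The model cannot tell the difference; it only knows word statistics.
```

Ask it to continue ‘the’ and it may confidently announce that the moon orbits the sun: fluent, statistically plausible, and wrong. That is the hallucination problem in miniature.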
Human beings have, of course, long been able to speak in ways which sound convincingly true but are actually false. Sometimes, that is done with conscious intent. A con artist lies to take money from you. An abuser lies to get access to you through your emotions, conscience or other psychological fragility. A friend may lie to you to save your feelings. Whatever the reason, people consciously deceive.
However, we also lie instinctively. Words flow quickly to defend against an accusation we don’t like or to silence a conversation we find embarrassing. The words are untrue, but they are not part of a conscious strategy to achieve a goal: we’re just reacting with whatever words pop into our heads. In such moments, are we ‘hallucinating’ in the same way as LLMs? Our brains are on overdrive; we’re rapidly creating the next word to say, and then the next word to say, without our reason being fully part of the process. Indeed, we sometimes describe others (or ourselves) as having engaged their mouths before using their brains. Do LLMs, by accident, model this trait of instinctive lying?
A materialist believes there is a purely physical – that is, brain-based – explanation for this habit. Such a person may well consider that an LLM models in technology what occurs in biology, albeit not fully, since we often are, or become, conscious of our auto-lying – a post-production feature which LLMs still lack. But those of us who know we are more than our chemical processes – that the mind is more than the brain and that consciousness is more than atomic reactions – see deeper into it.
Firstly, we remember that the ‘father of lies’ (John 8:44) is capable of manipulating us in many ways. So to model our instinctive lying truly, we would need a second AI model applying pressure, via a hidden prompt, on the first, pushing it to hallucinate.
Secondly, the reason we sin at all – and lying is sinful (Revelation 21:8) – is that at the start of our race we wanted to judge good and evil for ourselves rather than listen to God’s judgement (Genesis 3:5). That immature rebellion has left us with a damaged conscience which fails to tame our words correctly. And conscience, too, is not modelled by LLMs.
So do we ever ‘hallucinate’ like AI? Possibly. But what we certainly, and sadly, do is lie.
Photo by Taras Chernus on Unsplash
Cover photo by Denley Photography on Unsplash
Scripture quotations are from the ESV® Bible (The Holy Bible, English Standard Version®), © 2001 by Crossway, a publishing ministry of Good News Publishers. Used by permission. All rights reserved. The ESV text may not be quoted in any publication made available to the public by a Creative Commons license. The ESV may not be translated in whole or in part into any other language.