You’ve probably never heard of Joseph Weizenbaum, a German-American computer science professor at the world-renowned Massachusetts Institute of Technology (MIT). You’ve probably never heard of ELIZA, either – the computer programme he created to demonstrate that machines could not replicate talking mental health therapy.
At least, that was the intention. But as students of his work would be quick to tell you, that’s not quite how things worked out.
Let’s rewind a little. The Sixties is now recognised as having been a hothouse of new trends and new ideas. Not just around fashion, music and film, but also in social consciousness (this is the age of peace and love and antiwar sentiment, after all) and in technological advancement.
Let’s not forget, the Sixties was the decade of the Space Race – a period when computers really came of age, not just conceptually, but also in terms of practical application.
Whilst it’s easy to mock the vast supercooled warehouses that held banks upon banks of computers that, collectively, generated a fraction of the computing power that sits in your mobile phone today, let’s not forget that they nevertheless took Man to the Moon.
And in the instant of achieving this feat, watched on televisions by 650 million people across the globe, the world came to believe that computers had the potential to do pretty much anything that had previously been done by a human.
This also extended to providing therapy to people with mental health challenges. And why not? And so Weizenbaum created ELIZA – the name a playful nod to Eliza Doolittle, the ingenue in George Bernard Shaw’s Pygmalion who is taught elocution and becomes a social climber as a result.
The simplistic explanation of this experiment is that ELIZA would ‘listen’ to her subject and then repeat back what ‘she’ had ‘heard’ in a variety of ways that were intended to prompt the subject to expand on what they had said without leading them to a specific conclusion.
This, of course, is the cornerstone of talking therapy, probably best espoused by Carl Rogers, the eminent American psychologist who founded the concept of person-centred therapy – the basis of which was to guide the patient to explore their issue to a point of self-realisation or self-actualisation.
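To make that reflection trick concrete, here is a minimal illustrative sketch in Python (my own toy reconstruction, not Weizenbaum’s original code) of the kind of keyword matching and pronoun swapping that ELIZA relied on:

```python
import random
import re

# A few illustrative reflection rules, loosely in the spirit of ELIZA's
# DOCTOR script: match a keyword pattern, then echo the speaker's own words
# back inside an open-ended, non-leading prompt.
RULES = [
    (r"i feel (.+)", ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (r"i am (.+)",   ["What makes you say you are {0}?", "Do you often think you are {0}?"]),
    (r"my (.+)",     ["Tell me more about your {0}.", "Why does your {0} matter to you?"]),
    (r"(.*)",        ["Please go on.", "Can you say more about that?"]),
]

# Swap first and second person so the echo reads naturally.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are", "you": "I", "your": "my"}

def reflect(fragment: str) -> str:
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.lower().split())

def respond(statement: str) -> str:
    for pattern, responses in RULES:
        match = re.match(pattern, statement.lower().strip())
        if match:
            groups = [reflect(g) for g in match.groups()]
            return random.choice(responses).format(*groups)

print(respond("I feel nobody listens to me"))
# e.g. "Why do you feel nobody listens to you?"
```

Even a handful of rules like these can produce responses that feel surprisingly attentive, which goes some way towards explaining why people took ELIZA so seriously.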
But if Weizenbaum’s intention was to prove that a computer – or Artificial Intelligence, to give it a more contemporary slant – was incapable of providing this kind of therapeutic support, he was sorely disappointed.
In fact, he was shocked that his programme was taken seriously by many people. Perhaps most famously, he was fond of recounting the time when his secretary, who knew the programme was a simulation, asked him to leave the room for the sake of privacy.
And why am I telling you all this? Because the combination of AI and mental health talking therapy has become a ‘thing’ once again, with some media channels recently reporting that trials suggest the machine learning writing tool ChatGPT is capable of offering mental health support.
Now, I appreciate there’s an element of ‘well, she would say that, wouldn’t she’ about what follows, but stick with me for a few paragraphs.
First, some truths. ChatGPT is fantastic. You can believe everything it writes, without question. But only if you believe that everything you read on the internet is true and that the Queen was actually a lizard in human form.
What ChatGPT does is generate text based on patterns it has learned from vast quantities of online content, taking the words and phrases you type in as its prompt, and then present you with a compelling written narrative that is seemingly rooted in evidence and fact.
There’s no doubting that ChatGPT, and other writing tools like it, is very clever. But because it has no way of differentiating between substantive fact and unsubstantiated opinion, the result it generates is, by extension, highly suspect.
Put another way, if you ask a machine learning tool like ChatGPT to ‘write about the right way to treat a woman’, you’ll never know if what you’re getting back has been sourced from Prince Charming or Prince Andrew.
Ultimately, then, ChatGPT et al are just lazy ways of generating copy that won’t see you sued for plagiarism. It’s hardly Freud or Jung, or Carl Rogers.
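To illustrate the point with a deliberately crude toy (this is not how ChatGPT works under the bonnet; it simply shows the same underlying idea of continuing learned patterns of words rather than checking facts), here is a tiny Python word-chain generator that will happily reproduce whatever it was fed, accurate or not:

```python
import random
from collections import defaultdict

# A tiny corpus mixing a true claim and a false one. The generator has no
# notion of which is which; it only learns which word tends to follow which.
corpus = (
    "the moon landing was watched by millions of people. "
    "the moon is made of green cheese and watched by millions of lizards."
)

# Build word-pair statistics: for each word, record the words that followed it.
transitions = defaultdict(list)
words = corpus.split()
for current, following in zip(words, words[1:]):
    transitions[current].append(following)

def generate(start: str, length: int = 12) -> str:
    """Continue the most 'plausible-sounding' pattern, with no fact-checking at all."""
    out = [start]
    for _ in range(length):
        options = transitions.get(out[-1])
        if not options:
            break
        out.append(random.choice(options))
    return " ".join(out)

print(generate("the"))
# Might produce: "the moon is made of green cheese and watched by millions of people."
```

The generator has no concept of which of its source sentences was true; it only knows which words tended to follow which. That is precisely why fluency is no guarantee of accuracy.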
Like I said, I would say that, wouldn’t I? So, just to be completely impartial, shall we ask ChatGPT what it thinks of itself? I typed in the instruction: ‘Write about why you shouldn’t trust ChatGPT.’
This is what came back (and by the way, I’ve changed nothing – not even the Americanised spelling):
Lack of Real Understanding: ChatGPT doesn’t truly understand the way humans do. It generates responses based on patterns and information it has learned from its training data.
Errors and Inaccuracies: ChatGPT is not infallible. It can produce incorrect or outdated information. Users should independently verify any critical or important information it provides.
Bias and Unintended Responses: ChatGPT can inadvertently produce biased or offensive content. Despite efforts to mitigate bias during training, it can still generate content that reflects underlying biases present in its training data.
No Personal Experience: ChatGPT lacks personal experiences, emotions, and contextual understanding. It doesn’t have its own beliefs or feelings, so any emotional response it generates is simulated based on patterns it has learned.
Sensitive and Private Information: Sharing sensitive, personal, or confidential information with ChatGPT is risky. It’s a machine learning model, and data breaches or unintended sharing of information can have serious consequences.
Generating Plausible Sounding But False Information: ChatGPT can generate text that sounds credible but is entirely made up. Without cross-referencing its responses with reliable sources, it’s easy to be misled.
Changing Outputs: The same input can lead to different outputs. This variability can make it hard to predict exactly how ChatGPT will respond in a given situation.
Ethical and Moral Dilemmas: AI-generated content can sometimes raise ethical questions. For example, relying on AI for important decisions or using AI to manipulate or deceive might not align with ethical principles.
Unpredictable Behavior: ChatGPT can sometimes provide unexpected, bizarre, or nonsensical responses. This unpredictability can be problematic, especially in situations where clarity and accuracy are crucial.
Dependency and Critical Thinking: Relying solely on ChatGPT for information can hinder critical thinking and independent research. It’s important to use AI-generated content as a supplement to human judgment, not a replacement for it.
Limited Contextual Understanding: While ChatGPT can understand context to some extent, it might still miss nuances, jokes, or cultural references, leading to responses that are off the mark or inappropriate.
You may have read all of that in detail. You may have skim-read it. But you don’t need to read in too much detail to work out that phrases like ‘ChatGPT can generate text that sounds credible but is entirely made up’, ‘ChatGPT doesn’t truly understand the way humans do’, and ‘it can produce incorrect or outdated information’ are hardly a sound basis from which to support someone struggling with their mental health.
But this isn’t just about ChatGPT. It’s about whether there’s any sort of role for AI in the mental health therapy space, and the reality is that the ChatGPT question merely exemplifies the wider issues.
AI can never provide the same support as a human therapist, because therapy is, at its core, relational; a machine cannot form a genuine relationship with you.
In order to treat you effectively, with compassion, and with a focus on helping you access your adult ego state by compassionately reparenting yourself (which is ultimately where the magic lives), your therapist needs to be able to relate to you, to your specific and unique circumstances, and to your own humanity, and to tie it all back into their own experiences, both professional and lived.
Yes, AI can ask you pertinent questions – maybe, if the software that underpins it is competent. It can repeat what you say back to you. But it cannot guide you, hand you a box of tissues, soothe you when you lift a drain cover and discover what’s there, or encourage you to follow the emotional breadcrumb trails that your human therapist will be able to recognise as a key to your recovery.
Finally, a word on what we mean by artificial intelligence. Ask any AI specialist and they’ll tell you that AI doesn’t actually exist, at least not in the sense we often think about.
What we currently refer to as AI is actually data analytics – the digital assessment of human-supplied data points and information sources. This human intervention forms both the limitations and the scope of the machine in question.
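As a deliberately simplified illustration of that point (my own toy example, not a description of any particular product), here is a ‘mood detector’ whose entire scope is fixed by the keyword lists a human chose to give it:

```python
# A toy 'mood detector': the human-supplied keyword lists define both the
# scope and the limits of what this so-called AI can ever recognise.
MOOD_KEYWORDS = {
    "low":     {"sad", "hopeless", "tired", "empty"},
    "anxious": {"worried", "panicky", "nervous", "afraid"},
}

def detect_mood(message: str) -> str:
    words = set(message.lower().split())
    for mood, keywords in MOOD_KEYWORDS.items():
        if words & keywords:
            return mood
    # Anything outside the programmed vocabulary is simply invisible to it.
    return "unrecognised"

print(detect_mood("I feel hopeless and tired"))        # "low"
print(detect_mood("I keep lifting that drain cover"))  # "unrecognised"
```

Whatever its programmers did not think to include, the machine simply cannot see.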
So, when it comes to therapy, we have to ask ourselves whether we want to place our emotional health into the hands of something that has been programmed in a laboratory.
I’m thinking we probably already know the answer to that question.
