Language is the DNA of the mind.
I have a mind. That I am pretty sure of. But how do I know that you, the reader, have a mind? How can I be sure that anyone except myself in this world has a mind in the way that I have one? Philosophers have argued this point for centuries, and some believe it is only possible to be certain that you, yourself, have a mind. This is called the problem of other minds. A closely related problem is the qualia problem: even if I know that you have a mind like mine, how can I know what it is like for you to experience the world? Can I ever know what it is like for you to see red? Does your sensation of seeing red match mine?
Let’s take two situations when I might want to know whether someone or something has a mind. In situation 1, I am in a room with a dog and I want to know if the dog has a mind. In situation 2, I am chatting over the internet with ELIZA. I won’t say who or what ELIZA is for the moment.
Situation 1: Max the dog
I am in a room with a dog. Let’s call him ‘Max’. Max is a fairly friendly dog: he jumps up on my lap, barks, chases balls and generally seems to want to play with me. Does this mean he has a mind? No. I cannot be sure that he has a mind because he has yet to talk to me. He can’t say: “please throw the ball” or “I’m hungry”. He could communicate these ideas to me through his body language and behaviour, but he is unable to explicitly talk to me in these terms. In short, he lacks language. Language is the ‘DNA of the mind’. As I communicate through language with other people I come to realise that they have minds. As you read this blog, I assume you, the reader, have a mind, and you can be sure that I, the author, have a mind. Unfortunately Max has not given me any indication that he has a mind like me or you. He doesn’t read my blogs and doesn’t write his own blog. He may have a mind hidden away in his brain, but until he reveals something of it, until he provides the ‘encryption key’ of language, I have to reserve judgement on what is going on inside his head.
Situation 2: ELIZA
In situation 2, I am chatting with ELIZA over the internet. This is a text-based chat session: I’m typing and sending messages to ELIZA and responses are coming in via my screen. I cannot see or hear ELIZA and she cannot see or hear me. I ask ELIZA a question and she responds:
>Are you a mind?
>Would you prefer if I were not a mind?
So far so good. It seems like I might be communicating with a mind. I ask again.
>Are you a mind?
>Come, come, elucidate your thoughts.
This seems promising. Not only is ELIZA talking to me in language but the content of her language seems quite advanced. The sentences are grammatically correct and she uses abstract words such as ‘elucidate’ and ‘thoughts’.
>What is it like for you to see red?
>Oh … to see red?
>Yes, what does it feel like for you to see the color red?
>You are not really talking about me, are you?
>Yes I am.
>How long have you been.
Now I’m beginning to doubt whether ELIZA really is a mind at all. Some of her answers seem a bit odd. Did she understand my question ‘what is it like for you to see red?’ Why does she think I am not talking about her? Let’s give her a really big test:
>How can I be sure that anyone except myself in the world has a mind in the way that I have one?
>Perhaps you don’t want to be sure that anyone except yourself in the world has a mind in the way that you have one.
Ah, now I see what ELIZA is doing. She doesn’t really understand my questions. She just uses part of my question to create a question in reply. In short, ELIZA is failing the Turing Test.
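To make this trick concrete, here is a minimal Python sketch of the kind of thing ELIZA does: match the input against a few hand-written patterns, swap first-person pronouns for second-person ones, and slot the reflected fragment into a canned reply. This is only an illustration of the principle, not Weizenbaum’s actual program, and the rules below are invented just to reproduce the exchanges quoted above.

```python
import re

# Pronoun reflections: "I" in the user's input becomes "you"
# in the reply, "myself" becomes "yourself", and so on.
REFLECTIONS = {
    "i": "you", "me": "you", "my": "your", "am": "are",
    "you": "I", "your": "my", "yours": "mine",
    "myself": "yourself", "yourself": "myself",
}

def reflect(fragment: str) -> str:
    """Swap pronouns in a fragment of the user's input."""
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.split())

# (pattern, reply template) pairs, tried in order.
# {0} is filled with the reflected fragment captured by the pattern.
RULES = [
    (r"how can i (.*)", "Perhaps you don't want to {0}."),
    (r"are you (.*)", "Would you prefer if I were not {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r"(.*)", "Come, come, elucidate your thoughts."),
]

def respond(text: str) -> str:
    """Return an ELIZA-style reply: no understanding, only rephrasing."""
    text = text.strip().rstrip("?.!").lower()
    for pattern, template in RULES:
        m = re.match(pattern, text)
        if m:
            return template.format(*(reflect(g) for g in m.groups()))
    return "Please go on."
```

Feed it the big test question and it produces exactly the kind of mirror-image reply quoted above: `respond("How can I be sure that anyone except myself in the world has a mind in the way that I have one?")` gives back “Perhaps you don't want to be sure that anyone except yourself in the world has a mind in the way that you have one.” There is no understanding anywhere in the loop, just string surgery.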
The Turing Test was devised by Alan Turing, a British scientist, who proposed that a machine should be credited with thinking if an evaluator could not reliably distinguish the machine’s answers from a human’s. While ELIZA seems very clever at answering my questions, it is clear at the moment that her answers are not human-like, or at least not those of a coherent human. In the words of her creators, ELIZA is:
programmed to behave as a Rogerian psychotherapist, and is an interesting example of the limitations of early artificial intelligence programs.
Someday, however, with advances in A.I., it may become possible for machines to convince humans that they are real.
Empathy for Max and ELIZA
So we have Max the dog, who clearly shows the behaviour typical of a living animal but who is unable to speak, and ELIZA, who can talk but lacks any emotional output. Who do I empathise with most? If I had to choose between saving Max from being put down or saving ELIZA from being turned off, without hesitation I would save Max. There is something about him that I can empathise with, something very close to my own human behaviour that tells me he is alive and worth saving. ELIZA, on the other hand, seems cold and distant; not human, not animal and probably not worth saving if push comes to shove.
But is empathy a defining criterion of a mind? I would argue not. We can empathise with other human beings and animals but this does not automatically mean they have minds. Empathy is a matter of degrees; it can be more or less. But the ability to use language in communication is, for me, the defining criterion of a mind. ELIZA has shown me her encryption key and I have been able to unlock her mind. True, this mind is far from being human, but it is a start. Max has not shown me any encryption key. The contents of his mind are firmly closed off from mine. I cannot be sure that all his movements, expressions and gestures amount to anything more than behaviour that leads to empathy. Language is the DNA of the mind and without it all that is left is empathy.
Language is the DNA of the mind.