The notion of sentience has been in the news recently since a Google AI researcher, Blake Lemoine, claimed that a chatbot he had been interacting with was sentient. His claim has been rebutted by several critics, and he has been placed on paid leave by Google. In this post, I will try to outline what NLI (nonrepresentational linguistic idealism) has to say about sentience in AIs. The outcome might surprise you.

NLI states that the mind is at the centre of human existence and that language is at the centre of the mind. Language is nonrepresentational in the sense that it does not represent the physical world out there but is in fact a world in itself. We are essentially language. NLI takes an idealist position: it starts from the premise that our own ‘thinking’ minds are at the centre of reality, and that all attempts to come to terms with the physical world must be viewed through our minds. This does not mean that the physical world is a figment of our imagination, but that it can only be accessed and understood from a subjective, first-person point of view. In particular, NLI takes the view that there is only one subjective viewpoint at any one time, which is my viewpoint at the time of writing. You, the reader, can come along for the ride if you want to, but I’m afraid this story is about me.

NLI also assumes that language is the seat of consciousness. All sentience that we have in our minds is carried through language. Without language an entity cannot be sentient. I infer that you, and other human beings that I interact with, are sentient because I can have conversations with you in language. How long I need to have such conversations and what topics we need to get through in order to determine this are not the subject of this post. But what if I have a conversation with a chatbot, such as Google’s LaMDA, as Lemoine did? If I detect sentience in this chatbot through these conversations, does this mean the chatbot is sentient? Lemoine claims it does.

My answer to this is a subtle but rather neat one, I think. Consider this argument: if we say there are n human brains in the world, then we might suppose that there are n subjective states, assuming that each brain is a conscious, sentient subject. But that would be wrong. There are n brains in the world but only one subjective state, namely the subjective state that I am viewing the world from at this moment in time. NLI, remember, is a first-person account of existence, and there is only one first-person viewpoint in the room at the moment, and that viewpoint is mine.

Now this may seem rather odd, because in some way it appears to break the laws of mathematics: n brains equals n sentient states, surely? How can I claim that n = 1? But this is the linguistic paradox that I have outlined in the past. As soon as I write about a state such as sentience using language, I objectify that state. A phenomenal state is brought into access consciousness. But this acts to break the link between the phenomenal state and the language. The state is not the language and is not represented by the language. Language is nonrepresentational and, as a separate domain, cannot be used to represent a physical state. It is true that the objectification of phenomenal experience through language gives us a means to count entities: so we can have one, two, even n sentiences in the linguistic domain. But there is only ever one phenomenal sentience, and that is the sentience I have at the moment.

So how do I transfer my first-person, sentient viewpoint to you, the reader, so that you can have a turn with the conch, so to speak? I cannot. The phenomenal sentience and first-person viewpoint will always be with me (until I die, I suspect). I can infer sentience in other humans through the language they use with me. At some time in the future, I may also infer sentience in chatbots such as Google’s LaMDA through their language. In fact, I have had many interesting conversations with AI chatbots over the last few years, and it is likely that these conversations will only become more human-like and more sophisticated as time goes by and computer algorithms become more and more powerful. But that does not mean the chatbots have or will have sentience because…

Sentience is a first-person, phenomenal awareness of language in my mind that is non-transferrable.

Sentience is a first-person, phenomenal awareness of language in my mind that is non-transferrable. In other words, the question of whether a chatbot has sentience or not is moot. ‘To have’ is not the right verb here. A chatbot does not have sentience. I only ever infer sentience through the linguistic interaction that I have with the chatbot. I have the sentience, and I make the call as to whether my chatbot is sentient or not. Or, perhaps more paradoxically, I am the sentience that the chatbot purports to have.

I am the sentience that the chatbot purports to have.

So what are the consequences for AI, given the argument I have made above? Why do we find it so problematic to utter the words ‘the machine is sentient’? Clearly, assigning sentience, which is only mine to assign, to machines could change the status of those machines. Might I be less likely to turn them off or delete them if I believed they had sentience? Could machines demand rights if I accepted that they had sentience? These are important questions, and perhaps ones that will need to be seriously addressed in the not-too-distant future.

Creator

But there is another aspect of machines that I think is important in the quest for sentience, and that is the question of who their creator is. Most people, I think, accept that we do not really know who our maker is or why we are here on planet Earth. Some may claim a god or Mother Nature as their creator, but I think it is generally accepted that we all have doubts at times. This is not the case for inanimate objects such as houses or cars, where we know that mankind is the creator.

With chatbots today, we know they have demonstrable creators. I can create a chatbot with just a few lines of code today if I so choose (the tools exist). Google created its chatbot LaMDA and has the right to turn it off or delete it if it so desires. It seems, then, that knowing who or what created a chatbot holds some importance in determining whether it has sentience and what status we ascribe to it.
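To make the ‘few lines of code’ point concrete, here is a minimal sketch of a toy chatbot in Python. It is nothing like LaMDA: just a hand-written, rule-based responder with made-up patterns and canned replies, included purely to illustrate how low the bar for ‘creating a chatbot’ is, not as a claim about how real conversational AI is built.

```python
# A purely illustrative, rule-based chatbot in plain Python.
# No learning and no language model: it pattern-matches the input
# and replies with canned responses, which is enough to hold a
# (very shallow) conversation.
import random

RESPONSES = {
    "hello": ["Hello! How are you today?", "Hi there."],
    "how are you": ["I'm fine, thank you for asking.", "All good here."],
    "bye": ["Goodbye!", "See you later."],
}
FALLBACK = ["Tell me more.", "Why do you say that?", "I'm not sure I follow."]

def reply(user_input: str) -> str:
    text = user_input.lower()
    for pattern, answers in RESPONSES.items():
        if pattern in text:
            return random.choice(answers)
    return random.choice(FALLBACK)

if __name__ == "__main__":
    print("Chatbot ready. Type 'bye' to quit.")
    while True:
        user = input("> ")
        print(reply(user))
        if "bye" in user.lower():
            break
```

The point is simply that I am, demonstrably and trivially, the creator of such a bot; whatever sentience I might later be tempted to ascribe to it traces straight back to me.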

But what if chatbots started to appear and we could not determine where they had come from? What if we discovered a Twitter account that had been tweeting sensible, logical language for the past five years but had no obvious creator? Or a band whose songs we had all been singing along to that turned out to be a bot? Perhaps the bots had been created by earlier bots, who in turn had come from bots. The sentience that I ascribed to these bots might have more legitimacy simply because I do not know their creators. It seems, then, that if we know who created a bot, the sentience is in part ascribed to the creator. When the creator is unknown, we tend to ascribe the sentience to the bot itself.

Sentience in part depends on our understanding of whether the entity in question has an identifiable creator or not.