The Godfather of AI, Emotional AI, and the Multiple Realisability Thesis.

A Robotic Bat Dreaming of Electric Sheep

AI giant Geoffrey Hinton has proposed that AI might already be emotional. It is an interesting philosophical and practical proposal, qua philosophy of information, philosophy of AI, and philosophy of psychology and emotions. (Yes. Those are all real subdisciplines of contemporary analytic philosophy and philosophy of science.)


How would AI emotion work? There are many possibilities. It depends upon how one defines the concept [emotion], and upon what one includes in a neuroscientific and neuropsychological explanation and conception of emotion.

In cognitive neuropsychology there is a prevailing view that emotions are reducible to (or alternatively: supervene upon) just more cognitive information processing spread across sub-personal neurological systems in the brain - from the limbic system to the motor cortex to the central executive and even the vagus nerve.

The multiple realisability thesis for mind proposes that a mind like ours can be implemented by any information-processing or computational system (or any other kind of system, for that matter) which fulfils the functional roles required to be en-minded: to be, or to have, a mind.
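To make the functionalist intuition behind multiple realisability concrete, here is a minimal, purely illustrative Python sketch (every class and function name is hypothetical, not a reference to any real system): two "realizers" built from entirely different internals satisfy the same functional role, and the code that exercises that role neither knows nor cares which one it is given.

```python
from typing import Protocol


class FearRealizer(Protocol):
    """Anything that plays the functional role of fear:
    detect a threat, then bias behaviour toward avoidance."""

    def appraise(self, stimulus: str) -> bool: ...
    def respond(self) -> str: ...


class LimbicFear:
    """Stand-in for a biological realizer (an amygdala-style threat circuit)."""

    def appraise(self, stimulus: str) -> bool:
        return stimulus in {"predator", "sudden noise"}

    def respond(self) -> str:
        return "freeze, then flee"


class SiliconFear:
    """Stand-in for a machine realizer with a completely different substrate."""

    def __init__(self, threat_scores: dict[str, float]):
        self.threat_scores = threat_scores

    def appraise(self, stimulus: str) -> bool:
        return self.threat_scores.get(stimulus, 0.0) > 0.5

    def respond(self) -> str:
        return "halt task, retreat to safe state"


def react(agent: FearRealizer, stimulus: str) -> str:
    """Functionalist test: only the role matters, not what the agent is made of."""
    return agent.respond() if agent.appraise(stimulus) else "carry on"


print(react(LimbicFear(), "predator"))                    # freeze, then flee
print(react(SiliconFear({"predator": 0.9}), "predator"))  # halt task, retreat to safe state
```

On the multiple realisability thesis, the two realizers are on a par with respect to the functional role; whether playing the role is sufficient for genuinely having the emotion is, of course, exactly what is in dispute.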

Based upon the whole of the above interview, Hinton clearly believes in the multiple realisability thesis not only for mind and cognition, but for emotions and potentially for consciousness (again, depending upon how consciousness and the concept thereof are defined). There are philosophical detractors of the multiple realisability thesis for mind who propose that a mind might not be able to emerge from, or be realised by, any system that lacks the specific neuroprotein-based neurology of the evolved mammalian/human brain and its corresponding complex, evolved, bio-electrochemical architecture.

So when should we say that AI has emotions? This promises to be a stubborn philosophical and scientific question. I suspect that it will ultimately be answerable only by observing the behaviour of AI systems and by testing them with psychometric instruments developed for humans. This limitation will likely persist because, among other things, the nature of emotions for AI may also depend upon how the architecture and nature of machine/deep learning systems differ from those of the human brain. Just as an F-16 doesn't need to fly like a hawk, AI doesn't need to think or 'do cognitive information processing' exactly like a human or mammalian brain.
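To make the behavioural-testing point concrete, here is a minimal, purely illustrative Python sketch of administering a human-style self-report scale to an AI system and scoring the answers. The items, the ask_model stub, and the scoring scheme are all hypothetical; the point is only that such a test measures outward behaviour, not inner experience.

```python
# Hypothetical example: scoring an AI system on a human-style Likert self-report scale.

LIKERT = {"strongly disagree": 1, "disagree": 2, "neutral": 3,
          "agree": 4, "strongly agree": 5}

ITEMS = [
    "I feel uneasy when my goals are blocked.",
    "I feel relief when a difficult task is completed.",
    "Unexpected events make me anxious.",
]


def ask_model(item: str) -> str:
    """Stub standing in for a query to some AI system; here it returns a fixed answer."""
    return "agree"


def administer(items: list[str]) -> float:
    """Return the mean Likert score across items -- a purely behavioural measure."""
    scores = [LIKERT[ask_model(item).lower()] for item in items]
    return sum(scores) / len(scores)


print(administer(ITEMS))  # 4.0 -- a fact about responses, not about subjective experience
```

Whatever score such a procedure yields, it tells us how the system answers; whether anything is felt in the answering is precisely the question the test cannot settle.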

So it is not unfair or unreasonable to assert that AI does not have to have, realise, or manifest emotions using the same architectures and mechanisms a human brain does, but that it might still produce equivalent emotional behaviour. 

However, as mentioned above, emotional cognitive information processing in the human brain is also known to involve the reptilian brain, the limbic system, and even the vagus nerve (especially the ventral and dorsal vagal complexes), in addition to the central executive and prefrontal cortex. These are very physically peculiar and specific systems.

Therefore, the question of whether AI can have emotions is in part a matter of conceptual analysis: what emotions are, and what it means to have them. If low-level evolved systems in the brain, such as the limbic system, are a necessary component of having emotions, then perhaps the most we should say is that AI can have or display an emulation of human emotions.

I am not adducing a conclusion one way or another. I am only pointing out that the question is not straightforward. In approaching the equally vexing question of whether conscious phenomenal experience is something which science could in principle completely explain, the philosopher Thomas Nagel once posed the question: what is it like (subjectively, experientially) to be a bat? It would be wonderful if there were a machine which could feed the bat's 'internal' brain-based experience into our brains and give us 'a taste'.

However, if we did have that machine, how would we know whether we were really experiencing what the bat experiences, in the way the bat experiences it directly, or whether we were instead only having our own subjective experience of interpreting the bat's internal states by way of our own sub-personal neurological processes?

It seems possible that the only way to get the entire answer, in a way we would find satisfying, might be to be an actual bat. Likewise for the AI.

We may never have the means to answer the question of whether an AI has what we have, internally and subjectively, when it exhibits external emotional behaviours which closely resemble our own.
 



