Are we closer to understanding the nature of consciousness than we were twenty years ago? Ever since David Chalmers posed ‘the hard problem’ of consciousness, the question of why sentient organisms have subjective experiences, the debate has gone in circles. Papers on the subject variously conclude that there really is a hard problem, that there is no hard problem, or even that there is no such thing as consciousness at all.
Many claim to have solved the problem of consciousness, and others rebut them with ‘nothing of the kind’. Most theories of consciousness start from the assumption that it must somehow be generated by the brain, though we don’t yet know quite how.
And so Christof Koch’s recent book on the subject, The Feeling of Life Itself, comes as a welcome, fresh approach to the problem. Koch, the head of the Allen Institute for Brain Science in Seattle, bases his approach on the work of Giulio Tononi, a professor at the University of Wisconsin. Rather than starting in the brain, Tononi asks: ‘What does consciousness feel like?’
Koch lists five properties of consciousness:
- It is a private experience. I know that I am conscious, but I can’t prove that you or anyone else is conscious.
- The experience is structured, containing many different objects.
- Each conscious experience is informative and differs from every other experience.
- It is integrated, giving us one whole picture.
- It is definite. You can only have one conscious experience at a time, one that cannot be reduced to its parts without losing something.
Another curiosity about consciousness is that it is its own cause. Koch provides a metaphor to explain this. In Plato’s dialogue The Sophist, the ‘stranger from Elea’ argues that for something to exist, it must affect something else, or be affected by something else.
But consciousness is not like this (paranormal powers excepted). We feel sure it exists, but it doesn’t appear to cause anything, nor can anyone else observe it. Unless, that is, it is its own cause: consciousness arises from and ends in consciousness, like an ouroboros loop.
According to Koch, the basic circuitry of computers and AI systems does not have this feature. This suggests that machines, unlike the human mind, cannot create a whole picture and therefore can never be conscious. The conclusion is similar to Penrose’s in The Emperor’s New Mind, where he argues that the mind’s capacity for self-reference poses an insurmountable problem for computer algorithms.
This is bad news for those of us who wanted to have our own pet robot, or for transhumanists hoping to upload their consciousness to a computer, but good news for those of us who were afraid of intelligent machines getting the better of us. Apparently, we can still turn off the switch.
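To make the kind of self-closing, feedback structure Koch has in mind a little more concrete, here is a minimal sketch in Python. It is not the formal integrated-information calculation of Tononi’s theory, which is far more involved; it simply checks whether a tiny directed ‘circuit’ contains a feedback loop, as a crude stand-in for the reentrant causal structure being described. The node names and example circuits are made up for illustration.

```python
# Toy illustration only: a crude check for reentrant (feedback) structure
# in a small directed graph. This is NOT Tononi's integrated-information
# measure; it just detects whether any element's output eventually feeds
# back into it.

from collections import defaultdict

def has_feedback(edges):
    """Return True if the directed graph given by `edges` contains a cycle."""
    graph = defaultdict(list)
    for src, dst in edges:
        graph[src].append(dst)

    WHITE, GRAY, BLACK = 0, 1, 2        # unvisited / on the current path / done
    color = defaultdict(int)

    def visit(node):
        color[node] = GRAY
        for nxt in graph[node]:
            if color[nxt] == GRAY:       # back edge: the path loops onto itself
                return True
            if color[nxt] == WHITE and visit(nxt):
                return True
        color[node] = BLACK
        return False

    nodes = {n for edge in edges for n in edge}
    return any(color[n] == WHITE and visit(n) for n in nodes)

# A purely feedforward "circuit": A -> B -> C. No element influences its own causes.
feedforward = [("A", "B"), ("B", "C")]

# A recurrent "circuit": A -> B -> C -> A. Each element's effects loop back to it.
recurrent = [("A", "B"), ("B", "C"), ("C", "A")]

print(has_feedback(feedforward))  # False
print(has_feedback(recurrent))    # True
```

The toy makes only one point: a strictly feedforward pipeline, however large, contains no such loop, whereas a recurrent circuit does. That is the intuition behind the claim that conventional machine architectures cannot close back on themselves the way conscious experience seems to.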
Does the brain itself meet the requirements for generating consciousness? Tononi proposes that parts of the posterior cerebral cortex have the feedback circuitry required to generate consciousness, but is that enough? His theory also requires ‘non-computable’ functionality in the brain, and it’s not clear how that arises.
Tononi and Koch’s work forms a new framework for exploring the nature of consciousness, including questions of whether other animals are conscious, what sort of consciousness they have, and what we experience when all our thoughts fall silent (as in meditation). We are left with the subtitle of Koch’s book: Why Consciousness Is Widespread but Can’t Be Computed.