Consciousness is a slippery, fuzzy concept. It is a bit like the concept of time, as St. Augustine remarked a long time ago: “What then is time? If no one asks me, I know what it is. If I wish to explain it to him who asks, I do not know.”
Just look at the graphic. It takes “discovery” (sensors) to bring reality to our perception, to make us conscious of it. Yet, the more we learn about a specific piece of reality, the more it tends to become “a given” and fade from our perception. Think about roads. How many times do you stop in your tracks as you step out of your home in the morning to say: “Hey, look, there is a road”? You are no longer perceiving it; through experience it has slipped into the unconscious zone. This is just an example, and you might object that you are actually still conscious of the presence of the road, you are simply not flagging it as an important fact. Yet this is what happens to many, indeed most, of the signals generated by our senses: they never reach the conscious level in our brain.
To further muddy the water, according to the orthodox interpretation of quantum mechanics consciousness and physical reality are one and the same: you cannot separate one from the other. It is only by applying a conscious measurement that reality unfolds (the probability wave collapses). It takes a conscious observation for Schrödinger’s cat to be alive or dead (watch the clip). In this interpretation consciousness exists as part of reality; it is not “derived” from reality. This view is usually referred to as the Big-C (Big Consciousness).
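As a minimal sketch of the collapse idea (textbook notation, not part of the original article): before any observation the cat’s state can be written as a superposition of its two outcomes, and a conscious measurement selects one of them.

```latex
% Before observation: the cat is in a superposition of both outcomes
|\psi\rangle = \tfrac{1}{\sqrt{2}}\left(|\text{alive}\rangle + |\text{dead}\rangle\right)

% A measurement collapses |\psi\rangle to |alive> or |dead>,
% each with probability |1/\sqrt{2}|^2 = 1/2
```

In the Big-C reading, it is the conscious act of measuring that performs this selection; until then, neither outcome is “the” reality.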
The opposite view is that consciousness emerges from biology, which in turn emerges from chemistry, which emerges from physics, which emerges from math… This view is known as the Little-C (Little Consciousness).
If we take this second interpretation, then there is a concrete hope (more than hope, I would say a “plan”) that consciousness can result from sufficiently sophisticated AI. The jury is still out.
In the words of prof. Subhash Kak, published in The Conversation:
“It is possible that the phenomenon of consciousness requires a self-organizing system, like the brain’s physical structure. If so, then current machines will come up short.
Scholars don’t know if adaptive self-organizing machines can be designed to be as sophisticated as the human brain; we lack a mathematical theory of computation for systems like that. Perhaps it’s true that only biological machines can be sufficiently creative and flexible. But then that suggests people should – or soon will – start working on engineering new biological structures that are, or could become, conscious.”
Notice the adjective “current”: it tags the fact that machines today are unable to support the emergence of consciousness. In the future, if consciousness is indeed of the Little-C type, there is a possibility that consciousness will emerge from a machine (AI). I would go even further and say that it is inevitable.
Also notice that although Big-C and Little-C are completely different, arguably incompatible, views of the world, from a practical point of view they might end up being “experienced” as the same. It is like passing the Turing test: if a machine passes it, it becomes indistinguishable, in that environment, from a human. This does not mean that the machine has become “a human”, just that from an experience point of view it is no longer distinguishable. Likewise for consciousness: even assuming the existence of a Big-C, if AI eventually generates a Little-C, then from the point of view of interacting with that system it is the same. We will be interacting with a conscious system.
The evolution towards symbiotic autonomous systems is intertwined with these issues of consciousness.