People often ask whether human-level artificial intelligence will eventually become conscious.
The real question is: Do you want it to be conscious?
Some people think it is largely up to us whether our machines will wake up. That may sound presumptuous. The mechanisms of consciousness—the reasons humans have a vivid and direct experience of the world and of the self—are an unsolved mystery in neuroscience, and some people think they always will be: it seems impossible to explain subjective experience using the objective methods of science. But in the 25 or so years that science has taken consciousness seriously as a target of scrutiny, we have made significant progress. We have discovered neural activity that correlates with consciousness, and we have a better idea of which behavioral tasks require conscious awareness. It turns out that our brains perform many high-level cognitive tasks subconsciously.
What is consciousness?
Consciousness, we can tentatively conclude, is not a necessary byproduct of our cognition. The same is presumably true of AIs. In many science-fiction stories, machines develop an inner mental life automatically, simply by virtue of their sophistication, but it is likelier that consciousness will have to be expressly designed into them. And we have solid scientific and engineering reasons to try to do that. One of them is our very ignorance about consciousness. The engineers of the 18th and 19th centuries did not wait for physicists to sort out the laws of thermodynamics before they built steam engines.
Theory driven by inventions
It worked the other way round: inventions drove theory. So it is today. Debates on consciousness too often turn philosophical and spin in circles without producing tangible results. The small community of us who work on artificial consciousness aims to learn by doing.

Furthermore, consciousness must serve some important function for us, or else evolution would not have endowed us with it. The same function would be of use to AIs. Here, too, science fiction may have misled us. For the AIs in books and TV shows, consciousness is a curse: they exhibit unpredictable, intentional behaviors, and things do not turn out well for the humans. But in the real world, such dystopian scenarios seem unlikely. Whatever risks AIs may pose do not depend on their being conscious. On the contrary, conscious machines could help us manage the impact of AI technology.