Intuitively, consciousness seems to have something to do with intelligence. If we succeed in developing truly intelligent machines, will they necessarily be conscious? Or is it even a sensible thing to wonder about?
Slate has an interesting series of video interviews with scientists, philosophers and other experts on Big Questions conducted by journalist and author Robert Wright. One series is on the nature of consciousness and features interviews with Daniel Dennett, Steven Pinker, Freeman Dyson and others.
The interview with Dennett, Professor of Philosophy and Co-Director of the Center for Cognitive Studies at Tufts University, is especially interesting (to us anyway) because he takes a computational approach to modeling the mind.
Thinking about consciousness can take us deep into the philosophical weeds, but it has immediate practical applications as well. For example, there is renewed interest in the AI community in metacognition, which can be defined simply as “thinking about thinking.” An intelligent agent needs some control over its own cognitive processes in order to avoid getting trapped in dead ends or unproductive approaches to a problem, and to adapt robustly to unexpected changes in its environment. See the site for the AAAI Symposium on Metacognition in Computation held in Spring 2006 for some examples of current work in this area.
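To make the idea concrete, here is a minimal, purely illustrative sketch of that kind of metacognitive control: a meta-level loop monitors an object-level search for progress toward a goal and abandons a strategy once it stalls, rather than letting the agent grind away in a dead end. All names here (`metacognitive_search`, the toy strategies, the `patience` parameter) are hypothetical, not drawn from any particular system discussed at the symposium.

```python
from typing import Callable, List, Optional

def metacognitive_search(
    strategies: List[Callable[[int], int]],
    start: int,
    goal: int,
    max_steps: int = 50,
    patience: int = 3,
) -> Optional[str]:
    """Meta-level loop: run each object-level strategy in turn,
    monitor its progress toward the goal, and give up on a strategy
    once it makes no progress for `patience` consecutive steps."""
    for strategy in strategies:
        state = start
        best_distance = abs(goal - state)
        stalled = 0
        for _ in range(max_steps):
            state = strategy(state)
            if state == goal:
                return strategy.__name__   # solved: report which strategy worked
            distance = abs(goal - state)
            if distance < best_distance:
                best_distance, stalled = distance, 0
            else:
                stalled += 1               # metacognitive signal: no progress
                if stalled >= patience:
                    break                  # meta-level decision: switch strategies
    return None

# Two toy object-level strategies for reaching 16 from 1:
def decrement(x: int) -> int:
    return x - 1   # a dead end: moves away from the goal

def double(x: int) -> int:
    return x * 2   # reaches 16 in four steps

# The monitor abandons `decrement` after a few stalled steps
# and lets `double` finish the job.
print(metacognitive_search([decrement, double], start=1, goal=16))  # double
```

The object-level strategies know nothing about stalling; that judgment lives entirely in the outer loop, which is the (very thin) sense in which this is "thinking about thinking."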