John Markoff has an article in tomorrow’s New York Times, “Scientists Worry Machines May Outsmart Man,” on a recent AAAI study on the future of AI.
“A robot that can open doors and find electrical outlets to recharge itself. Computer viruses that no one can stop. Predator drones, which, though still controlled remotely by humans, come close to a machine that can kill autonomously. Impressed and alarmed by advances in artificial intelligence, a group of computer scientists is debating whether there should be limits on research that might lead to loss of human control over computer-based systems that carry a growing share of society’s workload, from waging war to chatting with customers on the phone.”
The study was commissioned by AAAI to “explore and address potential long-term societal influences of AI research and development”. Look for a report published by AAAI later this year. The study involved twenty-five participants divided into three subgroups, focused on: concerns, control, and guidelines; the nature and timing of disruptive advances; and ethical and legal issues.
There was a panel session earlier this month at IJCAI where some of the study participants discussed highlights from the study. Hopefully the session was filmed and the recordings will be added to the videolectures.net IJCAI09 collection.
While I am generally skeptical of an impending technological singularity, which seems to sum up many of the concerns people have, there are aspects of the future that I do wonder about. At the top of my list is what will happen when virtually all of human knowledge is published on the Web (as it nearly is now) in a form that machines can understand. I’m pretty sure that this will happen in the next decade or two, either through the current Semantic Web approach (as a web of data) or through gradually improving techniques for machine understanding of human languages and images.
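To make the “web of data” idea concrete, here is a toy sketch (not real Semantic Web tooling, and all the names in it are invented for illustration) of the data model behind it: knowledge represented as subject–predicate–object triples, which is what RDF boils down to, and which a machine can query directly.

```python
# Toy illustration of the triple data model underlying RDF.
# The facts and identifiers below are made up for the example.

triples = {
    ("Tim_Berners-Lee", "invented", "World_Wide_Web"),
    ("World_Wide_Web", "type", "Information_System"),
    ("Tim_Berners-Lee", "type", "Person"),
}

def query(s=None, p=None, o=None):
    """Return all triples matching a pattern; None acts as a wildcard."""
    return [t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

# A machine can now answer "what did Tim Berners-Lee invent?"
# without any natural-language understanding at all:
print(query(s="Tim_Berners-Lee", p="invented"))
```

The point of the example is that once knowledge is structured this way, answering questions becomes pattern matching rather than language understanding, which is exactly the shortcut the Semantic Web approach offers.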