A long time ago, in a galaxy far, far away … the DARPA Knowledge Sharing Effort developed an approach to interoperability and agent communication based on three components: KIF as an expressive but neutral language for encoding knowledge, shared ontologies for capturing domain concepts and relations, and KQML as a language and protocol for system interaction. Maybe it’s better to think of it as a software design pattern than as an approach. Although the original KSE push failed to ignite the explosive spread of intelligent agents across the Internet that some of us imagined, the pattern has been reused by many multiagent system frameworks, including FIPA, CoABS, Cougaar and others. Like other classic software patterns, such as MVC, it is simple, intuitive, and easy to implement and follow.
The RDF approach to realizing the Semantic Web addresses several of the problems that undermined the KSE, FIPA and other efforts. Here are three that come first to mind. (1) The KSE, and to a lesser degree the other frameworks, never had a good way to bind terms to ontologies; RDF’s use of URIs and namespaces solves this nicely. (2) By grounding everything in the Web via URIs, the RDF approach also solves many problems in building distributed knowledge-based systems without having to create a new middleware infrastructure and get everyone to adopt it. (3) Choosing an underlying graph representation for knowledge has its advantages and disadvantages, but it makes RDF easy for developers outside the AI knowledge-base community to implement, use, and map to their legacy representations.
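To make point (1) concrete, here is a minimal sketch of how a namespace map binds a short term to the ontology that defines it. The prefix table and the `expand` helper are my own illustration, not part of any RDF library; the FOAF and OWL namespace URIs are the standard ones.

```python
# Illustrative sketch: RDF-style namespaces bind a term to its ontology.
# The prefix map and expand() helper are hypothetical, for illustration only.
NAMESPACES = {
    "foaf": "http://xmlns.com/foaf/0.1/",
    "owl": "http://www.w3.org/2002/07/owl#",
}

def expand(qname: str) -> str:
    """Expand a prefixed name like 'foaf:Person' into a full URI,
    which unambiguously ties the term to its defining ontology."""
    prefix, local = qname.split(":", 1)
    return NAMESPACES[prefix] + local

print(expand("foaf:Person"))  # http://xmlns.com/foaf/0.1/Person
```

The full URI is globally unique and dereferenceable, which is exactly the term-to-ontology binding the KSE never had a clean mechanism for.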
I see RDF and OWL as providing, in a well-integrated way, the first two components of the pattern, but what of the third? KQML, the Knowledge Query and Manipulation Language, was first proposed by Gio Wiederhold. It was based on a set of standard message types, or performatives (e.g., ASK and TELL), with associated semantics and protocols. I suggest that the Semantic Web needs a KQML, and I propose that we build on SPARQL to make it.
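One way to picture the proposal is a KQML-style message envelope whose content is a SPARQL query. The class below is a sketch under my own assumptions: the field names, the `ask-one` performative label, and the parenthesized wire format echo KQML conventions but are not drawn from any actual specification.

```python
from dataclasses import dataclass

@dataclass
class Performative:
    """A hypothetical KQML-style message carrying SPARQL content.
    Field names and the serialization format are illustrative only."""
    name: str      # performative, e.g. "ask-one" or "tell"
    sender: str    # agent identifier
    receiver: str  # agent identifier
    content: str   # a SPARQL query or an RDF payload

    def serialize(self) -> str:
        # KQML-flavored s-expression wrapping of the message.
        return (f"({self.name} :sender {self.sender} "
                f":receiver {self.receiver} :content {self.content!r})")

msg = Performative(
    name="ask-one",
    sender="agentA",
    receiver="agentB",
    content="ASK { ?x a <http://xmlns.com/foaf/0.1/Person> }",
)
print(msg.serialize())
```

Note how SPARQL’s own ASK form already plays the role of KQML’s ASK performative; the missing pieces are the conversational ones (sender, receiver, reply-with, protocols), which an envelope like this would supply.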