Presentation

Searching for Knowledge and Data on the Semantic Web

June 2, 2006

Workshop on Humans and the Semantic Web, College Park, MD, 2 June 2006.

Web search engines like Google have made people "smarter" by providing ready access to the world's knowledge whenever we need to look up a fact, learn about a topic or evaluate opinions. The World Wide Web Consortium's Semantic Web effort aims to make such knowledge more accessible to computer programs by encoding it on the Web in machine-understandable form. The W3C has developed the "markup language" RDF and its extension OWL as standards for expressing knowledge and data and for building a new layer of services, tools and applications to support "semantic interoperability" in distributed systems.
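
To make this concrete, here is a minimal sketch (using the Python rdflib library; the example.org URIs and property names are illustrative assumptions, not part of the talk) of how a simple fact can be encoded as RDF triples and serialized so that programs, not just people, can consume it.

# Encode a fact as RDF triples and serialize it to RDF/XML.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import FOAF, RDF

EX = Namespace("http://example.org/")  # hypothetical vocabulary for the example

g = Graph()
g.bind("foaf", FOAF)

# "Swoogle is a search engine maintained by UMBC" as machine-readable triples.
swoogle = URIRef("http://swoogle.umbc.edu/")
g.add((swoogle, RDF.type, EX.SearchEngine))
g.add((swoogle, EX.maintainedBy, EX.UMBC))
g.add((EX.UMBC, FOAF.name, Literal("University of Maryland, Baltimore County")))

# RDF/XML was the concrete syntax most early Semantic Web tools exchanged.
print(g.serialize(format="xml"))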

As the volume of RDF-encoded knowledge on the Web grows, software agents will need their own search engines to help them find the relevant and trustworthy knowledge required to carry out their tasks. We will discuss the general issues underlying the indexing and retrieval of RDF-based information and describe Swoogle, a crawler-based search engine whose index contains information on over a million RDF documents. Swoogle also serves human knowledge engineers by helping them to find Semantic Web ontologies, terms and data and to understand how and by whom they are being used.
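
As a rough illustration of the indexing problem (a simplified sketch, not Swoogle's actual implementation), the following Python code parses a set of RDF documents with rdflib and builds an inverted index from ontology terms to the documents that use them, which is the kind of "which documents use this term?" lookup an agent's search engine must support.

# Build an inverted index from RDF terms to the documents that mention them.
from collections import defaultdict
from rdflib import Graph
from rdflib.namespace import RDF

def index_documents(urls):
    """Map each term URI (predicate or class) to the set of documents using it."""
    term_index = defaultdict(set)
    for url in urls:
        g = Graph()
        try:
            g.parse(url)          # rdflib infers the RDF syntax of the document
        except Exception:
            continue              # skip documents that fail to fetch or parse
        for s, p, o in g:
            term_index[str(p)].add(url)          # every predicate is a term
            if p == RDF.type:
                term_index[str(o)].add(url)      # rdf:type objects are class terms
    return term_index

# Example usage with hypothetical document URLs:
# idx = index_documents(["http://example.org/doc1.rdf", "http://example.org/doc2.rdf"])
# print(idx.get("http://xmlns.com/foaf/0.1/knows", set()))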

Swoogle (http://swoogle.umbc.edu/) has been running at UMBC and serving users and agents since the summer of 2004. Its document collection currently contains about 1.4 million documents and is growing at a rate of about 3,000 documents a day. We will briefly describe some high-level characteristics of this collection and what it says about how the Semantic Web is being used and abused.
