Our ELVIS poster, presented by Joel Sachs and Cyndy Parr at the NBII All-Nodes Meeting, was given an award for the poster generating the most interest. That’s “generating interest” as in “most interesting content” and not weirdest or most controversial. The poster describes a suite of tools being built as part of the NSF- and USGS-sponsored SPIRE project, which is exploring how semantic web technologies can be used by ecologists.
The ELVIS (Ecosystem Location Visualization and Information System) tool suite is motivated by the belief that food web structure plays a role in the success or failure of potential species invasions. Because very few ecosystems have been the subject of empirical food web studies, response teams are typically unable to get quick answers to questions like “what are likely prey and predator species of the invader in the new environment?” The focus of ELVIS is on providing evidence, as opposed to giving definitive answers to queries.
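The kind of query described above can be illustrated with a toy example. This is not ELVIS code; the food web below is a made-up adjacency map from predator to prey, and the species links are hypothetical.

```python
# Illustrative sketch (not ELVIS code): a toy food web as an adjacency map
# from predator to prey. All species links here are hypothetical examples.
food_web = {
    "northern snakehead": ["sunfish", "crayfish"],
    "largemouth bass": ["sunfish", "crayfish", "frog"],
}

def likely_prey(species):
    """Return recorded prey of a species (evidence, not a definitive answer)."""
    return food_web.get(species, [])

def likely_predators(species):
    """Return species recorded as eating the given species."""
    return [pred for pred, prey in food_web.items() if species in prey]

print(likely_prey("northern snakehead"))  # ['sunfish', 'crayfish']
print(likely_predators("sunfish"))        # ['northern snakehead', 'largemouth bass']
```

In practice the interesting part is exactly what the real system addresses: assembling those links from scattered empirical studies rather than a single hand-built table.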
Posted in Semantic Web
October 30th, 2005
OWL leaves the nest is a panel at the First International Symposium on Agents and the Semantic Web, 16:00-17:30 Friday 4 November 2005. This is part of the 2005 AAAI Fall Symposium Series that is being held at the Hyatt Regency Crystal City, Arlington VA. The panelists are:
- Tim Finin, UMBC, Baltimore MD (moderator)
- Norman Sadeh, CMU, Pittsburgh PA
- Yannis Labrou, Fujitsu Laboratories of America, College Park MD
- Harry Lik Chen, Image Matters LLC, Leesburg VA
- Filip Perich, Shared Spectrum Company, Vienna VA
Although the Semantic Web languages and related technology were designed to publish and share information on the Web, it’s always been recognized that they have many other uses. This panel will focus on the use of Semantic Web technologies in mobile and pervasive computing and communication. Some recent examples that we will touch on include the following. A number of research efforts involving mobile and pervasive computing have adopted OWL to describe services, share information and cooperate. Policies grounded in OWL are being used to control communication and ensure privacy in “smart spaces”. Research projects are using RDF metadata to help manage and route communication in conventional and ad hoc networks. Additional use cases will be covered, along with the challenges and obstacles to realizing them.
The panelists will each make a short preliminary statement and then respond to any or all of the following questions or issues. Workshop participants are encouraged to think up new and provocative issues and to spring them on the panelists without warning and ask for a response.
- Will the impact of RDF and OWL on systems and communication ultimately be greater than their impact on the World Wide Web?
- How likely are system developers to adopt a multiagent system approach?
- How likely are system developers to adopt the current semantic web technologies?
- Are RDF and OWL the right languages for these kinds of applications? If not, what’s missing?
- Do current ideas for semantic web services (e.g., OWL-S, WSMO) meet your needs? If not, how should they change?
- Declarative policies encoded in RDF are popular in research systems now. Are they ready for real applications?
- What non-web applications do you think will be the first to be deployed by industry or government?
- Will the use of Semantic Web languages drive a unified web-based design in the future mobile computing systems?
- It’s difficult for RDF and OWL to encode and use certain kinds of common sense knowledge (e.g., nearby, faster, closer, typically, probably) essential for building smart applications. How can we address these issues?
October 30th, 2005
Lisp500 is a 500-line implementation of an interpreter for an informally specified dialect of Lisp. Be forewarned that one reason it’s only 500 lines is that there are neither comments nor blank lines. It has a goodly number of Common Lisp features, though.
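For a sense of how small a Lisp core can be, here is a miniature evaluator in the same spirit (it is not Lisp500’s actual code): just enough to read and evaluate prefix arithmetic against a tiny environment.

```python
# A miniature Lisp evaluator in the spirit of Lisp500 (not its actual code):
# tokenize -> parse to nested lists -> evaluate against a small environment.
import operator

def tokenize(src):
    return src.replace("(", " ( ").replace(")", " ) ").split()

def parse(tokens):
    tok = tokens.pop(0)
    if tok == "(":
        form = []
        while tokens[0] != ")":
            form.append(parse(tokens))
        tokens.pop(0)  # drop the closing ")"
        return form
    try:
        return int(tok)
    except ValueError:
        return tok  # a symbol

ENV = {"+": operator.add, "-": operator.sub, "*": operator.mul}

def evaluate(expr):
    if isinstance(expr, int):
        return expr          # self-evaluating number
    if isinstance(expr, str):
        return ENV[expr]     # symbol lookup
    fn = evaluate(expr[0])
    args = [evaluate(a) for a in expr[1:]]
    return fn(*args)

print(evaluate(parse(tokenize("(+ 1 (* 2 3))"))))  # 7
```

Lisp500 packs far more than this (special forms, closures, much of Common Lisp’s surface) into its 500 lines, which is what makes the density impressive.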
October 29th, 2005
Earlier this month Alexa (now owned by Amazon) rolled out Alexa Web Information Service, providing developers with access to its repository of information about the Web. This could be a very useful source of metadata about URLs for semantic web applications and machine learning applications. Some of the services are:
- Access to information about Web sites including traffic data, contact info and related links.
- Access an XML-based search index based on Alexa’s Web crawl.
- Access to Alexa’s enhanced DMOZ-based browse service.
- Access to Alexa’s WebMap to gather links-in and links-out information about URLs.
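As a sketch of what calling the service looks like, a UrlInfo request can be assembled as a REST query string. The endpoint, action name, and parameter names below are recalled from the 2005-era interface and should be treated as assumptions; the real service also requires AWS request signing, which is omitted here.

```python
# Sketch of building an AWIS UrlInfo request. The endpoint and parameter
# names are assumptions based on the 2005-era REST interface; the required
# AWS signature step is deliberately left out of this sketch.
from urllib.parse import urlencode

def awis_url_info_request(access_key, url, response_groups):
    params = {
        "Action": "UrlInfo",
        "AWSAccessKeyId": access_key,
        "Url": url,
        "ResponseGroup": ",".join(response_groups),
    }
    return "http://awis.amazonaws.com/?" + urlencode(params)

req = awis_url_info_request("MY-KEY", "umbc.edu", ["Rank", "LinksInCount"])
print(req)
```

The response comes back as XML, which is what makes it straightforward to fold into semantic web and machine learning pipelines.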
Several sites already using AWIS are described:
“In your travels you may have noticed Alexa data popping up in unexpected places across the Web — on AdBrite, before you buy an ad you can see the seller’s traffic rank, on HitsLink you can get Alexa data about your site’s referrers, on IceRocket you get Alexa data in your search results.”
You have to register to use the service. The first 10,000 requests per month are free; after that it’s $0.15 per 1,000 requests.
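The pricing above is easy to budget for. As a quick arithmetic check:

```python
# Cost check for the pricing above: first 10,000 requests per month are
# free, then $0.15 per 1,000 requests.
def monthly_cost(requests, free=10_000, rate=0.15, per=1_000):
    billable = max(0, requests - free)
    return billable / per * rate

print(monthly_cost(10_000))   # 0.0
print(monthly_cost(110_000))  # 15.0  (100,000 billable requests)
```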
October 26th, 2005
UMBC is searching for a new Dean of its College of Engineering and Information Technology. For more information see the UMBC Dean of the College of Engineering and Information Technology search site. I am happy to field questions from or talk to anyone who might be interested in applying for the position or nominating a colleague. Send me email at email@example.com if you do. I think it’s a great opportunity for someone who wants to help shape and guide a strong College that wants to become even stronger.
October 26th, 2005
Yesterday I installed and played with Altova’s new ontology editor, SemanticWorks 2006. I posted my 30-minute user experience on my blog.
In summary, I think the software needs some more work; many functions are rough. This doesn’t mean that I don’t like it. I think Altova did a great job in being the first commercial company to offer an ontology editor product.
October 25th, 2005
Joel Spolsky posted his analysis of how some clickfraud scams work:
- First, they create a lot of fake blogs. There are slimy companies that make easy to use software to do this for you. They scrape bits and pieces of legitimate blogs and repost them, as if they were just another link blog. …
- Then, they sign up for AdSense.
- Then, they buy or rent a network of zombie PCs (that is, home computers that are attached to the Internet permanently and have been infected by a virus allowing them to be controlled remotely).
- Finally, they use those zombie PCs to simulate clicks on the links on their fake blogs. Because the zombie PCs are all over the Internet, the clicks appear to be legitimate traffic coming from all over the Internet.
Clearly Google needs an accurate and automated splog detection system.
October 24th, 2005
Torrentocracy (via Boing Boing) is providing a way to automatically create a torrent of RSS attachments:
To put this plainly, this means you can just continue publishing your media through your blog or content management system as you always have, except now whenever your existing RSS feed gets updated, Prodigem reads it and spits out a torrent of your content. You tell your audience about your Prodigem RSS Torrent Feed and you are then automatically on the road to bandwidth redemption.
Such a service that also automatically added semantic markup would be excellent.
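The first step such a service performs is mechanical: pulling enclosure URLs out of an RSS 2.0 feed so each attachment can be turned into a torrent. A minimal sketch with the standard library (the feed text here is a made-up example, not Prodigem’s actual pipeline):

```python
# Sketch: extract RSS 2.0 <enclosure> URLs, the media attachments a
# torrent-generating service would package. The feed below is invented.
import xml.etree.ElementTree as ET

rss = """<rss version="2.0"><channel>
  <item><title>Episode 1</title>
    <enclosure url="http://example.com/ep1.mp3" type="audio/mpeg" length="123"/>
  </item>
</channel></rss>"""

def enclosure_urls(feed_text):
    root = ET.fromstring(feed_text)
    return [enc.get("url") for enc in root.iter("enclosure")]

print(enclosure_urls(rss))  # ['http://example.com/ep1.mp3']
```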
October 24th, 2005
Bruce Clay’s Search Engine Relationship Chart depicts the food web of search engines. According to this, the web of search engines is a bipartite graph. Can you guess which search engines are at the core of each connected component?
In Biology, a food web shows “who eats who”. Here the appropriate metaphor is “who feeds who”. It’s nature, red in tooth and claw, none the less. (Spotted on informationlab.org)
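The “bipartite” claim can be checked mechanically: a graph is bipartite iff it can be 2-colored. A toy check over a couple of 2005-era “who feeds who” relationships (the edge list is illustrative, not the full chart):

```python
# Check bipartiteness by BFS 2-coloring. The "who feeds who" edges below
# are a small illustrative subset, not Bruce Clay's full chart.
from collections import deque

def is_bipartite(graph):
    color = {}
    for start in graph:
        if start in color:
            continue
        color[start] = 0
        queue = deque([start])
        while queue:
            node = queue.popleft()
            for nbr in graph.get(node, []):
                if nbr not in color:
                    color[nbr] = 1 - color[node]
                    queue.append(nbr)
                elif color[nbr] == color[node]:
                    return False  # two adjacent nodes share a color
    return True

feeds = {  # results provider -> consumers
    "Google": ["AOL", "Netscape"],
    "Yahoo": ["AltaVista", "AllTheWeb"],
}
print(is_bipartite(feeds))  # True
```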
October 24th, 2005
The Dogpile metasearch engine returns the top results from Google, Yahoo, MSN and Ask Jeeves. Dogpile now has a great interactive data visualization that shows where the top results were found on a Venn diagram of three sources.
Very intuitive. (Spotted on information aesthetics.)
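The overlaps behind such a Venn diagram are just set intersections over each engine’s top results. A sketch with made-up URLs:

```python
# The regions of a three-way Venn diagram of search results are set
# operations over the top-result URLs. These result sets are invented.
google = {"a.com", "b.com", "c.com"}
yahoo  = {"b.com", "c.com", "d.com"}
msn    = {"c.com", "d.com", "e.com"}

all_three = google & yahoo & msn        # center of the diagram
only_google = google - yahoo - msn      # Google-only region

print(sorted(all_three))    # ['c.com']
print(sorted(only_google))  # ['a.com']
```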
October 21st, 2005
Microsoft Shared Source Initiative
These new licenses represent a broad spectrum of approaches needed to facilitate an ever-growing, rich set of technologies for release.
The three licenses are:
- Microsoft Permissive License (Ms-PL) — The Ms-PL is the least restrictive of the Microsoft source code licenses. It allows you to view, modify, and redistribute the source code for either commercial or non-commercial purposes. Under the Ms-PL, you may change the source code and share it with others. You may also charge a licensing fee for your modified work if you wish. This license is most commonly used for developer tools, applications, and components.
- Microsoft Community License (Ms-CL) — The Ms-CL is a license that is best used for collaborative development projects. This type of license is commonly referred to as a reciprocal source code license and carries specific requirements if you choose to combine Ms-CL code with your own code. The Ms-CL allows for both non-commercial and commercial modification and redistribution of licensed software and carries a per-file reciprocal term.
- Microsoft Reference License (Ms-RL) — The Ms-RL is a reference-only license that allows licensees to view source code in order to gain a deeper understanding of the inner workings of a Microsoft technology. It does not allow for modification or redistribution. This license is used primarily for technologies such as development libraries.