Singularity Summit: AI and the Future of Humanity

August 13th, 2007

Next month the Singularity Institute for Artificial Intelligence will hold a two-day Singularity Summit at the Palace of Fine Arts Theatre in San Francisco, featuring 17 speakers who will address these five questions:

  • What are the pathways to and major challenges of advanced AI?
  • What are the potential benefits and implications?
  • How far away are we from advanced AI?
  • What risks may we face?
  • What should we do to prepare?

This will no doubt be a very interesting and stimulating event. The speakers are a diverse mix and I expect that all of them will serve up a good talk and introduce some interesting ideas. They include researchers who have made some serious contributions to AI, start-up types, professional visionaries and “independent scientists”. The best part of the program, I think, is that it costs only $50 and that includes lunch and a reception! The worst part may be the “saving humanity” theme swimming just below the surface. I’ve been through several iterations of the AI hype cycle and am not sure I want to see another one premised on saving humanity from an impending apocalypse.

New SwetoDblp RDF dataset released with 11M triples

August 10th, 2007

The LSDIS lab at the University of Georgia has released a new version of the SwetoDblp dataset. This has about 11M triples that capture the data in DBLP enriched with other datasets adding relationships to other entities including publishers, companies and universities. Rather than being a simple mapping of DBLP’s flat XML rendering, it’s based on a good ontology with lots of classes and individuals.

B. Aleman-Meza, F. Hakimpour, I.B. Arpinar, A.P. Sheth: SwetoDblp Ontology of Computer Science Publications, Web Semantics: Science, Services and Agents on the World Wide Web, 2007 (in Press)

It’s a great resource that we have used in collaboration with LSDIS with support from a joint NSF ITR project (ITR-IIS-0325464, ITR-IIS-0325172).
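Since the dataset is plain RDF, working with it amounts to following links between triples. Here is a minimal, self-contained sketch of the kind of enrichment described above, using an in-memory triple set; the resource and property names are made up for illustration, not the actual SwetoDblp ontology terms.

```python
# Toy triple store illustrating DBLP records enriched with links to
# publishers and affiliations. All names below are hypothetical.
triples = {
    ("dblp:JavaEtAl07", "rdf:type", "sweto:Article"),
    ("dblp:JavaEtAl07", "sweto:author", "person:AkshayJava"),
    ("dblp:JavaEtAl07", "sweto:publisher", "org:ACM"),
    ("person:AkshayJava", "sweto:affiliation", "org:UMBC"),
}

def objects(subject, predicate):
    """Return all objects for a given subject/predicate pair."""
    return {o for s, p, o in triples if s == subject and p == predicate}

# Follow the enrichment links: publication -> author -> affiliation
authors = objects("dblp:JavaEtAl07", "sweto:author")
affiliations = {a for author in authors
                for a in objects(author, "sweto:affiliation")}
print(affiliations)  # {'org:UMBC'}
```

In practice one would load the released RDF into a real triple store and ask the same question with a SPARQL path query; the point is just that the ontology makes publishers and universities first-class entities you can traverse to.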

On sharing a cell with a Twitter addict…

August 10th, 2007


Trauma Pod at DARPATech 2007

August 9th, 2007

The DARPA Trauma Pod project was featured at the 2007 DARPATech conference. We worked on a small part of the project, which was led by SRI and involved a large team of contractors and universities. The Popular Mechanics article described it this way:

“A single human will operate the robot remotely during surgery, but Trauma Pod will be able to perform a number of functions, such as fluid administration and surgical assistance, autonomously. The goal is to stabilize injured soldiers as quickly as possible, and previous Trauma Pod designs have included related systems that evacuate the patient. Giroir said that a prototype will be delivered to troops within two years.”

I found the Trauma Pod system to be a very impressive distributed system that comprised multiple autonomous robotic systems, multiple sensing systems, speech understanding, plan recognition and much more.

Google classifies its own blog as spam

August 9th, 2007

Google’s splog classifier came up with an embarrassing false positive this week. It mistakenly thought that its own Google Custom Search Blog was a splog and disabled it.

Robert McMillan of IDG News Service explained what happened next.

“Blogger’s spam classifier misidentified the Custom Search Blog as spam,” he said via e-mail on Wednesday. Typically Google notifies blog owners when it has spotted content associated with spam on their Web sites to give them a chance to clear up any misunderstandings. However, that didn’t work out in this case. “The Custom Search Blog bloggers overlooked their notification, and after a period of time passed, the blog was disabled.” When blogs are disabled like this, their URL becomes available to the general public. That’s when Srikanth swooped in and wrote the joke post. “It was a case of ‘URL squatting’ and not a security issue or any kind of hack,” Carlson said. Google quickly realized its mistake and the Custom Search Blog is now back in action.

This was first noticed by Google Blogoscoped, which has saved an image of the hack for posterity.

Second Life economic crisis, bank run on Ginko Financial

August 8th, 2007

Technology Review reports on Money Trouble in Second Life.

“There’s a long line of avatars waiting to use the automatic-teller machines for Ginko Financial, a virtual bank in the online game Second Life. For more than a week, account holders have been demanding their money back in what some folks are calling a bank run. Set off by high interest rates and a recent ban on in-game gambling, the bank run could ultimately have a major effect on the game’s economy. The theft of approximately $12,000 from the Second Life World Stock Exchange doesn’t help matters either.”

Unlike many MMOGs, Second Life has an active economy based on the Linden Dollar (L$) and encourages users to buy and sell game goods and real estate to one another. While the currency is fictional, TR reports:

“But those dollars do have real-world value: players can buy or sell Linden dollars at a rate of about L$270 to $1 on the Lindex market. Second Life’s website even boasts that ‘thousands of residents are making part or all of their real life income from their Second Life businesses.’”
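For a sense of scale, here is a quick conversion at the rate TR cites, roughly L$270 per US dollar:

```python
# Back-of-the-envelope conversion at the rate TR cites (~L$270 per US$1).
L_PER_USD = 270

def usd_to_linden(usd):
    return usd * L_PER_USD

def linden_to_usd(linden):
    return linden / L_PER_USD

# The reported ~$12,000 theft, expressed in Linden dollars:
print(usd_to_linden(12000))  # 3240000
```

So the stock exchange theft amounts to well over three million Linden dollars at that exchange rate.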

NSF workshop on datamining and cyber-enabled discovery for innovation

August 5th, 2007

NSF is sponsoring a workshop on next generation data mining and cyber-enabled discovery for innovation 10-12 October 2007 in Baltimore’s Inner Harbor.

“This National Science Foundation workshop on Next Generation Data Mining and Cyber Enabled Discovery for Innovation (NGDM’07) will bring together data mining researchers, scientists, and engineers from a diverse background along with domain experts for various emerging problems that are relevant to Cyber Enabled Discovery for Innovation (CDI). The objective is to enhance the understanding of the research problems and facilitate creating an environment for better understanding the challenges in front of the data mining and the CDI community.”

The workshop will focus on five areas

  • Data mining in e-science and engineering
  • Media, pervasive computing, and ubiquitous data mining
  • Data mining in security, surveillance, and privacy protection
  • Social science, finance, digital humanities, and data mining
  • The Web, semantics, and data mining

and seeks submissions of relevant five-page extended abstracts from data mining researchers and practitioners by 3 September 2007.

Google Earth crowdsourcing map data

August 2nd, 2007

Dan Karran blogs about a talk by Michael Jones (CTO of Google Earth) at the State of the Map conference, in which Jones describes how they are capturing new data for Google Earth via crowdsourcing.

“Now, everything you see here was created by people in Hyderabad. We have a pilot program running in India. We’ve done about 50 cities now, in their completeness, with driving directions and everything – completely done by having locals use some software we haven’t released publicly to draw their city on top of our photo imagery.”

A podcast of the talk is available.

O’Reilly Radar described it this way:

“Google has been sending GPS kits to India that enable locals to make more detailed maps of their area. After the data has been uploaded and then verified against other participant’s data it becomes a part of the map. The process is very reminiscent of what Open Street Map, the community map-building project, has been doing. The biggest difference is that the data (to my knowledge) is owned by Google and is not freely available back to the community like it is with OSM.”
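The verification step O'Reilly mentions, checking uploads against other participants' data, can be caricatured as a simple quorum test: keep a mapped point only if enough independent uploads agree within a tolerance. Everything below (coordinates, tolerance, quorum) is invented for illustration; it is not Google's actual pipeline.

```python
# Toy consensus check for crowdsourced map points: a point counts as
# verified if at least `quorum` uploads (including itself) fall within
# a small lat/lon tolerance of it.

def close(p, q, tol=0.0005):   # roughly 50 m of latitude
    return abs(p[0] - q[0]) <= tol and abs(p[1] - q[1]) <= tol

def verified(uploads, quorum=3):
    """Return the uploads that at least `quorum` uploads agree with."""
    return [p for p in uploads
            if sum(close(p, q) for q in uploads) >= quorum]

uploads = [(17.3850, 78.4867),   # three users mark the same junction
           (17.3851, 78.4866),
           (17.3849, 78.4868),
           (17.4200, 78.5000)]   # a lone, unconfirmed point

print(verified(uploads))         # the lone point is dropped
```

A real system would also weight contributors by track record and cross-check against the photo imagery, but the agree-with-your-neighbors idea is the core of it.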

Why we twitter

August 2nd, 2007

Ebiquity PhD student Akshay Java has been collecting Twitter data since March 2007 and has just written a paper that analyzes the subset from April and May of this year.

Why We Twitter: Understanding Microblogging Usage and Communities, Akshay Java, Xiaodan Song, Tim Finin, and Belle Tseng, Joint 9th WEBKDD and 1st SNA-KDD Workshop, August 2007.

His dataset included about 1350K posts from over 75K users. The paper covers a lot of the standard statistics you would expect — usage trends, basic network properties, top hubs and authorities, community structure, and geographic distribution. Akshay’s title pays homage to the early paper that asked why we blog, but the title also reflects the paper’s key contribution — an attempt to tease out the user’s intention in writing a tweet, i.e. to analyze why people are using Twitter.
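Among the network properties the paper reports are top hubs and authorities. Those scores come from a HITS-style mutual recursion, which a short stdlib power iteration illustrates; the follower graph below is invented, and this is a sketch of the general technique rather than the paper's exact computation.

```python
# HITS power iteration on a tiny made-up follower graph.
# An edge u -> v means u follows (links to) v.
edges = [("a", "b"), ("a", "c"), ("b", "c"), ("d", "c")]
nodes = sorted({n for e in edges for n in e})

hub = {n: 1.0 for n in nodes}
auth = {n: 1.0 for n in nodes}

for _ in range(50):
    # authority score: sum of hub scores of nodes pointing at you
    auth = {n: sum(hub[u] for u, v in edges if v == n) for n in nodes}
    # hub score: sum of authority scores of nodes you point at
    hub = {n: sum(auth[v] for u, v in edges if u == n) for n in nodes}
    # normalize so the scores stay bounded
    na, nh = sum(auth.values()), sum(hub.values())
    auth = {n: s / na for n, s in auth.items()}
    hub = {n: s / nh for n, s in hub.items()}

print(max(auth, key=auth.get))  # "c": everyone points at it
print(max(hub, key=hub.get))    # "a": it points at the good authorities
```

On a graph the size of the paper's (75K+ users), one would of course use sparse adjacency structures rather than scanning the edge list each round.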

Google and behavioral targeting

August 1st, 2007

An article in CIO Insight, Google wary of behavioral targeting in online ads, quotes Google VP Susan Wojcicki as saying that Google does not plan to do behavioral targeting — building up a profile of a user’s interests over time to better select advertisements for them to see.

“Google Inc. is looking to find more links between the searches its users do in order to better target advertising, but the company is reluctant to go much further than that in tracking their behavior.”

Google is working on improving relevance by looking at what might be thought of as search two-grams:

“Google has been testing for several weeks a new advertising feature that delivers ads based not simply on a specific search term, but also on the immediately previous search, she said. A user who types “Italy vacation” into a Google search box might see ads about Tuscany or cheap flights to Europe. Were the same user to subsequently search for “weather,” Google will assume there is a link between “Italy vacation” and “weather” and deliver ads tied to local weather conditions in Italy.”

This is a straightforward idea that can be done without building user models or profiles.
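One way to picture the feature: the ad lookup keys on the pair (previous query, current query) before falling back to the current query alone, with no per-user profile involved. The rule table below is invented for illustration:

```python
# Toy "search two-gram" ad selection: prefer an ad keyed on the
# (previous query, current query) pair, else fall back to the query alone.
BIGRAM_ADS = {
    ("italy vacation", "weather"): "weather in Tuscany",
    ("italy vacation", "flights"): "cheap flights to Europe",
}

UNIGRAM_ADS = {
    "weather": "local weather service",
    "italy vacation": "Tuscany tours",
}

def pick_ad(prev_query, query):
    """Bigram rule if one matches, otherwise the plain per-query ad."""
    return BIGRAM_ADS.get((prev_query, query), UNIGRAM_ADS.get(query))

print(pick_ad("italy vacation", "weather"))  # weather in Tuscany
print(pick_ad(None, "weather"))              # local weather service
```

Note that the only state carried between requests is the immediately previous query, which is what keeps this short of full behavioral profiling.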