Google robot-controlled car frees users to text

October 9th, 2010

No, this is not an article from The Onion: Google really is working on a computer-controlled car. Two articles in tomorrow’s New York Times describe a research project at Google on developing an autonomous vehicle. Here is a picture of the prototype.

Google autonomous vehicle

In the science section, John Markoff has a story, Google Cars Drive Themselves, in Traffic.

“Anyone driving the twists of Highway 1 between San Francisco and Los Angeles recently may have glimpsed a Toyota Prius with a curious funnel-like cylinder on the roof. Harder to notice was that the person at the wheel was not actually driving. The car is a project of Google, which has been working in secret but in plain view on vehicles that can drive themselves, using artificial-intelligence software that can sense anything near the car and mimic the decisions made by a human driver.”

A companion article, also by Markoff, has some additional material, including this interesting note on the current approach.

“One main technique used by the Google team is known as SLAM, or simultaneous localization and mapping, which builds and updates a map of a vehicle’s surroundings while keeping the vehicle located within the map. To make a SLAM map, the car is first driven manually along a route while its sensors capture location, feature and obstacle data. Then a group of software engineers annotates the maps, making certain that road signs, crosswalks, street lights and unusual features are all embedded. The cars then drive autonomously over the mapped routes, recording changes as they occur and updating the map. The researchers said they were surprised to find how frequently the roads their robots drove on had changed.”
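The map-then-drive workflow in the quote can be sketched in a toy form: build a base map from a manual survey pass, then have the autonomous pass diff fresh sensor observations against it and record any changes. Everything below is purely illustrative, not Google's actual system; the data structures and names are invented for the sketch.

```python
# Toy sketch of the survey-then-update workflow described above. A "map" is
# just a dict from route positions to annotated features; an autonomous pass
# compares fresh observations against the map and records any changes.

def build_map(survey):
    """Manual survey pass: record (position, feature) pairs as the base map."""
    return {pos: feat for pos, feat in survey}

def drive_and_update(route_map, observations):
    """Autonomous pass: return the updated map plus a list of detected
    changes as (position, old_feature, new_feature) tuples."""
    updated = dict(route_map)
    changes = []
    for pos, feat in observations:
        if route_map.get(pos) != feat:
            changes.append((pos, route_map.get(pos), feat))
            updated[pos] = feat
    return updated, changes

survey = [(0, "stop sign"), (120, "crosswalk"), (300, "traffic light")]
m = build_map(survey)
# Second pass: the crosswalk has been replaced by a speed bump.
m2, diffs = drive_and_update(m, [(0, "stop sign"), (120, "speed bump")])
```

The "surprised to find how frequently the roads ... had changed" remark corresponds to the `changes` list here growing faster than one might expect on real roads.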

The project was the idea of Stanford computer science professor Sebastian Thrun, who is also a Principal Engineer at Google, where he helped invent the Street View mapping service. Thrun led the Stanford team that developed Stanley, the robot car that won the 2005 DARPA Grand Challenge, a competition focused on developing autonomous vehicle technology.

It’s not clear what the business case for this Google research project is. But Google has the cash and the intellectual capital to actually develop something in this space that can make money.

In a Google blog post from earlier today, What we’re driving at, Thrun gives one motivation.

“Larry and Sergey founded Google because they wanted to help solve really big problems using technology. And one of the big problems we’re working on today is car safety and efficiency. Our goal is to help prevent traffic accidents, free up people’s time and reduce carbon emissions by fundamentally changing car use.

So we have developed technology for cars that can drive themselves. Our automated cars, manned by trained operators, just drove from our Mountain View campus to our Santa Monica office and on to Hollywood Boulevard. They’ve driven down Lombard Street, crossed the Golden Gate bridge, navigated the Pacific Coast Highway, and even made it all the way around Lake Tahoe. All in all, our self-driving cars have logged over 140,000 miles. We think this is a first in robotics research.”

update: TechCrunch has an article speculating on the possible business applications, World-Changing Awesome Aside, How Will The Self-Driving Google Car Make Money?.


Nominate books for the 2011 UMBC New Student Book Experience

September 20th, 2010

Read a good book lately? Why not nominate it for the 2011 UMBC New Student Book Experience, which invites new UMBC students to read the selected book and engage in formal and informal discussions about it as the new year starts.

We are looking for books that (1) are compelling, intellectually stimulating, engaging on multiple levels and capable of generating interesting discussions; (2) address issues meaningful to students of diverse backgrounds; (3) are not widely required in Maryland high schools or made into a recent film; and (4) are available in paperback and not overly long.

You can nominate one or more books using this handy Facebook app. The app uses the Google Books API to help identify books given a partial title, so it’s easy to use. After recording your nomination, you’ll have an opportunity to make an optional post to your Facebook page like the one below, so your friends can see what you suggested. Nominations will close on October 31, 2010 and the selection will be announced in the Spring.
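For the curious, a partial-title lookup against the Google Books API (the service the app uses) looks roughly like the sketch below. It builds the request URL and parses a canned response rather than making a live call; the exact response fields should be checked against the API documentation.

```python
import json
from urllib.parse import urlencode

# Sketch of a partial-title search against the Google Books volumes API.
# Uses a canned JSON response instead of a live network call.

def books_query_url(partial_title, max_results=5):
    """Build a Google Books volume-search URL for a partial title."""
    params = {"q": "intitle:%s" % partial_title, "maxResults": max_results}
    return "https://www.googleapis.com/books/v1/volumes?" + urlencode(params)

def titles(response_text):
    """Pull candidate titles out of a Books API JSON response."""
    data = json.loads(response_text)
    return [item["volumeInfo"]["title"] for item in data.get("items", [])]

url = books_query_url("Omnivore's Dil")  # a deliberately partial title
canned = json.dumps(
    {"items": [{"volumeInfo": {"title": "The Omnivore's Dilemma"}}]})
candidates = titles(canned)
```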


Nominate a book for the 2011 UMBC New Student Book Experience


Google, China and Cyber-security

September 11th, 2010

The US Army War College publishes Parameters as the “US Army’s Senior Professional Journal”. The summer issue has an article by Fort Leavenworth analyst Timothy L. Thomas, Google Confronts China’s Three Warfares, that discusses alleged recent Chinese hacking attacks on Google, censorship, Google’s reactions, and other related events. His article concludes:

“The Chinese probes of the world’s cyber domains have not ceased. Recently, Canadian researchers uncovered a massive Chinese espionage campaign targeting India. In their report, Shadow Network, they outlined the massive campaign emanating from Chengdu, China that harvested a huge quantity of data from India’s military and commercial files. China’s activities against Google and India (and their reconnaissance activities in general) portend a much broader pattern, a long-term strategy to hold military and economic assets of various nations hostage. There are a number of Chinese books that support this supposition. Gaining the high ground in international digital competition is becoming a national objective for the Chinese. China’s previous activities certainly afford them a political advantage in any future conflict.”


Google AI Challenge: Planet Wars

September 6th, 2010

The University of Waterloo’s computer science club is holding another Google-sponsored AI Challenge this fall. The task is to write a program to compete in a Planet Wars tournament. Your goal is to conquer all the planets in your corner of space or eliminate all of your opponents’ ships. Starter programs are available in Python, Java, C# and C++, and support for Common Lisp, Haskell, Ruby and Perl is under development. The contest starts on September 10th and ends on November 27th. Sounds like fun!
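For a flavor of what a bot has to do each turn, here is a minimal sketch in Python (one of the supported starter languages): parse the game state and pick a naive order. The "P x y owner ships growth" line format follows the published starter-kit convention, but the official starter packages are the authoritative reference.

```python
# Minimal sketch of one turn of a Planet Wars bot. Planet lines look like
# "P x y owner ships growth"; owner 1 is us, 2 an enemy, 0 neutral.

def parse_planets(state_text):
    """Parse the planet lines out of one turn's game-state text."""
    planets = []
    for line in state_text.splitlines():
        parts = line.split()
        if parts and parts[0] == "P":
            x, y, owner, ships, growth = parts[1:6]
            planets.append({"x": float(x), "y": float(y),
                            "owner": int(owner), "ships": int(ships),
                            "growth": int(growth)})
    return planets

def choose_order(planets):
    """Naive strategy: send half the fleet from my strongest planet to the
    weakest planet I don't own. Returns (source, dest, num_ships) or None."""
    mine = [i for i, p in enumerate(planets) if p["owner"] == 1]
    targets = [i for i, p in enumerate(planets) if p["owner"] != 1]
    if not mine or not targets:
        return None
    src = max(mine, key=lambda i: planets[i]["ships"])
    dst = min(targets, key=lambda i: planets[i]["ships"])
    return (src, dst, planets[src]["ships"] // 2)

state = "P 0.0 0.0 1 100 5\nP 10.0 0.0 2 20 5\nP 5.0 5.0 0 50 2\ngo"
order = choose_order(parse_planets(state))
```

A real bot would loop over turns on stdin, print its orders, and end each turn with "go"; winning entries replace `choose_order` with something far cleverer.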

Planet Wars is inspired by Galcon, an iPhone and desktop strategy game. Here’s a Planet Wars game in action.




Yahoo! using Bing search engine in US and Canada

August 24th, 2010

Microsoft’s Bing team announced on their blog that the Bing search engine is “powering Yahoo!’s search results” in the US and Canada for English queries. Yahoo also has a post on their Yahoo! Search Blog.

The San Jose Mercury News reports:

“Tuesday, nearly 13 months after Yahoo and Microsoft announced plans to collaborate on Internet search in hopes of challenging Google’s market dominance, the two companies announced that the results of all Yahoo English language searches made in the United States and Canada are coming from Microsoft’s Bing search engine. The two companies are still racing to complete the transition of paid search, the text advertising links that run beside and above the standard search results, before the make-or-break holiday period — a much more difficult task.”

Combining the traffic from Microsoft and Yahoo will give Bing a more significant share of the Web search market. That should help by providing both companies with a larger stream of search-related data that can be exploited to improve search relevance, ad placement and trend spotting. It should also help foster competition with Google focused on developing better search technology.

Hopefully, Bing will be able to benefit from the good work done at Yahoo! on adding more semantics to Web search.


Google unemployment index estimates and predicts unemployment

August 20th, 2010

The Google Unemployment Index is an economic indicator based on queries sent to Google’s search engine related to unemployment, social security, welfare, and unemployment benefits. Since some of these search terms are probably leading indicators, it can also be used to predict upcoming changes in the actual unemployment rate.
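One simple way to test the leading-indicator claim is to shift the query-volume series by various lags and see which lag correlates best with the official unemployment rate. Below is a self-contained sketch using synthetic data; a real analysis would use series exported from Google Insights for Search.

```python
# Find the lag at which a query-volume series best predicts a target series,
# using plain Pearson correlation. The data here is synthetic.

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

def best_lead(queries, rate, max_lag=6):
    """Return the lag (in periods) at which queries best predict rate."""
    scores = {}
    for k in range(max_lag + 1):
        q = queries if k == 0 else queries[:-k]
        scores[k] = pearson(q, rate[k:])
    return max(scores, key=scores.get)

# Synthetic example: the "rate" copies the query series two periods later,
# so the query volume is a two-period leading indicator by construction.
queries = [3, 5, 4, 6, 8, 7, 9, 10, 9, 11]
rate = [0, 0] + queries[:-2]
lead = best_lead(queries, rate)
```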


The index is based on queries tracked via Google Insights for Search and can be tuned to different countries; you can also focus on particular regions or metropolitan areas and compare the index across several locations. Here’s an example comparing Florida (blue) and Maryland (red).


Researchers prove Rubik’s Cube solvable in 20 moves or less

August 13th, 2010

Using a combination of mathematical tricks, good programming and 35 CPU-years on Google’s servers, a group of researchers have proved that every position of Rubik’s Cube can be solved in 20 moves or less. The group consists of Kent State mathematician Morley Davidson, Google engineer John Dethridge, math teacher Herbert Kociemba, and programmer Tomas Rokicki.

This is an amazing result and a testament to more than 30 years of work on the problem. The Cube was invented in 1974 and almost immediately became the subject of programs to solve it. In 1981, Morwen Thistlethwaite proved that any configuration could be solved in no more than 52 moves. Periodically, tighter upper bounds for the maximum solution length were found. This result ends the quest: there are some configurations (about 300 million) that require 20 moves to solve, and none that require more.

In their own words, here’s how the group solved all 43,252,003,274,489,856,000 Cube positions:

  • We partitioned the positions into 2,217,093,120 sets of 19,508,428,800 positions each.
  • We reduced the count of sets we needed to solve to 55,882,296 using symmetry and set covering.
  • We did not find optimal solutions to each position, but instead only solutions of length 20 or less.
  • We wrote a program that solved a single set in about 20 seconds.
  • We used about 35 CPU years to find solutions to all of the positions in each of the 55,882,296 sets.
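The numbers in the list above fit together nicely: the set sizes multiply out to the full position count, and 20 seconds per set over the 55,882,296 covered sets comes to about 35 CPU-years. A quick check:

```python
# Sanity-check the partition arithmetic quoted above.
total_positions = 43_252_003_274_489_856_000
num_sets, set_size = 2_217_093_120, 19_508_428_800
assert num_sets * set_size == total_positions

# 20 seconds per set over the symmetry-reduced sets:
solved_sets = 55_882_296
seconds = solved_sets * 20
cpu_years = seconds / (365 * 24 * 3600)
print(round(cpu_years, 1))  # → 35.4, matching the "about 35 CPU years"
```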

This reminds me of the first program I wrote for my own enjoyment, which used brute force to find all solutions to Piet Hein’s Soma Cube. In 1969 I had a summer job as the night operator for an IBM 360 and I would turn off the clock to run my program so that the management wouldn’t know how much computer time I was consuming.

See this BBC story for more information on this amazing result.


Google acquires Metaweb and Freebase

July 16th, 2010

Google announced today that it has acquired Metaweb, the company behind Freebase — a free, semantic database of “over 12 million people, places, and things in the world.” This is from their announcement on the Official Google blog:

“Over time we’ve improved search by deepening our understanding of queries and web pages. The web isn’t merely words — it’s information about things in the real world, and understanding the relationships between real-world entities can help us deliver relevant information more quickly. … With efforts like rich snippets and the search answers feature, we’re just beginning to apply our understanding of the web to make search better. Type [barack obama birthday] in the search box and see the answer right at the top of the page. Or search for [events in San Jose] and see a list of specific events and dates. We can offer this kind of experience because we understand facts about real people and real events out in the world. But what about [colleges on the west coast with tuition under $30,000] or [actors over 40 who have won at least one oscar]? These are hard questions, and we’ve acquired Metaweb because we believe working together we’ll be able to provide better answers.”

In their announcement, Google promises to continue to maintain Freebase “as a free and open database for the world” and invites other web companies to use and contribute to it.

Freebase is a system very much in the linked open data spirit, even though RDF is not its native representation. Its content is available as RDF and there are many links that bind it to the LOD cloud. Moreover, Freebase has a very good wiki-like interface allowing people to upload, extend and edit both its schema and data.
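For readers who want to try it, Freebase exposes its data through MQL, a JSON query-by-example language. A read request against the 2010-era mqlread service can be built like the sketch below (no live call is made here; verify the endpoint and envelope shape against the current API documentation before relying on them).

```python
import json
from urllib.parse import quote

# Sketch of building a Freebase MQL read request. In MQL you write a JSON
# template of what you want; nulls and empty lists mark the slots Freebase
# should fill in.

def mqlread_url(query):
    """Wrap an MQL query in the standard envelope and URL-encode it."""
    envelope = json.dumps({"query": query})
    return ("https://api.freebase.com/api/service/mqlread?query="
            + quote(envelope))

# Ask for the albums of a named artist; the empty list asks Freebase to
# enumerate matching album names.
query = [{"type": "/music/artist", "name": "The Beatles", "album": []}]
url = mqlread_url(query)
```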

Here’s a video on the concepts behind Metaweb, which are, of course, also those underlying the Semantic Web. What’s the difference? I’d say a combination of representational details and centralized (Metaweb) vs. distributed (Semantic Web) architecture.


Search neutrality: Google and Danny Sullivan weigh in

July 16th, 2010

Web search guru Danny Sullivan has a great response to the NYT editorial on regulating search engine algorithms: The New York Times Algorithm and Why It Needs Government Regulation. Here’s how it starts:

“The New York Times is the number one newspaper web site. Analysts reckon it ranks first in reach among US opinion leaders. When the New York Times editorial staff tweaks its supersecret algorithm behind what to cover and exactly how to cover a story — as it does hundreds of times a day — it can break a business that is pushed down in coverage or not covered at all.”

Google published its own response to the Times piece as a Financial Times op-ed and also posted it to the Google public policy blog: regulating what is “best” in search?

“Search engines use algorithms and equations to produce order and organisation online where manual effort cannot. These algorithms embody rules that decide which information is “best”, and how to measure it. Clearly defining which of any product or service is best is subjective. Yet in our view, the notion of “search neutrality” threatens innovation, competition and, fundamentally, your ability as a user to improve how you find information.”

The penultimate paragraph gives what they say is their strongest argument against mandating “search neutrality”.

“But the strongest arguments against rules for “neutral search” is that they would make the ranking of results on each search engine similar, creating a strong disincentive for each company to find new, innovative ways to seek out the best answers on an increasingly complex web. What if a better answer for your search, say, on the World Cup or “jaguar” were to appear on the web tomorrow? Also, what if a new technology were to be developed as powerful as PageRank that transforms the way search engines work? Neutrality forcing standardised results removes the potential for innovation and turns search into a commodity.”

This assumes, of course, that there is real competition among Internet search engines. Microsoft has been putting a lot of research and development into Bing with good results, and it’s been gaining market share. Yahoo is doing very interesting things as well. Consumer choice among a handful of competitors would be the best way to ensure that none abuses its customers.


New York Times editorializes about the Google search ranking algorithm

July 15th, 2010

In what may be a first, today’s New York Times has an editorial about an algorithm. No, they haven’t waded into the P=NP issue, but commented on Google’s algorithm for ranking search results and accusations that Google unfairly biases it for its own self interest.

“In the past few months, Google has come under investigation by antitrust regulators in Europe. Rivals have accused Google of placing the Web sites of affiliates like Google Maps or YouTube at the top of Internet searches and relegating competitors to obscurity down the list. In the United States, Google said it expects antitrust regulators to scrutinize its $700 million purchase of the flight information software firm ITA, with which it plans to enter the online travel search market occupied by Expedia, Orbitz, Bing and others.”

This issue will become more important as the companies dominating Web search (Google, Microsoft and Yahoo) continue to increase their importance and also broaden their acquisition of companies offering web services.

The NYT’s position is moderate, recommending:

Google provides an incredibly valuable service, and the government must be careful not to stifle its ability to innovate. Forcing it to publish the algorithm or the method it uses to evaluate it would allow every Web site to game the rules in order to climb up the rankings — destroying its value as a search engine. Requiring each algorithm tweak to be approved by regulators could drastically slow down its improvements. Forbidding Google to favor its own services — such as when it offers a Google Map to queries about addresses — might reduce the value of its searches. With these caveats in mind, if Google is to continue to be the main map to the information highway, it concerns us all that it leads us fairly to where we want to go.


Google Open Spot Android app finds parking

July 9th, 2010

Google’s Open Spot Android app lets people leaving parking spots share the information with others searching for parking nearby. Running the app shows you parking spots within 1.5km. New parking spots are assumed to be gone after 20 minutes and are removed from the system.
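The two advertised behaviors (a 1.5km radius and a 20-minute expiry) are easy to sketch. The code below is purely illustrative, with invented field names, and says nothing about how Open Spot is actually implemented.

```python
import math

# Toy sketch of the two filters Open Spot advertises: only show spots
# reported within the last 20 minutes and within 1.5 km of the user.

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points, in km."""
    r = 6371.0  # mean Earth radius
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = (math.sin(dp / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def visible_spots(spots, me_lat, me_lon, now, radius_km=1.5, max_age_s=20 * 60):
    """Spots that are both fresh enough and close enough to show."""
    return [s for s in spots
            if now - s["reported"] <= max_age_s
            and haversine_km(me_lat, me_lon, s["lat"], s["lon"]) <= radius_km]

now = 1_000_000  # arbitrary clock, in seconds
spots = [
    {"lat": 39.2555, "lon": -76.7113, "reported": now - 60},       # fresh, near
    {"lat": 39.2560, "lon": -76.7100, "reported": now - 30 * 60},  # expired
    {"lat": 39.3555, "lon": -76.7113, "reported": now - 60},       # ~11 km away
]
nearby = visible_spots(spots, 39.2556, -76.7110, now)
```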

People who announce open spots gain karma points, while those who report false spots, known as griefers, are on notice:

“We’re watching for behavior that looks like a griefer spoofing parking spots. We have a couple of mechanisms available to make sure someone can’t leave a bunch of fake parking spots. If we see this happening we will take steps to fix it.”

This is a simple example of a context-aware mobile app that could further benefit from also knowing that you are driving, as opposed to riding, in your car and are likely to want to find a parking spot, as opposed to doing 70mph on I-95 as it goes through Baltimore. Moreover, context could also inform the app that you are probably leaving a public parking spot and mark it automatically. However, such a feature should be smart enough to avoid being tagged by Google as a griefer and discovering what punishment Google has in store for you.


Google list of the 1000 most popular Web sites

May 28th, 2010

Google publishes a list of the 1000 most popular Web sites based on unique visitors to the top-level domain. The list is compiled by their (DoubleClick) Ad Planner group and shows estimates for the monthly number of unique visitors and pageviews. Not surprisingly, Facebook tops the list with 540M visitors and 570B page views per month.

Each site is categorized (e.g., as social network, web portal, search engine, etc) though some of these are surely wrong — e.g., #985, dropbox.com, is listed as “Myth & Folklore”. They say that the list excludes “adult sites, ad networks, domains that don’t have publicly visible content or don’t load properly, and certain Google sites.”

If you want to play with the data, Karl Seguin has downloaded it, added some additional attributes, and made it available as JSON. That makes it easy to run your own analysis: category distribution, country distribution, average load time, etc.
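Such an analysis might look like the sketch below. The field names here ("site", "category", "unique_visitors") and all the numbers except Facebook's are hypothetical stand-ins, to be adjusted to whatever the downloaded JSON actually uses.

```python
import json
from collections import Counter

# Sketch of analyzing the top-1000 list. The sample below uses invented
# field names and (except for Facebook's 540M) invented numbers.

sample = json.loads("""[
  {"site": "facebook.com", "category": "Social Networks", "unique_visitors": 540000000},
  {"site": "dropbox.com",  "category": "Myth & Folklore", "unique_visitors": 17000000},
  {"site": "youtube.com",  "category": "Online Video",    "unique_visitors": 490000000}
]""")

# How many sites fall into each category?
by_category = Counter(entry["category"] for entry in sample)

# Which site has the most unique visitors?
top = max(sample, key=lambda e: e["unique_visitors"])
```

The odd "Myth & Folklore" label on dropbox.com in the real list would show up immediately in a `by_category` tally like this one.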